31,139 results for '"Boyd P"'
Search Results
2. Neutrino Experiments at the Large Hadron Collider
- Author
Ariga, Akitaka, Boyd, Jamie, Kling, Felix, and De Roeck, Albert
- Subjects
High Energy Physics - Experiment, High Energy Physics - Phenomenology
- Abstract
The proton-proton collisions at the Large Hadron Collider (LHC) produce an intense, high-energy beam of neutrinos of all flavors, collimated in the forward direction. Recently, two dedicated neutrino experiments, FASER and SND@LHC, have started operating to take advantage of the TeV-energy LHC neutrino beam, with first results released in 2023 and further results released in 2024. The first detection of neutrinos produced at a particle collider opens up a new avenue of research, allowing the study of the highest-energy neutrinos produced in a controlled laboratory environment, with an associated broad and rich physics program. Neutrino measurements at the LHC will provide important contributions to QCD, neutrino, and BSM physics, with impactful implications for astro-particle physics. This review article summarizes the physics motivation, status, and plans of present and future neutrino experiments at the LHC., Comment: the article has been submitted to the Annual Review of Nuclear and Particle Science
- Published
- 2025
3. Towards Human-Guided, Data-Centric LLM Co-Pilots
- Author
Saveliev, Evgeny, Liu, Jiashuo, Seedat, Nabeel, Boyd, Anders, and van der Schaar, Mihaela
- Subjects
Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
Machine learning (ML) has the potential to revolutionize healthcare, but its adoption is often hindered by the disconnect between the needs of domain experts and the translation of those needs into robust and valid ML tools. Despite recent advances in LLM-based co-pilots to democratize ML for non-technical domain experts, these systems remain predominantly focused on model-centric aspects while overlooking critical data-centric challenges. This limitation is problematic in real-world settings where raw data often contains issues such as missing values, label noise, and domain-specific nuances requiring tailored handling. To address this, we introduce CliMB-DC, a human-guided, data-centric framework for LLM co-pilots that combines advanced data-centric tools with LLM-driven reasoning to enable robust, context-aware data processing. At its core, CliMB-DC introduces a novel multi-agent reasoning system that combines a strategic coordinator for dynamic planning and adaptation with a specialized worker agent for precise execution. Domain expertise is then systematically incorporated to guide the reasoning process using a human-in-the-loop approach. To guide development, we formalize a taxonomy of key data-centric challenges that co-pilots must address. Thereafter, to address the dimensions of the taxonomy, we integrate state-of-the-art data-centric tools into an extensible, open-source architecture, facilitating the addition of new tools from the research community. Empirically, using real-world healthcare datasets, we demonstrate CliMB-DC's ability to transform uncurated datasets into ML-ready formats, significantly outperforming existing co-pilot baselines in handling data-centric challenges. CliMB-DC promises to empower domain experts from diverse domains -- healthcare, finance, social sciences and more -- to actively participate in driving real-world impact using ML., Comment: Saveliev, Liu & Seedat contributed equally
- Published
- 2025
4. The Theater Stage as Laboratory: Review of Real-Time Comedy LLM Systems for Live Performance
- Author
Mirowski, Piotr Wojciech, Branch, Boyd, and Mathewson, Kory Wallace
- Subjects
Computer Science - Computation and Language
- Abstract
In this position paper, we review the eclectic recent history of academic and artistic works involving computational systems for humor generation, and focus specifically on live performance. We make the case that AI comedy should be evaluated in live conditions, in front of audiences sharing either physical or online spaces, and under real-time constraints. We further suggest that improvised comedy is therefore the perfect substrate for deploying and assessing computational humor systems. Using examples of successful AI-infused shows, we demonstrate that live performance raises three sets of challenges for computational humor generation: 1) questions around robotic embodiment, anthropomorphism and competition between humans and machines, 2) questions around comedic timing and the nature of audience interaction, and 3) questions about the human interpretation of seemingly absurd AI-generated humor. We argue that these questions impact the choice of methodologies for evaluating computational humor, as any such method needs to work around the constraints of live audiences and performance spaces. These interrogations also highlight the different types of collaborative relationships that human comedians can form with AI tools., Comment: 8 pages, 1st Workshop on Computational Humor (CHum), COLING 2025
- Published
- 2025
5. Paper Fortune Tellers in the combinatorial dynamics of some generalized McMullen maps with both critical orbits bounded
- Author
Boyd, Suzanne and Brouwer, Kelsey
- Subjects
Mathematics - Dynamical Systems, 37F10 (Primary), 37F12, 37F20 (Secondary)
- Abstract
For the family of complex rational functions known as "generalized McMullen maps", F(z) = z^n + a/z^n + b, for complex parameters a and b with a nonzero, and any fixed integer n at least 3, we reveal, and provide a combinatorial model for, some new dynamical behavior. In particular, we describe a large class of maps whose Julia sets contain both infinitely many homeomorphic copies of quadratic Julia sets and infinitely many subsets homeomorphic to a set which is obtained by starting with a quadratic Julia set, then changing a finite number of pairs of external ray landing point identifications, following an algorithm we will describe., Comment: 26 pages, 17 figures, 25 images
- Published
- 2025
6. Detecting LHC Neutrinos at Surface Level
- Author
Ariga, Akitaka, Barwick, Steven, Boyd, Jamie, Fieg, Max, Kling, Felix, Mäkelä, Toni, Vendeuvre, Camille, and Weyer, Benjamin
- Subjects
High Energy Physics - Experiment, High Energy Physics - Phenomenology
- Abstract
The first direct detection of neutrinos at the LHC not only marks the beginning of a novel collider neutrino program at CERN but also motivates considering additional neutrino detectors to fully exploit the associated physics potential. We investigate the feasibility and physics potential of neutrino experiments located at surface level. A topographic desk study was performed to identify all points at which the LHC's neutrino beams exit the Earth. The closest location lies about 9 km east of the CMS interaction point, at the bottom of Lake Geneva. Several detectors to be placed at this location are considered, including a water Cherenkov detector and an emulsion detector. The detector concepts are introduced, and projections for their contribution to the LHC forward neutrino program and searches for dark sector particles are presented. However, the dilution of the neutrino flux over distance reduces the neutrino yield significantly, limiting the physics potential of surface-level detectors compared to ones closer to the interaction point, including the proposed FPF., Comment: 21 pages, 13 figures
- Published
- 2025
7. Development of SQUID Array Amplifiers for the LiteBIRD CMB Satellite
- Author
Boyd, S. T. P. and de Haan, Tijmen
- Subjects
Physics - Instrumentation and Detectors, Astrophysics - Instrumentation and Methods for Astrophysics
- Abstract
LiteBIRD is an upcoming JAXA-led mission that aims to measure primordial gravitational waves in the B-mode polarization of the cosmic microwave background. It is set to launch in 2032. The LiteBIRD detector array consists of around 5000 TES detectors which are read out using digital frequency multiplexing over a bandwidth of 1-6 MHz. The multiplexing factor ranges from 58x to 68x. We are presently developing single-stage SQUID array amplifiers for LiteBIRD readout. Due to the reduced complexity and cost, and greater heritage from ground-based experiments such as the South Pole Telescope and Simons Array, single-stage SQUID array amplification is preferable for the first-stage amplification, as long as it can meet the requirements. The LiteBIRD single-stage SQUID Array is required to have high transimpedance amplification while maintaining a low input inductance and low dynamic resistance. In addition, the input-referred current noise must be very low, and the power dissipation must remain below about 100 nW. These requirements have non-trivial interactions. To maximize performance within these requirements we have performed lumped-element SQUID simulation. We find that by optimizing SQUID internal damping elements and inductive loading, good single-stage SQUID array performance can be obtained for LiteBIRD, including significant engineering margin., Comment: 7 pages, 4 figures
- Published
- 2025
8. Time Symmetries of Quantum Memory Improve Thermodynamic Efficiency
- Author
Boyd, Alexander B. and Riechers, Paul M.
- Subjects
Condensed Matter - Statistical Mechanics, Quantum Physics
- Abstract
Classical computations inherently require energy dissipation that increases significantly as the reliability of the computation improves. This dissipation arises when transitions between memory states are not balanced by their time-reversed counterparts. While classical memories exhibit only a discrete set of possible time-reversal symmetries, quantum memory offers a continuum. This continuum enables the design of quantum memories that minimize irreversibility. As a result, quantum memory can reduce energy dissipation to several orders of magnitude below that of classical memory.
- Published
- 2025
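The classical half of the argument above, that dissipation appears exactly when memory transitions are not balanced by their time-reversed counterparts, can be illustrated with the standard entropy-production rate of a Markov chain. A stdlib-Python sketch, not taken from the paper; the matrices are invented examples:

```python
import math

def entropy_production(p, T):
    """Steady-state entropy production rate (in nats) of a Markov chain with
    stationary distribution p and transition matrix T[i][j] = P(j | i).
    It vanishes exactly when detailed balance holds: p[i]T[i][j] == p[j]T[j][i]."""
    sigma = 0.0
    for i in range(len(p)):
        for j in range(len(p)):
            if i != j and T[i][j] > 0:
                sigma += p[i] * T[i][j] * math.log(
                    (p[i] * T[i][j]) / (p[j] * T[j][i]))
    return sigma

# Detailed-balanced chain: every transition is matched by its reverse.
balanced = [[0.9, 0.1], [0.1, 0.9]]  # symmetric, stationary p = (0.5, 0.5)
print(entropy_production([0.5, 0.5], balanced))  # 0.0

# Biased three-state cycle: time-reversal asymmetry costs entropy.
biased = [[0.1, 0.8, 0.1], [0.1, 0.1, 0.8], [0.8, 0.1, 0.1]]
print(entropy_production([1/3, 1/3, 1/3], biased))  # > 0
```

The quantum memories discussed in the abstract go beyond this picture by interpolating continuously between such symmetry classes.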
9. Unifying the Extremes: Developing a Unified Model for Detecting and Predicting Extremist Traits and Radicalization
- Author
Lahnala, Allison, Varadarajan, Vasudha, Flek, Lucie, Schwartz, H. Andrew, and Boyd, Ryan L.
- Subjects
Computer Science - Social and Information Networks, Computer Science - Computation and Language, Computer Science - Computers and Society
- Abstract
The proliferation of ideological movements into extremist factions via social media has become a global concern. While radicalization has been studied extensively within the context of specific ideologies, our ability to accurately characterize extremism in more generalizable terms remains underdeveloped. In this paper, we propose a novel method for extracting and analyzing extremist discourse across a range of online community forums. By focusing on verbal behavioral signatures of extremist traits, we develop a framework for quantifying extremism at both user and community levels. Our research identifies 11 distinct factors, which we term "The Extremist Eleven," as a generalized psychosocial model of extremism. Applying our method to various online communities, we demonstrate an ability to characterize ideologically diverse communities across the 11 extremist traits. We demonstrate the power of this method by analyzing user histories from members of the incel community. We find that our framework accurately predicts which users join the incel community up to 10 months before their actual entry, with an AUC of >0.6, steadily increasing to an AUC of ~0.9 three to four months before the event. Further, we find that upon entry into an extremist forum, the users tend to maintain their level of extremism within the community, while still remaining distinguishable from the general online discourse. Our findings contribute to the study of extremism by introducing a more holistic, cross-ideological approach that transcends traditional, trait-specific models., Comment: 17 pages, 7 figures, 4 tables
- Published
- 2025
10. Time Series Language Model for Descriptive Caption Generation
- Author
Trabelsi, Mohamed, Boyd, Aidan, Cao, Jin, and Uzunalioglu, Huseyin
- Subjects
Computer Science - Computation and Language, Computer Science - Machine Learning
- Abstract
The automatic generation of representative natural language descriptions for observable patterns in time series data enhances interpretability, simplifies analysis and increases cross-domain utility of temporal data. While pre-trained foundation models have made considerable progress in natural language processing (NLP) and computer vision (CV), their application to time series analysis has been hindered by data scarcity. Although several large language model (LLM)-based methods have been proposed for time series forecasting, time series captioning is under-explored in the context of LLMs. In this paper, we introduce TSLM, a novel time series language model designed specifically for time series captioning. TSLM operates as an encoder-decoder model, leveraging both text prompts and time series data representations to capture subtle temporal patterns across multiple phases and generate precise textual descriptions of time series inputs. TSLM addresses the data scarcity problem in time series captioning by first leveraging an in-context prompting synthetic data generation, and second denoising the generated data via a novel cross-modal dense retrieval scoring applied to time series-caption pairs. Experimental findings on various time series captioning datasets demonstrate that TSLM outperforms existing state-of-the-art approaches from multiple data modalities by a significant margin.
- Published
- 2025
11. Deep Linear Hawkes Processes
- Author
Chang, Yuxin, Boyd, Alex, Xiao, Cao, Kass-Hout, Taha, Bhatia, Parminder, Smyth, Padhraic, and Warrington, Andrew
- Subjects
Statistics - Machine Learning, Computer Science - Machine Learning
- Abstract
Marked temporal point processes (MTPPs) are used to model sequences of different types of events with irregular arrival times, with broad applications ranging from healthcare and social networks to finance. We address shortcomings in existing point process models by drawing connections between modern deep state-space models (SSMs) and linear Hawkes processes (LHPs), culminating in an MTPP that we call the deep linear Hawkes process (DLHP). The DLHP modifies the linear differential equations in deep SSMs to be stochastic jump differential equations, akin to LHPs. After discretizing, the resulting recurrence can be implemented efficiently using a parallel scan. This brings parallelism and linear scaling to MTPP models. This contrasts with attention-based MTPPs, which scale quadratically, and RNN-based MTPPs, which do not parallelize across the sequence length. We show empirically that DLHPs match or outperform existing models across a broad range of metrics on eight real-world datasets. Our proposed DLHP model is the first instance of the unique architectural capabilities of SSMs being leveraged to construct a new class of MTPP models.
- Published
- 2024
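The recurrence-via-parallel-scan point in the abstract above can be made concrete in miniature: an affine recurrence h_{t+1} = a_t h_t + b_t is a composition of affine maps, which is associative, so all prefix states can be produced by a scan. A hedged stdlib-Python sketch using scalars as a stand-in for the DLHP's matrix recurrences; the scan below runs sequentially, standing in for the logarithmic-depth parallel scan used on real hardware:

```python
def combine(x, y):
    """Associative composition of affine maps h -> a*h + b:
    applying x = (a1, b1) first, then y = (a2, b2), gives (a2*a1, a2*b1 + b2)."""
    a1, b1 = x
    a2, b2 = y
    return (a2 * a1, a2 * b1 + b2)

def scan(ops):
    """Inclusive scan (prefix composition) over a list of affine maps."""
    out, acc = [], (1.0, 0.0)  # identity map
    for op in ops:
        acc = combine(acc, op)
        out.append(acc)
    return out

# Unroll h_{t+1} = a_t * h_t + b_t from h0 via the scan of the step maps.
a, b, h0 = [0.5, 0.9, 1.1], [1.0, -0.2, 0.3], 2.0
states = [m * h0 + c for m, c in scan(list(zip(a, b)))]

# Cross-check against the naive sequential recurrence.
h, ref = h0, []
for at, bt in zip(a, bt_list := b):
    h = at * h + bt
    ref.append(h)
print(states, ref)  # both [2.0, 1.6, 2.06]
```

Because `combine` is associative, the same prefix results could be computed in O(log T) depth by pairing elements, which is the parallelism the abstract refers to.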
12. CuClarabel: GPU Acceleration for a Conic Optimization Solver
- Author
Chen, Yuwen, Tse, Danny, Nobel, Parth, Goulart, Paul, and Boyd, Stephen
- Subjects
Mathematics - Optimization and Control
- Abstract
We present the GPU implementation of the general-purpose interior-point solver Clarabel for convex optimization problems with conic constraints. We introduce a mixed parallel computing strategy that processes linear constraints first, then handles other conic constraints in parallel. This mixed parallel computing strategy currently supports linear, second-order cone, exponential cone, and power cone constraints. We demonstrate that integrating a mixed parallel computing strategy with GPU-based direct linear system solvers enhances the performance of GPU-based conic solvers, surpassing their CPU-based counterparts across a wide range of conic optimization problems. We also show that employing mixed-precision linear system solvers can potentially achieve additional acceleration without compromising solution accuracy.
- Published
- 2024
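The mixed-precision remark in the abstract above has a classical backbone: iterative refinement, where a cheap low-precision solve is corrected using residuals computed in full precision. A stdlib-Python sketch with "low precision" simulated by rounding to three significant digits; this is an illustration of the general technique, not the paper's implementation:

```python
def round_lp(x, digits=3):
    """Simulate low-precision storage by rounding to a few significant digits."""
    return float(f"{x:.{digits}g}")

def solve2x2(A, b):
    """Exact 2x2 solve via Cramer's rule (stands in for a matrix factorization)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def solve_lp(A, b):
    """'Low-precision' solve: round the inputs and the result."""
    Ar = [[round_lp(v) for v in row] for row in A]
    br = [round_lp(v) for v in b]
    return [round_lp(v) for v in solve2x2(Ar, br)]

def refine(A, b, iters=5):
    """Iterative refinement: low-precision solves, full-precision residuals."""
    x = solve_lp(A, b)
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        d = solve_lp(A, r)          # cheap correction step
        x = [x[i] + d[i] for i in range(2)]
    return x

A = [[1.23456, 0.54321], [0.31415, 2.71828]]
b = [1.0, 2.0]
x = refine(A, b)
residual = max(abs(b[i] - sum(A[i][j] * x[j] for j in range(2))) for i in range(2))
print(residual)  # tiny: full accuracy despite every solve being low precision
```

Each refinement pass shrinks the error by roughly the rounding level, so a handful of cheap solves recovers full-precision accuracy.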
13. PsychAdapter: Adapting LLM Transformers to Reflect Traits, Personality and Mental Health
- Author
Vu, Huy, Nguyen, Huy Anh, Ganesan, Adithya V, Juhng, Swanie, Kjell, Oscar N. E., Sedoc, Joao, Kern, Margaret L., Boyd, Ryan L., Ungar, Lyle, Schwartz, H. Andrew, and Eichstaedt, Johannes C.
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Computation and Language
- Abstract
Artificial intelligence-based language generators are now a part of most people's lives. However, by default, they tend to generate "average" language without reflecting the ways in which people differ. Here, we propose a lightweight modification to the standard language model transformer architecture - "PsychAdapter" - that uses empirically derived trait-language patterns to generate natural language for specified personality, demographic, and mental health characteristics (with or without prompting). We applied PsychAdapter to modify OpenAI's GPT-2, Google's Gemma, and Meta's Llama 3 and found the generated text to reflect the desired traits. For example, expert raters evaluated PsychAdapter's generated text output and found it matched intended trait levels with 87.3% average accuracy for Big Five personalities, and 96.7% for depression and life satisfaction. PsychAdapter is a novel method for introducing psychological behavior patterns into language models at the foundation level, independent of prompting, by influencing every transformer layer. This approach can create chatbots with specific personality profiles, clinical training tools that mirror language associated with psychological conditions, and machine translations that match an author's reading or education level without taking up LLM context windows. PsychAdapter also allows for the exploration of psychological constructs through natural language expression, extending the natural language processing toolkit to study human psychology.
- Published
- 2024
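The "influencing every transformer layer" idea above can be caricatured in a few lines: inject a trait-conditioned offset into each layer's hidden state, so generation is steered without consuming prompt text. A heavily simplified stdlib-Python stand-in; the toy "layers", weights, and names are invented for illustration and do not reflect the paper's actual architecture:

```python
def adapter_shift(hidden, traits, W):
    """Add a trait-conditioned offset to a hidden vector: hidden + W @ traits.
    W maps trait scores (e.g. Big Five levels) into the hidden space."""
    return [h + sum(W[i][k] * traits[k] for k in range(len(traits)))
            for i, h in enumerate(hidden)]

def forward(x, layers, traits, adapters):
    """Toy 'transformer' pass: each layer is a plain function on the hidden
    vector, and the adapter nudges its output, so every layer is influenced."""
    h = x
    for layer, W in zip(layers, adapters):
        h = adapter_shift(layer(h), traits, W)
    return h

# Two toy layers over a 2-dim hidden state, with one scalar trait.
layers = [lambda h: [2 * v for v in h], lambda h: [v + 1 for v in h]]
adapters = [[[0.5], [0.0]], [[0.0], [0.5]]]
neutral = forward([1.0, 1.0], layers, [0.0], adapters)    # trait off
extravert = forward([1.0, 1.0], layers, [1.0], adapters)  # trait on
print(neutral, extravert)  # [3.0, 3.0] [3.5, 3.5]
```

With the trait score at zero the adapters are inert and the plain forward pass is recovered; a nonzero score shifts every layer's output, which is the prompt-free steering the abstract describes.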
14. OpenAI o1 System Card
- Author
OpenAI, Jaech, Aaron, Kalai, Adam, Lerer, Adam, Richardson, Adam, El-Kishky, Ahmed, Low, Aiden, Helyar, Alec, Madry, Aleksander, Beutel, Alex, Carney, Alex, Iftimie, Alex, Karpenko, Alex, Passos, Alex Tachard, Neitz, Alexander, Prokofiev, Alexander, Wei, Alexander, Tam, Allison, Bennett, Ally, Kumar, Ananya, Saraiva, Andre, Vallone, Andrea, Duberstein, Andrew, Kondrich, Andrew, Mishchenko, Andrey, Applebaum, Andy, Jiang, Angela, Nair, Ashvin, Zoph, Barret, Ghorbani, Behrooz, Rossen, Ben, Sokolowsky, Benjamin, Barak, Boaz, McGrew, Bob, Minaiev, Borys, Hao, Botao, Baker, Bowen, Houghton, Brandon, McKinzie, Brandon, Eastman, Brydon, Lugaresi, Camillo, Bassin, Cary, Hudson, Cary, Li, Chak Ming, de Bourcy, Charles, Voss, Chelsea, Shen, Chen, Zhang, Chong, Koch, Chris, Orsinger, Chris, Hesse, Christopher, Fischer, Claudia, Chan, Clive, Roberts, Dan, Kappler, Daniel, Levy, Daniel, Selsam, Daniel, Dohan, David, Farhi, David, Mely, David, Robinson, David, Tsipras, Dimitris, Li, Doug, Oprica, Dragos, Freeman, Eben, Zhang, Eddie, Wong, Edmund, Proehl, Elizabeth, Cheung, Enoch, Mitchell, Eric, Wallace, Eric, Ritter, Erik, Mays, Evan, Wang, Fan, Such, Felipe Petroski, Raso, Filippo, Leoni, Florencia, Tsimpourlas, Foivos, Song, Francis, von Lohmann, Fred, Sulit, Freddie, Salmon, Geoff, Parascandolo, Giambattista, Chabot, Gildas, Zhao, Grace, Brockman, Greg, Leclerc, Guillaume, Salman, Hadi, Bao, Haiming, Sheng, Hao, Andrin, Hart, Bagherinezhad, Hessam, Ren, Hongyu, Lightman, Hunter, Chung, Hyung Won, Kivlichan, Ian, O'Connell, Ian, Osband, Ian, Gilaberte, Ignasi Clavera, Akkaya, Ilge, Kostrikov, Ilya, Sutskever, Ilya, Kofman, Irina, Pachocki, Jakub, Lennon, James, Wei, Jason, Harb, Jean, Twore, Jerry, Feng, Jiacheng, Yu, Jiahui, Weng, Jiayi, Tang, Jie, Yu, Jieqi, Candela, Joaquin Quiñonero, Palermo, Joe, Parish, Joel, Heidecke, Johannes, Hallman, John, Rizzo, John, Gordon, Jonathan, Uesato, Jonathan, Ward, Jonathan, Huizinga, Joost, Wang, Julie, Chen, Kai, Xiao, Kai, Singhal, 
Karan, Nguyen, Karina, Cobbe, Karl, Shi, Katy, Wood, Kayla, Rimbach, Kendra, Gu-Lemberg, Keren, Liu, Kevin, Lu, Kevin, Stone, Kevin, Yu, Kevin, Ahmad, Lama, Yang, Lauren, Liu, Leo, Maksin, Leon, Ho, Leyton, Fedus, Liam, Weng, Lilian, Li, Linden, McCallum, Lindsay, Held, Lindsey, Kuhn, Lorenz, Kondraciuk, Lukas, Kaiser, Lukasz, Metz, Luke, Boyd, Madelaine, Trebacz, Maja, Joglekar, Manas, Chen, Mark, Tintor, Marko, Meyer, Mason, Jones, Matt, Kaufer, Matt, Schwarzer, Max, Shah, Meghan, Yatbaz, Mehmet, Guan, Melody Y., Xu, Mengyuan, Yan, Mengyuan, Glaese, Mia, Chen, Mianna, Lampe, Michael, Malek, Michael, Wang, Michele, Fradin, Michelle, McClay, Mike, Pavlov, Mikhail, Wang, Miles, Wang, Mingxuan, Murati, Mira, Bavarian, Mo, Rohaninejad, Mostafa, McAleese, Nat, Chowdhury, Neil, Ryder, Nick, Tezak, Nikolas, Brown, Noam, Nachum, Ofir, Boiko, Oleg, Murk, Oleg, Watkins, Olivia, Chao, Patrick, Ashbourne, Paul, Izmailov, Pavel, Zhokhov, Peter, Dias, Rachel, Arora, Rahul, Lin, Randall, Lopes, Rapha Gontijo, Gaon, Raz, Miyara, Reah, Leike, Reimar, Hwang, Renny, Garg, Rhythm, Brown, Robin, James, Roshan, Shu, Rui, Cheu, Ryan, Greene, Ryan, Jain, Saachi, Altman, Sam, Toizer, Sam, Toyer, Sam, Miserendino, Samuel, Agarwal, Sandhini, Hernandez, Santiago, Baker, Sasha, McKinney, Scott, Yan, Scottie, Zhao, Shengjia, Hu, Shengli, Santurkar, Shibani, Chaudhuri, Shraman Ray, Zhang, Shuyuan, Fu, Siyuan, Papay, Spencer, Lin, Steph, Balaji, Suchir, Sanjeev, Suvansh, Sidor, Szymon, Broda, Tal, Clark, Aidan, Wang, Tao, Gordon, Taylor, Sanders, Ted, Patwardhan, Tejal, Sottiaux, Thibault, Degry, Thomas, Dimson, Thomas, Zheng, Tianhao, Garipov, Timur, Stasi, Tom, Bansal, Trapit, Creech, Trevor, Peterson, Troy, Eloundou, Tyna, Qi, Valerie, Kosaraju, Vineet, Monaco, Vinnie, Pong, Vitchyr, Fomenko, Vlad, Zheng, Weiyi, Zhou, Wenda, McCabe, Wes, Zaremba, Wojciech, Dubois, Yann, Lu, Yinghai, Chen, Yining, Cha, Young, Bai, Yu, He, Yuchen, Zhang, Yuchen, Wang, Yunyun, Shao, Zheng, and Li, Zhuohan
- Subjects
Computer Science - Artificial Intelligence
- Abstract
The o1 model series is trained with large-scale reinforcement learning to reason using chain of thought. These advanced reasoning capabilities provide new avenues for improving the safety and robustness of our models. In particular, our models can reason about our safety policies in context when responding to potentially unsafe prompts, through deliberative alignment. This leads to state-of-the-art performance on certain benchmarks for risks such as generating illicit advice, choosing stereotyped responses, and succumbing to known jailbreaks. Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence. Our results underscore the need for building robust alignment methods, extensively stress-testing their efficacy, and maintaining meticulous risk management protocols. This report outlines the safety work carried out for the OpenAI o1 and OpenAI o1-mini models, including safety evaluations, external red teaming, and Preparedness Framework evaluations.
- Published
- 2024
15. Memory-minimal quantum generation of stochastic processes: spectral invariants of quantum hidden Markov models
- Author
Zonnios, Magdalini, Boyd, Alec, and Binder, Felix C.
- Subjects
Quantum Physics
- Abstract
Stochastic processes abound in nature and accurately modeling them is essential across the quantitative sciences. They can be described by hidden Markov models (HMMs) or by their quantum extensions (QHMMs). These models explain and give rise to process outputs in terms of an observed system interacting with an unobserved memory. Although there are infinitely many models that can generate a given process, they can vary greatly in their memory requirements. It is therefore of great fundamental and practical importance to identify memory-minimal models. This task is complicated due to both the number of generating models, and the lack of invariant features that determine elements of the set. In general, it is forbiddingly difficult to ascertain that a given model is minimal. Addressing this challenge, we here identify spectral invariants of a process that can be calculated from any model that generates it. This allows us to determine strict bounds on the quantum generative complexity of the process -- its minimal memory requirement. We then show that the bound is raised quadratically when we restrict to classical operations. This is an entirely quantum-coherent effect, as we express precisely, using the resource theory of coherence. Finally, we demonstrate that the classical bound can be violated by quantum models.
- Published
- 2024
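The notion of a spectral invariant computable from any model that generates a given process has a familiar linear-algebra analogue, used here purely as an illustration rather than the paper's quantum construction: the spectrum of a transition operator survives any invertible change of basis, so similar matrices, that is, different presentations of the same dynamics, share eigenvalues. A stdlib-Python sketch:

```python
import math

def eig2(M):
    """Real eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    s = math.sqrt(tr * tr - 4 * det)  # assumes real eigenvalues
    return sorted([(tr - s) / 2, (tr + s) / 2])

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[0.9, 0.1], [0.4, 0.6]]                    # one model's transition matrix
P, Pinv = [[2, 1], [1, 1]], [[1, -1], [-1, 2]]  # invertible change of basis
S = matmul(matmul(P, T), Pinv)                  # a 'different model' of the same process

print(eig2(T), eig2(S))  # both approximately [0.5, 1.0]: the spectrum is the invariant
```

The paper's contribution is to identify analogous invariants strong enough to bound the minimal memory of quantum generators; this miniature only shows why spectra are natural candidates for model-independent quantities.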
16. DAmodel: Hierarchical Bayesian Modelling of DA White Dwarfs for Spectrophotometric Calibration
- Author
Boyd, Benjamin M., Narayan, Gautham, Mandel, Kaisey S., Grayling, Matthew, Berres, Aidan, Li, Mai, Do, Aaron, Saha, Abhijit, Axelrod, Tim, Matheson, Thomas, Olszewski, Edward W., Bohlin, Ralph C., Calamida, Annalisa, Holberg, Jay B., Hubeny, Ivan, Mackenty, John W., Rest, Armin, Sabbi, Elena, and Stubbs, Christopher W.
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Solar and Stellar Astrophysics, Statistics - Applications
- Abstract
We use hierarchical Bayesian modelling to calibrate a network of 32 all-sky faint DA white dwarf (DA WD) spectrophotometric standards ($16.5 < V < 19.5$) alongside the three CALSPEC standards, from 912 Å to 32 μm. The framework is the first of its kind to jointly infer photometric zeropoints and WD parameters ($\log g$, $T_{\text{eff}}$, $A_V$, $R_V$) by simultaneously modelling both photometric and spectroscopic data. We model panchromatic HST/WFC3 UVIS and IR fluxes, HST/STIS UV spectroscopy, and ground-based optical spectroscopy to sub-percent precision. Photometric residuals for the sample are the lowest yet, yielding $<0.004$ mag RMS on average from the UV to the NIR, achieved by jointly inferring time-dependent changes in system sensitivity and WFC3/IR count-rate nonlinearity. Our GPU-accelerated implementation enables efficient sampling via Hamiltonian Monte Carlo, critical for exploring the high-dimensional posterior space. The hierarchical nature of the model enables population analysis of intrinsic WD and dust parameters. Inferred SEDs from this model will be essential for calibrating the James Webb Space Telescope as well as next-generation surveys, including the Vera Rubin Observatory's Legacy Survey of Space and Time and the Nancy Grace Roman Space Telescope., Comment: 32 pages, 24 figures, 5 tables, submitted to MNRAS
- Published
- 2024
17. 3D Convective Urca Process in a Simmering White Dwarf
- Author
Boyd, Brendan, Calder, Alan, Townsley, Dean, and Zingale, Michael
- Subjects
Astrophysics - Solar and Stellar Astrophysics, Astrophysics - High Energy Astrophysical Phenomena
- Abstract
A proposed setting for thermonuclear (Type Ia) supernovae is a white dwarf that has gained mass from a companion to the point of carbon ignition in the core. In the early stages of carbon burning, called the simmering phase, energy released by the reactions in the core drives the formation and growth of a core convection zone. One aspect of this phase is the convective Urca process, a linking of weak nuclear reactions to convection, which may alter the composition and structure of the white dwarf. The convective Urca process is not well understood and requires 3D fluid simulations to properly model the turbulent convection, an inherently 3D process. Because the neutron excess of the fluid both sets and is set by the extent of the convection zone, the realistic steady state can only be determined in simulations with real 3D mixing processes. Additionally, the convection is relatively slow (Mach number less than 0.005), and thus a low Mach number method is needed to model the flow over many convective turnovers. Using the MAESTROeX low Mach number hydrodynamic software, we present the first full-star 3D simulations of the A=23 convective Urca process, spanning hundreds of convective turnover times. Our findings on the extent of mixing across the Urca shell, the characteristic velocities of the flow, the energy loss rates due to neutrino emission, and the structure of the convective boundary can be used to inform 1D stellar models that track the longer-timescale evolution., Comment: 18 pages, 17 figures, Accepted to the Astrophysical Journal
- Published
- 2024
18. Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice
- Author
Cooper, A. Feder, Choquette-Choo, Christopher A., Bogen, Miranda, Jagielski, Matthew, Filippova, Katja, Liu, Ken Ziyu, Chouldechova, Alexandra, Hayes, Jamie, Huang, Yangsibo, Mireshghallah, Niloofar, Shumailov, Ilia, Triantafillou, Eleni, Kairouz, Peter, Mitchell, Nicole, Liang, Percy, Ho, Daniel E., Choi, Yejin, Koyejo, Sanmi, Delgado, Fernando, Grimmelmann, James, Shmatikov, Vitaly, De Sa, Christopher, Barocas, Solon, Cyphert, Amy, Lemley, Mark, boyd, danah, Vaughan, Jennifer Wortman, Brundage, Miles, Bau, David, Neel, Seth, Jacobs, Abigail Z., Terzis, Andreas, Wallach, Hanna, Papernot, Nicolas, and Lee, Katherine
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computers and Society
- Abstract
We articulate fundamental mismatches between technical methods for machine unlearning in Generative AI, and documented aspirations for broader impact that these methods could have for law and policy. These aspirations are both numerous and varied, motivated by issues that pertain to privacy, copyright, safety, and more. For example, unlearning is often invoked as a solution for removing the effects of targeted information from a generative-AI model's parameters, e.g., a particular individual's personal data or in-copyright expression of Spiderman that was included in the model's training data. Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs, e.g., generations that closely resemble a particular individual's data or reflect the concept of "Spiderman." Both of these goals--the targeted removal of information from a model and the targeted suppression of information from a model's outputs--present various technical and substantive challenges. We provide a framework for thinking rigorously about these challenges, which enables us to be clear about why unlearning is not a general-purpose solution for circumscribing generative-AI model behavior in service of broader positive impact. We aim for conceptual clarity and to encourage more thoughtful communication among machine learning (ML), law, and policy experts who seek to develop and apply technical methods for compliance with policy objectives., Comment: Presented at the 2nd Workshop on Generative AI and Law at ICML (July 2024)
- Published
- 2024
19. Discrete-Time Distribution Steering using Monte Carlo Tree Search
- Author
Tzikas, Alexandros E., Kruse, Liam A., Arief, Mansur, Kochenderfer, Mykel J., and Boyd, Stephen
- Subjects
Electrical Engineering and Systems Science - Systems and Control, Computer Science - Robotics, I.2.9, G.3
- Abstract
Optimal control problems with state distribution constraints have attracted interest for their expressivity, but solutions rely on linear approximations. We approach the problem of driving the state of a dynamical system in distribution from a sequential decision-making perspective. We formulate the optimal control problem as an appropriate Markov decision process (MDP), where the actions correspond to the state-feedback control policies. We then solve the MDP using Monte Carlo tree search (MCTS). This renders our method suitable for any dynamics model. A key component of our approach is a novel, easy to compute, distance metric in the distribution space that allows our algorithm to guide the distribution of the state. We experimentally test our algorithm under both linear and nonlinear dynamics., Comment: Submitted to the IEEE Robotics and Automation Letters for possible publication
- Published
- 2024
20. First Measurement of the Muon Neutrino Interaction Cross Section and Flux as a Function of Energy at the LHC with FASER
- Author
-
FASER Collaboration, Abraham, Roshan Mammen, Ai, Xiaocong, Anders, John, Antel, Claire, Ariga, Akitaka, Ariga, Tomoko, Atkinson, Jeremy, Bernlochner, Florian U., Boeckh, Tobias, Boyd, Jamie, Brenner, Lydia, Burger, Angela, Cadoux, Franck, Cardella, Roberto, Casper, David W., Cavanagh, Charlotte, Chen, Xin, Chouhan, Dhruv, Coccaro, Andrea, Débieux, Stephane, D'Onofrio, Monica, Desai, Ansh, Dmitrievsky, Sergey, Dobre, Radu, Eley, Sinead, Favre, Yannick, Fellers, Deion, Feng, Jonathan L., Fenoglio, Carlo Alberto, Ferrere, Didier, Fieg, Max, Filali, Wissal, Firu, Elena, Garabaglu, Ali, Gibson, Stephen, Gonzalez-Sevilla, Sergio, Gornushkin, Yuri, Gwilliam, Carl, Hayakawa, Daiki, Holzbock, Michael, Hsu, Shih-Chieh, Hu, Zhen, Iacobucci, Giuseppe, Inada, Tomohiro, Iodice, Luca, Jakobsen, Sune, Joos, Hans, Kajomovitz, Enrique, Kawahara, Hiroaki, Keyken, Alex, Kling, Felix, Köck, Daniela, Kontaxakis, Pantelis, Kose, Umut, Kotitsa, Rafaella, Kuehn, Susanne, Kugathasan, Thanushan, Levinson, Lorne, Li, Ke, Liu, Jinfeng, Liu, Yi, Lutz, Margaret S., MacDonald, Jack, Magliocca, Chiara, Mäkelä, Toni, McCoy, Lawson, McFayden, Josh, Medina, Andrea Pizarro, Milanesio, Matteo, Moretti, Théo, Nakamura, Mitsuhiro, Nakano, Toshiyuki, Nevay, Laurie, Ohashi, Ken, Otono, Hidetoshi, Pang, Hao, Paolozzi, Lorenzo, Pawan, Pawan, Petersen, Brian, Preda, Titi, Prim, Markus, Queitsch-Maitland, Michaela, Rokujo, Hiroki, Rubbia, André, Sabater-Iglesias, Jorge, Sato, Osamu, Scampoli, Paola, Schmieden, Kristof, Schott, Matthias, Sfyrla, Anna, Sgalaberna, Davide, Shamim, Mansoora, Shively, Savannah, Takubo, Yosuke, Tarannum, Noshin, Theiner, Ondrej, Torrence, Eric, Martinez, Oscar Ivan Valdes, Vasina, Svetlana, Vormwald, Benedikt, Wang, Di, Wang, Yuxiao, Welch, Eli, Wielers, Monika, Xu, Yue, Zahorec, Samuel, Zambito, Stefano, and Zhang, Shunliang
- Subjects
High Energy Physics - Experiment ,High Energy Physics - Phenomenology - Abstract
This letter presents the measurement of the energy-dependent neutrino-nucleon cross section in tungsten and the differential flux of muon neutrinos and anti-neutrinos. The analysis is performed using proton-proton collision data at a center-of-mass energy of $13.6 \, {\rm TeV}$ and corresponding to an integrated luminosity of $(65.6 \pm 1.4) \, \mathrm{fb^{-1}}$. Using the active electronic components of the FASER detector, $338.1 \pm 21.0$ charged current muon neutrino interaction events are identified, with backgrounds from other processes subtracted. We unfold the neutrino events into a fiducial volume corresponding to the sensitive regions of the FASER detector and interpret the results in two ways: We use the expected neutrino flux to measure the cross section, and we use the predicted cross section to measure the neutrino flux. Both results are presented in six bins of neutrino energy, achieving the first differential measurement in the TeV range. The observed distributions align with Standard Model predictions. Using this differential data, we extract the contributions of neutrinos from pion and kaon decays.
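Schematically, a counting measurement of this kind inverts the event-rate relation $N = \sigma \, \Phi \, N_T$; the numbers below are made up for illustration and are not FASER's flux or target values:

```python
def cross_section(n_events, fluence, n_nucleons):
    # Invert N = sigma * Phi * N_T for the cross section (cm^2),
    # with Phi a fluence in neutrinos/cm^2 and N_T the number of target nucleons.
    return n_events / (fluence * n_nucleons)

# Hypothetical inputs: 338 signal events, 1e10 nu/cm^2, 1e27 target nucleons.
sigma = cross_section(338.0, 1e10, 1e27)  # cm^2
```

In the actual analysis this inversion is done per energy bin after unfolding and background subtraction.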
- Published
- 2024
21. A Markowitz Approach to Managing a Dynamic Basket of Moving-Band Statistical Arbitrages
- Author
-
Johansson, Kasper, Schmelzer, Thomas, and Boyd, Stephen
- Subjects
Economics - Econometrics - Abstract
We consider the problem of managing a portfolio of moving-band statistical arbitrages (MBSAs), inspired by the Markowitz optimization framework. We show how to manage a dynamic basket of MBSAs, and illustrate the method on recent historical data, showing that it can perform very well in terms of risk-adjusted return, essentially uncorrelated with the market.
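The Markowitz step at the heart of such a method has a simple unconstrained form, $w = \frac{1}{\gamma}\Sigma^{-1}\mu$; a minimal sketch with made-up inputs (the paper's actual formulation adds constraints and trading costs):

```python
import numpy as np

def markowitz_weights(mu, Sigma, gamma=1.0):
    # Unconstrained mean-variance weights: w = (1/gamma) * Sigma^{-1} mu.
    return np.linalg.solve(Sigma, mu) / gamma

mu = np.array([0.05, 0.03, 0.04])      # hypothetical expected MBSA returns
Sigma = np.diag([0.04, 0.01, 0.02])    # toy diagonal covariance
w = markowitz_weights(mu, Sigma, gamma=5.0)   # approx [0.25, 0.6, 0.4]
```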
- Published
- 2024
22. Simple and Effective Portfolio Construction with Crypto Assets
- Author
-
Johansson, Kasper and Boyd, Stephen
- Subjects
Economics - Econometrics - Abstract
We consider the problem of constructing a portfolio that combines traditional financial assets with crypto assets. We show that despite the documented attributes of crypto assets, such as high volatility, heavy tails, excess kurtosis, and skewness, a simple extension of traditional risk allocation provides robust solutions for integrating these emerging assets into broader investment strategies. Examination of the risk allocation holdings suggests an even simpler method, analogous to the traditional 60/40 stocks/bonds allocation, involving a fixed allocation to crypto and traditional assets, dynamically diluted with cash to achieve a target risk level.
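The dilution idea admits a very small sketch: hold fixed risky weights and scale them against cash until portfolio volatility hits a target (the weights and covariance below are hypothetical, not the paper's):

```python
import numpy as np

def dilute_with_cash(w_risky, Sigma, target_vol):
    # Scale a fixed risky allocation by cash so portfolio vol hits the target.
    vol = float(np.sqrt(w_risky @ Sigma @ w_risky))
    scale = min(1.0, target_vol / vol)   # never lever up in this sketch
    return scale * w_risky, 1.0 - scale  # (risky weights, cash weight)

# Hypothetical fixed mix: 55% stocks, 40% bonds, 5% crypto.
w = np.array([0.55, 0.40, 0.05])
Sigma = np.diag([0.15, 0.05, 0.80]) ** 2   # toy annualized volatilities
risky, cash = dilute_with_cash(w, Sigma, target_vol=0.06)
```

Because scaling the risky sleeve scales volatility linearly, the diluted portfolio hits the risk target exactly whenever dilution is needed.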
- Published
- 2024
23. The Multimodal Universe: Enabling Large-Scale Machine Learning with 100TB of Astronomical Scientific Data
- Author
-
The Multimodal Universe Collaboration, Audenaert, Jeroen, Bowles, Micah, Boyd, Benjamin M., Chemaly, David, Cherinka, Brian, Ciucă, Ioana, Cranmer, Miles, Do, Aaron, Grayling, Matthew, Hayes, Erin E., Hehir, Tom, Ho, Shirley, Huertas-Company, Marc, Iyer, Kartheik G., Jablonska, Maja, Lanusse, Francois, Leung, Henry W., Mandel, Kaisey, Martínez-Galarza, Juan Rafael, Melchior, Peter, Meyer, Lucas, Parker, Liam H., Qu, Helen, Shen, Jeff, Smith, Michael J., Stone, Connor, Walmsley, Mike, and Wu, John F.
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics ,Astrophysics - Astrophysics of Galaxies ,Astrophysics - Solar and Stellar Astrophysics - Abstract
We present the MULTIMODAL UNIVERSE, a large-scale multimodal dataset of scientific astronomical data, compiled specifically to facilitate machine learning research. Overall, the MULTIMODAL UNIVERSE contains hundreds of millions of astronomical observations, constituting 100\,TB of multi-channel and hyper-spectral images, spectra, multivariate time series, as well as a wide variety of associated scientific measurements and "metadata". In addition, we include a range of benchmark tasks representative of standard practices for machine learning methods in astrophysics. This massive dataset will enable the development of large multi-modal models specifically targeted towards scientific applications. All code used to compile the MULTIMODAL UNIVERSE and a description of how to access the data are available at https://github.com/MultimodalUniverse/MultimodalUniverse, Comment: Accepted at NeurIPS Datasets and Benchmarks track
- Published
- 2024
24. Explaining GPT-4's Schema of Depression Using Machine Behavior Analysis
- Author
-
Ganesan, Adithya V, Varadarajan, Vasudha, Lal, Yash Kumar, Eijsbroek, Veerle C., Kjell, Katarina, Kjell, Oscar N. E., Dhanasekaran, Tanuja, Stade, Elizabeth C., Eichstaedt, Johannes C., Boyd, Ryan L., Schwartz, H. Andrew, and Flek, Lucie
- Subjects
Computer Science - Computation and Language - Abstract
Use of large language models such as ChatGPT (GPT-4) for mental health support has grown rapidly, emerging as a promising route to assess and help people with mood disorders, like depression. However, we have a limited understanding of GPT-4's schema of mental disorders, that is, how it internally associates and interprets symptoms. In this work, we leveraged contemporary measurement theory to decode how GPT-4 interrelates depressive symptoms to inform both clinical utility and theoretical understanding. We found GPT-4's assessment of depression: (a) had high overall convergent validity (r = .71 with self-report on 955 samples, and r = .81 with experts' judgments on 209 samples); (b) had moderately high internal consistency (symptom inter-correlates r = .23 to .78) that largely aligned with literature and self-report; except that GPT-4 (c) underemphasized suicidality's -- and overemphasized psychomotor's -- relationship with other symptoms, and (d) had symptom inference patterns that suggest nuanced hypotheses (e.g., sleep and fatigue are influenced by most other symptoms while feelings of worthlessness/guilt are mostly influenced by depressed mood)., Comment: 21 pages, 3 tables, 6 figures, 1 supplementary table, 83 references
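Convergent validity figures like r = .71 are plain Pearson correlations between model-derived severity scores and a criterion measure; a self-contained sketch with toy scores (not the study's data):

```python
import statistics

def pearson_r(x, y):
    # Pearson correlation between two equal-length lists of scores.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Toy PHQ-style severity scores: model ratings vs. self-report.
model = [4, 9, 14, 19, 22, 7]
self_report = [5, 8, 15, 17, 24, 6]
r = pearson_r(model, self_report)
```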
- Published
- 2024
25. BioNeMo Framework: a modular, high-performance library for AI model development in drug discovery
- Author
-
John, Peter St., Lin, Dejun, Binder, Polina, Greaves, Malcolm, Shah, Vega, John, John St., Lange, Adrian, Hsu, Patrick, Illango, Rajesh, Ramanathan, Arvind, Anandkumar, Anima, Brookes, David H, Busia, Akosua, Mahajan, Abhishaike, Malina, Stephen, Prasad, Neha, Sinai, Sam, Edwards, Lindsay, Gaudelet, Thomas, Regep, Cristian, Steinegger, Martin, Rost, Burkhard, Brace, Alexander, Hippe, Kyle, Naef, Luca, Kamata, Keisuke, Armstrong, George, Boyd, Kevin, Cao, Zhonglin, Chou, Han-Yi, Chu, Simon, Costa, Allan dos Santos, Darabi, Sajad, Dawson, Eric, Didi, Kieran, Fu, Cong, Geiger, Mario, Gill, Michelle, Hsu, Darren, Kaushik, Gagan, Korshunova, Maria, Kothen-Hill, Steven, Lee, Youhan, Liu, Meng, Livne, Micha, McClure, Zachary, Mitchell, Jonathan, Moradzadeh, Alireza, Mosafi, Ohad, Nashed, Youssef, Paliwal, Saee, Peng, Yuxing, Rabhi, Sara, Ramezanghorbani, Farhad, Reidenbach, Danny, Ricketts, Camir, Roland, Brian, Shah, Kushal, Shimko, Tyler, Sirelkhatim, Hassan, Srinivasan, Savitha, Stern, Abraham C, Toczydlowska, Dorota, Veccham, Srimukh Prasad, Venanzi, Niccolò Alberto Elia, Vorontsov, Anton, Wilber, Jared, Wilkinson, Isabel, Wong, Wei Jing, Xue, Eva, Ye, Cory, Yu, Xin, Zhang, Yang, Zhou, Guoqing, Zandstein, Becca, Dallago, Christian, Trentini, Bruno, Kucukbenli, Emine, Rvachov, Timur, Calleja, Eddie, Israeli, Johnny, Clifford, Harry, Haukioja, Risto, Haemel, Nicholas, Tretina, Kyle, Tadimeti, Neha, and Costa, Anthony B
- Subjects
Computer Science - Machine Learning ,Quantitative Biology - Biomolecules - Abstract
Artificial Intelligence models encoding biology and chemistry are opening new routes to high-throughput and high-quality in-silico drug development. However, their training increasingly relies on computational scale, with recent protein language models (pLMs) training on hundreds of graphics processing units (GPUs). We introduce the BioNeMo Framework to facilitate the training of computational biology and chemistry AI models across hundreds of GPUs. Its modular design allows the integration of individual components, such as data loaders, into existing workflows and is open to community contributions. We detail technical features of the BioNeMo Framework through use cases such as pLM pre-training and fine-tuning. On 256 NVIDIA A100s, BioNeMo Framework trains a three-billion-parameter BERT-based pLM on over one trillion tokens in 4.2 days. The BioNeMo Framework is open-source and free for everyone to use.
- Published
- 2024
26. Personalized Help for Optimizing Low-Skilled Users' Strategy
- Author
-
Gu, Feng, Wongkamjan, Wichayaporn, Boyd-Graber, Jordan Lee, Kummerfeld, Jonathan K., Peskoff, Denis, and May, Jonathan
- Subjects
Computer Science - Computation and Language - Abstract
AIs can beat humans in game environments; however, how helpful those agents are to humans remains understudied. We augment CICERO, a natural language agent that demonstrates superhuman performance in Diplomacy, to generate both move and message advice based on player intentions. A dozen Diplomacy games with novice and experienced players, with varying advice settings, show that some of the generated advice is beneficial. It helps novices compete with experienced players and in some instances even surpass them. The mere presence of advice can be advantageous, even if players do not follow it., Comment: 9 pages, 3 figures
- Published
- 2024
27. Ten Pillars for Data Meshes
- Author
-
Grossman, Robert L., Boyd, Ceilyn, Do, Nhan, Elbers, Danne C., Fitzsimons, Michael S., Giger, Maryellen L., Juehne, Anthony, Larrick, Brienna, Lee, Jerry S. H., Lin, Dawei, Lukowski, Michael, Myers, James D., Schumm, L. Philip, and Venkat, Aarti
- Subjects
Computer Science - Distributed, Parallel, and Cluster Computing - Abstract
Over the past few years, a growing number of data platforms have emerged, including data commons, data repositories, and databases containing biomedical, environmental, social determinants of health and other data relevant to improving health outcomes. With the growing number of data platforms, interoperating multiple data platforms to form data meshes, data fabrics and other types of data ecosystems reduces data silos, expands data use, and increases the potential for new discoveries. In this paper, we introduce ten principles, which we call pillars, for data meshes. The goals of the principles are 1) to make it easier, faster, and more uniform to set up a data mesh from multiple data platforms; and 2) to make it easier, faster, and more uniform for a data platform to join one or more data meshes. The hope is that the greater availability of data through data meshes will accelerate research and that the greater uniformity of meshes will lower the cost of developing meshes and connecting a data platform to them., Comment: 10 pages, 1 figure
- Published
- 2024
28. Baby Mandelbrot sets and Spines in some one-dimensional subspaces of the parameter space for generalized McMullen Maps
- Author
-
Boyd, Suzanne and Hoeppner, Matthew
- Subjects
Mathematics - Dynamical Systems ,37F10, 32A20 (Primary) 32A19 (Secondary) - Abstract
For the family of complex rational functions of the form $R_{n,c,a}(z) = z^n + \dfrac{a}{z^n}+c$, known as ``Generalized McMullen maps'', for $a\neq 0$ and $n \geq 3$ fixed, we study the boundedness locus in some one-dimensional slices of the $(a,c)$-parameter space, by fixing a parameter or imposing a relation. First, if we fix $c$ with $|c|\geq 6$ while allowing $a$ to vary, assuming a modest lower bound on $n$ in terms of $|c|$, we establish the location in the $a$-plane of $n$ ``baby'' Mandelbrot sets, that is, homeomorphic copies of the original Mandelbrot set. We use polynomial-like maps, introduced by Douady and Hubbard and applied for the subfamily $R_{n,a,0}$ by Devaney. Second, for slices in which $c=ta$, we again observe what look like baby Mandelbrot sets within these slices, and begin the study of this subfamily by establishing a neighborhood containing the boundedness locus., Comment: 38 pages, 13 figures with 32 subfigures
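Boundedness loci like this are probed numerically with an escape-time test on orbits of $R_{n,c,a}$; a minimal sketch, with illustrative parameters chosen by hand ($a=1$, $c=-1$ makes $z=1$ a superattracting fixed point, since $R(1)=1$ and $R'(1)=3-3a=0$):

```python
def escapes(a, c, n=3, z0=1.0 + 0j, max_iter=60, radius=1e6):
    # Iterate R(z) = z^n + a / z^n + c; report whether the orbit escapes
    # (leaves a large disk, or lands on the pole at z = 0).
    z = z0
    for _ in range(max_iter):
        if z == 0 or abs(z) > radius:
            return True
        z = z**n + a / z**n + c
    return False

bounded = escapes(a=1.0, c=-1.0)      # orbit stays at the fixed point z = 1
unbounded = escapes(a=0.001, c=10.0)  # this orbit blows up within a few steps
```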
- Published
- 2024
29. Science and Project Planning for the Forward Physics Facility in Preparation for the 2024-2026 European Particle Physics Strategy Update
- Author
-
Adhikary, Jyotismita, Anchordoqui, Luis A., Ariga, Akitaka, Ariga, Tomoko, Barr, Alan J., Batell, Brian, Bian, Jianming, Boyd, Jamie, Citron, Matthew, De Roeck, Albert, Diwan, Milind V., Feng, Jonathan L., Hill, Christopher S., Jeong, Yu Seon, Kling, Felix, Linden, Steven, Mäkelä, Toni, Mavrokoridis, Kostas, McFayden, Josh, Otono, Hidetoshi, Rojo, Juan, Soldin, Dennis, Stasto, Anna, Trojanowski, Sebastian, Vicenzi, Matteo, and Wu, Wenjie
- Subjects
High Energy Physics - Experiment ,High Energy Physics - Phenomenology ,Physics - Instrumentation and Detectors - Abstract
The recent direct detection of neutrinos at the LHC has opened a new window on high-energy particle physics and highlighted the potential of forward physics for groundbreaking discoveries. In the last year, the physics case for forward physics has continued to grow, and there has been extensive work on defining the Forward Physics Facility and its experiments to realize this physics potential in a timely and cost-effective manner. Following a 2-page Executive Summary, we present the status of the FPF, beginning with the FPF's unique potential to shed light on dark matter, new particles, neutrino physics, QCD, and astroparticle physics. We summarize the current designs for the Facility and its experiments, FASER2, FASER$\nu$2, FORMOSA, and FLArE, and conclude by discussing international partnerships and organization, and the FPF's schedule, budget, and technical coordination., Comment: 32 pages
- Published
- 2024
30. MuCol Milestone Report No. 5: Preliminary Parameters
- Author
-
Accettura, Carlotta, Adrian, Simon, Agarwal, Rohit, Ahdida, Claudia, Aimé, Chiara, Aksoy, Avni, Alberghi, Gian Luigi, Alden, Siobhan, Alfonso, Luca, Amapane, Nicola, Amorim, David, Andreetto, Paolo, Anulli, Fabio, Appleby, Rob, Apresyan, Artur, Asadi, Pouya, Mahmoud, Mohammed Attia, Auchmann, Bernhard, Back, John, Badea, Anthony, Bae, Kyu Jung, Bahng, E. J., Balconi, Lorenzo, Balli, Fabrice, Bandiera, Laura, Barbagallo, Carmelo, Barlow, Roger, Bartoli, Camilla, Bartosik, Nazar, Barzi, Emanuela, Batsch, Fabian, Bauce, Matteo, Begel, Michael, Berg, J. Scott, Bersani, Andrea, Bertarelli, Alessandro, Bertinelli, Francesco, Bertolin, Alessandro, Bhat, Pushpalatha, Bianchi, Clarissa, Bianco, Michele, Bishop, William, Black, Kevin, Boattini, Fulvio, Bogacz, Alex, Bonesini, Maurizio, Bordini, Bernardo, de Sousa, Patricia Borges, Bottaro, Salvatore, Bottura, Luca, Boyd, Steven, Breschi, Marco, Broggi, Francesco, Brunoldi, Matteo, Buffat, Xavier, Buonincontri, Laura, Burrows, Philip Nicholas, Burt, Graeme Campbell, Buttazzo, Dario, Caiffi, Barbara, Calatroni, Sergio, Calviani, Marco, Calzaferri, Simone, Calzolari, Daniele, Cantone, Claudio, Capdevilla, Rodolfo, Carli, Christian, Carrelli, Carlo, Casaburo, Fausto, Casarsa, Massimo, Castelli, Luca, Catanesi, Maria Gabriella, Cavallucci, Lorenzo, Cavoto, Gianluca, Celiberto, Francesco Giovanni, Celona, Luigi, Cemmi, Alessia, Ceravolo, Sergio, Cerri, Alessandro, Cerutti, Francesco, Cesarini, Gianmario, Cesarotti, Cari, Chancé, Antoine, Charitonidis, Nikolaos, Chiesa, Mauro, Chiggiato, Paolo, Ciccarella, Vittoria Ludovica, Puviani, Pietro Cioli, Colaleo, Anna, Colao, Francesco, Collamati, Francesco, Costa, Marco, Craig, Nathaniel, Curtin, David, Damerau, Heiko, Da Molin, Giacomo, D'Angelo, Laura, Dasu, Sridhara, de Blas, Jorge, De Curtis, Stefania, De Gersem, Herbert, Delahaye, Jean-Pierre, Del Moro, Tommaso, Denisov, Dmitri, Denizli, Haluk, Dermisek, Radovan, Valdor, Paula Desiré, Desponds, Charlotte, Di Luzio, Luca, Di Meco, 
Elisa, Diociaiuti, Eleonora, Di Petrillo, Karri Folan, Di Sarcina, Ilaria, Dorigo, Tommaso, Dreimanis, Karlis, Pree, Tristan du, Yildiz, Hatice Duran, Edgecock, Thomas, Fabbri, Siara, Fabbrichesi, Marco, Farinon, Stefania, Ferrand, Guillaume, Somoza, Jose Antonio Ferreira, Fieg, Max, Filthaut, Frank, Fox, Patrick, Franceschini, Roberto, Ximenes, Rui Franqueira, Gallinaro, Michele, Garcia-Sciveres, Maurice, Garcia-Tabares, Luis, Gargiulo, Ruben, Garion, Cedric, Garzelli, Maria Vittoria, Gast, Marco, Generoso, Lisa, Gerber, Cecilia E., Giambastiani, Luca, Gianelle, Alessio, Gianfelice-Wendt, Eliana, Gibson, Stephen, Gilardoni, Simone, Giove, Dario Augusto, Giovinco, Valentina, Giraldin, Carlo, Glioti, Alfredo, Gorzawski, Arkadiusz, Greco, Mario, Grojean, Christophe, Grudiev, Alexej, Gschwendtner, Edda, Gueli, Emanuele, Guilhaudin, Nicolas, Han, Chengcheng, Han, Tao, Hauptman, John Michael, Herndon, Matthew, Hillier, Adrian D, Hillman, Micah, Holmes, Tova Ray, Homiller, Samuel, Jana, Sudip, Jindariani, Sergo, Johannesson, Sofia, Johnson, Benjamin, Jones, Owain Rhodri, Jurj, Paul-Bogdan, Kahn, Yonatan, Kamath, Rohan, Kario, Anna, Karpov, Ivan, Kelliher, David, Kilian, Wolfgang, Kitano, Ryuichiro, Kling, Felix, Kolehmainen, Antti, Kong, K. 
C., Kosse, Jaap, Krintiras, Georgios, Krizka, Karol, Kumar, Nilanjana, Kvikne, Erik, Kyle, Robert, Laface, Emanuele, Lane, Kenneth, Latina, Andrea, Lechner, Anton, Lee, Junghyun, Lee, Lawrence, Lee, Seh Wook, Lefevre, Thibaut, Leonardi, Emanuele, Lerner, Giuseppe, Li, Peiran, Li, Qiang, Li, Tong, Li, Wei, Lindroos, Mats, Lipton, Ronald, Liu, Da, Liu, Miaoyuan, Liu, Zhen, Voti, Roberto Li, Lombardi, Alessandra, Lomte, Shivani, Long, Kenneth, Longo, Luigi, Lorenzo, José, Losito, Roberto, Low, Ian, Lu, Xianguo, Lucchesi, Donatella, Luo, Tianhuan, Lupato, Anna, Ma, Yang, Machida, Shinji, Madlener, Thomas, Magaletti, Lorenzo, Maggi, Marcello, Durand, Helene Mainaud, Maltoni, Fabio, Manczak, Jerzy Mikolaj, Mandurrino, Marco, Marchand, Claude, Mariani, Francesco, Marin, Stefano, Mariotto, Samuele, Martin-Haugh, Stewart, Masullo, Maria Rosaria, Mauro, Giorgio Sebastiano, Mazzolari, Andrea, Mękała, Krzysztof, Mele, Barbara, Meloni, Federico, Meng, Xiangwei, Mentink, Matthias, Métral, Elias, Miceli, Rebecca, Milas, Natalia, Mohammadi, Abdollah, Moll, Dominik, Montella, Alessandro, Morandin, Mauro, Morrone, Marco, Mulder, Tim, Musenich, Riccardo, Nardecchia, Marco, Nardi, Federico, Nenna, Felice, Neuffer, David, Newbold, David, Novelli, Daniel, Olvegård, Maja, Onel, Yasar, Orestano, Domizia, Osborne, John, Otten, Simon, Torres, Yohan Mauricio Oviedo, Paesani, Daniele, Griso, Simone Pagan, Pagani, Davide, Pal, Kincso, Palmer, Mark, Pampaloni, Alessandra, Panci, Paolo, Pani, Priscilla, Papaphilippou, Yannis, Paparella, Rocco, Paradisi, Paride, Passeri, Antonio, Pasternak, Jaroslaw, Pastrone, Nadia, Pellecchia, Antonello, Piccinini, Fulvio, Piekarz, Henryk, Pieloni, Tatiana, Plouin, Juliette, Portone, Alfredo, Potamianos, Karolos, Potdevin, Joséphine, Prestemon, Soren, Puig, Teresa, Qiang, Ji, Quettier, Lionel, Rabemananjara, Tanjona Radonirina, Radicioni, Emilio, Radogna, Raffaella, Rago, Ilaria Carmela, Ratkus, Andris, Resseguie, Elodie, Reuter, Juergen, Ribani, Pier Luigi, 
Riccardi, Cristina, Ricciardi, Stefania, Robens, Tania, Robert, Youri, Rogers, Chris, Rojo, Juan, Romagnoni, Marco, Ronald, Kevin, Rosser, Benjamin, Rossi, Carlo, Rossi, Lucio, Rozanov, Leo, Ruhdorfer, Maximilian, Ruiz, Richard, Saini, Saurabh, Sala, Filippo, Salierno, Claudia, Salmi, Tiina, Salvini, Paola, Salvioni, Ennio, Sammut, Nicholas, Santini, Carlo, Saputi, Alessandro, Sarra, Ivano, Scarantino, Giuseppe, Schneider-Muntau, Hans, Schulte, Daniel, Scifo, Jessica, Sen, Tanaji, Senatore, Carmine, Senol, Abdulkadir, Sertore, Daniele, Sestini, Lorenzo, Rêgo, Ricardo César Silva, Simone, Federica Maria, Skoufaris, Kyriacos, Sorbello, Gino, Sorbi, Massimo, Sorti, Stefano, Soubirou, Lisa, Spataro, David, Queiroz, Farinaldo S., Stamerra, Anna, Stapnes, Steinar, Stark, Giordon, Statera, Marco, Stechauner, Bernd Michael, Su, Shufang, Su, Wei, Sun, Xiaohu, Sytov, Alexei, Tang, Jian, Tang, Jingyu, Taylor, Rebecca, Kate, Herman Ten, Testoni, Pietro, Thiele, Leonard Sebastian, Garcia, Rogelio Tomas, Topp-Mugglestone, Max, Torims, Toms, Torre, Riccardo, Tortora, Luca, Tortora, Ludovico, Trifinopoulos, Sokratis, Udongwo, Sosoho-Abasi, Vai, Ilaria, Valente, Riccardo Umberto, van Rienen, Ursula, Van Weelderen, Rob, Vanwelde, Marion, Velev, Gueorgui, Venditti, Rosamaria, Vendrasco, Adam, Verna, Adriano, Vernassa, Gianluca, Verweij, Arjan, Verwilligen, Piet, Villamizar, Yoxara, Vittorio, Ludovico, Vitulo, Paolo, Vojskovic, Isabella, Wang, Dayong, Wang, Lian-Tao, Wang, Xing, Wendt, Manfred, Widorski, Markus, Wozniak, Mariusz, Wu, Yongcheng, Wulzer, Andrea, Xie, Keping, Yang, Yifeng, Yap, Yee Chinn, Yonehara, Katsuya, Yoo, Hwi Dong, You, Zhengyun, Zanetti, Marco, Zaza, Angela, Zhang, Liang, Zhu, Ruihu, Zlobin, Alexander, Zuliani, Davide, and Zurita, José Francisco
- Subjects
Physics - Accelerator Physics - Abstract
This document comprises a collection of updated preliminary parameters for the key parts of the muon collider. The updated preliminary parameters follow on from the October 2023 Tentative Parameters Report. Particular attention has been given to regions of the facility that are believed to hold greater technical uncertainty in their design and that have a strong impact on the cost and power consumption of the facility. The data is collected from a collaborative spreadsheet and transferred to Overleaf.
- Published
- 2024
- Full Text
- View/download PDF
31. Optimization Algorithm Design via Electric Circuits
- Author
-
Boyd, Stephen P., Parshakova, Tetiana, Ryu, Ernest K., and Suh, Jaewook J.
- Subjects
Mathematics - Optimization and Control ,Computer Science - Machine Learning ,47H05, 90C25, 37M15 - Abstract
We present a novel methodology for convex optimization algorithm design using ideas from electric RLC circuits. Given an optimization problem, the first stage of the methodology is to design an appropriate electric circuit whose continuous-time dynamics converge to the solution of the optimization problem at hand. Then, the second stage is an automated, computer-assisted discretization of the continuous-time dynamics, yielding a provably convergent discrete-time algorithm. Our methodology recovers many classical (distributed) optimization algorithms and enables users to quickly design and explore a wide range of new algorithms with convergence guarantees.
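The two-stage idea can be illustrated in miniature: continuous-time dynamics whose equilibrium solves the problem (here plain gradient flow, a stand-in for the paper's RLC circuit dynamics), then a discretization (forward Euler, standing in for the computer-assisted step), which recovers gradient descent:

```python
import numpy as np

def discretize_gradient_flow(grad, x0, h=0.1, steps=100):
    # Forward-Euler discretization of x' = -grad f(x): x_{k+1} = x_k - h grad f(x_k),
    # i.e. gradient descent with step size h.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * grad(x)
    return x

# Quadratic f(x) = 0.5 x^T A x - b^T x, minimized where A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = discretize_gradient_flow(lambda x: A @ x - b, x0=[0.0, 0.0])
```

For this A and b the minimizer is $A^{-1}b = (0.2, 0.4)$, which the discretized dynamics approach geometrically.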
- Published
- 2024
32. Analytical Expressions for Effective Indices of Modes of Optical Fibers Near and Beyond Cutoff
- Author
-
Antikainen, Aku and Boyd, Robert W.
- Subjects
Physics - Optics ,Mathematical Physics - Abstract
We derive an analytical expression for the effective indices of modes of circular step-index fibers valid near their cutoff wavelengths. The approximation, being a first-order Taylor series of a smooth function, is also valid for the real part of the effective index beyond cutoff where the modes become lossy. The approximation is used to derive certain previously unknown mode properties. For example, it is shown that for non-dispersive materials the EH-mode group index at cutoff, surprisingly, does not depend on wavelength, core radius, or even radial mode order.
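In generic terms, the construction is a first-order Taylor expansion of the effective index about the cutoff wavelength $\lambda_c$, at which $n_{\mathrm{eff}}$ equals the cladding index $n_{\mathrm{cl}}$ (the paper's explicit coefficient is not reproduced here):

```latex
n_{\mathrm{eff}}(\lambda) \;\approx\; n_{\mathrm{cl}}
  \;+\; \left.\frac{d n_{\mathrm{eff}}}{d\lambda}\right|_{\lambda = \lambda_c}
  \,(\lambda - \lambda_c), \qquad \lambda \to \lambda_c .
```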
- Published
- 2024
33. Youth with Specific Learning Disorders: Attitudes and Clinical Decision-Making among Mental Health Trainees
- Author
-
Nola Freeman, Deborah J. Ebener, Jacob Cryderman, and Maegan H. Boyd
- Abstract
Individuals with disabilities often face discrimination due to negative attitudes from others around them. This is true for youth with specific learning disorders (SLD), whose experiences of discrimination can increase the risk for developing mental health concerns. The current study explored whether the presence of an SLD comorbid with mental health concerns and attitudes toward SLD may have an association with clinical decision-making in counselor trainees. The study additionally investigated the role of contact and experience in attitudes and decision-making patterns. Seventy graduate students enrolled in mental health-related programs at a public university in the southern United States participated in the survey study. Findings showed that SLD had an association with clinical decision-making, with counselor trainees rating a vignette depicting a youth with SLD as having more severe mental health concerns than a vignette without an SLD.
- Published
- 2024
- Full Text
- View/download PDF
34. Gestational SARS-CoV-2 Infection in a Ugandan Birth Cohort: High Incidence, Mild Maternal Disease, and Evidence of Association with Transient Infant Stunting.
- Author
-
Jacobson, Karen, Röltgen, Katharina, Lam, Brandon, Nayebare, Patience, Kakuru, Abel, Kizza, Jimmy, Aguti, Miriam, Nankya, Felistas, Briggs, Jessica, Takahashi, Saki, Greenhouse, Bryan, Rodriguez-Barraquer, Isabel, van der Ploeg, Kattria, Wohlstadter, Jacob, Sigal, George, Roh, Michelle, Nankabirwa, Joaniter, Cuu, Gloria, Gaw, Stephanie, Rosenthal, Philip, Kamya, Moses, Ssewanyana, Isaac, Dorsey, Grant, Boyd, Scott, and Jagannathan, Prasanna
- Subjects
Humans ,Female ,COVID-19 ,Pregnancy ,Uganda ,SARS-CoV-2 ,Pregnancy Complications ,Infectious ,Adult ,Immunoglobulin G ,Immunoglobulin M ,Infant ,Newborn ,Antibodies ,Viral ,Growth Disorders ,Incidence ,Birth Cohort ,Infant ,Young Adult - Abstract
Many questions remain about the prevalence and effects of SARS-CoV-2 infection in malaria-endemic African countries like Uganda, particularly in vulnerable groups such as pregnant women. We describe SARS-CoV-2 immunoglobulin (Ig)G and IgM antibody responses and clinical outcomes in mother-infant dyads enrolled in malaria chemoprevention trials in Uganda. From December 2020-February 2022, among 400 unvaccinated pregnant women enrolled at 12-20 weeks gestation and followed through delivery, 128 (32%) were seronegative for anti-SARS-CoV-2 IgG and IgM at enrollment and delivery, 80 (20%) were infected prior to or early in pregnancy, and 192 (48%) were infected or re-infected with SARS-CoV-2 during pregnancy. We observed preferential binding of plasma IgG to Wuhan-Hu-1-like antigens in individuals seroconverting up to early 2021, and to Delta variant antigens in a subset of individuals in mid-2021. Breadth of IgG binding to all variants improved over time, consistent with affinity maturation of the antibody response in the cohort. No women experienced severe respiratory illness during the study. SARS-CoV-2 infection in early pregnancy was associated with lower median length-for-age Z-score at age 3 months compared with no infection or late-pregnancy infection (-1.54 versus -0.37 and -0.51, P = 0.009). These findings suggest that pregnant Ugandan women experienced high levels of SARS-CoV-2 infection without severe respiratory illness. Variant-specific serology testing demonstrated evidence of antibody affinity maturation at the population level. Early gestational SARS-CoV-2 infection was associated with transient shorter stature in early infancy. Further research should explore the significance of this finding and define targeted measures to prevent infection in pregnancy.
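Length-for-age Z-scores of the kind reported here express an infant's length as a deviation from a growth-reference median in SD units; a sketch with hypothetical reference values (not the WHO reference tables):

```python
def length_for_age_z(length_cm, ref_median_cm, ref_sd_cm):
    # WHO-style Z-score: deviation from the reference median in SD units.
    return (length_cm - ref_median_cm) / ref_sd_cm

# Hypothetical 3-month reference: median 61.4 cm, SD 2.2 cm.
z = length_for_age_z(58.0, 61.4, 2.2)   # negative -> shorter than median
```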
- Published
- 2024
35. Brain age identification from diffusion MRI synergistically predicts neurodegenerative disease
- Author
-
Gao, Chenyu, Kim, Michael E., Ramadass, Karthik, Kanakaraj, Praitayini, Krishnan, Aravind R., Saunders, Adam M., Newlin, Nancy R., Lee, Ho Hin, Yang, Qi, Taylor, Warren D., Boyd, Brian D., Beason-Held, Lori L., Resnick, Susan M., Barnes, Lisa L., Bennett, David A., Van Schaik, Katherine D., Archer, Derek B., Hohman, Timothy J., Jefferson, Angela L., Išgum, Ivana, Moyer, Daniel, Huo, Yuankai, Schilling, Kurt G., Zuo, Lianrui, Bao, Shunxing, Khairi, Nazirah Mohd, Li, Zhiyuan, Davatzikos, Christos, and Landman, Bennett A.
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Estimated brain age from magnetic resonance image (MRI) and its deviation from chronological age can provide early insights into potential neurodegenerative diseases, supporting early detection and implementation of prevention strategies. Diffusion MRI (dMRI), a widely used modality for brain age estimation, presents an opportunity to build an earlier biomarker for neurodegenerative disease prediction because it captures subtle microstructural changes that precede more perceptible macrostructural changes. However, the coexistence of macro- and micro-structural information in dMRI raises the question of whether current dMRI-based brain age estimation models are leveraging the intended microstructural information or if they inadvertently rely on the macrostructural information. To develop a microstructure-specific brain age, we propose a method for brain age identification from dMRI that minimizes the model's use of macrostructural information by non-rigidly registering all images to a standard template. Imaging data from 13,398 participants across 12 datasets were used for the training and evaluation. We compare our brain age models, trained with and without macrostructural information minimized, with an architecturally similar T1-weighted (T1w) MRI-based brain age model and two state-of-the-art T1w MRI-based brain age models that primarily use macrostructural information. We observe differences between our dMRI-based brain age and T1w MRI-based brain age across stages of neurodegeneration, with dMRI-based brain age being older than T1w MRI-based brain age in participants transitioning from cognitively normal (CN) to mild cognitive impairment (MCI), but younger in participants already diagnosed with Alzheimer's disease (AD). Approximately 4 years before MCI diagnosis, dMRI-based brain age yields better performance than T1w MRI-based brain ages in predicting transition from CN to MCI.
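The quantity compared across modalities is the brain-age gap, predicted minus chronological age; a minimal sketch with hypothetical ages:

```python
import statistics

def brain_age_gap(predicted, chronological):
    # Per-subject brain-age gap: predicted minus chronological age (years).
    return [p - c for p, c in zip(predicted, chronological)]

# Hypothetical cohort: a positive mean gap suggests older-appearing brains.
gaps = brain_age_gap([72.1, 68.0, 80.5], [70.0, 69.2, 76.3])
mean_gap = statistics.fmean(gaps)
```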
- Published
- 2024
36. Impact of vaporization on drop aerobreakup
- Author
-
Boyd, Bradley, Becker, Sid, and Ling, Yue
- Subjects
Physics - Fluid Dynamics - Abstract
Aerodynamic breakup of vaporizing drops is commonly seen in many spray applications. While it is well known that vaporization can modulate interfacial instabilities, the impact of vaporization on drop aerobreakup is poorly understood. Detailed interface-resolved simulations were performed to systematically study the effect of vaporization, characterized by the Stefan number, on the drop breakup and acceleration for different Weber numbers and density ratios. It is observed that the resulting asymmetric vaporization rates and strengths of Stefan flow on the windward and leeward sides of the drop hinder bag development and prevent drop breakup. The critical Weber number thus generally increases with the Stefan number. The modulation of the boundary layer also contributes to a significant increase of drag coefficient. Numerical experiments were performed to affirm that the drop volume reduction plays a negligible role and the Stefan flow is the dominant reason for the breakup suppression and drag enhancement observed.
- Published
- 2024
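The abstract above is organized around two dimensionless groups: the Weber number (aerodynamic stress over surface tension, which governs breakup) and the Stefan number (sensible heat driving vaporization over latent heat). A minimal sketch of both definitions; all property values below are illustrative round numbers, not taken from the paper.

```python
# Dimensionless groups for vaporizing drop aerobreakup.
# Property values are illustrative examples, not from the paper.

def weber_number(rho_gas, u_rel, diameter, sigma):
    """Gas Weber number: aerodynamic stress relative to surface tension."""
    return rho_gas * u_rel**2 * diameter / sigma

def stefan_number(cp_gas, dT, latent_heat):
    """Stefan number: sensible heat driving vaporization over latent heat."""
    return cp_gas * dT / latent_heat

# Example: a 1 mm water drop in a 100 m/s air stream with a 50 K superheat
We = weber_number(rho_gas=1.2, u_rel=100.0, diameter=1e-3, sigma=0.072)
St = stefan_number(cp_gas=1005.0, dT=50.0, latent_heat=2.26e6)
print(f"We = {We:.0f}, St = {St:.3f}")  # We = 167, St = 0.022
```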
37. GPT-4o System Card
- Author
-
OpenAI, Hurst, Aaron, Lerer, Adam, Goucher, Adam P., Perelman, Adam, Ramesh, Aditya, Clark, Aidan, Ostrow, AJ, Welihinda, Akila, Hayes, Alan, Radford, Alec, Mądry, Aleksander, Baker-Whitcomb, Alex, Beutel, Alex, Borzunov, Alex, Carney, Alex, Chow, Alex, Kirillov, Alex, Nichol, Alex, Paino, Alex, Renzin, Alex, Passos, Alex Tachard, Kirillov, Alexander, Christakis, Alexi, Conneau, Alexis, Kamali, Ali, Jabri, Allan, Moyer, Allison, Tam, Allison, Crookes, Amadou, Tootoochian, Amin, Tootoonchian, Amin, Kumar, Ananya, Vallone, Andrea, Karpathy, Andrej, Braunstein, Andrew, Cann, Andrew, Codispoti, Andrew, Galu, Andrew, Kondrich, Andrew, Tulloch, Andrew, Mishchenko, Andrey, Baek, Angela, Jiang, Angela, Pelisse, Antoine, Woodford, Antonia, Gosalia, Anuj, Dhar, Arka, Pantuliano, Ashley, Nayak, Avi, Oliver, Avital, Zoph, Barret, Ghorbani, Behrooz, Leimberger, Ben, Rossen, Ben, Sokolowsky, Ben, Wang, Ben, Zweig, Benjamin, Hoover, Beth, Samic, Blake, McGrew, Bob, Spero, Bobby, Giertler, Bogo, Cheng, Bowen, Lightcap, Brad, Walkin, Brandon, Quinn, Brendan, Guarraci, Brian, Hsu, Brian, Kellogg, Bright, Eastman, Brydon, Lugaresi, Camillo, Wainwright, Carroll, Bassin, Cary, Hudson, Cary, Chu, Casey, Nelson, Chad, Li, Chak, Shern, Chan Jun, Conger, Channing, Barette, Charlotte, Voss, Chelsea, Ding, Chen, Lu, Cheng, Zhang, Chong, Beaumont, Chris, Hallacy, Chris, Koch, Chris, Gibson, Christian, Kim, Christina, Choi, Christine, McLeavey, Christine, Hesse, Christopher, Fischer, Claudia, Winter, Clemens, Czarnecki, Coley, Jarvis, Colin, Wei, Colin, Koumouzelis, Constantin, Sherburn, Dane, Kappler, Daniel, Levin, Daniel, Levy, Daniel, Carr, David, Farhi, David, Mely, David, Robinson, David, Sasaki, David, Jin, Denny, Valladares, Dev, Tsipras, Dimitris, Li, Doug, Nguyen, Duc Phong, Findlay, Duncan, Oiwoh, Edede, Wong, Edmund, Asdar, Ehsan, Proehl, Elizabeth, Yang, Elizabeth, Antonow, Eric, Kramer, Eric, Peterson, Eric, Sigler, Eric, Wallace, Eric, Brevdo, Eugene, Mays, Evan, Khorasani, 
Farzad, Such, Felipe Petroski, Raso, Filippo, Zhang, Francis, von Lohmann, Fred, Sulit, Freddie, Goh, Gabriel, Oden, Gene, Salmon, Geoff, Starace, Giulio, Brockman, Greg, Salman, Hadi, Bao, Haiming, Hu, Haitang, Wong, Hannah, Wang, Haoyu, Schmidt, Heather, Whitney, Heather, Jun, Heewoo, Kirchner, Hendrik, Pinto, Henrique Ponde de Oliveira, Ren, Hongyu, Chang, Huiwen, Chung, Hyung Won, Kivlichan, Ian, O'Connell, Ian, Osband, Ian, Silber, Ian, Sohl, Ian, Okuyucu, Ibrahim, Lan, Ikai, Kostrikov, Ilya, Sutskever, Ilya, Kanitscheider, Ingmar, Gulrajani, Ishaan, Coxon, Jacob, Menick, Jacob, Pachocki, Jakub, Aung, James, Betker, James, Crooks, James, Lennon, James, Kiros, Jamie, Leike, Jan, Park, Jane, Kwon, Jason, Phang, Jason, Teplitz, Jason, Wei, Jason, Wolfe, Jason, Chen, Jay, Harris, Jeff, Varavva, Jenia, Lee, Jessica Gan, Shieh, Jessica, Lin, Ji, Yu, Jiahui, Weng, Jiayi, Tang, Jie, Yu, Jieqi, Jang, Joanne, Candela, Joaquin Quinonero, Beutler, Joe, Landers, Joe, Parish, Joel, Heidecke, Johannes, Schulman, John, Lachman, Jonathan, McKay, Jonathan, Uesato, Jonathan, Ward, Jonathan, Kim, Jong Wook, Huizinga, Joost, Sitkin, Jordan, Kraaijeveld, Jos, Gross, Josh, Kaplan, Josh, Snyder, Josh, Achiam, Joshua, Jiao, Joy, Lee, Joyce, Zhuang, Juntang, Harriman, Justyn, Fricke, Kai, Hayashi, Kai, Singhal, Karan, Shi, Katy, Karthik, Kavin, Wood, Kayla, Rimbach, Kendra, Hsu, Kenny, Nguyen, Kenny, Gu-Lemberg, Keren, Button, Kevin, Liu, Kevin, Howe, Kiel, Muthukumar, Krithika, Luther, Kyle, Ahmad, Lama, Kai, Larry, Itow, Lauren, Workman, Lauren, Pathak, Leher, Chen, Leo, Jing, Li, Guy, Lia, Fedus, Liam, Zhou, Liang, Mamitsuka, Lien, Weng, Lilian, McCallum, Lindsay, Held, Lindsey, Ouyang, Long, Feuvrier, Louis, Zhang, Lu, Kondraciuk, Lukas, Kaiser, Lukasz, Hewitt, Luke, Metz, Luke, Doshi, Lyric, Aflak, Mada, Simens, Maddie, Boyd, Madelaine, Thompson, Madeleine, Dukhan, Marat, Chen, Mark, Gray, Mark, Hudnall, Mark, Zhang, Marvin, Aljubeh, Marwan, Litwin, Mateusz, Zeng, Matthew, 
Johnson, Max, Shetty, Maya, Gupta, Mayank, Shah, Meghan, Yatbaz, Mehmet, Yang, Meng Jia, Zhong, Mengchao, Glaese, Mia, Chen, Mianna, Janner, Michael, Lampe, Michael, Petrov, Michael, Wu, Michael, Wang, Michele, Fradin, Michelle, Pokrass, Michelle, Castro, Miguel, de Castro, Miguel Oom Temudo, Pavlov, Mikhail, Brundage, Miles, Wang, Miles, Khan, Minal, Murati, Mira, Bavarian, Mo, Lin, Molly, Yesildal, Murat, Soto, Nacho, Gimelshein, Natalia, Cone, Natalie, Staudacher, Natalie, Summers, Natalie, LaFontaine, Natan, Chowdhury, Neil, Ryder, Nick, Stathas, Nick, Turley, Nick, Tezak, Nik, Felix, Niko, Kudige, Nithanth, Keskar, Nitish, Deutsch, Noah, Bundick, Noel, Puckett, Nora, Nachum, Ofir, Okelola, Ola, Boiko, Oleg, Murk, Oleg, Jaffe, Oliver, Watkins, Olivia, Godement, Olivier, Campbell-Moore, Owen, Chao, Patrick, McMillan, Paul, Belov, Pavel, Su, Peng, Bak, Peter, Bakkum, Peter, Deng, Peter, Dolan, Peter, Hoeschele, Peter, Welinder, Peter, Tillet, Phil, Pronin, Philip, Tillet, Philippe, Dhariwal, Prafulla, Yuan, Qiming, Dias, Rachel, Lim, Rachel, Arora, Rahul, Troll, Rajan, Lin, Randall, Lopes, Rapha Gontijo, Puri, Raul, Miyara, Reah, Leike, Reimar, Gaubert, Renaud, Zamani, Reza, Wang, Ricky, Donnelly, Rob, Honsby, Rob, Smith, Rocky, Sahai, Rohan, Ramchandani, Rohit, Huet, Romain, Carmichael, Rory, Zellers, Rowan, Chen, Roy, Chen, Ruby, Nigmatullin, Ruslan, Cheu, Ryan, Jain, Saachi, Altman, Sam, Schoenholz, Sam, Toizer, Sam, Miserendino, Samuel, Agarwal, Sandhini, Culver, Sara, Ethersmith, Scott, Gray, Scott, Grove, Sean, Metzger, Sean, Hermani, Shamez, Jain, Shantanu, Zhao, Shengjia, Wu, Sherwin, Jomoto, Shino, Wu, Shirong, Shuaiqi, Xia, Phene, Sonia, Papay, Spencer, Narayanan, Srinivas, Coffey, Steve, Lee, Steve, Hall, Stewart, Balaji, Suchir, Broda, Tal, Stramer, Tal, Xu, Tao, Gogineni, Tarun, Christianson, Taya, Sanders, Ted, Patwardhan, Tejal, Cunninghman, Thomas, Degry, Thomas, Dimson, Thomas, Raoux, Thomas, Shadwell, Thomas, Zheng, Tianhao, Underwood, Todd, 
Markov, Todor, Sherbakov, Toki, Rubin, Tom, Stasi, Tom, Kaftan, Tomer, Heywood, Tristan, Peterson, Troy, Walters, Tyce, Eloundou, Tyna, Qi, Valerie, Moeller, Veit, Monaco, Vinnie, Kuo, Vishal, Fomenko, Vlad, Chang, Wayne, Zheng, Weiyi, Zhou, Wenda, Manassra, Wesam, Sheu, Will, Zaremba, Wojciech, Patil, Yash, Qian, Yilei, Kim, Yongjik, Cheng, Youlong, Zhang, Yu, He, Yuchen, Zhang, Yuchen, Jin, Yujia, Dai, Yunxing, and Malkov, Yury
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Computers and Society ,Computer Science - Machine Learning ,Computer Science - Sound ,Electrical Engineering and Systems Science - Audio and Speech Processing - Abstract
GPT-4o is an autoregressive omni model that accepts as input any combination of text, audio, image, and video, and generates any combination of text, audio, and image outputs. It's trained end-to-end across text, vision, and audio, meaning all inputs and outputs are processed by the same neural network. GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models. In line with our commitment to building AI safely and consistent with our voluntary commitments to the White House, we are sharing the GPT-4o System Card, which includes our Preparedness Framework evaluations. In this System Card, we provide a detailed look at GPT-4o's capabilities, limitations, and safety evaluations across multiple categories, focusing on speech-to-speech while also evaluating text and image capabilities, and measures we've implemented to ensure the model is safe and aligned. We also include third-party assessments on dangerous capabilities, as well as discussion of potential societal impacts of GPT-4o's text and vision capabilities.
- Published
- 2024
38. Training Better Deep Learning Models Using Human Saliency
- Author
-
Boyd, Aidan, Tinsley, Patrick, Bowyer, Kevin W., and Czajka, Adam
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
This work explores how human judgement about salient regions of an image can be introduced into deep convolutional neural network (DCNN) training. Traditionally, training of DCNNs is purely data-driven. This often results in learning features of the data that are only coincidentally correlated with class labels. Human saliency can guide network training using our proposed new component of the loss function that ConveYs Brain Oversight to Raise Generalization (CYBORG) and penalizes the model for using non-salient regions. This mechanism produces DCNNs achieving higher accuracy and generalization compared to using the same training data without human salience. Experimental results demonstrate that CYBORG applies across multiple network architectures and problem domains (detection of synthetic faces, iris presentation attacks and anomalies in chest X-rays), while requiring significantly less data than training without human saliency guidance. Visualizations show that CYBORG-trained models' saliency is more consistent across independent training runs than traditionally-trained models, and also in better agreement with humans. To lower the cost of collecting human annotations, we also explore using deep learning to provide automated annotations. CYBORG training of CNNs addresses important issues such as reducing the appetite for large training sets, increasing interpretability, and reducing fragility by generalizing better to new types of data.
- Published
- 2024
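The entry above describes adding a human-saliency term to the training loss. A minimal numpy sketch of the general idea, assuming a simplified form: cross-entropy blended with an L2 penalty on the mismatch between a model saliency map and a human annotation map. The blending weight and the L2 penalty are illustrative choices, not the exact CYBORG formulation.

```python
import numpy as np

def saliency_regularized_loss(class_probs, label, model_saliency, human_saliency, alpha=0.5):
    """Sketch of a human-saliency-regularized loss (simplified, not exact CYBORG).

    Blends cross-entropy on the class prediction with an L2 penalty on the
    mismatch between the model's saliency map and a human annotation map.
    """
    ce = -np.log(class_probs[label] + 1e-12)            # classification term
    # Normalize both maps so the penalty compares shape, not scale.
    m = model_saliency / (model_saliency.sum() + 1e-12)
    h = human_saliency / (human_saliency.sum() + 1e-12)
    saliency_penalty = np.sum((m - h) ** 2)
    return alpha * ce + (1 - alpha) * saliency_penalty

probs = np.array([0.1, 0.9])
model_map = np.ones((4, 4))
human_map = np.ones((4, 4))
loss = saliency_regularized_loss(probs, 1, model_map, human_map)
print(loss)  # penalty term is 0 when the maps agree exactly
```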
39. Increasing Interpretability of Neural Networks By Approximating Human Visual Saliency
- Author
-
Boyd, Aidan, Trabelsi, Mohamed, Uzunalioglu, Huseyin, and Kushnir, Dan
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Understanding specifically where a model focuses on within an image is critical for human interpretability of the decision-making process. Deep learning-based solutions are prone to learning coincidental correlations in training datasets, causing over-fitting and reducing the explainability. Recent advances have shown that guiding models to human-defined regions of saliency within individual images significantly increases performance and interpretability. Human-guided models also exhibit greater generalization capabilities, as coincidental dataset features are avoided. Results show that models trained with saliency incorporation display an increase in interpretability of up to 30% over models trained without saliency information. The collection of this saliency information, however, can be costly, laborious and in some cases infeasible. To address this limitation, we propose a combination strategy of saliency incorporation and active learning to reduce the human annotation data required by 80% while maintaining the interpretability and performance increase from human saliency. Extensive experimentation outlines the effectiveness of the proposed approach across five public datasets and six active learning criteria.
- Published
- 2024
40. Reverse Question Answering: Can an LLM Write a Question so Hard (or Bad) that it Can't Answer?
- Author
-
Balepur, Nishant, Gu, Feng, Ravichander, Abhilasha, Feng, Shi, Boyd-Graber, Jordan, and Rudinger, Rachel
- Subjects
Computer Science - Computation and Language - Abstract
Question answering (QA), producing correct answers for input questions, is popular, but we test a reverse question answering (RQA) task: given an input answer, generate a question with that answer. Past work tests QA and RQA separately, but we test them jointly, comparing their difficulty, aiding benchmark design, and assessing reasoning consistency. 16 LLMs run QA and RQA with trivia questions/answers, showing: 1) Versus QA, LLMs are much less accurate in RQA for numerical answers, but slightly more accurate in RQA for textual answers; 2) LLMs often answer their own invalid questions from RQA accurately in QA, so RQA errors are not from knowledge gaps alone; 3) RQA errors correlate with question difficulty and inversely correlate with answer frequencies in the Dolma corpus; and 4) LLMs struggle to give valid multi-hop questions. By finding question and answer types yielding RQA errors, we suggest improvements for LLM RQA reasoning., Comment: In-progress preprint
- Published
- 2024
41. Shining Light on the Dark Sector: Search for Axion-like Particles and Other New Physics in Photonic Final States with FASER
- Author
-
FASER collaboration, Abraham, Roshan Mammen, Ai, Xiaocong, Anders, John, Antel, Claire, Ariga, Akitaka, Ariga, Tomoko, Atkinson, Jeremy, Bernlochner, Florian U., Bianchi, Emma, Boeckh, Tobias, Boyd, Jamie, Brenner, Lydia, Burger, Angela, Cadoux, Franck, Cardella, Roberto, Casper, David W., Cavanagh, Charlotte, Chen, Xin, Cho, Eunhyung, Chouhan, Dhruv, Coccaro, Andrea, Débieux, Stephane, D'Onofrio, Monica, Desai, Ansh, Dmitrievsky, Sergey, Dobre, Radu, Eley, Sinead, Favre, Yannick, Fellers, Deion, Feng, Jonathan L., Fenoglio, Carlo Alberto, Ferrere, Didier, Fieg, Max, Filali, Wissal, Firu, Elena, Galantay, Edward, Garabaglu, Ali, Gibson, Stephen, Gonzalez-Sevilla, Sergio, Gornushkin, Yuri, Gwilliam, Carl, Hayakawa, Daiki, Holzbock, Michael, Hsu, Shih-Chieh, Hu, Zhen, Iacobucci, Giuseppe, Inada, Tomohiro, Iodice, Luca, Jakobsen, Sune, Joos, Hans, Kajomovitz, Enrique, Kawahara, Hiroaki, Keyken, Alex, Kling, Felix, Köck, Daniela, Kontaxakis, Pantelis, Kose, Umut, Kotitsa, Rafaella, Kuehn, Susanne, Kugathasan, Thanushan, Levinson, Lorne, Li, Ke, Liu, Jinfeng, Liu, Yi, Lutz, Margaret S., MacDonald, Jack, Magliocca, Chiara, Mäkelä, Toni, McCoy, Lawson, McFayden, Josh, Medina, Andrea Pizarro, Milanesio, Matteo, Moretti, Théo, Nakamura, Mitsuhiro, Nakano, Toshiyuki, Nevay, Laurie, Ohashi, Ken, Otono, Hidetoshi, Paolozzi, Lorenzo, Petersen, Brian, Preda, Titi, Prim, Markus, Queitsch-Maitland, Michaela, Rokujo, Hiroki, Rubbia, André, Sabater-Iglesias, Jorge, Sato, Osamu, Scampoli, Paola, Schmieden, Kristof, Schott, Matthias, Sfyrla, Anna, Sgalaberna, Davide, Shamim, Mansoora, Shively, Savannah, Takubo, Yosuke, Tarannum, Noshin, Theiner, Ondrej, Torrence, Eric, Martinez, Oscar Ivan Valdes, Vasina, Svetlana, Vormwald, Benedikt, Wang, Di, Wang, Yuxiao, Welch, Eli, Xu, Yue, Zahorec, Samuel, Zambito, Stefano, and Zhang, Shunliang
- Subjects
High Energy Physics - Experiment - Abstract
The first FASER search for a light, long-lived particle decaying into a pair of photons is reported. The search uses LHC proton-proton collision data at $\sqrt{s}=13.6~\text{TeV}$ collected in 2022 and 2023, corresponding to an integrated luminosity of $57.7~\text{fb}^{-1}$. A model with axion-like particles (ALPs) dominantly coupled to weak gauge bosons is the primary target. Signal events are characterised by high-energy deposits in the electromagnetic calorimeter and no signal in the veto scintillators. One event is observed, compared to a background expectation of $0.44 \pm 0.39$ events, which is entirely dominated by neutrino interactions. World-leading constraints on ALPs are obtained for masses up to $300~\text{MeV}$ and couplings to the Standard Model W gauge boson, $g_{aWW}$, around $10^{-4}$ GeV$^{-1}$, testing a previously unexplored region of parameter space. Other new particle models that lead to the same experimental signature, including ALPs coupled to gluons or photons, U(1)$_B$ gauge bosons, up-philic scalars, and a Type-I two-Higgs doublet model, are also considered for interpretation, and new constraints on previously viable parameter space are presented in this paper., Comment: 37 pages, 22 figures
- Published
- 2024
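The search above is a counting experiment: 1 event observed on an expected background of 0.44. A simple stdlib sketch of how a 90% CL Poisson upper limit on the signal follows from such counts, solving P(n ≤ n_obs | s + b) = 0.10 by bisection. This ignores the background uncertainty and is not the collaboration's actual statistical treatment.

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, b, cl=0.90):
    """Signal s such that P(N <= n_obs | s + b) = 1 - cl, found by bisection."""
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + b) > 1 - cl:
            lo = mid   # CDF still too large: need more signal
        else:
            hi = mid
    return 0.5 * (lo + hi)

s90 = upper_limit(n_obs=1, b=0.44)
print(f"s_90 ≈ {s90:.2f} signal events")  # ≈ 3.45
```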
42. Do great minds think alike? Investigating Human-AI Complementarity in Question Answering with CAIMIRA
- Author
-
Gor, Maharshi, Daumé III, Hal, Zhou, Tianyi, and Boyd-Graber, Jordan
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Recent advancements of large language models (LLMs) have led to claims of AI surpassing humans in natural language processing (NLP) tasks such as textual understanding and reasoning. This work investigates these assertions by introducing CAIMIRA, a novel framework rooted in item response theory (IRT) that enables quantitative assessment and comparison of problem-solving abilities of question-answering (QA) agents: humans and AI systems. Through analysis of over 300,000 responses from ~70 AI systems and 155 humans across thousands of quiz questions, CAIMIRA uncovers distinct proficiency patterns in knowledge domains and reasoning skills. Humans outperform AI systems in knowledge-grounded abductive and conceptual reasoning, while state-of-the-art LLMs like GPT-4 and LLaMA show superior performance on targeted information retrieval and fact-based reasoning, particularly when information gaps are well-defined and addressable through pattern matching or data retrieval. These findings highlight the need for future QA tasks to focus on questions that challenge not only higher-order reasoning and scientific thinking, but also demand nuanced linguistic interpretation and cross-contextual knowledge application, helping advance AI developments that better emulate or complement human cognitive abilities in real-world problem-solving., Comment: To appear at EMNLP 2024 (Main)
- Published
- 2024
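CAIMIRA is rooted in item response theory. For readers unfamiliar with IRT, the textbook two-parameter logistic (2PL) model gives the probability that an agent with ability θ answers an item with difficulty b and discrimination a correctly. This is the standard 2PL form, not CAIMIRA's exact multidimensional formulation.

```python
import math

def two_pl(theta, a, b):
    """Two-parameter logistic IRT model: P(correct answer | ability theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An agent whose ability equals the item's difficulty answers correctly
# half the time, regardless of the discrimination parameter.
print(two_pl(theta=0.0, a=1.5, b=0.0))   # 0.5
print(two_pl(theta=2.0, a=1.5, b=0.0))   # high-ability agent, easy item
```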
43. Large-Scale GNSS Spreading Code Optimization
- Author
-
Yang, Alan, Mina, Tara, Boyd, Stephen, and Gao, Grace
- Subjects
Electrical Engineering and Systems Science - Signal Processing - Abstract
We propose a bit-flip descent method for optimizing binary spreading codes with large family sizes and long lengths, addressing the challenges of large-scale code design in GNSS and emerging PNT applications. The method iteratively flips code bits to improve the codes' auto- and cross-correlation properties. In our proposed method, bits are selected by sampling a small set of candidate bits and choosing the one that offers the best improvement in performance. The method leverages the fact that the incremental impact of a bit flip on the auto- and cross-correlation may be efficiently computed without recalculating the entire function. We apply this method to two code design problems modeled after the GPS L1 C/A and Galileo E1 codes, demonstrating rapid convergence to low-correlation codes. The proposed approach offers a powerful tool for developing spreading codes that meet the demanding requirements of modern and future satellite navigation systems.
- Published
- 2024
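A toy version of the procedure described above can be sketched in a few lines. This simplified variant recomputes the full correlation objective for each candidate (rather than using the paper's efficient incremental update) and accepts any improving flip among the sampled candidates; the code family size, length, and objective below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_cost(codes):
    """Mean squared periodic auto-correlation sidelobes and cross-correlations."""
    K, L = codes.shape
    cost = 0.0
    for i in range(K):
        for j in range(i, K):
            # Periodic correlation at every lag via circular shifts.
            c = np.array([np.dot(codes[i], np.roll(codes[j], s)) for s in range(L)])
            if i == j:
                c = c[1:]           # drop the zero-lag auto-correlation peak
            cost += np.mean(c.astype(float) ** 2)
    return cost

def bit_flip_descent(codes, iters=200, candidates=8):
    """Greedy descent: sample a few candidate bits, keep any improving flip."""
    codes = codes.copy()
    K, L = codes.shape
    best = correlation_cost(codes)
    for _ in range(iters):
        for k, l in [(rng.integers(K), rng.integers(L)) for _ in range(candidates)]:
            codes[k, l] *= -1
            c = correlation_cost(codes)
            if c < best:
                best = c            # keep the flip
            else:
                codes[k, l] *= -1   # undo the flip
    return codes, best

codes = rng.choice([-1, 1], size=(3, 31))   # toy family: 3 codes of length 31
start = correlation_cost(codes)
codes, end = bit_flip_descent(codes)
print(f"correlation cost: {start:.1f} -> {end:.1f}")
```

The descent never accepts a worsening flip, so the final cost is at most the starting cost; the paper's incremental-update trick is what makes this affordable at GNSS-scale lengths.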
44. Efficient characterization of spatial Schmidt modes of multiphoton entangled states produced from high-gain parametric down-conversion
- Author
-
Amooei, Mahtab, Kulkarni, Girish, Upham, Jeremy, and Boyd, Robert W.
- Subjects
Quantum Physics ,Physics - Optics - Abstract
The ability to efficiently characterize the spatial correlations of entangled states of light is critical for applications of many quantum technologies such as quantum imaging. Here, we demonstrate highly efficient theoretical and experimental characterization of the spatial Schmidt modes and the Schmidt spectrum of bright multiphoton entangled states of light produced from high-gain parametric down-conversion. In contrast to previous studies, we exploit the approximate quasihomogeneity and isotropy of the signal field and dramatically reduce the numerical computations involved in the experimental and theoretical characterization procedures. In our particular case, where our experimental data sets consist of 5000 single-shot images of 256×256 pixels each, our method reduced the overall computation time by two orders of magnitude. This speed-up would be even more dramatic for larger input sizes. Consequently, we are able to rapidly characterize the Schmidt modes and Schmidt spectrum for a range of pump amplitudes and study their variation with increasing gain. Our results clearly reveal the broadening of the Schmidt modes and narrowing of the Schmidt spectrum for increasing gain with good agreement between theory and experiment., Comment: 8 pages, 4 figures
- Published
- 2024
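For context, the Schmidt modes and Schmidt spectrum of a bipartite field are simply the singular vectors and normalized squared singular values of the joint amplitude. A toy numpy sketch on a 1D grid, using an illustrative correlated-Gaussian amplitude (the widths are made up, not from the experiment):

```python
import numpy as np

# Toy joint amplitude psi(x1, x2): broad in x1+x2, narrow in x1-x2,
# i.e. strongly correlated transverse positions. Parameters are illustrative.
x = np.linspace(-3, 3, 200)
X1, X2 = np.meshgrid(x, x, indexing="ij")
psi = np.exp(-((X1 + X2) ** 2) / 4 - ((X1 - X2) ** 2) / 0.25)

# Schmidt decomposition = SVD of the discretized joint amplitude.
u, s, vt = np.linalg.svd(psi)
schmidt_spectrum = s**2 / np.sum(s**2)   # normalized Schmidt coefficients

# Effective number of modes (Schmidt number K).
K = 1.0 / np.sum(schmidt_spectrum**2)
print(f"leading coefficient: {schmidt_spectrum[0]:.3f}, Schmidt number K = {K:.2f}")
```

The paper's contribution is avoiding this brute-force decomposition on large measured correlation functions by exploiting quasihomogeneity and isotropy of the field.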
45. Search for proton decay via $p\rightarrow{e^+\eta}$ and $p\rightarrow{\mu^+\eta}$ with a 0.37 Mton-year exposure of Super-Kamiokande
- Author
-
Collaboration, Super-Kamiokande, Taniuchi, N., Abe, K., Abe, S., Asaoka, Y., Bronner, C., Harada, M., Hayato, Y., Hiraide, K., Hosokawa, K., Ieki, K., Ikeda, M., Kameda, J., Kanemura, Y., Kaneshima, R., Kashiwagi, Y., Kataoka, Y., Miki, S., Mine, S., Miura, M., Moriyama, S., Nakahata, M., Nakayama, S., Noguchi, Y., Pronost, G., Okamoto, K., Sato, K., Sekiya, H., Shiba, H., Shimizu, K., Shiozawa, M., Sonoda, Y., Suzuki, Y., Takeda, A., Takemoto, Y., Takenaka, A., Tanaka, H., Watanabe, S., Yano, T., Kajita, T., Okumura, K., Tashiro, T., Tomiya, T., Wang, X., Yoshida, S., Megias, G. D., Fernandez, P., Labarga, L., Ospina, N., Zaldivar, B., Pointon, B. W., Kearns, E., Mirabito, J., Raaf, J. L., Wan, L., Wester, T., Bian, J., Griskevich, N. J., Kropp, W. R., Locke, S., Smy, M. B., Sobel, H. W., Takhistov, V., Yankelevich, A., Hill, J., Jang, M. C., Kim, J. Y., Lee, S. H., Lim, I. T., Moon, D. H., Park, R. G., Yang, B. S., Bodur, B., Scholberg, K., Walter, C. W., Beauchêne, A., Bernard, L., Coffani, A., Drapier, O., Hedri, S. El, Giampaolo, A., Mueller, Th. A., Santos, A. D., Paganini, P., Rogly, R., Nakamura, T., Jang, J. S., Machado, L. N., Learned, J. G., Choi, K., Iovine, N., Cao, S., Anthony, L. H. V., Martin, D., Prouse, N. W., Scott, M., Sztuc, A. A., Uchida, Y., Berardi, V., Calabria, N. F., Catanesi, M. G., Radicioni, E., Langella, A., De Rosa, G., Collazuol, G., Feltre, M., Iacob, F., Lamoureux, M., Mattiazzi, M., Ludovici, L., Gonin, M., Périssé, L., Quilain, B., Fujisawa, C., Horiuchi, S., Kobayashi, M., Liu, Y. M., Maekawa, Y., Nishimura, Y., Okazaki, R., Akutsu, R., Friend, M., Hasegawa, T., Ishida, T., Kobayashi, T., Jakkapu, M., Matsubara, T., Nakadaira, T., Nakamura, K., Oyama, Y., Yrey, A. Portocarrero, Sakashita, K., Sekiguchi, T., Tsukamoto, T., Bhuiyan, N., Boschi, T., Burton, G. T., Di Lodovico, F., Gao, J., Goldsack, A., Katori, T., Migenda, J., Ramsden, R. M., Taani, M., Xie, Z., Zsoldos, S., Kotsar, Y., Ozaki, H., Suzuki, A. 
T., Takagi, Y., Takeuchi, Y., Yamamoto, S., Zhong, H., Feng, J., Feng, L., Han, S., Hu, J. R., Hu, Z., Kawaue, M., Kikawa, T., Mori, M., Nakaya, T., Wendell, R. A., Yasutome, K., Jenkins, S. J., McCauley, N., Mehta, P., Tarrant, A., Wilking, M. J., Fukuda, Y., Itow, Y., Menjo, H., Ninomiya, K., Yoshioka, Y., Lagoda, J., Mandal, M., Mijakowski, P., Prabhu, Y. S., Zalipska, J., Jia, M., Jiang, J., Jung, C. K., Shi, W., Yanagisawa, C., Hino, Y., Ishino, H., Ito, S., Kitagawa, H., Koshio, Y., Ma, W., Nakanishi, F., Sakai, S., Tada, T., Tano, T., Ishizuka, T., Barr, G., Barrow, D., Cook, L., Samani, S., Wark, D., Holin, A., Nova, F., Jung, S., Yang, J. Y., Yoo, J., Fannon, J. E. P., Kneale, L., Malek, M., McElwee, J. M., Stone, O., Stowell, P., Thiesse, M. D., Thompson, L. F., Wilson, S. T., Okazawa, H., Lakshmi, S. M., Kim, S. B., Kwon, E., Lee, M. W., Seo, J. W., Yu, I., Ichikawa, A. K., Nakamura, K. D., Tairafune, S., Nishijima, K., Koshiba, M., Eguchi, A., Goto, S., Iwamoto, K., Mizuno, Y., Muro, T., Nakagiri, K., Nakajima, Y., Shima, S., Watanabe, E., Yokoyama, M., de Perio, P., Fujita, S., Jesús-Valls, C., Martens, K., Marti, Ll., Tsui, K. M., Vagins, M. R., Xia, J., Izumiyama, S., Kuze, M., Matsumoto, R., Terada, K., Asaka, R., Inomoto, M., Ishitsuka, M., Ito, H., Kinoshita, T., Ommura, Y., Shigeta, N., Shinoki, M., Suganuma, T., Yamauchi, K., Yoshida, T., Nakano, Y., Martin, J. F., Tanaka, H. A., Towstego, T., Gaur, R., Gousy-Leblanc, V., Hartz, M., Konaka, A., Li, X., Chen, S., Wu, Y., Xu, B. D., Zhang, A. Q., Zhang, B., Posiadala-Zezula, M., Boyd, S. B., Edwards, R., Hadley, D., Nicholson, M., O'Flaherty, M., Richards, B., Ali, A., Jamieson, B., Amanai, S., Minamino, A., Pintaudi, G., Sano, S., Sasaki, R., Shibayama, R., Shimamura, R., Suzuki, S., and Wada, K.
- Subjects
High Energy Physics - Experiment - Abstract
A search for proton decay into $e^+/\mu^+$ and an $\eta$ meson has been performed using data from a 0.373 Mton$\cdot$year exposure (6050.3 live days) of Super-Kamiokande. Compared to previous searches, this work introduces an improved model of the intranuclear $\eta$ interaction cross section, resulting in a factor of two reduction in uncertainties from this source and a $\sim$10% increase in signal efficiency. No significant data excess was found above the expected number of atmospheric neutrino background events, resulting in no indication of proton decay into either mode. Lower limits on the proton partial lifetime of $1.4\times\mathrm{10^{34}~years}$ for $p\rightarrow e^+\eta$ and $7.3\times\mathrm{10^{33}~years}$ for $p\rightarrow \mu^+\eta$ at the 90% C.L. were set. These limits are around 1.5 times longer than those from our previous study and are the most stringent to date.
- Published
- 2024
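As an order-of-magnitude check on how such limits scale with exposure: a partial-lifetime bound from a counting experiment is roughly τ > (exposure in proton-years) × ε / S₉₀. The sketch below uses the quoted 0.373 Mton-year exposure, but the efficiency and signal limit are assumed round numbers for illustration, not Super-Kamiokande's actual values.

```python
# Back-of-envelope partial-lifetime limit:
#   tau > exposure_in_proton_years * efficiency / s_90
# Efficiency and s_90 are illustrative assumptions, not the paper's values.

AVOGADRO = 6.022e23
# Water: 1 kton = 1e9 g, molar mass 18 g/mol, 10 protons per H2O molecule.
protons_per_kton_water = 1e9 / 18.0 * AVOGADRO * 10

exposure_kton_years = 373.0      # 0.373 Mton-years, as quoted in the abstract
efficiency = 0.1                 # assumed overall signal efficiency
s_90 = 2.44                      # illustrative 90% CL signal limit

exposure_proton_years = exposure_kton_years * protons_per_kton_water
tau_limit = exposure_proton_years * efficiency / s_90
print(f"tau > {tau_limit:.1e} years")   # ~1e33-1e34 years, same scale as the quoted limits
```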
46. SciDoc2Diagrammer-MAF: Towards Generation of Scientific Diagrams from Documents guided by Multi-Aspect Feedback Refinement
- Author
-
Mondal, Ishani, Li, Zongxia, Hou, Yufang, Natarajan, Anandhavelu, Garimella, Aparna, and Boyd-Graber, Jordan
- Subjects
Computer Science - Computation and Language - Abstract
Automating the creation of scientific diagrams from academic papers can significantly streamline the development of tutorials, presentations, and posters, thereby saving time and accelerating the process. Current text-to-image models struggle with generating accurate and visually appealing diagrams from long-context inputs. We propose SciDoc2Diagram, a task that extracts relevant information from scientific papers and generates diagrams, along with a benchmarking dataset, SciDoc2DiagramBench. We develop a multi-step pipeline SciDoc2Diagrammer that generates diagrams based on user intentions using intermediate code generation. We observed that initial diagram drafts were often incomplete or unfaithful to the source, leading us to develop SciDoc2Diagrammer-Multi-Aspect-Feedback (MAF), a refinement strategy that significantly enhances factual correctness and visual appeal and outperforms existing models on both automatic and human judgement., Comment: Code and data available at https://github.com/Ishani-Mondal/SciDoc2DiagramGeneration
- Published
- 2024
47. Athermal phonon collection efficiency in diamond crystals for low mass dark matter detection
- Author
-
Kim, I., Kurinsky, N. A., Kagan, H., Boyd, S. T. P., and Kim, G. B.
- Subjects
Physics - Instrumentation and Detectors ,High Energy Physics - Experiment ,Nuclear Experiment - Abstract
We explored the efficacy of lab-grown diamonds as potential target materials for the direct detection of sub-GeV dark matter (DM) using metallic magnetic calorimeters (MMCs). Diamond, with its excellent phononic properties and the low atomic mass of its constituent carbon, can play a crucial role in detecting low-mass dark matter particles. The relatively long electron-hole pair lifetime inside the crystal may provide discrimination power between DM-induced nuclear recoil events and background-induced electron recoil events. Utilizing the fast response times of the MMCs and their unique geometric versatility, we deployed a novel methodology for quantifying phonon dynamics inside diamond crystals. We demonstrated that lab-grown diamond crystals fabricated via the chemical vapor deposition (CVD) technique can satisfy the stringent quality requirements for sub-GeV dark matter searches. The high-quality polycrystalline CVD diamond showed a superior athermal phonon collection efficiency compared to that of the reference sapphire crystal, and achieved an energy resolution of 62.7 eV at the 8.05 keV copper fluorescence line. With this energy resolution, we explored the low-energy range below 100 eV and confirmed the existence of the so-called low-energy excess (LEE) reported by multiple cryogenic experiments.
- Published
- 2024
48. Informative Input Design for Dynamic Mode Decomposition
- Author
-
Ott, Joshua, Kochenderfer, Mykel J., and Boyd, Stephen
- Subjects
Electrical Engineering and Systems Science - Systems and Control - Abstract
Efficiently estimating system dynamics from data is essential for minimizing data collection costs and improving model performance. This work addresses the challenge of designing future control inputs to maximize information gain, thereby improving the efficiency of the system identification process. We propose an approach that integrates informative input design into the Dynamic Mode Decomposition with control (DMDc) framework, which is well-suited for high-dimensional systems. By formulating an approximate convex optimization problem that minimizes the trace of the estimation error covariance matrix, we are able to efficiently reduce uncertainty in the model parameters while respecting constraints on the system states and control inputs. This method outperforms traditional techniques like Pseudo-Random Binary Sequences (PRBS) and orthogonal multisines, which do not adapt to the current system model and often gather redundant information. We validate our approach using aircraft and fluid dynamics simulations to demonstrate the practical applicability and effectiveness of our method. Our results show that strategically planning control inputs based on the current model enhances the accuracy of system identification while requiring less data. Furthermore, we provide our implementation and simulation interfaces as an open-source software package, facilitating further research, development, and use by industry practitioners.
- Published
- 2024
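The DMDc fitting step at the core of the framework is a least-squares regression of next states on current states and inputs, x_{k+1} ≈ A x_k + B u_k. A minimal numpy sketch of that step alone (the informative input design is the paper's contribution and is not shown); the system matrices below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth linear system x_{k+1} = A x_k + B u_k (arbitrary example values).
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

# Collect snapshots under random excitation inputs.
n_steps = 200
X = np.zeros((2, n_steps + 1))
U = rng.standard_normal((1, n_steps))
for k in range(n_steps):
    X[:, k + 1] = A_true @ X[:, k] + (B_true @ U[:, k:k+1]).ravel()

# DMDc core: [A B] = X' * pinv([X; U]), a least-squares fit of the snapshots.
Omega = np.vstack([X[:, :-1], U])
AB = X[:, 1:] @ np.linalg.pinv(Omega)
A_est, B_est = AB[:, :2], AB[:, 2:]
print(np.round(A_est, 3))   # recovers A_true exactly in this noise-free setting
```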
49. Fitting Multilevel Factor Models
- Author
-
Parshakova, Tetiana, Hastie, Trevor, and Boyd, Stephen
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning ,Computer Science - Mathematical Software ,Statistics - Computation ,62H12 ,G.4 - Abstract
We examine a special case of the multilevel factor model, with covariance given by a multilevel low rank (MLR) matrix (Parshakova et al., 2023). We develop a novel, fast implementation of the expectation-maximization (EM) algorithm, tailored for multilevel factor models, to maximize the likelihood of the observed data. This method accommodates any hierarchical structure and maintains linear time and storage complexities per iteration. This is achieved through a new efficient technique for computing the inverse of the positive definite MLR matrix. We show that the inverse of an invertible PSD MLR matrix is also an MLR matrix with the same sparsity in factors, and we use the recursive Sherman-Morrison-Woodbury matrix identity to obtain the factors of the inverse. Additionally, we present an algorithm that computes the Cholesky factorization of an expanded matrix with linear time and space complexities, yielding the covariance matrix as its Schur complement. This paper is accompanied by an open-source package that implements the proposed methods.
- Published
- 2024
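The identity the abstract leans on can be checked at a single level: a factor-model covariance is diagonal-plus-low-rank, D + FFᵀ, and the Sherman-Morrison-Woodbury formula gives its inverse using only an r×r solve, preserving the diagonal-plus-rank-r structure. A small numpy verification (dimensions and values are arbitrary; the paper applies this recursively across hierarchy levels):

```python
import numpy as np

rng = np.random.default_rng(2)

# Single-level factor-model covariance: Sigma = D + F F^T with D diagonal PD.
n, r = 6, 2
F = rng.standard_normal((n, r))
D = np.diag(rng.uniform(1.0, 2.0, size=n))

# Woodbury: (D + F F^T)^{-1} = D^{-1} - D^{-1} F (I + F^T D^{-1} F)^{-1} F^T D^{-1}
Dinv = np.diag(1.0 / np.diag(D))
core = np.linalg.inv(np.eye(r) + F.T @ Dinv @ F)   # only an r x r inverse
Sigma_inv = Dinv - Dinv @ F @ core @ F.T @ Dinv

# Check against the direct n x n inverse.
assert np.allclose(Sigma_inv, np.linalg.inv(D + F @ F.T))
print("Woodbury inverse matches the direct inverse")
```

Note that Sigma_inv is itself diagonal-plus-rank-r, which is the structural fact the paper exploits to keep per-iteration cost linear.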
50. Photometry and spectroscopy of a deep Algol-like minimum of WW Vul in 2016
- Author
-
Boyd, David
- Subjects
Astrophysics - Solar and Stellar Astrophysics - Abstract
We report analysis of photometry and spectroscopy of a deep Algol-like minimum of the pre-main-sequence star WW Vul in July and August 2016. This revealed substantial reddening due to absorption by circumstellar material. After dereddening, our spectra of WW Vul were consistent with spectral type A3V throughout the event. H$\alpha$ is normally in emission in WW Vul. During the minimum, H$\alpha$ emission dropped by ~30% and the FWHM of the H$\alpha$ line was reduced by ~15%., Comment: 6 pages, 8 figures, accepted for publication in Journal of the AAVSO
- Published
- 2024