491 results for "Modi, Chirag"
Search Results
2. Effect of levamisole pretreatment against cyclophosphamide-induced histopathological changes in testes of rats
- Author
- Chauhan, Juhi M., Patel, Urvesh D., Trangadia, Bhavesh J., Modi, Chirag M., and Patel, Harshad B.
- Published
- 2021
3. Histopathological evaluation of ovary and intestine of adult zebrafish exposed to acrylamide
- Author
- Radheyshyam, Modi, Chirag M., Kachot, Rajesh L., Patel, Harshad B., Patel, Urvesh D., and Kariya, Mayank H.
- Published
- 2022
4. Anti-inflammatory and antioxidant potential of Azadirachta indica flower
- Author
- Modi, Chirag M., Patel, Harshad B., Patel, Urvesh D., Paida, Bhulesh V., Ramchandani, Divyam., Patel, Pavan M., and Patel, Harsh R.
- Published
- 2021
5. EigenVI: score-based variational inference with orthogonal function expansions
- Author
- Cai, Diana, Modi, Chirag, Margossian, Charles C., Gower, Robert M., Blei, David M., and Saul, Lawrence K.
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning, Statistics - Computation
- Abstract
We develop EigenVI, an eigenvalue-based approach for black-box variational inference (BBVI). EigenVI constructs its variational approximations from orthogonal function expansions. For distributions over $\mathbb{R}^D$, the lowest order term in these expansions provides a Gaussian variational approximation, while higher-order terms provide a systematic way to model non-Gaussianity. These approximations are flexible enough to model complex distributions (multimodal, asymmetric), but they are simple enough that one can calculate their low-order moments and draw samples from them. EigenVI can also model other types of random variables (e.g., nonnegative, bounded) by constructing variational approximations from different families of orthogonal functions. Within these families, EigenVI computes the variational approximation that best matches the score function of the target distribution by minimizing a stochastic estimate of the Fisher divergence. Notably, this optimization reduces to solving a minimum eigenvalue problem, so that EigenVI effectively sidesteps the iterative gradient-based optimizations that are required for many other BBVI algorithms. (Gradient-based methods can be sensitive to learning rates, termination criteria, and other tunable hyperparameters.) We use EigenVI to approximate a variety of target distributions, including a benchmark suite of Bayesian models from posteriordb. On these distributions, we find that EigenVI is more accurate than existing methods for Gaussian BBVI., Comment: 25 pages, 9 figures. Advances in Neural Information Processing Systems (NeurIPS), 2024
- Published
- 2024
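The eigenvalue reduction described in the EigenVI abstract above can be illustrated generically: an objective that is quadratic in the variational coefficients is minimized, over unit-norm coefficient vectors, by the eigenvector with the smallest eigenvalue. This is a toy numpy sketch of that reduction, not the paper's algorithm; the matrix `M` below is a stand-in for whatever quadratic form the stochastic Fisher-divergence estimate produces.

```python
import numpy as np

# Toy sketch (not EigenVI itself): objectives that are quadratic in the
# variational coefficients reduce to
#     minimize  w^T M w   subject to  ||w|| = 1,
# which is solved by the eigenvector of M with the smallest eigenvalue.

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))      # stand-in for score-based "features"
M = A.T @ A / 50.0                    # symmetric positive semi-definite

eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
w_opt = eigvecs[:, 0]                 # unit-norm minimizer
min_val = eigvals[0]

# Any other unit vector attains a value at least as large (Rayleigh quotient).
w_rand = rng.standard_normal(8)
w_rand /= np.linalg.norm(w_rand)
assert w_rand @ M @ w_rand >= min_val - 1e-12
```

Because the minimizer comes from a single symmetric eigensolve, there are no learning rates or termination criteria to tune, which is the property the abstract highlights.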
6. Batch, match, and patch: low-rank approximations for score-based variational inference
- Author
- Modi, Chirag, Cai, Diana, and Saul, Lawrence K.
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning, Statistics - Computation
- Abstract
Black-box variational inference (BBVI) scales poorly to high-dimensional problems when it is used to estimate a multivariate Gaussian approximation with a full covariance matrix. In this paper, we extend the batch-and-match (BaM) framework for score-based BBVI to problems where it is prohibitively expensive to store such covariance matrices, let alone to estimate them. Unlike classical algorithms for BBVI, which use gradient descent to minimize the reverse Kullback-Leibler divergence, BaM uses more specialized updates to match the scores of the target density and its Gaussian approximation. We extend the updates for BaM by integrating them with a more compact parameterization of full covariance matrices. In particular, borrowing ideas from factor analysis, we add an extra step to each iteration of BaM -- a patch -- that projects each newly updated covariance matrix into a more efficiently parameterized family of diagonal plus low-rank matrices. We evaluate this approach on a variety of synthetic target distributions and real-world problems in high-dimensional inference.
- Published
- 2024
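The "patch" step described in the abstract above projects each updated covariance into a diagonal plus low-rank family. The sketch below shows one simple way such a projection could look, via a truncated eigendecomposition; the paper's actual projection step and parameter choices may differ.

```python
import numpy as np

def patch_diag_low_rank(Sigma, rank):
    # Keep the top `rank` eigendirections as the low-rank factor U, and
    # absorb the diagonal of the remaining (PSD) residual into D.
    d = Sigma.shape[0]
    eigvals, eigvecs = np.linalg.eigh(Sigma)   # ascending eigenvalues
    top = slice(d - rank, d)
    U = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))
    residual = Sigma - U @ U.T                 # PSD by construction
    D = np.diag(np.maximum(np.diag(residual), 1e-12))
    return D, U

rng = np.random.default_rng(1)
B = rng.standard_normal((30, 30))
Sigma = B @ B.T / 30.0                 # a dense covariance matrix
D, U = patch_diag_low_rank(Sigma, rank=5)
approx = D + U @ U.T                   # d*r + d parameters instead of d^2
```

The payoff is storage: a diagonal plus rank-r factorization needs O(d·r) numbers rather than the O(d²) of a full covariance, which is the regime the abstract targets.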
7. ATLAS: Adapting Trajectory Lengths and Step-Size for Hamiltonian Monte Carlo
- Author
- Modi, Chirag
- Subjects
- Statistics - Computation, Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
Hamiltonian Monte Carlo (HMC) and its auto-tuned variant, the No-U-Turn Sampler (NUTS), can struggle to accurately sample distributions with complex geometries, e.g., varying curvature, due to their constant step size for leapfrog integration and fixed mass matrix. In this work, we develop a strategy to locally adapt the step size parameter of HMC at every iteration by evaluating a low-rank approximation of the local Hessian and estimating its largest eigenvalue. We combine it with a strategy to similarly adapt the trajectory length by monitoring the no-U-turn condition, resulting in an adaptive sampler, ATLAS: adapting trajectory length and step-size. We further use a delayed rejection framework for making multiple proposals that improves the computational efficiency of ATLAS, and develop an approach for automatically tuning its hyperparameters during warmup. We compare ATLAS with state-of-the-art samplers like NUTS on a suite of synthetic and real-world examples, and show that i) unlike NUTS, ATLAS is able to accurately sample difficult distributions with complex geometries, ii) it is computationally competitive with NUTS for simpler distributions, and iii) it is more robust to the tuning of hyperparameters., Comment: Code available at https://github.com/modichirag/AtlasSampler
- Published
- 2024
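The step-size adaptation described in the ATLAS abstract can be sketched as follows. The finite-difference Hessian-vector product, the plain power iteration, and the scaling constant `c` are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def hvp(grad_fn, x, v, h=1e-5):
    """Finite-difference Hessian-vector product of the potential at x."""
    return (grad_fn(x + h * v) - grad_fn(x - h * v)) / (2 * h)

def local_step_size(grad_fn, x, n_iter=50, c=0.5, seed=0):
    # Power iteration on the local Hessian to estimate its largest
    # eigenvalue, then scale the leapfrog step size like 1/sqrt(lambda_max).
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(x.shape)
    v /= np.linalg.norm(v)
    lam = 1.0
    for _ in range(n_iter):
        w = hvp(grad_fn, x, v)
        lam = np.linalg.norm(w)
        v = w / lam
    return c / np.sqrt(lam)

# Example: ill-conditioned Gaussian with potential U(x) = 0.5 x^T diag(1,100) x.
scales = np.array([1.0, 100.0])
grad_U = lambda x: scales * x          # gradient of the potential
eps = local_step_size(grad_U, np.array([1.0, 1.0]))
# lambda_max is about 100, so eps lands near 0.5 / sqrt(100) = 0.05
```

The point of the heuristic is that in high-curvature regions lambda_max grows, so the step size shrinks automatically instead of staying fixed as in standard HMC.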
8. Teaching dark matter simulations to speak the halo language
- Author
- Pandey, Shivam, Lanusse, Francois, Modi, Chirag, and Wandelt, Benjamin D.
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Instrumentation and Methods for Astrophysics
- Abstract
We develop a transformer-based conditional generative model for discrete point objects and their properties. We use it to build a model for populating cosmological simulations with gravitationally collapsed structures called dark matter halos. Specifically, we condition our model on the dark matter distribution obtained from fast, approximate simulations to recover the correct three-dimensional positions and masses of individual halos. This leads to a first model that can recover the statistical properties of the halos at small scales to better than the 3% level using an accelerated dark matter simulation. This trained model can then be applied to simulations with significantly larger volumes which would otherwise be computationally prohibitive with traditional simulations, and also provides a crucial missing link in making end-to-end differentiable cosmological simulations. The code, named GOTHAM (Generative cOnditional Transformer for Halo's Auto-regressive Modeling), is publicly available at \url{https://github.com/shivampcosmo/GOTHAM}., Comment: 6 pages, 2 figures. Accepted by the Structured Probabilistic Inference & Generative Modeling workshop of ICML 2024
- Published
- 2024
9. CHARM: Creating Halos with Auto-Regressive Multi-stage networks
- Author
- Pandey, Shivam, Modi, Chirag, Wandelt, Benjamin D., Bartlett, Deaglan J., Bayer, Adrian E., Bryan, Greg L., Ho, Matthew, Lavaux, Guilhem, Makinen, T. Lucas, and Villaescusa-Navarro, Francisco
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Astrophysics of Galaxies, Statistics - Machine Learning
- Abstract
To maximize the amount of information extracted from cosmological datasets, simulations that accurately represent these observations are necessary. However, traditional simulations that evolve particles under gravity by estimating particle-particle interactions (N-body simulations) are computationally expensive and prohibitive to scale to the large volumes and resolutions necessary for the upcoming datasets. Moreover, modeling the distribution of galaxies typically involves identifying virialized dark matter halos, which is also a time- and memory-consuming process for large N-body simulations, further exacerbating the computational cost. In this study, we introduce CHARM, a novel method for creating mock halo catalogs by matching the spatial, mass, and velocity statistics of halos directly from the large-scale distribution of the dark matter density field. We develop multi-stage neural spline flow-based networks to learn this mapping at redshift z=0.5 directly with computationally cheaper low-resolution particle mesh simulations instead of relying on the high-resolution N-body simulations. We show that the mock halo catalogs and painted galaxy catalogs have the same statistical properties as obtained from $N$-body simulations in both real space and redshift space. Finally, we use these mock catalogs for cosmological inference from the redshift-space galaxy power spectrum, bispectrum, and wavelet-based statistics using simulation-based inference, performing the first inference with accelerated forward model simulations and finding unbiased cosmological constraints with well-calibrated posteriors. The code was developed as part of the Simons Collaboration on Learning the Universe and is publicly available at \url{https://github.com/shivampcosmo/CHARM}., Comment: 12 pages and 8 figures. This is a Learning the Universe Publication
- Published
- 2024
10. Ameliorating potential of combined treatment of quercetin and curcumin against cadmium-induced testicular damage in rats
- Author
- Makwana, Chandrasinh N., Patel, Urvesh D., Rao, S Shreesha, Ladumor, Vipul C., Modi, Chirag M., Patel, Harshad B., Pandya, Kajal B., and Bhatt, Punit R.
- Published
- 2020
11. Hematological and biochemical profile after repeated dose intravenous administration of marbofloxacin in broiler chickens
- Author
- Patel, Harshad B., Patel, Urvesh D., Modi, Chirag M., and Javia, Bhavesh B.
- Published
- 2020
12. Sampling From Multiscale Densities With Delayed Rejection Generalized Hamiltonian Monte Carlo
- Author
- Turok, Gilad, Modi, Chirag, and Carpenter, Bob
- Subjects
- Statistics - Computation
- Abstract
With the increasing prevalence of probabilistic programming languages, Hamiltonian Monte Carlo (HMC) has become the mainstay of applied Bayesian inference. However, HMC still struggles to sample from densities with multiscale geometry: a large step size is needed to efficiently explore low curvature regions while a small step size is needed to accurately explore high curvature regions. We introduce the delayed rejection generalized HMC (DR-G-HMC) sampler that overcomes this challenge by employing dynamic step size selection, inspired by differential equation solvers. In a single sampling iteration, DR-G-HMC sequentially makes proposals with geometrically decreasing step sizes if necessary. This simulates Hamiltonian dynamics with increasing fidelity that, in high curvature regions, generates proposals with a higher chance of acceptance. DR-G-HMC also makes generalized HMC competitive by decreasing the number of rejections which otherwise cause inefficient backtracking and prevent directed movement. We present experiments to demonstrate that DR-G-HMC (1) correctly samples from multiscale densities, (2) makes generalized HMC methods competitive with the state-of-the-art No-U-Turn sampler, and (3) is robust to tuning parameters., Comment: 9 pages, 5 figures
- Published
- 2024
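The mechanism behind the retries in the abstract above can be demonstrated directly: halving the leapfrog step size (at fixed total integration time) shrinks the energy error, which raises the Metropolis acceptance probability. The sketch below shows only this fidelity effect on a Gaussian target; it omits the delayed-rejection acceptance correction that makes the full sampler valid.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, n_steps):
    # Standard leapfrog integrator for Hamiltonian dynamics.
    q, p = q.copy(), p.copy()
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

U = lambda q: 0.5 * q @ q            # standard Gaussian potential
grad_U = lambda q: q
H = lambda q, p: U(q) + 0.5 * p @ p  # total energy

def mean_energy_error(eps, n_steps, n_trials=20):
    errs = []
    for trial in range(n_trials):
        rng = np.random.default_rng(trial)
        q0, p0 = rng.standard_normal(5), rng.standard_normal(5)
        q1, p1 = leapfrog(q0, p0, grad_U, eps, n_steps)
        errs.append(abs(H(q1, p1) - H(q0, p0)))
    return float(np.mean(errs))

# Geometrically decreasing step sizes at fixed total integration time T = 2.
errors = [mean_energy_error(eps=0.5 / 2**k, n_steps=4 * 2**k) for k in range(4)]
# Halving eps shrinks the energy error (roughly 4x per halving), so the
# Metropolis acceptance probability min(1, exp(-dH)) climbs toward 1.
```

In DR-G-HMC a retried proposal reuses this effect: a rejection at a coarse step size triggers a fresh proposal at a finer one, with the acceptance probabilities corrected so the chain still targets the right distribution.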
13. A Parameter-Masked Mock Data Challenge for Beyond-Two-Point Galaxy Clustering Statistics
- Author
- Collaboration, Beyond-2pt, Krause, Elisabeth, Kobayashi, Yosuke, Salcedo, Andrés N., Ivanov, Mikhail M., Abel, Tom, Akitsu, Kazuyuki, Angulo, Raul E., Cabass, Giovanni, Contarini, Sofia, Cuesta-Lazaro, Carolina, Hahn, ChangHoon, Hamaus, Nico, Jeong, Donghui, Modi, Chirag, Nguyen, Nhat-Minh, Nishimichi, Takahiro, Paillas, Enrique, Ibañez, Marcos Pellejero, Philcox, Oliver H. E., Pisani, Alice, Schmidt, Fabian, Tanaka, Satoshi, Verza, Giovanni, Yuan, Sihan, and Zennaro, Matteo
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
The last few years have seen the emergence of a wide array of novel techniques for analyzing high-precision data from upcoming galaxy surveys, which aim to extend the statistical analysis of galaxy clustering data beyond the linear regime and the canonical two-point (2pt) statistics. We test and benchmark some of these new techniques in a community data challenge "Beyond-2pt", initiated during the Aspen 2022 Summer Program "Large-Scale Structure Cosmology beyond 2-Point Statistics," whose first round of results we present here. The challenge dataset consists of high-precision mock galaxy catalogs for clustering in real space, redshift space, and on a light cone. Participants in the challenge have developed end-to-end pipelines to analyze mock catalogs and extract unknown ("masked") cosmological parameters of the underlying $\Lambda$CDM models with their methods. The methods represented are density-split clustering, nearest neighbor statistics, BACCO power spectrum emulator, void statistics, LEFTfield field-level inference using effective field theory (EFT), and joint power spectrum and bispectrum analyses using both EFT and simulation-based inference. In this work, we review the results of the challenge, focusing on problems solved, lessons learned, and future research needed to perfect the emerging beyond-2pt approaches. The unbiased parameter recovery demonstrated in this challenge by multiple statistics and the associated modeling and inference frameworks supports the credibility of cosmology constraints from these methods. The challenge data set is publicly available and we welcome future submissions from methods that are not yet represented., Comment: New submissions welcome! Challenge data available at https://github.com/ANSalcedo/Beyond2ptMock
- Published
- 2024
14. SimBIG: Cosmological Constraints using Simulation-Based Inference of Galaxy Clustering with Marked Power Spectra
- Author
- Massara, Elena, Hahn, ChangHoon, Eickenberg, Michael, Ho, Shirley, Hou, Jiamin, Lemos, Pablo, Modi, Chirag, Dizgah, Azadeh Moradinezhad, Parker, Liam, and Blancard, Bruno Régaldo-Saint
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
We present the first $\Lambda$CDM cosmological analysis performed on a galaxy survey using marked power spectra. The marked power spectrum is the two-point function of a marked field, where galaxies are weighted by a function that depends on their local density. The presence of the mark leads these statistics to contain higher-order information of the original galaxy field, making them a good candidate to exploit the non-Gaussian information of a galaxy catalog. In this work we make use of SimBIG, a forward modeling framework for galaxy clustering analyses, and perform simulation-based inference using normalizing flows to infer the posterior distribution of the $\Lambda$CDM cosmological parameters. We consider different mark configurations (ways to weight the galaxy field) and deploy them in the SimBIG pipeline to analyze the corresponding marked power spectra measured from a subset of the BOSS galaxy sample. We analyze the redshift-space marked power spectra decomposed in $\ell = 0, 2, 4$ multipoles and include scales up to the non-linear regime. Among the various mark configurations considered, the ones that give the most stringent cosmological constraints produce posterior median and $68\%$ confidence limits on the growth of structure parameters equal to $\Omega_m=0.273^{+0.040}_{-0.030}$ and $\sigma_8=0.777^{+0.077}_{-0.071}$. Compared to a perturbation theory analysis using the power spectrum of the same dataset, the SimBIG marked power spectra constraints on $\sigma_8$ are up to $1.2\times$ tighter, while no improvement is seen for the other cosmological parameters., Comment: 15 pages, 6 figures
- Published
- 2024
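The marked power spectrum analyzed above is straightforward to sketch: weight an overdensity field by a mark that depends on the local density, then measure the power spectrum of the weighted field. The mark function, its parameters, and the use of the unsmoothed field below are illustrative choices for this sketch; the paper's actual estimator and mark configurations are more involved.

```python
import numpy as np

def mark(delta, delta_s=0.25, p=2.0):
    # A density-dependent weight that up-weights low-density regions;
    # delta_s and p are illustrative values, and a real analysis would
    # typically evaluate the mark on a smoothed density field.
    return ((1 + delta_s) / (1 + delta_s + delta)) ** p

def binned_power(field, n_bins=8):
    # Isotropically binned power spectrum of a field on a periodic grid.
    fk = np.fft.fftn(field)
    power = (np.abs(fk) ** 2 / field.size).ravel()
    freqs = np.fft.fftfreq(field.shape[0])
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    bins = np.linspace(0.0, kmag.max(), n_bins + 1)
    idx = np.clip(np.digitize(kmag, bins) - 1, 0, n_bins - 1)
    pk = np.bincount(idx, weights=power, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return pk / np.maximum(counts, 1)

rng = np.random.default_rng(3)
delta = 0.1 * rng.standard_normal((16, 16, 16))  # toy overdensity field
marked = mark(delta) * (1 + delta)               # mark-weighted density field
pk_marked = binned_power(marked)
pk_plain = binned_power(1 + delta)
```

Because the mark is a nonlinear function of the density, the two-point function of the marked field mixes in higher-order information about the original field, which is what makes the statistic sensitive beyond the ordinary power spectrum.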
15. Neural Simulation-Based Inference of the Neutron Star Equation of State directly from Telescope Spectra
- Author
- Brandes, Len, Modi, Chirag, Ghosh, Aishik, Farrell, Delaney, Lindblom, Lee, Heinrich, Lukas, Steiner, Andrew W., Weber, Fridolin, and Whiteson, Daniel
- Subjects
- Astrophysics - High Energy Astrophysical Phenomena, Astrophysics - Instrumentation and Methods for Astrophysics, General Relativity and Quantum Cosmology, High Energy Physics - Phenomenology, Nuclear Theory
- Abstract
Neutron stars provide a unique opportunity to study strongly interacting matter under extreme density conditions. The intricacies of matter inside neutron stars and their equation of state are not directly visible, but determine bulk properties, such as mass and radius, which affect the star's thermal X-ray emissions. However, the telescope spectra of these emissions are also affected by the stellar distance, hydrogen column, and effective surface temperature, which are not always well-constrained. Uncertainties on these nuisance parameters must be accounted for when making a robust estimation of the equation of state. In this study, we develop a novel methodology that, for the first time, can infer the full posterior distribution of both the equation of state and nuisance parameters directly from telescope observations. This method relies on the use of neural likelihood estimation, in which normalizing flows use samples of simulated telescope data to learn the likelihood of the neutron star spectra as a function of these parameters, coupled with Hamiltonian Monte Carlo methods to efficiently sample from the corresponding posterior distribution. Our approach surpasses the accuracy of previous methods, improves the interpretability of the results by providing access to the full posterior distribution, and naturally scales to a growing number of neutron star observations expected in the coming years.
- Published
- 2024
16. Batch and match: black-box variational inference with a score-based divergence
- Author
- Cai, Diana, Modi, Chirag, Pillaud-Vivien, Loucas, Margossian, Charles C., Gower, Robert M., Blei, David M., and Saul, Lawrence K.
- Subjects
- Statistics - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Statistics - Computation
- Abstract
Most leading implementations of black-box variational inference (BBVI) are based on optimizing a stochastic evidence lower bound (ELBO). But such approaches to BBVI often converge slowly due to the high variance of their gradient estimates and their sensitivity to hyperparameters. In this work, we propose batch and match (BaM), an alternative approach to BBVI based on a score-based divergence. Notably, this score-based divergence can be optimized by a closed-form proximal update for Gaussian variational families with full covariance matrices. We analyze the convergence of BaM when the target distribution is Gaussian, and we prove that in the limit of infinite batch size the variational parameter updates converge exponentially quickly to the target mean and covariance. We also evaluate the performance of BaM on Gaussian and non-Gaussian target distributions that arise from posterior inference in hierarchical and deep generative models. In these experiments, we find that BaM typically converges in fewer (and sometimes significantly fewer) gradient evaluations than leading implementations of BBVI based on ELBO maximization., Comment: 49 pages, 14 figures. To appear in the Proceedings of the 41st International Conference on Machine Learning (ICML), 2024
- Published
- 2024
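The kind of score-based divergence the abstract above describes can be sketched on a batch of samples: compare the target's score with the score of a Gaussian q = N(mu, Sigma). The sketch below uses a plain (unweighted) Fisher-style divergence; BaM's actual objective and its closed-form proximal update are more specific than this.

```python
import numpy as np

def gaussian_score(x, mu, Sigma_inv):
    # Score (gradient of log density) of N(mu, Sigma) at each row of x.
    return -(x - mu) @ Sigma_inv

def score_divergence(xs, target_score, mu, Sigma):
    # Batch estimate of a Fisher-style divergence between the target's
    # score and the Gaussian approximation's score.
    Sigma_inv = np.linalg.inv(Sigma)
    diffs = target_score(xs) - gaussian_score(xs, mu, Sigma_inv)
    return float(np.mean(np.sum(diffs**2, axis=1)))

# Target: standard Gaussian, whose score is simply -x.
target_score = lambda xs: -xs
rng = np.random.default_rng(4)
xs = rng.standard_normal((100, 3))

exact = score_divergence(xs, target_score, np.zeros(3), np.eye(3))
wrong = score_divergence(xs, target_score, np.ones(3), 2 * np.eye(3))
# The divergence vanishes when q matches the target and is positive otherwise.
```

Matching scores rather than maximizing an ELBO is what lets BaM avoid high-variance stochastic gradients; the divergence above is the crude batch estimate that such updates are built around.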
17. LtU-ILI: An All-in-One Framework for Implicit Inference in Astrophysics and Cosmology
- Author
- Ho, Matthew, Bartlett, Deaglan J., Chartier, Nicolas, Cuesta-Lazaro, Carolina, Ding, Simon, Lapel, Axel, Lemos, Pablo, Lovell, Christopher C., Makinen, T. Lucas, Modi, Chirag, Pandya, Viraj, Pandey, Shivam, Perez, Lucia A., Wandelt, Benjamin, and Bryan, Greg L.
- Subjects
- Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Astrophysics of Galaxies, Computer Science - Machine Learning
- Abstract
This paper presents the Learning the Universe Implicit Likelihood Inference (LtU-ILI) pipeline, a codebase for rapid, user-friendly, and cutting-edge machine learning (ML) inference in astrophysics and cosmology. The pipeline includes software for implementing various neural architectures, training schemata, priors, and density estimators in a manner easily adaptable to any research workflow. It includes comprehensive validation metrics to assess posterior estimate coverage, enhancing the reliability of inferred results. Additionally, the pipeline is easily parallelizable and is designed for efficient exploration of modeling hyperparameters. To demonstrate its capabilities, we present real applications across a range of astrophysics and cosmology problems, such as: estimating galaxy cluster masses from X-ray photometry; inferring cosmology from matter power spectra and halo point clouds; characterizing progenitors in gravitational wave signals; capturing physical dust parameters from galaxy colors and luminosities; and establishing properties of semi-analytic models of galaxy formation. We also include exhaustive benchmarking and comparisons of all implemented methods as well as discussions about the challenges and pitfalls of ML inference in astronomical sciences. All code and examples are made publicly available at https://github.com/maho3/ltu-ili., Comment: 22 pages, 10 figures, accepted in the Open Journal of Astrophysics. Code available at https://github.com/maho3/ltu-ili
- Published
- 2024
18. SimBIG: Cosmological Constraints from the Redshift-Space Galaxy Skew Spectra
- Author
- Hou, Jiamin, Dizgah, Azadeh Moradinezhad, Hahn, ChangHoon, Eickenberg, Michael, Ho, Shirley, Lemos, Pablo, Massara, Elena, Modi, Chirag, Parker, Liam, and Blancard, Bruno Régaldo-Saint
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
Extracting the non-Gaussian information of the cosmic large-scale structure (LSS) is vital in unlocking the full potential of the rich datasets from the upcoming stage-IV galaxy surveys. Galaxy skew spectra serve as efficient beyond-two-point statistics, encapsulating essential bispectrum information with computational efficiency akin to power spectrum analysis. This paper presents the first cosmological constraints from analyzing the full set of redshift-space galaxy skew spectra of the data from the SDSS-III BOSS, accessing cosmological information down to nonlinear scales. Employing the ${\rm S{\scriptsize IM}BIG}$ forward modeling framework and simulation-based inference via normalizing flows, we analyze the CMASS-SGC sub-sample, which constitutes approximately 10\% of the full BOSS data. Analyzing the scales up to $k_{\rm max}=0.5 \, {\rm Mpc}^{-1}h$, we find that the skew spectra improve the constraints on $\Omega_{\rm m}, \Omega_{\rm b}, h$, and $n_s$ by 34\%, 35\%, 18\%, and 10\%, respectively, compared to constraints from the previous ${\rm S{\scriptsize IM}BIG}$ power spectrum multipoles analysis, yielding $\Omega_{\rm m}=0.288^{+0.024}_{-0.034}$, $\Omega_{\rm b}= 0.043^{+0.005}_{-0.007}$, $h=0.759^{+0.104}_{-0.050}$, $n_{\rm s} = 0.918^{+0.041}_{-0.090}$ (at 68\% confidence limit). On the other hand, the constraints on $\sigma_8$ are weaker than from the power spectrum. Including the Big Bang Nucleosynthesis (BBN) prior on baryon density reduces the uncertainty on the Hubble parameter further, achieving $h=0.750^{+0.034}_{-0.032}$, which is a 38\% improvement over the constraint from the power spectrum with the same prior. Compared to the ${\rm S{\scriptsize IM}BIG}$ bispectrum (monopole) analysis, skew spectra offer comparable constraints on larger scales ($k_{\rm max}<0.3\, {\rm Mpc}^{-1}h$) for most parameters except for $\sigma_8$., Comment: 23 pages, 12 figures, 2 tables
- Published
- 2024
19. Galaxy Clustering Analysis with SimBIG and the Wavelet Scattering Transform
- Author
- Blancard, Bruno Régaldo-Saint, Hahn, ChangHoon, Ho, Shirley, Hou, Jiamin, Lemos, Pablo, Massara, Elena, Modi, Chirag, Dizgah, Azadeh Moradinezhad, Parker, Liam, Yao, Yuling, and Eickenberg, Michael
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
The non-Gaussian spatial distribution of galaxies traces the large-scale structure of the Universe and therefore constitutes a prime observable to constrain cosmological parameters. We conduct Bayesian inference of the $\Lambda$CDM parameters $\Omega_m$, $\Omega_b$, $h$, $n_s$, and $\sigma_8$ from the BOSS CMASS galaxy sample by combining the wavelet scattering transform (WST) with a simulation-based inference approach enabled by the ${\rm S{\scriptsize IM}BIG}$ forward model. We design a set of reduced WST statistics that leverage symmetries of redshift-space data. Posterior distributions are estimated with a conditional normalizing flow trained on 20,000 simulated ${\rm S{\scriptsize IM}BIG}$ galaxy catalogs with survey realism. We assess the accuracy of the posterior estimates using simulation-based calibration and quantify generalization and robustness to the change of forward model using a suite of 2,000 test simulations. When probing scales down to $k_{\rm max}=0.5~h/\text{Mpc}$, we are able to derive accurate posterior estimates that are robust to the change of forward model for all parameters, except $\sigma_8$. We mitigate the robustness issues with $\sigma_8$ by removing the WST coefficients that probe scales smaller than $k \sim 0.3~h/\text{Mpc}$. Applied to the BOSS CMASS sample, our WST analysis yields constraints that seemingly improve on those obtained from a standard PT-based power spectrum analysis with $k_{\rm max}=0.25~h/\text{Mpc}$ for all parameters except $h$. However, we still raise concerns about these results. The observational predictions significantly vary across different normalizing flow architectures, which we interpret as a form of model misspecification. This highlights a key challenge for forward modeling approaches when using summary statistics that are sensitive to detailed model-specific or observational imprints on galaxy clustering., Comment: 11+5 pages, 8+2 figures, published in Physical Review D
- Published
- 2023
20. SimBIG: Field-level Simulation-Based Inference of Galaxy Clustering
- Author
- Lemos, Pablo, Parker, Liam, Hahn, ChangHoon, Ho, Shirley, Eickenberg, Michael, Hou, Jiamin, Massara, Elena, Modi, Chirag, Dizgah, Azadeh Moradinezhad, Blancard, Bruno Regaldo-Saint, and Spergel, David
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics, Computer Science - Machine Learning
- Abstract
We present the first simulation-based inference (SBI) of cosmological parameters from field-level analysis of galaxy clustering. Standard galaxy clustering analyses rely on analyzing summary statistics, such as the power spectrum, $P_\ell$, with analytic models based on perturbation theory. Consequently, they do not fully exploit the non-linear and non-Gaussian features of the galaxy distribution. To address these limitations, we use the SimBIG forward modelling framework to perform SBI using normalizing flows. We apply SimBIG to a subset of the BOSS CMASS galaxy sample using a convolutional neural network with stochastic weight averaging to perform massive data compression of the galaxy field. We infer constraints on $\Omega_m = 0.267^{+0.033}_{-0.029}$ and $\sigma_8=0.762^{+0.036}_{-0.035}$. While our constraints on $\Omega_m$ are in-line with standard $P_\ell$ analyses, those on $\sigma_8$ are $2.65\times$ tighter. Our analysis also provides constraints on the Hubble constant $H_0=64.5 \pm 3.8 \ {\rm km / s / Mpc}$ from galaxy clustering alone. This higher constraining power comes from additional non-Gaussian cosmological information, inaccessible with $P_\ell$. We demonstrate the robustness of our analysis by showcasing our ability to infer unbiased cosmological constraints from a series of test simulations that are constructed using different forward models than the one used in our training dataset. This work not only presents competitive cosmological constraints but also introduces novel methods for leveraging additional cosmological information in upcoming galaxy surveys like DESI, PFS, and Euclid., Comment: 14 pages, 4 figures. A previous version of the paper was published in the ICML 2023 Workshop on Machine Learning for Astrophysics
- Published
- 2023
21. SimBIG: The First Cosmological Constraints from Non-Gaussian and Non-Linear Galaxy Clustering
- Author
- Hahn, ChangHoon, Lemos, Pablo, Parker, Liam, Blancard, Bruno Régaldo-Saint, Eickenberg, Michael, Ho, Shirley, Hou, Jiamin, Massara, Elena, Modi, Chirag, Dizgah, Azadeh Moradinezhad, and Spergel, David
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
The 3D distribution of galaxies encodes detailed cosmological information on the expansion and growth history of the Universe. We present the first cosmological constraints that exploit non-Gaussian cosmological information on non-linear scales from galaxy clustering, inaccessible with current standard analyses. We analyze a subset of the BOSS galaxy survey using ${\rm S{\scriptsize IM}BIG}$, a new framework for cosmological inference that leverages high-fidelity simulations and deep generative models. We use two clustering statistics beyond the standard power spectrum: the bispectrum and a convolutional neural network based summary of the galaxy field. We infer constraints on $\Lambda$CDM parameters, $\Omega_b$, $h$, $n_s$, $\Omega_m$, and $\sigma_8$, that are 1.6, 1.5, 1.7, 1.2, and 2.3$\times$ tighter than power spectrum analyses. With this increased precision, we derive constraints on the Hubble constant, $H_0$, and $S_8 = \sigma_8 \sqrt{\Omega_m/0.3}$ that are competitive with other cosmological probes, even with a sample that only spans 10% of the full BOSS volume. Our $H_0$ constraints, imposing the Big Bang Nucleosynthesis prior on the baryon density, are consistent with the early time constraints from the cosmic microwave background (CMB). Meanwhile, our $S_8$ constraints are consistent with weak lensing experiments and similarly lie below CMB constraints. Lastly, we present forecasts to show that future work extending ${\rm S{\scriptsize IM}BIG}$ to upcoming spectroscopic galaxy surveys (DESI, PFS, Euclid) will produce leading $H_0$ and $S_8$ constraints that bridge the gap between early and late time measurements and shed light on current cosmic tensions., Comment: 13 pages, 5 figures, submitted to Nature Astronomy, comments welcome
- Published
- 2023
22. SimBIG: The First Cosmological Constraints from the Non-Linear Galaxy Bispectrum
- Author
- Hahn, ChangHoon, Eickenberg, Michael, Ho, Shirley, Hou, Jiamin, Lemos, Pablo, Massara, Elena, Modi, Chirag, Dizgah, Azadeh Moradinezhad, Parker, Liam, and Blancard, Bruno Régaldo-Saint
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
We present the first cosmological constraints from analyzing higher-order galaxy clustering on non-linear scales. We use ${\rm S{\scriptsize IM}BIG}$, a forward modeling framework for galaxy clustering analyses that employs simulation-based inference to perform highly efficient cosmological inference using normalizing flows. It leverages the predictive power of high-fidelity simulations and robustly extracts cosmological information from regimes inaccessible with current standard analyses. In this work, we apply ${\rm S{\scriptsize IM}BIG}$ to a subset of the BOSS galaxy sample and analyze the redshift-space bispectrum monopole, $B_0(k_1, k_2, k_3)$, to $k_{\rm max}=0.5\,h/{\rm Mpc}$. We achieve 1$\sigma$ constraints of $\Omega_m=0.293^{+0.027}_{-0.027}$ and $\sigma_8= 0.783^{+0.040}_{-0.038}$, which are more than 1.2 and 2.4$\times$ tighter than constraints from standard power spectrum analyses of the same dataset. We also derive 1.4, 1.4, 1.7$\times$ tighter constraints on $\Omega_b$, $h$, $n_s$. This improvement comes from additional cosmological information in higher-order clustering on non-linear scales and, for $\sigma_8$, is equivalent to the gain expected from a standard analysis on a $\sim$4$\times$ larger galaxy sample. Even with our BOSS subsample, which only spans 10% of the full BOSS volume, we derive competitive constraints on the growth of structure: $S_8 = 0.774^{+0.056}_{-0.053}$. Our constraint is consistent with results from both cosmic microwave background and weak lensing. Combined with a $\omega_b$ prior from Big Bang Nucleosynthesis, we also derive a constraint on $H_0=67.6^{+2.2}_{-1.8}\,{\rm km\,s^{-1}\,Mpc^{-1}}$ that is consistent with early universe constraints., Comment: 13 pages, 7 figures, submitted to PRD, comments welcome
- Published
- 2023
23. Sensitivity Analysis of Simulation-Based Inference for Galaxy Clustering
- Author
-
Modi, Chirag, Pandey, Shivam, Ho, Matthew, Hahn, ChangHoon, Blancard, Bruno Régaldo-Saint, and Wandelt, Benjamin
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
Simulation-based inference (SBI) is a promising approach to leverage high fidelity cosmological simulations and extract information from the non-Gaussian, non-linear scales that cannot be modeled analytically. However, scaling SBI to the next generation of cosmological surveys faces the computational challenge of requiring a large number of accurate simulations over a wide range of cosmologies, while simultaneously encompassing large cosmological volumes at high resolution. This challenge can potentially be mitigated by balancing the accuracy and computational cost for different components of the forward model while ensuring robust inference. To guide our steps in this, we perform a sensitivity analysis of SBI for galaxy clustering on various components of the cosmological simulations: gravity model, halo-finder and the galaxy-halo distribution models (halo-occupation distribution, HOD). We infer $\sigma_8$ and $\Omega_m$ using galaxy power spectrum multipoles and the bispectrum monopole assuming a galaxy number density expected from the luminous red galaxies observed using the Dark Energy Spectroscopic Instrument (DESI). We find that SBI is insensitive to changing the gravity model between $N$-body simulations and particle mesh (PM) simulations. However, changing the halo-finder from friends-of-friends (FoF) to Rockstar can lead to a biased estimate of $\sigma_8$ based on the bispectrum. For galaxy models, training SBI on a more complex HOD leads to consistent inference for less complex HOD models, but SBI trained on simpler HOD models fails when applied to analyze data from a more complex HOD model. Based on our results, we discuss the outlook on cosmological simulations with a focus on applying SBI approaches to future galaxy surveys., Comment: 11 pages, 5 figures. Comments welcome
- Published
- 2023
24. Characterising ultra-high-redshift dark matter halo demographics and assembly histories with the GUREFT simulations
- Author
-
Yung, L. Y. Aaron, Somerville, Rachel S., Nguyen, Tri, Behroozi, Peter, Modi, Chirag, and Gardner, Jonathan P.
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics ,Astrophysics - Astrophysics of Galaxies - Abstract
Dark matter halo demographics and assembly histories are a manifestation of cosmological structure formation and have profound implications for the formation and evolution of galaxies. In particular, merger trees provide fundamental input for several modelling techniques, such as semi-analytic models (SAMs), sub-halo abundance matching (SHAM), and decorated halo occupation distribution models (HODs). Motivated by the new ultra-high-redshift ($z > 10$) frontier enabled by JWST, we present a new suite of Gadget at Ultrahigh Redshift with Extra-Fine Timesteps (GUREFT) dark matter-only cosmological simulations that are carefully designed to capture halo merger histories and structural properties in the ultra-high-redshift universe. The simulation suite consists of four $1024^3$-particle simulations with box sizes of 5, 15, 35, and 90 $h^{-1}$ Mpc, each with 170 snapshots stored between $40 > z > 6$. With the unprecedented number of available snapshots and strategically chosen dynamic range covered by these boxes, GUREFT uncovers the emerging dark matter halo populations and their assembly histories in the earliest epochs of cosmic history. In this work, we present the halo mass functions from $z \sim 20$ to 6 down to $\log(M_{\rm vir}/M_\odot) \sim 5$, and show that at high redshift, these robust halo mass functions can differ substantially from commonly used analytic approximations or older fitting functions in the literature. We also present key physical properties of the ultra-high-redshift halo population, such as concentration and spin, as well as their mass growth and merger rates, and again provide updated fitting functions., Comment: 20 pages, 18 figures, accepted for publication in MNRAS
- Published
- 2023
25. Hybrid SBI or How I Learned to Stop Worrying and Learn the Likelihood
- Author
-
Modi, Chirag and Philcox, Oliver H. E.
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
We propose a new framework for the analysis of current and future cosmological surveys, which combines perturbative methods (PT) on large scales with conditional simulation-based implicit inference (SBI) on small scales. This enables modeling of a wide range of statistics across all scales using only small-volume simulations, drastically reducing computational costs, and avoids the assumption of an explicit small-scale likelihood. As a proof-of-principle for this hybrid simulation-based inference (HySBI) approach, we apply it to dark matter density fields and constrain cosmological parameters using both the power spectrum and wavelet coefficients, finding promising results that significantly outperform classical PT methods. We additionally lay out a roadmap for the next steps necessary to implement HySBI on actual survey data, including consideration of bias, systematics, and customized simulations. Our approach provides a realistic way to scale SBI to future survey volumes, avoiding prohibitive computational costs., Comment: 6 pages, 3 figures
- Published
- 2023
26. FLORAH: A generative model for halo assembly histories
- Author
-
Nguyen, Tri, Modi, Chirag, Yung, L. Y. Aaron, and Somerville, Rachel S.
- Subjects
Astrophysics - Astrophysics of Galaxies ,Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
The mass assembly history (MAH) of dark matter halos plays a crucial role in shaping the formation and evolution of galaxies. MAHs are used extensively in semi-analytic and empirical models of galaxy formation, yet current analytic methods to generate them are inaccurate and unable to capture their relationship with the halo internal structure and large-scale environment. This paper introduces FLORAH, a machine-learning framework for generating assembly histories of ensembles of dark matter halos. We train FLORAH on the assembly histories from the GUREFT and VSMDPL N-body simulations and demonstrate its ability to recover key properties such as the time evolution of mass and concentration. We obtain similar results for the galaxy stellar mass versus halo mass relation and its residuals when we run the Santa Cruz semi-analytic model on FLORAH-generated assembly histories and halo formation histories extracted from an N-body simulation. We further show that FLORAH also reproduces the dependence of clustering on properties other than mass (assembly bias), which is not captured by other analytic methods. By combining multiple networks trained on a suite of simulations with different redshift ranges and mass resolutions, we are able to construct accurate main progenitor branches (MPBs) with a wide dynamic mass range from $z=0$ up to an ultra-high redshift $z \approx 20$, currently far beyond that of a single N-body simulation. FLORAH is the first step towards a machine learning-based framework for planting full merger trees; this will enable the exploration of different galaxy formation scenarios with great computational efficiency at unprecedented accuracy., Comment: Published in MNRAS; 20 pages, 19 figures, 1 table
- Published
- 2023
- Full Text
- View/download PDF
27. Field-Level Inference with Microcanonical Langevin Monte Carlo
- Author
-
Bayer, Adrian E., Seljak, Uros, and Modi, Chirag
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics ,Astrophysics - Instrumentation and Methods for Astrophysics ,Physics - Data Analysis, Statistics and Probability ,Statistics - Computation ,Statistics - Methodology - Abstract
Field-level inference provides a means to optimally extract information from upcoming cosmological surveys, but requires efficient sampling of a high-dimensional parameter space. This work applies Microcanonical Langevin Monte Carlo (MCLMC) to sample the initial conditions of the Universe, as well as the cosmological parameters $\sigma_8$ and $\Omega_m$, from simulations of cosmic structure. MCLMC is shown to be over an order of magnitude more efficient than traditional Hamiltonian Monte Carlo (HMC) for a $\sim 2.6 \times 10^5$ dimensional problem. Moreover, the efficiency of MCLMC compared to HMC greatly increases as the dimensionality increases, suggesting gains of many orders of magnitude for the dimensionalities required by upcoming cosmological surveys., Comment: Accepted at the ICML 2023 Workshop on Machine Learning for Astrophysics. 4 pages, 4 figures
- Published
- 2023
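The HMC baseline that MCLMC is benchmarked against can be sketched compactly. The following is a generic textbook HMC sampler on a toy 2-D Gaussian; the target, step size, and trajectory length are illustrative placeholders, not the paper's field-level setup:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy target: a 2-D Gaussian standing in for the (much higher-dimensional)
# field-level posterior; logp and its gradient are all HMC needs.
cov = np.diag([1.0, 4.0])
icov = np.linalg.inv(cov)
logp = lambda q: -0.5 * q @ icov @ q
grad_logp = lambda q: -icov @ q

def hmc_step(q, eps=0.2, n_leapfrog=20):
    """One HMC transition: sample momentum, integrate, Metropolis-correct."""
    p = rng.standard_normal(q.shape)
    q_new = q.copy()
    p_new = p + 0.5 * eps * grad_logp(q_new)          # first half kick
    for i in range(n_leapfrog):
        q_new = q_new + eps * p_new                    # drift
        if i < n_leapfrog - 1:
            p_new = p_new + eps * grad_logp(q_new)     # full kick
    p_new = p_new + 0.5 * eps * grad_logp(q_new)       # last half kick
    # accept/reject on the change in total energy
    dH = (logp(q_new) - 0.5 * p_new @ p_new) - (logp(q) - 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < dH else q

q = np.zeros(2)
samples = []
for _ in range(3000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
```

The correlated-samples inefficiency the abstract refers to shows up here as the autocorrelation of the chain; MCLMC's gain is in reducing the cost per effective sample as the dimension grows.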
28. Variational Inference with Gaussian Score Matching
- Author
-
Modi, Chirag, Margossian, Charles, Yao, Yuling, Gower, Robert, Blei, David, and Saul, Lawrence
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning - Abstract
Variational inference (VI) is a method to approximate the computationally intractable posterior distributions that arise in Bayesian statistics. Typically, VI fits a simple parametric distribution to the target posterior by minimizing an appropriate objective such as the evidence lower bound (ELBO). In this work, we present a new approach to VI based on the principle of score matching: if two distributions are equal, then their score functions (i.e., gradients of the log density) are equal at every point on their support. With this, we develop score matching VI, an iterative algorithm that seeks to match the scores between the variational approximation and the exact posterior. At each iteration, score matching VI solves an inner optimization, one that minimally adjusts the current variational estimate to match the scores at a newly sampled value of the latent variables. We show that when the variational family is Gaussian, this inner optimization enjoys a closed-form solution, which we call Gaussian score matching VI (GSM-VI). GSM-VI is also a ``black box'' variational algorithm in that it only requires a differentiable joint distribution, and as such it can be applied to a wide class of models. We compare GSM-VI to black box variational inference (BBVI), which has similar requirements but instead optimizes the ELBO. We study how GSM-VI behaves as a function of the problem dimensionality, the condition number of the target covariance matrix (when the target is Gaussian), and the degree of mismatch between the approximating and exact posterior distributions. We also study GSM-VI on a collection of real-world Bayesian inference problems from the posteriorDB database of datasets and models. In all of our studies we find that GSM-VI is faster than BBVI, but without sacrificing accuracy: it requires 10-100x fewer gradient evaluations to obtain a comparable quality of approximation., Comment: A Python code for GSM-VI algorithm is at https://github.com/modichirag/GSM-VI
- Published
- 2023
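The score-matching principle behind GSM-VI can be illustrated on a 1-D Gaussian target. This sketch minimizes the mean squared score mismatch by plain gradient descent; the paper's GSM-VI instead solves the Gaussian matching step in closed form, and the target, learning rate, and batch size here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D target N(mu, sigma^2); the algorithm only queries its score.
mu, sigma = 2.0, 0.5
target_score = lambda z: -(z - mu) / sigma**2

# Variational family N(m, s^2) with s = exp(log_s). We minimize the mean
# squared score mismatch at freshly drawn samples by gradient descent.
m, log_s = 0.0, 0.0
lr = 0.005
for _ in range(4000):
    s2 = np.exp(2.0 * log_s)
    z = m + np.sqrt(s2) * rng.standard_normal(128)   # samples from current q
    f = -(z - m) / s2 - target_score(z)              # score of q minus target score
    # gradients of 0.5 * mean(f^2) w.r.t. (m, log_s), holding the samples fixed
    gm = np.mean(f) / s2
    gs = np.mean(f * 2.0 * (z - m) / s2)
    m -= lr * gm
    log_s -= lr * gs
# At convergence the scores match pointwise, so m -> mu and exp(log_s) -> sigma.
```

Because the score mismatch vanishes identically when the two Gaussians coincide, the stochastic gradient noise also vanishes at the optimum, which is part of what makes score matching attractive for Gaussian families.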
29. Forecasting the power of Higher Order Weak Lensing Statistics with automatically differentiable simulations
- Author
-
Lanzieri, Denise, Lanusse, François, Modi, Chirag, Horowitz, Benjamin, Harnois-Déraps, Joachim, Starck, Jean-Luc, and Collaboration, The LSST Dark Energy Science
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics ,Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
We present the Differentiable Lensing Lightcone (DLL), a fully differentiable physical model designed to be used as a forward model in Bayesian inference algorithms requiring access to derivatives of lensing observables with respect to cosmological parameters. We extend the public FlowPM N-body code, a particle-mesh N-body solver, to simulate lensing lightcones, implementing the Born approximation in the TensorFlow framework. Furthermore, DLL is aimed at achieving high accuracy with low computational costs. As such, it integrates a novel Hybrid Physical-Neural parameterisation able to compensate for the small-scale approximations resulting from particle-mesh schemes for cosmological N-body simulations. We validate our simulations in an LSST setting against high-resolution $\kappa$TNG simulations by comparing both the lensing angular power spectrum and multiscale peak counts. We demonstrate an ability to recover the lensing $C_\ell$ up to a 10% accuracy at $\ell=1000$ for sources at redshift 1, with as few as $\sim 0.6$ particles per Mpc/h. As a first use case, we use this tool to investigate the relative constraining power of the angular power spectrum and peak counts statistic in an LSST setting. Such comparisons are typically very costly as they require a large number of simulations, and do not scale well with the increasing number of cosmological parameters. As opposed to forecasts based on finite differences, these statistics can be analytically differentiated with respect to cosmology, or any systematics included in the simulations, at the same computational cost as the forward simulation. We find that the peak counts outperform the power spectrum on the cold dark matter parameter $\Omega_c$, on the amplitude of density fluctuations $\sigma_8$, and on the amplitude of the intrinsic alignment signal $A_{IA}$., Comment: Submitted to A&A, 18 pages, 14 figures, comments are welcome
- Published
- 2023
- Full Text
- View/download PDF
30. Joint velocity and density reconstruction of the Universe with nonlinear differentiable forward modeling
- Author
-
Bayer, Adrian E., Modi, Chirag, and Ferraro, Simone
- Subjects
Astronomical Sciences ,Physical Sciences ,Astronomical and Space Sciences ,Atomic ,Molecular ,Nuclear ,Particle and Plasma Physics ,Nuclear & Particles Physics ,Astronomical sciences ,Particle and high energy physics - Abstract
Reconstructing the initial conditions of the Universe from late-time observations has the potential to optimally extract cosmological information. Due to the high dimensionality of the parameter space, a differentiable forward model is needed for convergence, and recent advances have made it possible to perform reconstruction with nonlinear models based on galaxy (or halo) positions. In addition to positions, future surveys will provide measurements of galaxies' peculiar velocities through the kinematic Sunyaev-Zel'dovich effect (kSZ), type Ia supernovae, the fundamental plane relation, and the Tully-Fisher relation. Here we develop the formalism for including halo velocities, in addition to halo positions, to enhance the reconstruction of the initial conditions. We show that using velocity information can significantly improve the reconstruction accuracy compared to using only the halo density field. We study this improvement as a function of shot noise, velocity measurement noise, and angle to the line of sight. We also show how halo velocity data can be used to improve the reconstruction of the final nonlinear matter overdensity and velocity fields. We have built our pipeline into the differentiable Particle-Mesh FlowPM package, paving the way to perform field-level cosmological inference with joint velocity and density reconstruction. This is especially useful given the increased ability to measure peculiar velocities in the near future.
- Published
- 2023
31. pmwd: A Differentiable Cosmological Particle-Mesh $N$-body Library
- Author
-
Li, Yin, Lu, Libin, Modi, Chirag, Jamieson, Drew, Zhang, Yucheng, Feng, Yu, Zhou, Wenda, Kwan, Ngai Pok, Lanusse, François, and Greengard, Leslie
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics ,Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
The formation of the large-scale structure, the evolution and distribution of galaxies, quasars, and dark matter on cosmological scales, requires numerical simulations. Differentiable simulations provide gradients with respect to the cosmological parameters, which can accelerate the extraction of physical information from statistical analyses of observational data. The deep learning revolution has brought not only myriad powerful neural networks, but also breakthroughs including automatic differentiation (AD) tools and computational accelerators like GPUs, facilitating forward modeling of the Universe with differentiable simulations. Because AD needs to save the whole forward evolution history to backpropagate gradients, current differentiable cosmological simulations are limited by memory. Using the adjoint method, with reverse time integration to reconstruct the evolution history, we develop a differentiable cosmological particle-mesh (PM) simulation library pmwd (particle-mesh with derivatives) with a low memory cost. Based on the powerful AD library JAX, pmwd is fully differentiable, and is highly performant on GPUs., Comment: repo at https://github.com/eelregit/pmwd
- Published
- 2022
32. Differentiable Cosmological Simulation with Adjoint Method
- Author
-
Li, Yin, Modi, Chirag, Jamieson, Drew, Zhang, Yucheng, Lu, Libin, Feng, Yu, Lanusse, François, and Greengard, Leslie
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics ,Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
Rapid advances in deep learning have brought not only myriad powerful neural networks, but also breakthroughs that benefit established scientific research. In particular, automatic differentiation (AD) tools and computational accelerators like GPUs have facilitated forward modeling of the Universe with differentiable simulations. Based on analytic or automatic backpropagation, current differentiable cosmological simulations are limited by memory, and thus are subject to a trade-off between time and space/mass resolution, usually sacrificing both. We present a new approach free of such constraints, using the adjoint method and reverse time integration. It enables larger and more accurate forward modeling at the field level, and will improve gradient based optimization and inference. We implement it in an open-source particle-mesh (PM) $N$-body library pmwd (particle-mesh with derivatives). Based on the powerful AD system JAX, pmwd is fully differentiable, and is highly performant on GPUs., Comment: 5 figures + 2 tables; repo at https://github.com/eelregit/pmwd ; v2 matches published version with better typesetting
- Published
- 2022
- Full Text
- View/download PDF
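The adjoint-method idea in these two pmwd papers can be demonstrated on a toy problem. Below, a 1-D harmonic oscillator is a hypothetical stand-in for the particle-mesh dynamics: the backward pass re-creates each earlier state by running the time-reversible leapfrog integrator backwards, so gradients are obtained with O(1) memory instead of storing the whole trajectory. This is a sketch of the principle, not pmwd's API:

```python
def leapfrog_step(x, v, k, dt):
    """One kick-drift-kick leapfrog step for the toy dynamics x'' = -k*x."""
    vh = v - 0.5 * dt * k * x
    xn = x + dt * vh
    vn = vh - 0.5 * dt * k * xn
    return xn, vn

def reverse_step(xn, vn, k, dt):
    """Leapfrog is time-reversible: recover the previous state exactly."""
    vh = vn + 0.5 * dt * k * xn
    x = xn - dt * vh
    v = vh + 0.5 * dt * k * x
    return x, v

def grad_k_adjoint(x0, v0, k, dt, nsteps):
    """Gradient of loss = x_final**2 w.r.t. k with O(1) memory.

    The forward pass stores nothing; the backward pass reconstructs states by
    reverse time integration and accumulates the adjoints step by step.
    """
    x, v = x0, v0
    for _ in range(nsteps):
        x, v = leapfrog_step(x, v, k, dt)
    loss = x * x
    gx, gv, gk = 2.0 * x, 0.0, 0.0           # adjoints at the final time
    for _ in range(nsteps):
        xp, vp = reverse_step(x, v, k, dt)   # state before this step
        gk += gv * (-0.5 * dt * x)           # backprop vn = vh - 0.5*dt*k*xn
        gx += gv * (-0.5 * dt * k)
        gvh = gv + gx * dt                   # backprop xn = x + dt*vh
        gxp = gx
        gv = gvh                             # backprop vh = v - 0.5*dt*k*x
        gxp += gvh * (-0.5 * dt * k)
        gk += gvh * (-0.5 * dt * xp)
        x, v, gx = xp, vp, gxp
    return loss, gk

loss, grad_k = grad_k_adjoint(1.0, 0.0, 2.0, 0.05, 100)
```

Checking `grad_k` against a finite difference of the forward solve confirms the reconstruction-based backward pass matches ordinary backpropagation without the memory cost.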
33. Emulating cosmological growth functions with B-Splines
- Author
-
Kwan, Ngai Pok, Modi, Chirag, Li, Yin, and Ho, Shirley
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
In light of GPU acceleration, sequential operations such as solving ordinary differential equations can become bottlenecks for gradient evaluations and hinder potential speed gains. In this work, we focus on growth functions and their time derivatives in cosmological particle-mesh simulations and show that these account for the majority of the time cost when using gradient-based inference algorithms. We propose to construct novel conditional B-spline emulators which directly learn an interpolating function for the growth factor as a function of time, conditioned on the cosmology. We demonstrate that these emulators are sufficiently accurate not to bias our results for cosmological inference and can lead to over an order of magnitude gain in time, especially for small to intermediate size simulations.
- Published
- 2022
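The idea of replacing a repeated quadrature or ODE solve for the growth factor with a spline interpolant can be sketched with SciPy. This fits an unconditional cubic B-spline at a single fixed cosmology; the paper's emulators are additionally conditioned on the cosmological parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import splev, splrep

def growth(a, Om=0.3):
    """Linear growth factor D(a) for flat LCDM via the standard quadrature."""
    E = lambda x: np.sqrt(Om / x**3 + 1.0 - Om)       # H(a) / H0
    integral, _ = quad(lambda x: (x * E(x)) ** -3, 1e-8, a)
    return 2.5 * Om * E(a) * integral

# Tabulate on a coarse grid once, then replace the expensive quadrature
# with a cheap cubic B-spline evaluation inside the simulation loop.
a_grid = np.linspace(0.05, 1.0, 40)
D0 = growth(1.0)
D_grid = np.array([growth(a) for a in a_grid]) / D0   # normalized so D(1) = 1
tck = splrep(a_grid, D_grid, k=3)                     # cubic B-spline knots/coeffs

# Interpolation error against the direct calculation on a finer grid
a_test = np.linspace(0.06, 0.99, 200)
D_true = np.array([growth(a) for a in a_test]) / D0
max_err = np.max(np.abs(splev(a_test, tck) - D_true))
```

For a smooth function like D(a), a 40-knot cubic spline is accurate to well below the percent-level tolerances that matter for inference, while each evaluation is a handful of flops.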
34. Differentiable Stochastic Halo Occupation Distribution
- Author
-
Horowitz, Benjamin, Hahn, ChangHoon, Lanusse, Francois, Modi, Chirag, and Ferraro, Simone
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics ,Astrophysics - Astrophysics of Galaxies - Abstract
In this work, we demonstrate how differentiable stochastic sampling techniques developed in the context of deep reinforcement learning can be used to perform efficient parameter inference over stochastic, simulation-based, forward models. As a particular example, we focus on the problem of estimating parameters of Halo Occupation Distribution (HOD) models which are used to connect galaxies with their dark matter halos. Using a combination of continuous relaxation and gradient reparameterization techniques, we can obtain well-defined gradients with respect to HOD parameters through discrete galaxy catalog realizations. Having access to these gradients allows us to leverage efficient sampling schemes, such as Hamiltonian Monte Carlo, and greatly speed up parameter inference. We demonstrate our technique on a mock galaxy catalog generated from the Bolshoi simulation using the Zheng et al. 2007 HOD model and find near-identical posteriors to standard Markov Chain Monte Carlo techniques with an increase of ~8x in convergence efficiency. Our differentiable HOD model also has broad applications in full forward model approaches to cosmic structure and cosmological analysis., Comment: 10 pages, 6 figures, comments welcome
- Published
- 2022
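The continuous-relaxation trick described above can be illustrated for the central-galaxy occupation term. This is a relaxed-Bernoulli ("Gumbel-sigmoid") sketch with made-up halo masses and parameter values, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def soft_central_occupation(logM, logMmin, width, tau, g):
    """Relaxed-Bernoulli central-galaxy occupation.

    Hard sampling N_cen ~ Bernoulli(sigmoid((logM - logMmin) / width)) is not
    differentiable in logMmin; tempering the step with a sigmoid of width tau is.
    """
    logit_p = (logM - logMmin) / width   # logit of the occupation probability
    return sigmoid((logit_p + g) / tau)

# Made-up halo masses and fixed logistic noise (the reparameterization trick:
# randomness is external, so the sample is a deterministic function of logMmin).
logM = rng.uniform(11.5, 14.0, size=5000)
u = rng.uniform(size=logM.shape)
g = np.log(u) - np.log1p(-u)             # Logistic(0, 1) draws

logMmin, width, tau = 12.5, 0.3, 0.3
occ = soft_central_occupation(logM, logMmin, width, tau, g)

# Pathwise gradient of the mean occupation w.r.t. logMmin (chain rule through
# the sigmoid); this is what makes HMC over HOD parameters possible.
grad_logMmin = np.mean(occ * (1.0 - occ) / tau * (-1.0 / width))
```

As tau shrinks, the relaxed samples approach hard 0/1 occupations at the cost of noisier gradients, the usual bias-variance trade-off of such relaxations.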
35. ${\rm S{\scriptsize IM}BIG}$: A Forward Modeling Approach To Analyzing Galaxy Clustering
- Author
-
Hahn, ChangHoon, Eickenberg, Michael, Ho, Shirley, Hou, Jiamin, Lemos, Pablo, Massara, Elena, Modi, Chirag, Dizgah, Azadeh Moradinezhad, Blancard, Bruno Régaldo-Saint, and Abidi, Muntazir M.
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
We present the first-ever cosmological constraints from a simulation-based inference (SBI) analysis of galaxy clustering from the new ${\rm S{\scriptsize IM}BIG}$ forward modeling framework. ${\rm S{\scriptsize IM}BIG}$ leverages the predictive power of high-fidelity simulations and provides an inference framework that can extract cosmological information on small non-linear scales, inaccessible with standard analyses. In this work, we apply ${\rm S{\scriptsize IM}BIG}$ to the BOSS CMASS galaxy sample and analyze the power spectrum, $P_\ell(k)$, to $k_{\rm max}=0.5\,h/{\rm Mpc}$. We construct 20,000 simulated galaxy samples using our forward model, which is based on high-resolution ${\rm Q{\scriptsize UIJOTE}}$ $N$-body simulations and includes detailed survey realism for a more complete treatment of observational systematics. We then conduct SBI by training normalizing flows using the simulated samples and infer the posterior distribution of $\Lambda$CDM cosmological parameters: $\Omega_m, \Omega_b, h, n_s, \sigma_8$. We derive significant constraints on $\Omega_m$ and $\sigma_8$, which are consistent with previous works. Our constraints on $\sigma_8$ are $27\%$ more precise than standard analyses. This improvement is equivalent to the statistical gain expected from analyzing a galaxy sample that is $\sim60\%$ larger than CMASS with standard methods. It results from additional cosmological information on non-linear scales beyond the limit of current analytic models, $k > 0.25\,h/{\rm Mpc}$. While we focus on $P_\ell$ in this work for validation and comparison to the literature, ${\rm S{\scriptsize IM}BIG}$ provides a framework for analyzing galaxy clustering using any summary statistic. We expect further improvements on cosmological constraints from subsequent ${\rm S{\scriptsize IM}BIG}$ analyses of summary statistics beyond $P_\ell$., Comment: 9 pages, 5 figures
- Published
- 2022
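The SBI workflow (simulate parameter-data pairs from a forward model, then fit a conditional density estimator for the posterior) can be reduced to a minimal linear-Gaussian sketch; here ordinary least squares stands in for the normalizing flows used in SIMBIG, and the one-parameter simulator is a toy:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy simulator: theta drawn from a N(0,1) prior, observed summary x = theta + noise.
n_sims, sigma = 20000, 0.5
theta = rng.standard_normal(n_sims)
x = theta + sigma * rng.standard_normal(n_sims)

# Posterior estimation in its simplest form: fit q(theta | x) = N(a*x + b, s^2)
# to the simulated pairs by least squares (a flow would replace this Gaussian).
A = np.stack([x, np.ones_like(x)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, theta, rcond=None)
s = np.std(theta - (a * x + b))

# The estimator is amortized: evaluate it at any "observation" without resimulating.
x_obs = 0.8
post_mean, post_std = a * x_obs + b, s
# Analytic check for this toy: the posterior is N(x/(1+sigma^2), sigma^2/(1+sigma^2)).
```

The appeal of SBI is that nothing above required writing down the likelihood; the full pipeline (survey realism included) only needs to be run forward.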
36. ${\rm S{\scriptsize IM}BIG}$: Mock Challenge for a Forward Modeling Approach to Galaxy Clustering
- Author
-
Hahn, ChangHoon, Eickenberg, Michael, Ho, Shirley, Hou, Jiamin, Lemos, Pablo, Massara, Elena, Modi, Chirag, Dizgah, Azadeh Moradinezhad, Blancard, Bruno Régaldo-Saint, and Abidi, Muntazir M.
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
Simulation-Based Inference of Galaxies (${\rm S{\scriptsize IM}BIG}$) is a forward modeling framework for analyzing galaxy clustering using simulation-based inference. In this work, we present the ${\rm S{\scriptsize IM}BIG}$ forward model, which is designed to match the observed SDSS-III BOSS CMASS galaxy sample. The forward model is based on high-resolution ${\rm Q{\scriptsize UIJOTE}}$ $N$-body simulations and a flexible halo occupation model. It includes full survey realism and models observational systematics such as angular masking and fiber collisions. We present the "mock challenge" for validating the accuracy of posteriors inferred from ${\rm S{\scriptsize IM}BIG}$ using a suite of 1,500 test simulations constructed using forward models with a different $N$-body simulation, halo finder, and halo occupation prescription. As a demonstration of ${\rm S{\scriptsize IM}BIG}$, we analyze the power spectrum multipoles out to $k_{\rm max} = 0.5\,h/{\rm Mpc}$ and infer the posterior of $\Lambda$CDM cosmological and halo occupation parameters. Based on the mock challenge, we find that our constraints on $\Omega_m$ and $\sigma_8$ are unbiased, but conservative. Hence, the mock challenge demonstrates that ${\rm S{\scriptsize IM}BIG}$ provides a robust framework for inferring cosmological parameters from galaxy clustering on non-linear scales and a complete framework for handling observational systematics. In subsequent work, we will use ${\rm S{\scriptsize IM}BIG}$ to analyze summary statistics beyond the power spectrum including the bispectrum, marked power spectrum, skew spectrum, wavelet statistics, and field-level statistics., Comment: 28 pages, 6 figures
- Published
- 2022
- Full Text
- View/download PDF
37. Joint velocity and density reconstruction of the Universe with nonlinear differentiable forward modeling
- Author
-
Bayer, Adrian E., Modi, Chirag, and Ferraro, Simone
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
Reconstructing the initial conditions of the Universe from late-time observations has the potential to optimally extract cosmological information. Due to the high dimensionality of the parameter space, a differentiable forward model is needed for convergence, and recent advances have made it possible to perform reconstruction with nonlinear models based on galaxy (or halo) positions. In addition to positions, future surveys will provide measurements of galaxies' peculiar velocities through the kinematic Sunyaev-Zel'dovich effect (kSZ), type Ia supernovae, and the fundamental plane or Tully-Fisher relations. Here we develop the formalism for including halo velocities, in addition to halo positions, to enhance the reconstruction of the initial conditions. We show that using velocity information can significantly improve the reconstruction accuracy compared to using only the halo density field. We study this improvement as a function of shot noise, velocity measurement noise, and angle to the line of sight. We also show how halo velocity data can be used to improve the reconstruction of the final nonlinear matter overdensity and velocity fields. We have built our pipeline into the differentiable Particle-Mesh FlowPM package, paving the way to perform field-level cosmological inference with joint velocity and density reconstruction. This is especially useful given the increased ability to measure peculiar velocities in the near future., Comment: 13+6 pages, 9 figures
- Published
- 2022
- Full Text
- View/download PDF
38. Towards a non-Gaussian Generative Model of large-scale Reionization Maps
- Author
-
Lin, Yu-Heng, Hassan, Sultan, Blancard, Bruno Régaldo-Saint, Eickenberg, Michael, and Modi, Chirag
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
High-dimensional data sets are expected from the next generation of large-scale surveys. These data sets will carry a wealth of information about the early stages of galaxy formation and cosmic reionization. Extracting the maximum amount of information from these data sets remains a key challenge. Current simulations of cosmic reionization are computationally too expensive to provide enough realizations to enable testing different statistical methods, such as parameter inference. We present a non-Gaussian generative model of reionization maps that is based solely on their summary statistics. We reconstruct large-scale ionization fields (bubble spatial distributions) directly from their power spectra (PS) and Wavelet Phase Harmonics (WPH) coefficients. Using WPH, we show that our model is efficient in generating diverse new examples of large-scale ionization maps from a single realization of a summary statistic. We compare our model with the target ionization maps using the bubble size statistics, and largely find good agreement. Compared to the PS, our results show that WPH provide optimal summary statistics that capture most of the information in the highly non-linear ionization fields., Comment: 7 pages, 3 figures, accepted at the Machine Learning and the Physical Sciences workshop at NeurIPS 2022
- Published
- 2022
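A PS-only baseline for this kind of generative model is easy to sketch: keep the Fourier amplitudes of a single map and randomize its phases, producing new fields with an identical power spectrum. Here a smoothed-noise field is a toy stand-in for a reionization map (the real maps come from simulations, and WPH-based generation is more involved):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for an ionization map: Gaussian noise smoothed in Fourier space.
n = 64
white = rng.standard_normal((n, n))
k2 = np.fft.fftfreq(n)[:, None] ** 2 + np.fft.fftfreq(n)[None, :] ** 2
target = np.fft.ifft2(np.fft.fft2(white) * np.exp(-200.0 * k2)).real

# PS-only generative model: keep the Fourier amplitudes of the single target
# realization, take the phases from a fresh white-noise field. Using phases of
# a real field preserves Hermitian symmetry, so the inverse FFT is real.
amp = np.abs(np.fft.fft2(target))
phases = np.angle(np.fft.fft2(rng.standard_normal((n, n))))
new_map = np.fft.ifft2(amp * np.exp(1j * phases)).real
# new_map has exactly the target's power spectrum but entirely new morphology.
```

The limitation motivating WPH is visible in this construction: all phase information, and hence the non-Gaussian bubble morphology, is discarded by a PS-only model.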
39. Reconstructing the Universe with Variational self-Boosted Sampling
- Author
-
Modi, Chirag, Li, Yin, and Blei, David
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics ,Astrophysics - Cosmology and Nongalactic Astrophysics ,Statistics - Machine Learning - Abstract
Forward modeling approaches in cosmology have made it possible to reconstruct the initial conditions at the beginning of the Universe from the observed survey data. However, the high dimensionality of the parameter space still poses a challenge to exploring the full posterior: traditional algorithms such as Hamiltonian Monte Carlo (HMC) are computationally inefficient due to generating correlated samples, and the performance of variational inference is highly dependent on the choice of divergence (loss) function. Here we develop a hybrid scheme, called variational self-boosted sampling (VBS), to mitigate the drawbacks of both these algorithms by learning a variational approximation for the proposal distribution of Monte Carlo sampling and combining it with HMC. The variational distribution is parameterized as a normalizing flow and learnt with samples generated on the fly, while proposals drawn from it reduce the auto-correlation length in MCMC chains. Our normalizing flow uses Fourier space convolutions and element-wise operations to scale to high dimensions. We show that after a short initial warm-up and training phase, VBS generates better quality samples than simple VI approaches and reduces the correlation length in the sampling phase by a factor of 10-50 over using only HMC to explore the posterior of the initial conditions in $64^3$ and $128^3$ dimensional problems, with larger gains for high signal-to-noise data observations., Comment: A shorter version of this paper is accepted for spotlight presentation in the Machine Learning for Astrophysics Workshop at ICML, 2022
- Published
- 2022
- Full Text
- View/download PDF
40. The DESI $N$-body Simulation Project -- II. Suppressing sample variance with fast simulations
- Author
-
Ding, Zhejie, Chuang, Chia-Hsun, Yu, Yu, Garrison, Lehman H., Bayer, Adrian E., Feng, Yu, Modi, Chirag, Eisenstein, Daniel J., White, Martin, Variu, Andrei, Zhao, Cheng, Zhang, Hanyu, Rizo, Jennifer Meneses, Brooks, David, Dawson, Kyle, Doel, Peter, Gaztanaga, Enrique, Kehoe, Robert, Krolewski, Alex, Landriau, Martin, Palanque-Delabrouille, Nathalie, and Poppett, Claire
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
The Dark Energy Spectroscopic Instrument (DESI) will construct a large and precise three-dimensional map of our Universe. The survey effective volume reaches $\sim 20\,(h^{-1}{\rm Gpc})^3$. It is a great challenge to prepare high-resolution simulations with a much larger volume for validating the DESI analysis pipelines. AbacusSummit is a suite of high-resolution dark-matter-only simulations designed for this purpose, with $200\,(h^{-1}{\rm Gpc})^3$ (10 times the DESI volume) for the base cosmology. However, further efforts are needed to provide a more precise analysis of the data and to also cover other cosmologies. Recently, the CARPool method was proposed to use paired accurate and approximate simulations to achieve high statistical precision with a limited number of high-resolution simulations. Relying on this technique, we propose to use fast quasi-$N$-body solvers combined with accurate simulations to produce accurate summary statistics. This enables us to obtain a variance 100 times smaller than the expected DESI statistical variance at the scales we are interested in, e.g. $k < 0.3\,h\,{\rm Mpc}^{-1}$ for the halo power spectrum. In addition, it can significantly suppress the sample variance of the halo bispectrum. We further generalize the method for other cosmologies with only one realization in the AbacusSummit suite to extend the effective volume by $\sim 20$ times. In summary, our proposed strategy of combining high-fidelity simulations with fast approximate gravity solvers and a series of variance suppression techniques sets the path for a robust cosmological analysis of galaxy survey data., Comment: Matched version accepted by MNRAS, should be clearer
- Published
- 2022
- Full Text
- View/download PDF
41. The DESI N-body Simulation Project – II. Suppressing sample variance with fast simulations
- Author
-
Ding, Zhejie, Chuang, Chia-Hsun, Yu, Yu, Garrison, Lehman H, Bayer, Adrian E, Feng, Yu, Modi, Chirag, Eisenstein, Daniel J, White, Martin, Variu, Andrei, Zhao, Cheng, Zhang, Hanyu, Meneses Rizo, Jennifer, Brooks, David, Dawson, Kyle, Doel, Peter, Gaztanaga, Enrique, Kehoe, Robert, Krolewski, Alex, Landriau, Martin, Palanque-Delabrouille, Nathalie, and Poppett, Claire
- Subjects
Nuclear and Plasma Physics ,Physical Sciences ,Bioengineering ,Affordable and Clean Energy ,methods: statistical ,galaxies: haloes ,cosmology: theory ,large-scale structure of Universe ,Astronomical and Space Sciences ,Astronomy & Astrophysics ,Astronomical sciences ,Particle and high energy physics ,Space sciences - Abstract
Dark Energy Spectroscopic Instrument (DESI) will construct a large and precise three-dimensional map of our Universe. The survey effective volume reaches ∼20 h⁻³ Gpc³. It is a great challenge to prepare high-resolution simulations with a much larger volume for validating the DESI analysis pipelines. AbacusSummit is a suite of high-resolution dark-matter-only simulations designed for this purpose, with 200 h⁻³ Gpc³ (10 times the DESI volume) for the base cosmology. However, further effort is needed to provide a more precise analysis of the data and to cover other cosmologies as well. Recently, the CARPool method was proposed to use paired accurate and approximate simulations to achieve high statistical precision with a limited number of high-resolution simulations. Relying on this technique, we propose to use fast quasi-N-body solvers combined with accurate simulations to produce accurate summary statistics. This enables us to obtain 100 times smaller variance than the expected DESI statistical variance at the scales we are interested in, e.g. k < 0.3 h Mpc⁻¹ for the halo power spectrum. In addition, it can significantly suppress the sample variance of the halo bispectrum. We further generalize the method to other cosmologies with only one realization in the AbacusSummit suite, extending the effective volume ∼20 times. In summary, our proposed strategy of combining high-fidelity simulations with fast approximate gravity solvers and a series of variance suppression techniques sets the path for a robust cosmological analysis of galaxy survey data.
- Published
- 2022
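The variance-suppression idea behind CARPool referenced in the abstract above, pairing a few accurate simulations with cheap surrogates that share their random seeds, can be illustrated with a toy control-variate sketch. All numbers and variable names below are hypothetical stand-ins, not outputs of AbacusSummit or any N-body code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for paired simulations: a fast surrogate shares the random
# seeds of the accurate runs, so the two are strongly correlated.
n_pairs, n_cheap = 10, 1000
shared = rng.standard_normal(n_cheap)          # per-realization sample noise
truth = 1.0                                    # statistic we want to estimate
cheap_all = 0.8 * truth + 0.5 * shared         # biased but fast surrogate
mu_c = cheap_all.mean()                        # surrogate mean from many runs
cheap = cheap_all[:n_pairs]                    # surrogates paired with ...
accurate = (truth + 0.5 * shared[:n_pairs]
            + 0.05 * rng.standard_normal(n_pairs))  # ... a few accurate runs

# CARPool control-variate estimator: y - beta * (c - mu_c)
beta = np.cov(accurate, cheap)[0, 1] / np.var(cheap, ddof=1)
est = accurate - beta * (cheap - mu_c)
```

Because the surrogate's bias cancels through `mu_c` while the shared sample noise is subtracted, the estimator keeps the accurate mean at a fraction of the original variance, which is how a handful of high-resolution runs can reach the statistical precision of many.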
42. Delayed rejection Hamiltonian Monte Carlo for sampling multiscale distributions
- Author
-
Modi, Chirag, Barnett, Alex, and Carpenter, Bob
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning - Abstract
The efficiency of Hamiltonian Monte Carlo (HMC) can suffer when sampling a distribution with a wide range of length scales, because the small step sizes needed for stability in high-curvature regions are inefficient elsewhere. To address this we present a delayed rejection variant: if an initial HMC trajectory is rejected, we make one or more subsequent proposals each using a step size geometrically smaller than the last. We extend the standard delayed rejection framework by allowing the probability of a retry to depend on the probability of accepting the previous proposal. We test the scheme in several sampling tasks, including multiscale model distributions such as Neal's funnel, and statistical applications. Delayed rejection enables up to five-fold performance gains over optimally-tuned HMC, as measured by effective sample size per gradient evaluation. Even for simpler distributions, delayed rejection provides increased robustness to step size misspecification. Along the way, we provide an accessible but rigorous review of detailed balance for HMC., Comment: 30 pages, 10 figures
- Published
- 2021
- Full Text
- View/download PDF
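The delayed-rejection scheme summarized in the abstract above can be sketched in NumPy. This is a simplified reading, assuming a unit mass matrix and exactly one retry that is always attempted after a rejection (the paper additionally lets the retry probability depend on the previous acceptance probability), so treat it as an illustration rather than the authors' implementation:

```python
import numpy as np

def leapfrog(q, p, grad_nlp, eps, n_steps):
    """Leapfrog integrator for H = nlp(q) + p.p/2; returns (q', -p') so that
    applying the map twice recovers the starting point (an involution)."""
    q, p = q.copy(), p.copy()
    p = p - 0.5 * eps * grad_nlp(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_nlp(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_nlp(q)
    return q, -p

def drhmc_step(q, nlp, grad_nlp, eps, n_steps, reduction=4, rng=None):
    """One HMC step with a single delayed-rejection retry at step size
    eps/reduction. nlp and grad_nlp are the negative log density and its
    gradient; names and defaults here are our own."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)
    H0 = nlp(q) + 0.5 * p @ p
    q1, p1 = leapfrog(q, p, grad_nlp, eps, n_steps)
    a1 = min(1.0, np.exp(H0 - (nlp(q1) + 0.5 * p1 @ p1)))
    if rng.random() < a1:
        return q1
    # Retry from the start with a smaller step (more steps, same path length).
    q2, p2 = leapfrog(q, p, grad_nlp, eps / reduction, n_steps * reduction)
    H2 = nlp(q2) + 0.5 * p2 @ p2
    # "Ghost" first-stage proposal from the retry point, needed so the
    # delayed-rejection acceptance ratio preserves detailed balance.
    qg, pg = leapfrog(q2, p2, grad_nlp, eps, n_steps)
    a1g = min(1.0, np.exp(H2 - (nlp(qg) + 0.5 * pg @ pg)))
    a2 = min(1.0, np.exp(H0 - H2) * (1.0 - a1g) / (1.0 - a1))
    return q2 if rng.random() < a2 else q
```

In a funnel-like target, wide regions accept at the large step size while the narrow neck is rescued by the cheaper retry, instead of forcing a small step size everywhere.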
43. CosmicRIM : Reconstructing Early Universe by Combining Differentiable Simulations with Recurrent Inference Machines
- Author
-
Modi, Chirag, Lanusse, François, Seljak, Uroš, Spergel, David N., and Perreault-Levasseur, Laurence
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
Reconstructing the Gaussian initial conditions at the beginning of the Universe from survey data in a forward-modeling framework is a major challenge in cosmology. It requires solving a high-dimensional inverse problem with an expensive, non-linear forward model: a cosmological N-body simulation. While this was intractable until recently, we propose to solve the inference problem using an automatically differentiable N-body solver combined with a recurrent network that learns the inference scheme and yields the maximum-a-posteriori (MAP) estimate of the initial conditions of the Universe. We demonstrate on realistic cosmological observables that learnt inference is 40 times faster than traditional algorithms such as ADAM and LBFGS, which require specialized annealing schemes, and obtains solutions of higher quality., Comment: Published as a workshop paper at ICLR 2021 SimDL Workshop
- Published
- 2021
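The MAP reconstruction problem described in the abstract above can be miniaturized: here the expensive differentiable N-body solver is replaced by a hypothetical linear smoothing operator `F`, and the learned recurrent updater by plain gradient descent on the negative log-posterior. Everything below is an illustrative stand-in, not the paper's pipeline:

```python
import numpy as np

# Toy stand-in for MAP reconstruction: the forward "simulation" is a fixed
# Gaussian smoothing matrix F (hypothetical), the data model is d = F s + noise,
# and we minimize the negative log-posterior by gradient descent.
rng = np.random.default_rng(1)
n = 50
dist2 = (np.arange(n)[:, None] - np.arange(n)[None, :]) ** 2
F = np.exp(-0.5 * dist2 / 4.0)
F /= F.sum(axis=1, keepdims=True)

s_true = rng.standard_normal(n)          # "initial conditions" to recover
sigma = 0.01                             # observational noise level
data = F @ s_true + sigma * rng.standard_normal(n)

s = np.zeros(n)                          # start from a blank field
for _ in range(500):
    # gradient of 0.5*||F s - d||^2 / sigma^2 + 0.5*||s||^2 (unit Gaussian prior)
    grad = F.T @ (F @ s - data) / sigma**2 + s
    s -= 1e-5 * grad
```

The paper's contribution is to replace both ingredients: `F` by a differentiable N-body solver and the fixed-step descent by a recurrent network that proposes the updates.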
44. Mind the gap: the power of combining photometric surveys with intensity mapping
- Author
-
Modi, Chirag, White, Martin, Castorina, Emanuele, and Slosar, Anže
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
The long-wavelength modes lost to bright foregrounds in interferometric 21-cm surveys can be partially recovered using a forward-modeling approach that exploits the non-linear coupling between small and large scales induced by gravitational evolution. In this work, we build upon this approach by considering how adding external galaxy distribution data can help fill in these modes. We consider supplementing the 21-cm data at two different redshifts with a spectroscopic sample (good radial resolution but low number density) loosely modeled on DESI-ELG at $z=1$, and a photometric sample (high number density but poor radial resolution) similar to the LSST sample at $z=1$ and $z=4$, respectively. We find that both galaxy samples are able to reconstruct the largest modes better than 21-cm data alone, with the spectroscopic sample performing significantly better than the photometric sample despite its much lower number density. We demonstrate the synergies between surveys by showing that the primordial initial density field is reconstructed better with the combination of surveys than with either of them individually. Methodologically, we also explore the importance of smoothing the density field when using bias models to forward model these tracers for reconstruction., Comment: 16 pages, 7 Figures
- Published
- 2021
- Full Text
- View/download PDF
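The smoothing step mentioned at the end of the abstract above, damping small-scale power before applying a bias model, is commonly done with a Gaussian kernel in Fourier space. A minimal sketch follows; the function name and interface are our own, not taken from the paper's code:

```python
import numpy as np

def gaussian_smooth(delta, boxsize, r_smooth):
    """Smooth a periodic 3D overdensity field (cubic grid) with a Gaussian
    kernel exp(-k^2 R^2 / 2) applied in Fourier space."""
    n = delta.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)  # rfft: half last axis
    k2 = kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2
    delta_k = np.fft.rfftn(delta)
    return np.fft.irfftn(delta_k * np.exp(-0.5 * k2 * r_smooth**2),
                         s=delta.shape)
```

Larger `r_smooth` damps more small-scale power, while the mean of the field (the k = 0 mode) is left untouched.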
45. A Conceptual Model for Click Fraud Detection and Prevention in Online Advertising Using Blockchain
- Author
-
Jigalur, Rohitkumar, Modi, Chirag, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Rao, Udai Pratap, editor, Alazab, Mamoun, editor, Gohil, Bhavesh N., editor, and Chelliah, Pethuru Raj, editor
- Published
- 2023
- Full Text
- View/download PDF
46. Neurotoxicity of acrylamide in adult zebrafish following short-term and long-term exposure: evaluation of behavior alterations, oxidative stress markers, expression of antioxidant genes, and histological examination of the brain and eyes
- Author
-
Kachot, Rajesh L., Patel, Urvesh D., Patel, Harshad B., Modi, Chirag M., Chauhan, RadheyShyam, Kariya, Mayank H., and Bhadaniya, Amit R.
- Published
- 2023
- Full Text
- View/download PDF
47. FlowPM: Distributed TensorFlow Implementation of the FastPM Cosmological N-body Solver
- Author
-
Modi, Chirag, Lanusse, Francois, and Seljak, Uros
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics ,Astrophysics - Instrumentation and Methods for Astrophysics - Abstract
We present FlowPM, a Particle-Mesh (PM) cosmological N-body code implemented in Mesh-TensorFlow for GPU-accelerated, distributed, and differentiable simulations. We implement and validate the accuracy of a novel multi-grid scheme based on multiresolution pyramids to compute large-scale forces efficiently on distributed platforms. We explore the scaling of the simulation on large supercomputers and compare it with the corresponding Python-based PM code, finding on average a 10x speed-up in wall-clock time. We also demonstrate how this novel tool can be used to efficiently solve large-scale cosmological inference problems, in particular the reconstruction of cosmological fields in a forward-model Bayesian framework with a hybrid PM and neural-network forward model. We provide skeleton code for these examples, and the entire code is publicly available at https://github.com/modichirag/flowpm., Comment: 14 pages, 17 figures. Code provided at https://github.com/modichirag/flowpm
- Published
- 2020
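The core of any PM solver like the one described above is depositing particles on a mesh and solving the Poisson equation with FFTs. The sketch below is a minimal single-process NumPy analogue with nearest-grid-point deposit and simplified units; it is not the FlowPM API, which uses cloud-in-cell painting, Mesh-TensorFlow, and a multi-grid force scheme:

```python
import numpy as np

def pm_potential(positions, n_grid, boxsize):
    """Minimal particle-mesh step: nearest-grid-point mass deposit followed
    by an FFT Poisson solve, phi_k = -delta_k / k^2, in units where the
    Poisson prefactor is 1. Illustrative only."""
    cell = boxsize / n_grid
    idx = np.floor(positions / cell).astype(int) % n_grid
    rho = np.zeros((n_grid,) * 3)
    np.add.at(rho, tuple(idx.T), 1.0)              # deposit unit-mass particles
    delta = rho / rho.mean() - 1.0                 # overdensity field
    kx = 2 * np.pi * np.fft.fftfreq(n_grid, d=cell)
    kz = 2 * np.pi * np.fft.rfftfreq(n_grid, d=cell)
    k2 = kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2
    k2[0, 0, 0] = 1.0                              # avoid 0/0; mode zeroed below
    phi_k = -np.fft.rfftn(delta) / k2
    phi_k[0, 0, 0] = 0.0                           # no force from mean density
    return np.fft.irfftn(phi_k, s=(n_grid,) * 3)
```

Interpolating the gradient of `phi` back to the particle positions gives the PM force; making that whole chain differentiable end-to-end is what the TensorFlow implementation provides.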
48. Estimating COVID-19 mortality in Italy early in the COVID-19 pandemic
- Author
-
Modi, Chirag, Böhm, Vanessa, Ferraro, Simone, Stein, George, and Seljak, Uroš
- Subjects
Good Health and Well Being ,Adolescent ,Adult ,Aged ,Aged 80 and over ,COVID-19 ,Child ,Child Preschool ,Humans ,Infant ,Infant Newborn ,Italy ,Middle Aged ,Pandemics ,SARS-CoV-2 ,Survival Rate ,Young Adult - Abstract
Estimating rates of COVID-19 infection and the associated mortality is challenging due to uncertainties in case ascertainment. We perform a counterfactual time-series analysis on overall mortality data from towns in Italy, comparing the population mortality in 2020 with previous years, to estimate mortality from COVID-19. We find that the number of COVID-19 deaths in Italy in 2020, up to September 9, was 59,000-62,000, compared to the official count of 36,000. The proportion of the population that died was 0.29% in the most affected region, Lombardy, and 0.57% in the most affected province, Bergamo. Combining reported test-positive rates from Italy with estimates of infection fatality rates from the Diamond Princess cruise ship, we estimate the infection rate as 29% (95% confidence interval 15-52%) in Lombardy and 72% (95% confidence interval 36-100%) in Bergamo.
- Published
- 2021
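The counterfactual analysis described above amounts to comparing observed mortality against a baseline built from earlier years. In its simplest form it is an arithmetic exercise; the numbers below are made up, and the paper adds per-town aggregation and careful uncertainty modeling on top:

```python
import numpy as np

# Hypothetical monthly death counts for one town: five baseline years
# and the pandemic year (all numbers invented for illustration).
baseline_years = np.array([
    [100, 102,  98, 101],
    [ 99, 103,  97, 100],
    [101,  98,  99, 102],
    [100, 101, 100,  99],
    [ 98, 100, 102, 101],
], dtype=float)
observed_2020 = np.array([105, 160, 220, 180], dtype=float)

counterfactual = baseline_years.mean(axis=0)   # expected deaths absent COVID-19
excess = observed_2020 - counterfactual        # deaths attributed to COVID-19
```

Summing `excess` over towns and months is what produces the 59,000-62,000 estimate quoted in the abstract, as opposed to counting only officially ascertained cases.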
49. Generative Learning of Counterfactual for Synthetic Control Applications in Econometrics
- Author
-
Modi, Chirag and Seljak, Uros
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning - Abstract
A common statistical problem in econometrics is to estimate the impact of a treatment on a treated unit given a control sample with untreated outcomes. Here we develop a generative learning approach to this problem, learning the probability distribution of the data, which can be used for downstream tasks such as post-treatment counterfactual prediction and hypothesis testing. We use control samples to transform the data to a Gaussian and homoscedastic form and then perform Gaussian process analysis in Fourier space, evaluating the optimal Gaussian kernel via non-parametric power spectrum estimation. We combine this Gaussian prior with the data likelihood given by the pre-treatment data of the single unit to obtain the synthetic prediction of the unit post-treatment, which minimizes the error variance of the synthetic prediction. Given the generative model, the minimum-variance counterfactual is unique and comes with an associated error covariance matrix. We extend this basic formalism to include correlations of the primary variable with other covariates of interest. Given the probabilistic description of the generative model, we can compare the synthetic data prediction with real data to ask whether the treatment had a statistically significant impact. For this purpose we develop a hypothesis-testing approach and evaluate the Bayes factor. We apply the method to the well-studied example of the 1988 California (CA) tobacco sales tax. We also perform a placebo analysis using control states to validate our methodology. Our hypothesis testing suggests 5.8:1 odds in favor of the CA tobacco sales tax having had an impact on tobacco sales, a value at least three times higher than for any of the 38 control states., Comment: 6 pages, 3 figures. Accepted at NeurIPS 2019 Workshop on Causal Machine Learning
- Published
- 2019
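The synthetic prediction step described above, conditioning a Gaussian-process prior on pre-treatment data to predict the post-treatment counterfactual, can be sketched with standard GP regression. Here an RBF kernel with hand-set hyperparameters stands in for the paper's kernel, which is instead estimated non-parametrically from the power spectrum of the control units:

```python
import numpy as np

def gp_counterfactual(t_pre, y_pre, t_post, ell=5.0, amp=1.0, noise=0.1):
    """Posterior mean and covariance of a GP conditioned on pre-treatment
    data, evaluated at post-treatment times. Kernel choice and defaults are
    illustrative stand-ins, not the paper's estimated kernel."""
    def k(a, b):
        return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(t_pre, t_pre) + noise**2 * np.eye(len(t_pre))
    alpha = np.linalg.solve(K, y_pre)
    mean = k(t_post, t_pre) @ alpha
    cov = (k(t_post, t_post)
           - k(t_post, t_pre) @ np.linalg.solve(K, k(t_pre, t_post)))
    return mean, cov
```

Comparing `mean`, with its error covariance `cov`, against the observed post-treatment series is what feeds the hypothesis test for a treatment effect.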
50. Lensing corrections on galaxy-lensing cross correlations and galaxy-galaxy auto correlations
- Author
-
Böhm, Vanessa, Modi, Chirag, and Castorina, Emanuele
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
We study the impact of lensing corrections on modeling cross-correlations between CMB lensing and galaxies, cosmic shear and galaxies, and galaxies in different redshift bins. Estimating the importance of these corrections becomes necessary in light of anticipated high-accuracy measurements of these observables. While higher-order lensing corrections (sometimes also referred to as post-Born corrections) have been shown to be negligibly small for lensing auto-correlations, they have not been studied for cross-correlations. We evaluate the contributing four-point functions without making use of the Limber approximation and compute line-of-sight integrals with the numerically stable and fast FFTlog formalism. We find that the relative size of lensing corrections depends on the respective redshift distributions of the lensing sources and galaxies, but that they are generally small for high signal-to-noise correlations. We point out that a full assessment of the importance of these corrections requires the inclusion of lensing Jacobian terms on the galaxy side. We identify these additional correction terms, but do not evaluate them due to their large number. We argue that they could be potentially important and suggest that their size should be measured in the future with ray-traced simulations. We make our code publicly available., Comment: 26 pages, 6 figures. Code available at https://github.com/VMBoehm/lensing-corrections. Minor updates in text
- Published
- 2019
- Full Text
- View/download PDF
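For context, the Limber approximation that the paper above goes beyond reduces each cross-spectrum to a single line-of-sight integral, $C_\ell \approx \int d\chi\, W_1(\chi) W_2(\chi) P\big((\ell+1/2)/\chi\big)/\chi^2$. A minimal sketch, with the kernels, distance grid, and power spectrum as hypothetical inputs:

```python
import numpy as np

def limber_cl(ell, chi, w1, w2, pk):
    """Limber-approximated cross power spectrum on a comoving-distance grid.
    `pk` is a callable matter power spectrum P(k); the kernels w1, w2 and
    the chi grid are placeholder inputs, not taken from the paper."""
    k = (ell + 0.5) / chi
    integrand = w1 * w2 * pk(k) / chi**2
    # trapezoidal rule over the chi grid
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(chi))
```

The paper's point is that the corrections it studies require evaluating the full four-point functions without this collapse to a single integral, which is where the FFTlog machinery comes in.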