72 results for "Malick A"
Search Results
2. Observation of quantum oscillations, linear magnetoresistance, and crystalline electric field effect in quasi-two-dimensional PrAgBi$_2$
- Author
-
Malick, Sudip, Świątek, Hanna, Winiarski, Michał J, and Klimczuk, Tomasz
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
We report the magnetic and magnetotransport properties, together with electronic band structure calculations, of the Bi square-net system PrAgBi$_2$. The magnetization and heat capacity data confirm the presence of a crystalline electric field (CEF) effect in PrAgBi$_2$. Analysis of the CEF effect using a multilevel energy scheme reveals that the CEF level scheme of PrAgBi$_2$ consists of five singlets and two doublets. The de Haas-van Alphen (dHvA) quantum oscillation data show a single frequency with a very small cyclotron effective mass of approximately 0.11 $m_e$. A nontrivial Berry phase is also observed from the quantum oscillation data. The magnetotransport data show linear and unsaturated magnetoresistance, reaching up to 1060\% at 2 K and 9 T. Notably, there is a crossover from a weak-field quadratic dependence to a high-field linear dependence in the field-dependent magnetoresistance data. The crossover critical field $B^*$ follows a quadratic temperature dependence, indicating the existence of Dirac fermions. The band structure calculations show several Dirac-like linear band dispersions near the Fermi level and a Dirac point close to the Fermi level, located at the Brillouin zone boundary. \textit{Ab initio} calculations allowed us to ascribe the observed dHvA oscillation frequency to a particular feature of the Fermi surface. Our study suggests that layered PrAgBi$_2$ is a plausible candidate for hosting the CEF effect and Dirac fermions in the Bi square net., Comment: 12 pages, 6 figures
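For reference, the standard relations behind the quantum-oscillation analysis described above (textbook formulas, not quoted from the paper; the measured frequency itself is not given in the abstract) are the Onsager relation and the Lifshitz-Kosevich thermal damping factor used to extract the cyclotron mass:

```latex
% Onsager relation: dHvA frequency F vs. extremal Fermi-surface cross-section A_F
F = \frac{\hbar}{2\pi e} A_F
% Lifshitz-Kosevich thermal damping, fitted to extract the cyclotron mass m^*
R_T = \frac{\alpha T m^{*}/(B\, m_e)}{\sinh\!\left[\alpha T m^{*}/(B\, m_e)\right]},
\qquad
\alpha = \frac{2\pi^{2} k_B m_e}{e\hbar} \approx 14.69~\mathrm{T\,K^{-1}}
```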
- Published
- 2025
- Full Text
- View/download PDF
3. Minimally Deformed Regular Bardeen Black Hole Solutions in Rastall Theory
- Author
-
Sharif, M. and Sallah, Malick
- Subjects
General Relativity and Quantum Cosmology - Abstract
In this study, we utilize the minimal geometric deformation technique of gravitational decoupling to extend the regular Bardeen black hole, leading to the derivation of new black hole solutions within the framework of Rastall theory. By decoupling the field equations associated with an extended matter source into two subsystems, we address the first subsystem using the metric components of the regular Bardeen black hole. The second subsystem, incorporating the effects of the additional source, is solved through a constraint imposed by a linear equation of state. By linearly combining the solutions of these subsystems, we obtain two extended models. We then explore the distinct physical properties of these models for specific values of the Rastall and decoupling parameters. Our investigations encompass effective thermodynamic variables such as density and anisotropic pressure, asymptotic flatness, energy conditions, and thermodynamic properties including Hawking temperature, entropy, and specific heat. The results reveal that the spacetimes of both models violate asymptotic flatness. The violation of the energy conditions indicates the presence of exotic matter in both models. Nonetheless, the energy density and radial pressure, as well as the Hawking temperature, exhibit acceptable behavior, while the specific heat and Hessian matrix suggest thermodynamic stability., Comment: 28 pages, 14 figures
- Published
- 2024
4. Model validation and error attribution for a drifting qubit
- Author
-
Gaye, Malick A., Albrecht, Dylan, Young, Steve, Albash, Tameem, and Jacobson, N. Tobias
- Subjects
Quantum Physics ,Condensed Matter - Mesoscale and Nanoscale Physics - Abstract
Qubit performance is often reported in terms of a variety of single-value metrics, each providing a facet of the underlying noise mechanism limiting performance. However, the value of these metrics may drift over long time-scales, and reporting a single number for qubit performance fails to account for the low-frequency noise processes that give rise to this drift. In this work, we demonstrate how we can use the distribution of these values to validate or invalidate candidate noise models. We focus on the case of randomized benchmarking (RB), where typically a single error rate is reported, but this error rate can drift over time when multiple passes of RB are performed. We show that a statistical test as simple as the Kolmogorov-Smirnov statistic on the distribution of RB error rates can be used to rule out noise models, assuming the experiment is performed over a long enough time interval to capture the relevant low-frequency noise. With confidence in a noise model, we show how care must be exercised when performing error attribution using the distribution of drifting RB error rates., Comment: 16 pages, 12 figures
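A minimal sketch of the validation idea, with synthetic stand-ins for the measured RB error rates and for samples drawn from two candidate noise models (the lognormal forms and all parameters here are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for RB error rates collected over many passes.
measured = rng.lognormal(mean=np.log(1e-3), sigma=0.4, size=200)

# Candidate noise model A: reproduces the drift (same spread as the data).
model_a = rng.lognormal(mean=np.log(1e-3), sigma=0.4, size=2000)
# Candidate noise model B: quasi-static noise, far too little drift.
model_b = rng.lognormal(mean=np.log(1e-3), sigma=0.05, size=2000)

# Two-sample Kolmogorov-Smirnov test: a tiny p-value rules the model out.
p_a = stats.ks_2samp(measured, model_a).pvalue
p_b = stats.ks_2samp(measured, model_b).pvalue
print(f"model A p-value: {p_a:.3g}, model B p-value: {p_b:.3g}")
```

The static model B is rejected because it cannot reproduce the spread of the drifting error rates, while model A survives the test.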
- Published
- 2024
5. $\texttt{skwdro}$: a library for Wasserstein distributionally robust machine learning
- Author
-
Vincent, Florian, Azizian, Waïss, Iutzeler, Franck, and Malick, Jérôme
- Subjects
Computer Science - Machine Learning ,Computer Science - Mathematical Software ,Mathematics - Optimization and Control ,90C17, 90C15 ,I.2.6 ,I.2.5 ,G.4 ,G.1.6 - Abstract
We present skwdro, a Python library for training robust machine learning models. The library is based on distributionally robust optimization using optimal transport distances. For ease of use, it features both scikit-learn compatible estimators for popular objectives, as well as a wrapper for PyTorch modules, enabling researchers and practitioners to use it in a wide range of models with minimal code changes. Its implementation relies on an entropic smoothing of the original robust objective in order to ensure maximal model flexibility. The library is available at https://github.com/iutzeler/skwdro, Comment: 6 pages, 1 figure
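As an illustration of the entropic smoothing mentioned above, here is a generic NumPy sketch of a smoothed Wasserstein-robust loss at a single data point (this is not the skwdro API; the sampling scheme and all constants are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def smooth_robust_loss(theta, x, y, rho=0.1, eps=0.01, n_samples=500):
    """Entropic smoothing of a Wasserstein-robust loss at one data point.

    Approximates  sup_z [ loss(theta; z, y) - (1/rho)*(z - x)^2 ]  by the
    log-sum-exp relaxation  eps * log E_z exp((loss - cost/rho) / eps),
    sampling candidate shifts z around the input x. Illustrative sketch
    only; not the skwdro implementation.
    """
    z = x + rng.normal(scale=0.5, size=n_samples)   # candidate data shifts
    gain = (theta * z - y) ** 2 - (z - x) ** 2 / rho
    m = gain.max()                                  # stable log-sum-exp
    return m + eps * np.log(np.mean(np.exp((gain - m) / eps)))

theta, x, y = 1.0, 2.0, 1.5
plain = (theta * x - y) ** 2          # nominal squared loss
robust = smooth_robust_loss(theta, x, y)
print(plain, robust)
```

The smoothed robust loss accounts for adversarial shifts of the input penalized by their transport cost, and is differentiable in `theta`, which is what makes gradient-based training possible.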
- Published
- 2024
6. Personalization of Dataset Retrieval Results using a Metadata-based Data Valuation Method
- Author
-
Ebiele, Malick, Bendechache, Malika, Clinton, Eamonn, and Brennan, Rob
- Subjects
Computer Science - Information Retrieval ,Computer Science - Databases ,H.1.1 - Abstract
In this paper, we propose a novel data valuation method for a Dataset Retrieval (DR) use case in Ireland's national mapping agency. To the best of our knowledge, data valuation has not yet been applied to dataset retrieval. By leveraging metadata and a user's preferences, we estimate the personal value of each dataset to facilitate dataset retrieval and filtering. We then validated the data value-based ranking against the stakeholders' ranking of the datasets. The proposed data valuation method and use case demonstrated that data valuation is promising for dataset retrieval. For instance, the best-performing dataset retrieval based on our approach achieved an NDCG@5 (truncated Normalized Discounted Cumulative Gain at 5) of 0.8207. This study is unique in its exploration of a data valuation-based approach to dataset retrieval and stands out because, unlike most existing methods, our approach is validated using the stakeholders' ranking of the datasets., Comment: 5 pages, 1 figure
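The reported metric can be computed as follows (a standard NDCG@5 implementation with toy relevance grades, not the paper's data):

```python
import math

def ndcg_at_k(relevances, k=5):
    """Truncated Normalized Discounted Cumulative Gain.

    `relevances` lists the graded relevance of the retrieved datasets in
    the order the system ranked them (toy values for illustration).
    """
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# System ranking vs. stakeholder-assigned relevance grades (toy example).
print(ndcg_at_k([3, 2, 3, 0, 1, 2]))  # below 1.0: ranking is not ideal
print(ndcg_at_k([3, 3, 2, 2, 1, 0]))  # exactly 1.0: ideal ordering
```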
- Published
- 2024
7. Large magnetoresistance and first-order phase transition in antiferromagnetic single-crystalline EuAg$_4$Sb$_2$
- Author
-
Malick, Sudip, Świątek, Hanna, Bławat, Joanna, Singleton, John, and Klimczuk, Tomasz
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
We present the results of a thorough investigation of the physical properties of EuAg$_4$Sb$_2$ single crystals using magnetization, heat capacity, and electrical resistivity measurements. High-quality single crystals, which crystallize in a trigonal structure with space group $R\bar{3}m$, were grown using a conventional flux method. Temperature-dependent magnetization measurements along different crystallographic orientations confirm two antiferromagnetic phase transitions around $T_{N1}$ = 10.5 K and $T_{N2}$ = 7.5 K. Isothermal magnetization data exhibit several metamagnetic transitions below these transition temperatures. Antiferromagnetic phase transitions in EuAg$_4$Sb$_2$ are further confirmed by two sharp peaks in the temperature-dependent heat capacity data at $T_{N1}$ and $T_{N2}$, which shift to lower temperatures in the presence of an external magnetic field. Our systematic heat capacity measurements utilizing a long-pulse and single-slope analysis technique allow us to detect a first-order phase transition in EuAg$_4$Sb$_2$ at 7.5 K. The temperature-dependent electrical resistivity data also manifest two features associated with magnetic order. The magnetoresistance exhibits a broad hump due to the field-induced metamagnetic transition. Remarkably, the magnetoresistance keeps increasing without showing any tendency to saturate as the applied magnetic field increases, and it reaches $\sim$20000\% at 1.6 K and 60 T. At high magnetic fields, several magnetic quantum oscillations are observed, indicating a complex Fermi surface. A large negative magnetoresistance of about -55\% is also observed near $T_{N1}$. Moreover, the $H$-$T$ phase diagram constructed using magnetization, heat capacity, and magnetotransport data indicates complex magnetic behavior in EuAg$_4$Sb$_2$., Comment: 10 pages, 6 figures
- Published
- 2024
- Full Text
- View/download PDF
8. What is the long-run distribution of stochastic gradient descent? A large deviations analysis
- Author
-
Azizian, Waïss, Iutzeler, Franck, Malick, Jérôme, and Mertikopoulos, Panayotis
- Subjects
Mathematics - Optimization and Control ,Computer Science - Machine Learning ,Mathematics - Probability ,Statistics - Machine Learning ,Primary 90C15, 90C26, 60F10, secondary 90C30, 68Q32 - Abstract
In this paper, we examine the long-run distribution of stochastic gradient descent (SGD) in general, non-convex problems. Specifically, we seek to understand which regions of the problem's state space are more likely to be visited by SGD, and by how much. Using an approach based on the theory of large deviations and randomly perturbed dynamical systems, we show that the long-run distribution of SGD resembles the Boltzmann-Gibbs distribution of equilibrium thermodynamics with temperature equal to the method's step-size and energy levels determined by the problem's objective and the statistics of the noise. In particular, we show that, in the long run, (a) the problem's critical region is visited exponentially more often than any non-critical region; (b) the iterates of SGD are exponentially concentrated around the problem's minimum energy state (which does not always coincide with the global minimum of the objective); (c) all other connected components of critical points are visited with frequency that is exponentially proportional to their energy level; and, finally (d) any component of local maximizers or saddle points is "dominated" by a component of local minimizers which is visited exponentially more often., Comment: 70 pages, 3 figures; presented in ICML 2024
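The Boltzmann-Gibbs picture can be observed in a toy simulation (the double-well objective and noise scale below are our own illustrative choices, not from the paper): SGD started in the shallow well ends up spending most of its time around the deeper, minimum-energy well.

```python
import random

random.seed(0)

def grad(x):
    # Objective f(x) = 0.25*(x^2 - 1)^2 + 0.1*x: a double well whose
    # left minimum (near x = -1) is the deeper, "minimum energy" state.
    return x * (x * x - 1.0) + 0.1

eta, sigma, n = 0.1, 1.5, 50_000     # step-size eta sets the "temperature"
x = 1.0                              # start in the *shallow* right well
xs = []
for _ in range(n):
    x -= eta * (grad(x) + sigma * random.gauss(0.0, 1.0))  # noisy gradient
    xs.append(x)

# Fraction of the second half of the run spent in the deeper left well.
frac_left = sum(1 for v in xs[n // 2:] if v < 0) / (n // 2)
print(frac_left)
```

Raising `eta` (the temperature) flattens the long-run distribution; lowering it concentrates the iterates exponentially around the deeper well, as statement (b) of the abstract describes.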
- Published
- 2024
9. Delay-tolerant distributed Bregman proximal algorithms
- Author
-
Chraibi, S., Iutzeler, F., Malick, J., and Rogozin, A.
- Subjects
Mathematics - Optimization and Control - Abstract
Many problems in machine learning can be written as the minimization of a sum of individual loss functions over the training examples. These functions are usually differentiable but, in some cases, their gradients are not Lipschitz continuous, which compromises the use of (proximal) gradient algorithms. Fortunately, changing the geometry and using Bregman divergences can alleviate this issue in several applications, such as Poisson linear inverse problems. However, the Bregman operation makes the aggregation of several points and gradients more involved, hindering the distribution of computations for such problems. In this paper, we propose an asynchronous variant of the Bregman proximal-gradient method, able to adapt to any centralized computing system. In particular, we prove that the algorithm copes with arbitrarily long delays and we illustrate its behavior on distributed Poisson inverse problems.
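A sketch of the Bregman geometry at work on a Poisson inverse problem (a synchronous, single-machine mirror step with the Burg entropy; the paper's contribution is the asynchronous distributed variant, which this toy does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson linear inverse problem: recover x > 0 from counts b ~ Poisson(Ax).
# The negative log-likelihood sum(Ax - b*log(Ax)) has no Lipschitz gradient.
A = rng.uniform(0.5, 1.5, size=(30, 3))
x_true = np.array([1.0, 2.0, 0.5])
b = rng.poisson(A @ x_true).astype(float)

def grad(x):
    return A.T @ (1.0 - b / (A @ x))

# Bregman (mirror) step for the Burg entropy h(x) = -sum(log x):
# grad h(x+) = grad h(x) - eta*g  <=>  x+ = x / (1 + eta*x*g),
# which keeps iterates strictly positive with no projection step.
x, eta = np.ones(3), 0.01
for _ in range(5000):
    x = x / (1.0 + eta * x * grad(x))

obj = lambda z: np.sum(A @ z - b * np.log(A @ z))
print(x, obj(x))
```

The multiplicative form of the update is exactly what replaces the Euclidean gradient step once the geometry is changed, and it is this operation whose aggregation across workers the paper makes delay-tolerant.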
- Published
- 2024
10. Universal generalization guarantees for Wasserstein distributionally robust models
- Author
-
Le, Tam and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control ,Statistics - Machine Learning - Abstract
Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have proved that robust models based on the Wasserstein distance enjoy generalization guarantees that do not suffer from the curse of dimensionality. However, these results are either approximate, obtained in specific cases, or based on assumptions difficult to verify in practice. In contrast, we establish exact generalization guarantees that cover a wide range of cases, with arbitrary transport costs and parametric loss functions, including deep learning objectives with nonsmooth activations. We complete our analysis with an excess bound on the robust objective and an extension to Wasserstein robust models with entropic regularizations.
- Published
- 2024
11. Magnetic, thermodynamic, and magnetotransport properties of CeGaGe and PrGaGe single crystals
- Author
-
Ram, Daloo, Malick, Sudip, Hossain, Zakir, and Kaczorowski, Dariusz
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
We investigate the physical properties of high-quality single crystals of CeGaGe and PrGaGe using magnetization, heat capacity, and magnetotransport measurements. A gallium-indium binary flux was used to grow these single crystals, which crystallize in a body-centered tetragonal structure. Magnetic susceptibility data reveal a magnetic phase transition around 6.0 and 19.4 K in CeGaGe and PrGaGe, respectively, which is further confirmed by heat capacity and electrical resistivity data. A number of additional anomalies have been observed below the ordering temperature in the magnetic susceptibility data, indicating a complex magnetic structure. The magnetic measurements also reveal a strong magnetocrystalline anisotropy in both compounds. Our detailed analysis of the crystalline electric field (CEF) effect as observed in magnetic susceptibility and heat capacity data suggests that the $J$ = 5/2 multiplet of CeGaGe splits into three doublets, while the $J$ = 4 degenerate ground state of PrGaGe splits into five singlets and two doublets. The estimated energy levels from the CEF analysis are consistent with the magnetic entropy., Comment: 10 pages, 5 figures
- Published
- 2024
- Full Text
- View/download PDF
12. Study of Teacher Coaching Based on Classroom Videos: Impacts on Student Achievement and Teachers' Practices. Appendix. NCEE-2022-006a
- Author
-
National Center for Education Evaluation and Regional Assistance (NCEE) (ED/IES), Mathematica, Clark, Melissa, Max, Jeffrey, James-Burdumy, Susanne, Robles, Silvia, McCullough, Moira, Burkander, Paul, and Malick, Steven
- Abstract
These are the appendices for the report "Study of Teacher Coaching Based on Classroom Videos: Impacts on Student Achievement and Teachers' Practices. Evaluation Report." This report examined a promising strategy for individualized coaching: professional coaches--rather than district or school staff--providing feedback to teachers based on videos of their instruction. Feedback based on videos gives teachers the opportunity to observe and reflect on their own teaching and allows coaches to show teachers specific moments from their teaching when providing feedback. For this study, 107 elementary schools were randomly divided into three groups: one that received fewer highly structured cycles of focused professional coaching during a single school year (five cycles), one that received more (eight cycles), and one that continued with its usual strategies for supporting teachers. The study compared teachers' experiences and student achievement across the three groups to determine the effectiveness of the two versions of the coaching. This document provides additional details on the coaching provided for the study, the approach to carrying out the study, and the findings presented in the report. The following three appendices are included: (1) The Study's Video-Based Coaching for Teachers; (2) Study Design, Data Collection, and Analytic Methods; and (3) Supplemental Exhibits and Information on Study Findings. [For the full report, see ED619739. For the Study Highlights, see ED619740.]
- Published
- 2022
13. Study of Teacher Coaching Based on Classroom Videos: Impacts on Student Achievement and Teachers' Practices. Evaluation Report. NCEE 2022-006r
- Author
-
National Center for Education Evaluation and Regional Assistance (NCEE) (ED/IES), Mathematica, Clark, Melissa, Max, Jeffrey, James-Burdumy, Susanne, Robles, Silvia, McCullough, Moira, Burkander, Paul, and Malick, Steven
- Abstract
Helping teachers become more effective in the classroom is a high priority for educators and policymakers. A growing body of evidence suggests that individualized coaching focused on general teaching practices can improve teachers' instruction and student achievement. However, little is known about the benefits of specific approaches to coaching, including who is doing the coaching, how coaches observe teachers' instruction, and how or how often coaches provide feedback to teachers. This study examined one promising strategy for individualized coaching: professional coaches--rather than district or school staff--providing feedback to teachers based on videos of their instruction. Feedback based on videos gives teachers the opportunity to observe and reflect on their own teaching and allows coaches to show teachers specific moments from their teaching when providing feedback. For this study, about 100 elementary schools were randomly divided into three groups: one that received fewer highly structured cycles of focused professional coaching during a single school year (five cycles), one that received more (eight cycles), and one that continued with its usual strategies for supporting teachers. The study compared teachers' experiences and student achievement across the three groups to determine the effectiveness of the two versions of the coaching. Key findings include: (1) Five coaching cycles based on videos of teachers' instruction improved students' achievement, including for novice teachers and those with weaker classroom practices at the start of the study; (2) Eight cycles of coaching was not effective. Eight cycles of the coaching did not affect student achievement, perhaps because teachers had less time during each cycle to work on the practices being addressed; and (3) The study's coaching changed the type of feedback that teachers received. 
Compared to those who did not receive the study's coaching, teachers who received the coaching were more likely to report receiving feedback that focused on specific teaching practices, included strategies to use in their classrooms, and provided opportunities to observe and reflect on their teaching. [For the Study Highlights, see ED619740. For the appendix, see ED619742.]
- Published
- 2022
14. Exact Generalization Guarantees for (Regularized) Wasserstein Distributionally Robust Models
- Author
-
Azizian, Waïss, Iutzeler, Franck, and Malick, Jérôme
- Subjects
Computer Science - Machine Learning ,Statistics - Machine Learning - Abstract
Wasserstein distributionally robust estimators have emerged as powerful models for prediction and decision-making under uncertainty. These estimators provide attractive generalization guarantees: the robust objective obtained from the training distribution is an exact upper bound on the true risk with high probability. However, existing guarantees either suffer from the curse of dimensionality, are restricted to specific settings, or lead to spurious error terms. In this paper, we show that these generalization guarantees actually hold on general classes of models, do not suffer from the curse of dimensionality, and can even cover distribution shifts at testing. We also prove that these results carry over to the newly-introduced regularized versions of Wasserstein distributionally robust problems., Comment: 49 pages, 2 figures; to be presented at the 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023)
- Published
- 2023
15. The rate of convergence of Bregman proximal methods: Local geometry vs. regularity vs. sharpness
- Author
-
Azizian, Waïss, Iutzeler, Franck, Malick, Jérôme, and Mertikopoulos, Panayotis
- Subjects
Mathematics - Optimization and Control ,Computer Science - Machine Learning ,Primary 65K15, 90C33, secondary 68Q25, 68Q32 - Abstract
We examine the last-iterate convergence rate of Bregman proximal methods - from mirror descent to mirror-prox and its optimistic variants - as a function of the local geometry induced by the prox-mapping defining the method. For generality, we focus on local solutions of constrained, non-monotone variational inequalities, and we show that the convergence rate of a given method depends sharply on its associated Legendre exponent, a notion that measures the growth rate of the underlying Bregman function (Euclidean, entropic, or other) near a solution. In particular, we show that boundary solutions exhibit a stark separation of regimes between methods with a zero and non-zero Legendre exponent: the former converge at a linear rate, while the latter converge, in general, sublinearly. This dichotomy becomes even more pronounced in linearly constrained problems where methods with entropic regularization achieve a linear convergence rate along sharp directions, compared to convergence in a finite number of steps under Euclidean regularization., Comment: 30 pages, 3 figures, 2 tables
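The linear rate for entropic methods at sharp boundary solutions can be seen on a toy linear problem over the simplex (our own illustrative setup, not an example from the paper):

```python
import numpy as np

# Entropic mirror descent on the simplex for a linear loss c.x:
# the minimizer is the vertex with the smallest c, a "sharp" boundary
# solution where the entropic prox-mapping converges linearly.
c = np.array([0.1, 0.5, 0.9])
x = np.full(3, 1.0 / 3.0)
eta = 0.5
gaps = []
for _ in range(100):
    x = x * np.exp(-eta * c)       # prox-mapping of the entropic Bregman
    x /= x.sum()                   # function is this multiplicative update
    gaps.append(1.0 - x[0])        # distance to the optimal vertex e_1

print(gaps[-1], gaps[-1] / gaps[-2])   # gap shrinks by a constant factor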
- Published
- 2022
- Full Text
- View/download PDF
16. Electronic structure and physical properties of EuAuAs single crystal
- Author
-
Malick, S., Singh, J., Laha, A., Kanchana, V., Hossain, Z., and Kaczorowski, D.
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
High-quality single crystals of EuAuAs were studied by means of powder x-ray diffraction, magnetization, magnetic susceptibility, heat capacity, electrical resistivity and magnetoresistance measurements. The compound crystallizes with a hexagonal structure of the ZrBeSi type (space group $P6_3/mmc$). It orders antiferromagnetically below 6 K due to the magnetic moments of divalent Eu ions. The electrical resistivity exhibits metallic behavior down to 40 K, followed by a sharp increase at low temperatures. The magnetotransport isotherms show a distinct metamagnetic-like transition in concert with the magnetization data. The antiferromagnetic ground state in \mbox{EuAuAs} was corroborated by the \textit{ab initio} electronic band structure calculations. Most remarkably, the calculations revealed the presence of a nodal line without spin-orbit coupling and a Dirac point upon inclusion of spin-orbit coupling. The \textit{Z}$_2$ invariants under the effective time reversal and inversion symmetries make this system a nontrivial topological material. Our findings, combined with the experimental analysis, make EuAuAs a plausible candidate for an antiferromagnetic topological nodal-line semimetal.
- Published
- 2022
- Full Text
- View/download PDF
17. Weak antilocalization effect and triply degenerate state in Cu-doped CaAuAs
- Author
-
Malick, Sudip, Ghosh, Arup, Barman, Chanchal K., Alam, Aftab, Hossain, Z., Mandal, Prabhat, and Nayak, J.
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
The effect of 50\% Cu doping at the Au site in the topological Dirac semimetal CaAuAs is investigated through electronic band structure calculations, electrical resistivity, and magnetotransport measurements. Electronic structure calculations suggest a broken-symmetry-driven topological phase transition from the Dirac to the triple-point state in CaAuAs via alloy engineering. The electrical resistivity of both the CaAuAs and CaAu$_{0.5}$Cu$_{0.5}$As compounds shows metallic behavior. Nonsaturating quasilinear magnetoresistance (MR) behavior is observed in CaAuAs. On the other hand, the MR of the doped compound shows a pronounced cusplike feature in the low-field regime. Such behavior of the MR in CaAu$_{0.5}$Cu$_{0.5}$As is attributed to the weak antilocalization (WAL) effect. The WAL effect is analyzed using different theoretical models, including the semiclassical $\sim\sqrt{B}$ one, which accounts for three-dimensional WAL, and the modified Hikami-Larkin-Nagaoka model. A strong WAL effect is also observed in the longitudinal MR, which is well described by the generalized Altshuler-Aronov model. Our study suggests that the WAL effect originates from weak disorder and the spin-orbit coupled bulk state. Interestingly, we have also observed the signature of a chiral anomaly in the longitudinal MR when both current and field are applied along the $c$ axis. The Hall resistivity measurements indicate that the charge conduction mechanism in these compounds is dominated by holes with a concentration $\sim$10$^{20}$ cm$^{-3}$ and mobility $\sim 10^2$ cm$^2$ V$^{-1}$ s$^{-1}$.
- Published
- 2022
- Full Text
- View/download PDF
18. Large nonsaturating magnetoresistance, weak anti-localization and non-trivial topological states in SrAl$_2$Si$_2$
- Author
-
Malick, Sudip, Sarkar, A. B., Laha, Antu, Anas, M., Malik, V. K., Agarwal, Amit, Hossain, Z., and Nayak, J.
- Subjects
Condensed Matter - Strongly Correlated Electrons - Abstract
We explore the electronic and topological properties of single crystal SrAl$_2$Si$_2$ using magnetotransport experiments in conjunction with first-principles calculations. We find that the temperature-dependent resistivity shows a pronounced peak near 50 K. We observe several remarkable features at low temperatures, such as large non-saturating magnetoresistance, Shubnikov-de Haas oscillations and cusp-like magneto-conductivity. The maximum value of magnetoresistance turns out to be 459\% at 2 K and 12 T. The analysis of the cusp-like feature in magneto-conductivity indicates a clear signature of weak anti-localization. Our Hall resistivity measurements confirm the presence of two types of charge carriers in SrAl$_2$Si$_2$, with low carrier density.
- Published
- 2022
- Full Text
- View/download PDF
19. Harnessing structure in composite nonsmooth minimization
- Author
-
Bareilles, Gilles, Iutzeler, Franck, and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control ,65K10, 90C26, 49Q12, 90C55 - Abstract
We consider the problem of minimizing the composition of a nonsmooth function with a smooth mapping in the case where the proximity operator of the nonsmooth function can be explicitly computed. We first show that this proximity operator can provide the exact smooth substructure of minimizers, not only of the nonsmooth function, but also of the full composite function. We then exploit this proximal identification by proposing an algorithm which combines proximal steps with sequential quadratic programming steps. We show that our method locally identifies the optimal smooth substructure and then converges quadratically. We illustrate its behavior on two problems: the minimization of a maximum of quadratic functions and the minimization of the maximal eigenvalue of a parametrized matrix.
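The proximal identification property is easy to visualize with the simplest prox-friendly nonsmooth term, the l1 norm (a toy stand-in for the max-of-quadratics and maximal-eigenvalue examples in the paper; the problem data are made up):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Composite problem: minimize 0.5*x'Ax - b'x + lam*||x||_1.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
b = np.array([1.0, 0.05])
lam, eta = 0.5, 0.3

x = np.array([5.0, 5.0])
supports = []
for _ in range(50):
    gradient = A @ x - b                        # gradient of the smooth part
    x = soft_threshold(x - eta * gradient, eta * lam)
    supports.append(tuple(np.nonzero(x)[0]))

# After finitely many steps the active set freezes: the prox has
# "identified" the smooth substructure on which the solution lives,
# and a fast (e.g. SQP) method could take over from there.
print(supports[-1], x)
```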
- Published
- 2022
- Full Text
- View/download PDF
20. Push--Pull with Device Sampling
- Author
-
Hsieh, Yu-Guan, Laguel, Yassine, Iutzeler, Franck, and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control ,Computer Science - Distributed, Parallel, and Cluster Computing ,Computer Science - Machine Learning ,Computer Science - Multiagent Systems - Abstract
We consider decentralized optimization problems in which a number of agents collaborate to minimize the average of their local functions by exchanging over an underlying communication graph. Specifically, we place ourselves in an asynchronous model where only a random portion of nodes perform computation at each iteration, while the information exchange can be conducted between all the nodes and in an asymmetric fashion. For this setting, we propose an algorithm that combines gradient tracking with a network-level variance reduction (in contrast to variance reduction within each node). This enables each node to track the average of the gradients of the objective functions. Our theoretical analysis shows that the algorithm converges linearly, when the local objective functions are strongly convex, under mild connectivity conditions on the expected mixing matrices. In particular, our result does not require the mixing matrices to be doubly stochastic. In the experiments, we investigate a broadcast mechanism that transmits information from computing nodes to their neighbors, and confirm the linear convergence of our method on both synthetic and real-world datasets., Comment: In IEEE Transactions on Automatic Control
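A synchronous gradient-tracking sketch conveys the core mechanism (four agents on a ring with a doubly stochastic mixing matrix; the paper's algorithm additionally handles device sampling, asymmetric exchange, and non-doubly-stochastic mixing, which this toy omits):

```python
import numpy as np

# Each agent i holds f_i(x) = 0.5*(x - a_i)^2; the decentralized goal is
# the minimizer of the average, i.e. mean(a).
a = np.array([1.0, 2.0, 3.0, 6.0])
n = len(a)
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])   # doubly stochastic ring mixing

x = np.zeros(n)
y = x - a                              # y_i tracks the average gradient
g_old = x - a
alpha = 0.2
for _ in range(200):
    x = W @ x - alpha * y              # consensus step + tracked gradient
    g_new = x - a
    y = W @ y + g_new - g_old          # gradient-tracking update
    g_old = g_new

print(x)   # every agent approaches mean(a) = 3.0
```

The tracking variable `y` preserves the invariant that its average equals the average of the local gradients, which is what lets every node converge to the global minimizer rather than its own.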
- Published
- 2022
21. Regularization for Wasserstein Distributionally Robust Optimization
- Author
-
Azizian, Waïss, Iutzeler, Franck, and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control - Abstract
Optimal transport has recently proved to be a useful tool in various machine learning applications needing comparisons of probability measures. Among these, applications of distributionally robust optimization naturally involve Wasserstein distances in their models of uncertainty, capturing data shifts or worst-case scenarios. Inspired by the success of the regularization of Wasserstein distances in optimal transport, we study in this paper the regularization of Wasserstein distributionally robust optimization. First, we derive a general strong duality result of regularized Wasserstein distributionally robust problems. Second, we refine this duality result in the case of entropic regularization and provide an approximation result when the regularization parameters vanish., Comment: to appear in ESAIM: Control, Optimization, and Calculus of Variations
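For context, the classical strong duality behind Wasserstein distributionally robust problems (the standard unregularized identity from the literature, which the regularized duality of this work generalizes) reads:

```latex
\sup_{Q \,:\, W_c(Q, \widehat{P}_n) \le \rho} \mathbb{E}_{\xi \sim Q}\!\left[\ell_\theta(\xi)\right]
\;=\;
\inf_{\lambda \ge 0} \left\{ \lambda \rho
+ \mathbb{E}_{\xi \sim \widehat{P}_n}\!\left[\, \sup_{z} \left\{ \ell_\theta(z) - \lambda\, c(z,\xi) \right\} \right] \right\}
```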
- Published
- 2022
22. Teacher Turnover and Access to Effective Teachers in the School District of Philadelphia. REL 2020-037
- Author
-
Regional Educational Laboratory Mid-Atlantic (ED), Mathematica, National Center for Education Evaluation and Regional Assistance (ED), Dillon, Erin, and Malick, Steven
- Abstract
Concerned about the expense of teacher turnover, its disruption to schools and students, and its potential effect on students' access to effective teachers, the School District of Philadelphia partnered with the Regional Educational Laboratory Mid-Atlantic to better understand students' access to effective teachers and the factors related to teacher turnover. This analysis of differences in teacher effectiveness between and within schools in the district found that teachers of economically disadvantaged, Black, and Hispanic students had lower evaluation scores than teachers of non-economically disadvantaged and White students but similar value-added scores (a measure of teacher effectiveness based on student academic growth). The study also found that each year from 2010/11 through 2016/17, an average of 25 percent of the district's teachers left their school and 8 percent left the district. During the first five years of teaching, 77 percent of teachers left their school and 45 percent left the district. Turnover rates were highest for teachers who taught middle school grades, teachers who missed more than 10 days of school a year, teachers who identified as Black, teachers who had previously changed schools, and teachers who had low evaluation ratings. Teacher turnover was higher in schools where teachers had a less positive view of the school climate. School climate mattered more for teachers with higher evaluation ratings than for teachers with lower evaluation ratings. [For the study snapshot, see ED607753; for the appendices, see ED607754.]
- Published
- 2020
23. Superquantile-based learning: a direct approach using gradient-based optimization
- Author
-
Laguel, Yassine, Malick, Jérôme, and Harchaoui, Zaid
- Subjects
Mathematics - Optimization and Control - Abstract
We consider a formulation of supervised learning that endows models with robustness to distributional shifts from training to testing. The formulation hinges upon the superquantile risk measure, also known as the conditional value-at-risk, which has shown promise in recent applications of machine learning and signal processing. We show that, thanks to a direct smoothing of the superquantile function, a superquantile-based learning objective is amenable to gradient-based optimization, using batch optimization algorithms such as gradient descent or quasi-Newton algorithms, or stochastic optimization algorithms such as stochastic gradient algorithms. The companion software SPQR implements the described algorithms in Python and allows practitioners to experiment with superquantile-based supervised learning.
- Published
- 2022
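For a discrete sample, the superquantile (conditional value-at-risk) at the heart of this line of work can be evaluated directly from Rockafellar's variational formula. A minimal sketch (illustrative only, not the SPQR implementation):

```python
import numpy as np

def superquantile(losses, p):
    """Superquantile (CVaR) at level p: the average of the worst (1 - p)
    fraction of losses, computed via Rockafellar's formula
        CVaR_p = min_t  t + E[(losses - t)_+] / (1 - p),
    whose minimum is attained at the p-quantile for a discrete sample."""
    t = np.quantile(losses, p)
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - p)

losses = np.array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
cvar = superquantile(losses, 0.8)   # average of the worst 20%: (8 + 9)/2 = 8.5
```

Smoothing this piecewise-linear function of t by infimal convolution, as the abstract describes, is what makes it amenable to gradient-based training.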
24. Superquantiles at Work: Machine Learning Applications and Efficient Subgradient Computation
- Author
-
Laguel, Yassine, Pillutla, Krishna, Malick, Jérôme, and Harchaoui, Zaid
- Subjects
Mathematics - Optimization and Control - Abstract
R. Tyrrell Rockafellar and collaborators introduced, in a series of works, new regression modeling methods based on the notion of superquantile (or conditional value-at-risk). These methods have been influential in economics, finance, management science, and operations research in general. Recently, they have been the subject of a renewed interest in machine learning, to address issues of distributional robustness and fair allocation. In this paper, we review some of these new applications of the superquantile, with references to recent developments. These applications involve nonsmooth superquantile-based objective functions that admit explicit subgradient calculations. To make these superquantile-based functions amenable to the gradient-based algorithms popular in machine learning, we show how to smooth them by infimal convolution and describe numerical procedures to compute the gradients of the smooth approximations. We put the approach into perspective by comparing it to other smoothing techniques and by illustrating it on toy examples.
- Published
- 2022
25. Federated Learning with Superquantile Aggregation for Heterogeneous Data
- Author
-
Pillutla, Krishna, Laguel, Yassine, Malick, Jérôme, and Harchaoui, Zaid
- Subjects
Computer Science - Machine Learning ,Mathematics - Optimization and Control ,Statistics - Machine Learning - Abstract
We present a federated learning framework that is designed to robustly deliver good predictive performance across individual clients with heterogeneous data. The proposed approach hinges upon a superquantile-based learning objective that captures the tail statistics of the error distribution over heterogeneous clients. We present a stochastic training algorithm that interleaves differentially private client filtering with federated averaging steps. We prove finite time convergence guarantees for the algorithm: $O(1/\sqrt{T})$ in the nonconvex case in $T$ communication rounds and $O(\exp(-T/\kappa^{3/2}) + \kappa/T)$ in the strongly convex case with local condition number $\kappa$. Experimental results on benchmark datasets for federated learning demonstrate that our approach is competitive with classical ones in terms of average error and outperforms them in terms of tail statistics of the error., Comment: Machine Learning Journal, Special Issue on Safe and Fair Machine Learning (To appear)
- Published
- 2021
- Full Text
- View/download PDF
26. The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities
- Author
-
Azizian, Waïss, Iutzeler, Franck, Malick, Jérôme, and Mertikopoulos, Panayotis
- Subjects
Mathematics - Optimization and Control ,Computer Science - Machine Learning ,65K15, 90C33 (Primary) 68Q25, 68Q32 (Secondary) - Abstract
In this paper, we analyze the local convergence rate of optimistic mirror descent methods in stochastic variational inequalities, a class of optimization problems with important applications to learning theory and machine learning. Our analysis reveals an intricate relation between the algorithm's rate of convergence and the local geometry induced by the method's underlying Bregman function. We quantify this relation by means of the Legendre exponent, a notion that we introduce to measure the growth rate of the Bregman divergence relative to the ambient norm near a solution. We show that this exponent determines both the optimal step-size policy of the algorithm and the optimal rates attained, explaining in this way the differences observed for some popular Bregman functions (Euclidean projection, negative entropy, fractional power, etc.)., Comment: 31 pages, 3 figures, 1 table; to be presented at the 34th Annual Conference on Learning Theory (COLT 2021)
- Published
- 2021
27. Optimization in Open Networks via Dual Averaging
- Author
-
Hsieh, Yu-Guan, Iutzeler, Franck, Malick, Jérôme, and Mertikopoulos, Panayotis
- Subjects
Mathematics - Optimization and Control ,Computer Science - Machine Learning ,Computer Science - Multiagent Systems - Abstract
In networks of autonomous agents (e.g., fleets of vehicles, scattered sensors), the problem of minimizing the sum of the agents' local functions has received a lot of interest. We tackle this distributed optimization problem in the case of open networks, where agents can join and leave the network at any time. Leveraging recent online optimization techniques, we propose and analyze the convergence of a decentralized asynchronous optimization method for open networks., Comment: In 60th IEEE Conference on Decision and Control (CDC 2021); 7 pages, 1 figure
- Published
- 2021
28. Chance constrained problems: a bilevel convex optimization perspective
- Author
-
Laguel, Yassine, Malick, Jérôme, and Ackooij, Wim
- Subjects
Mathematics - Optimization and Control - Abstract
Chance constraints are a valuable tool for the design of safe decisions in uncertain environments; they are used to model satisfaction of a constraint with a target probability. However, because of possible non-convexity and non-smoothness, optimizing over a chance constrained set is challenging. In this paper, we establish an exact reformulation of chance constrained problems as bilevel problems with convex lower levels. We then derive a tractable penalty approach, where the penalized objective is a difference-of-convex function that we minimize with a suitable bundle algorithm. We release an easy-to-use open-source Python toolbox implementing the approach, with a special emphasis on fast computational subroutines.
- Published
- 2021
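As background for this abstract, a chance constraint asks that a random inequality hold with a target probability p. For a single scalar constraint this reduces to a quantile condition, which a quick Monte Carlo check makes concrete (a toy sketch with an illustrative constraint, unrelated to the paper's bundle method):

```python
import numpy as np

rng = np.random.default_rng(0)

def chance_satisfied(x, xi, p):
    """Empirical check of the chance constraint P[g(x, xi) <= 0] >= p,
    with the toy constraint g(x, xi) = xi - x: the decision x must
    exceed the random quantity xi with probability at least p."""
    return np.mean(xi - x <= 0.0) >= p

xi = rng.normal(loc=0.0, scale=1.0, size=100_000)
# for this g, the empirical p-quantile of xi is the smallest feasible x
x_feasible = np.quantile(xi, 0.95)
```

The non-convexity and non-smoothness of the feasible set alluded to in the abstract appear as soon as g couples the decision and the randomness less trivially than here.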
29. On the Convexity of Level-sets of Probability Functions
- Author
-
Laguel, Yassine, van Ackooij, Wim, Malick, Jérôme, and Ramalho, Guilherme
- Subjects
Mathematics - Optimization and Control - Abstract
In decision-making problems under uncertainty, probabilistic constraints are a valuable tool to express safety of decisions. They result from taking the probability measure of a given set of random inequalities depending on the decision vector. Even if the original set of inequalities is convex, this favourable property is not immediately transferred to the probabilistically constrained feasible set and may in particular depend on the chosen safety level. In this paper, we provide results guaranteeing the convexity of feasible sets of probabilistic constraints when the safety level is greater than a computable threshold. Our results extend all the existing ones and also cover the case where decision vectors belong to Banach spaces. The key idea in our approach is to reveal the level of underlying convexity in the nominal problem data (e.g., concavity of the probability function) by auxiliary transforming functions. We provide several examples illustrating our theoretical developments.
- Published
- 2021
30. Newton acceleration on manifolds identified by proximal-gradient methods
- Author
-
Bareilles, Gilles, Iutzeler, Franck, and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control - Abstract
Proximal methods are known to identify the underlying substructure of nonsmooth optimization problems. Even more, in many interesting situations, the output of a proximity operator comes with its structure at no additional cost, and convergence is improved once it matches the structure of a minimizer. However, it is impossible in general to know whether the current structure is final or not; such highly valuable information has to be exploited adaptively. To do so, we place ourselves in the case where a proximal gradient method can identify manifolds of differentiability of the nonsmooth objective. Leveraging this manifold identification, we show that Riemannian Newton-like methods can be intertwined with the proximal gradient steps to drastically boost the convergence. We prove the superlinear convergence of the algorithm when solving some nondegenerate nonsmooth nonconvex optimization problems. We provide numerical illustrations on optimization problems regularized by $\ell_1$-norm or trace-norm.
- Published
- 2020
- Full Text
- View/download PDF
31. Multi-Agent Online Optimization with Delays: Asynchronicity, Adaptivity, and Optimism
- Author
-
Hsieh, Yu-Guan, Iutzeler, Franck, Malick, Jérôme, and Mertikopoulos, Panayotis
- Subjects
Computer Science - Machine Learning ,Computer Science - Multiagent Systems ,Mathematics - Optimization and Control - Abstract
In this paper, we provide a general framework for studying multi-agent online learning problems in the presence of delays and asynchronicities. Specifically, we propose and analyze a class of adaptive dual averaging schemes in which agents only need to accumulate gradient feedback received from the whole system, without requiring any between-agent coordination. In the single-agent case, the adaptivity of the proposed method allows us to extend a range of existing results to problems with potentially unbounded delays between playing an action and receiving the corresponding feedback. In the multi-agent case, the situation is significantly more complicated because agents may not have access to a global clock to use as a reference point; to overcome this, we focus on the information that is available for producing each prediction rather than the actual delay associated with each feedback. This allows us to derive adaptive learning strategies with optimal regret bounds, even in a fully decentralized, asynchronous environment. Finally, we also analyze an "optimistic" variant of the proposed algorithm which is capable of exploiting the predictability of problems with a slower variation and leads to improved regret bounds., Comment: Accepted by Journal of Machine Learning Research (JMLR)
- Published
- 2020
32. Nonsmoothness in Machine Learning: specific structure, proximal identification, and applications
- Author
-
Iutzeler, Franck and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control ,Electrical Engineering and Systems Science - Signal Processing ,Statistics - Machine Learning - Abstract
Nonsmoothness is often a curse for optimization; but it is sometimes a blessing, in particular for applications in machine learning. In this paper, we present the specific structure of nonsmooth optimization problems appearing in machine learning and illustrate how to leverage this structure in practice, for compression, acceleration, or dimension reduction. We pay special attention to the presentation, making it concise and easily accessible, with both simple examples and general results.
- Published
- 2020
33. First-order Optimization for Superquantile-based Supervised Learning
- Author
-
Laguel, Yassine, Malick, Jérôme, and Harchaoui, Zaid
- Subjects
Mathematics - Optimization and Control ,Computer Science - Machine Learning ,Statistics - Machine Learning - Abstract
Classical supervised learning via empirical risk (or negative log-likelihood) minimization hinges upon the assumption that the testing distribution coincides with the training distribution. This assumption can be challenged in modern applications of machine learning in which learning machines may operate at prediction time with testing data whose distribution departs from that of the training data. We revisit the superquantile regression method by proposing a first-order optimization algorithm to minimize a superquantile-based learning objective. The proposed algorithm is based on smoothing the superquantile function by infimal convolution. Promising numerical results illustrate the interest of the approach towards safer supervised learning., Comment: 6 pages, 2 figures, 2 tables, presented at IEEE MLSP
- Published
- 2020
34. Randomized Progressive Hedging methods for Multi-stage Stochastic Programming
- Author
-
Bareilles, Gilles, Laguel, Yassine, Grishchenko, Dmitry, Iutzeler, Franck, and Malick, Jérôme
- Subjects
Computer Science - Distributed, Parallel, and Cluster Computing ,Mathematics - Optimization and Control - Abstract
Progressive Hedging is a popular decomposition algorithm for solving multi-stage stochastic optimization problems. A computational bottleneck of this algorithm is that all scenario subproblems have to be solved at each iteration. In this paper, we introduce randomized versions of the Progressive Hedging algorithm able to produce new iterates as soon as a single scenario subproblem is solved. Building on the relation between Progressive Hedging and monotone operators, we leverage recent results on randomized fixed point methods to derive and analyze the proposed methods. Finally, we release the corresponding code as an easy-to-use Julia toolbox and report computational experiments showing the practical interest of randomized algorithms, notably in a parallel context. Throughout the paper, we pay special attention to the presentation, stressing the main ideas and avoiding extra technicalities, in order to make the randomized methods accessible to a broad audience in the Operations Research community.
- Published
- 2020
35. Proximal Gradient methods with Adaptive Subspace Sampling
- Author
-
Grishchenko, Dmitry, Iutzeler, Franck, and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control - Abstract
Many applications in machine learning or signal processing involve nonsmooth optimization problems. This nonsmoothness brings a low-dimensional structure to the optimal solutions. In this paper, we propose a randomized proximal gradient method harnessing this underlying structure. We introduce two key components: i) a random subspace proximal gradient algorithm; ii) an identification-based sampling of the subspaces. Their interplay brings a significant performance improvement on typical learning problems in terms of dimensions explored.
- Published
- 2020
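The "identification" this abstract builds on can be seen with a plain proximal gradient (ISTA) run on a lasso problem: the soft-thresholding prox returns exact zeros, so after enough iterations the iterates carry the low-dimensional support of a solution, which is precisely the structure a subspace-sampling scheme can then exploit. A minimal sketch (the data, regularization level, and function name are illustrative):

```python
import numpy as np

def prox_grad_lasso(A, b, lam, step, n_iters=2000):
    """Proximal gradient (ISTA) on 0.5*||Ax - b||^2 + lam*||x||_1.
    The soft-thresholding prox sets small coordinates exactly to zero,
    so the iterates identify the sparse support of the solution."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = A.T @ (A @ x - b)                                     # smooth gradient
        z = x - step * g                                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox step
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 1.5]          # sparse ground truth
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L the gradient Lipschitz constant
x = prox_grad_lasso(A, b, lam=0.5, step=step)
```

The zeros in the returned iterate are exact (not merely small), which is what makes identification-based subspace sampling well posed.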
36. Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling
- Author
-
Hsieh, Yu-Guan, Iutzeler, Franck, Malick, Jérôme, and Mertikopoulos, Panayotis
- Subjects
Mathematics - Optimization and Control ,Computer Science - Computer Science and Game Theory ,Computer Science - Machine Learning ,65K15, 62L20, 90C15, 90C33 - Abstract
Owing to their stability and convergence speed, extragradient methods have become a staple for solving large-scale saddle-point problems in machine learning. The basic premise of these algorithms is the use of an extrapolation step before performing an update; thanks to this exploration step, extragradient methods overcome many of the non-convergence issues that plague gradient descent/ascent schemes. On the other hand, as we show in this paper, running vanilla extragradient with stochastic gradients may jeopardize its convergence, even in simple bilinear models. To overcome this failure, we investigate a double stepsize extragradient algorithm where the exploration step evolves at a more aggressive time-scale compared to the update step. We show that this modification allows the method to converge even with stochastic gradients, and we derive sharp convergence rates under an error bound condition., Comment: In Advances in Neural Information Processing Systems 33 (NeurIPS 2020); 29 pages, 5 figures
- Published
- 2020
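The two-timescale idea, an exploration step taken with a more aggressive stepsize than the update step, can be sketched on the bilinear toy problem min_x max_y xy (deterministic gradients here for simplicity; the stepsizes and function name are illustrative, not the paper's tuning):

```python
def double_step_eg(x0, y0, gamma, eta, n_iters=200):
    """Extragradient on the bilinear saddle point min_x max_y x*y,
    with an aggressive exploration stepsize gamma and a more
    conservative update stepsize eta (gamma > eta)."""
    x, y = x0, y0
    for _ in range(n_iters):
        # exploration step at the aggressive stepsize gamma
        x_half = x - gamma * y
        y_half = y + gamma * x
        # conservative update using the gradient at the exploration point
        x = x - eta * y_half
        y = y + eta * x_half
    return x, y

x, y = double_step_eg(1.0, 1.0, gamma=0.5, eta=0.1)
```

On this problem the iterates contract toward the saddle point (0, 0) at each step, whereas plain gradient descent/ascent spirals outward.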
37. Growth of SiO2 microparticles by using modified Stober method: Effect of ammonia solution concentration and TEOS concentration
- Author
-
Bhattacharya, Shrestha, Malick, Aishik Basu, Dutta, Mrinal, Srivastava, Sanjay K., Prathap, P., and Rauthan, C. M. S.
- Subjects
Physics - Applied Physics ,Condensed Matter - Materials Science - Abstract
The unique structural features and suitability of SiO2 microparticles in different application areas have attracted worldwide interest in the last few decades. In this report, a classical method known as the Stöber method has been used to synthesize silica microspheres. These microparticles have been synthesized by the reaction of tetraethyl orthosilicate (Si(OC2H5)4, TEOS), the silica precursor, with water in an alcoholic medium (e.g., ethanol) in the presence of KCl electrolyte and ammonia as a catalyst. It has been observed that the size of the microparticles depends closely on the amounts of TEOS and ammonia. A decrease in the size of the microparticles from 2.1 micrometers to 1.7 micrometers has been confirmed as the amount of TEOS increases from 3.5 ml to 6.4 ml. In a similar way, a decrease in the diameter of the microparticles from 2.1 micrometers to 1.7 micrometers has been observed with an increase in the ammonia content from 3 ml to 9 ml., Comment: 4 pages, 2 figures
- Published
- 2019
- Full Text
- View/download PDF
38. On the convergence of single-call stochastic extra-gradient methods
- Author
-
Hsieh, Yu-Guan, Iutzeler, Franck, Malick, Jérôme, and Mertikopoulos, Panayotis
- Subjects
Mathematics - Optimization and Control ,Computer Science - Computer Science and Game Theory ,Computer Science - Machine Learning ,65K15, 62L20, 90C15, 90C33 - Abstract
Variational inequalities have recently attracted considerable interest in machine learning as a flexible paradigm for models that go beyond ordinary loss function minimization (such as generative adversarial networks and related deep learning systems). In this setting, the optimal $\mathcal{O}(1/t)$ convergence rate for solving smooth monotone variational inequalities is achieved by the Extra-Gradient (EG) algorithm and its variants. Aiming to alleviate the cost of an extra gradient step per iteration (which can become quite substantial in deep learning applications), several algorithms have been proposed as surrogates to Extra-Gradient with a \emph{single} oracle call per iteration. In this paper, we develop a synthetic view of such algorithms, and we complement the existing literature by showing that they retain a $\mathcal{O}(1/t)$ ergodic convergence rate in smooth, deterministic problems. Subsequently, beyond the monotone deterministic case, we also show that the last iterate of single-call, \emph{stochastic} extra-gradient methods still enjoys a $\mathcal{O}(1/t)$ local convergence rate to solutions of \emph{non-monotone} variational inequalities that satisfy a second-order sufficient condition., Comment: In Advances in Neural Information Processing Systems 32 (NeurIPS 2019); 24 pages, 3 figures
- Published
- 2019
39. Distributed Learning with Sparse Communications by Identification
- Author
-
Grishchenko, Dmitry, Iutzeler, Franck, Malick, Jérôme, and Amini, Massih-Reza
- Subjects
Mathematics - Optimization and Control ,Computer Science - Distributed, Parallel, and Cluster Computing - Abstract
In distributed optimization for large-scale learning, a major performance limitation comes from the communications between the different entities. In the setting where workers perform computations on local data and a coordinator machine coordinates their updates to minimize a global loss, we present an asynchronous optimization algorithm that efficiently reduces the communications between the coordinator and the workers. This reduction comes from a random sparsification of the local updates. We show that this algorithm converges linearly in the strongly convex case and also identifies optimal strongly sparse solutions. We further exploit this identification to propose an automatic dimension reduction, aptly sparsifying all exchanges between the coordinator and the workers., Comment: v2 is a significant improvement over v1 (titled "Asynchronous Distributed Learning with Sparse Communications and Identification") with new algorithms, results, and discussions
- Published
- 2018
40. A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm
- Author
-
Mishchenko, Konstantin, Iutzeler, Franck, and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control ,Statistics - Machine Learning - Abstract
We develop and analyze an asynchronous algorithm for distributed convex optimization when the objective is a sum of smooth functions, local to each worker, and a non-smooth function. Unlike many existing methods, our distributed algorithm adjusts to various levels of communication costs, delays, machines' computational power, and functions' smoothness. A unique feature is that the stepsizes depend neither on communication delays nor on the number of machines, which is highly desirable for scalability. We prove that the algorithm converges linearly in the strongly convex case, and provide guarantees of convergence for the non-strongly convex case. The obtained rates are the same as those of the vanilla proximal gradient algorithm over an introduced epoch sequence that subsumes the delays of the system. We provide numerical results on large-scale machine learning problems to demonstrate the merits of the proposed method., Comment: to appear in SIAM Journal on Optimization
- Published
- 2018
41. Model Consistency for Learning with Mirror-Stratifiable Regularizers
- Author
-
Fadili, Jalal, Garrigos, Guillaume, Malick, Jérôme, and Peyré, Gabriel
- Subjects
Mathematics - Optimization and Control - Abstract
Low-complexity non-smooth convex regularizers are routinely used to impose some structure (such as sparsity or low-rank) on the coefficients for linear predictors in supervised learning. Model consistency consists then in selecting the correct structure (for instance support or rank) by regularized empirical risk minimization. It is known that model consistency holds under appropriate non-degeneracy conditions. However such conditions typically fail for highly correlated designs and it is observed that regularization methods tend to select larger models. In this work, we provide the theoretical underpinning of this behavior using the notion of mirror-stratifiable regularizers. This class of regularizers encompasses the most well-known in the literature, including the $\ell_1$ or trace norms. It brings into play a pair of primal-dual models, which in turn allows one to locate the structure of the solution using a specific dual certificate. We also show how this analysis is applicable to optimal solutions of the learning problem, and also to the iterates computed by a certain class of stochastic proximal-gradient algorithms., Comment: 14 pages, 4 figures
- Published
- 2018
42. On the Proximal Gradient Algorithm with Alternated Inertia
- Author
-
Iutzeler, Franck and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control ,Statistics - Machine Learning - Abstract
In this paper, we investigate the attractive properties of the proximal gradient algorithm with inertia. Notably, we show that using alternated inertia yields monotonically decreasing functional values, which contrasts with usual accelerated proximal gradient methods. We also provide convergence rates for the algorithm with alternated inertia, based on local geometric properties of the objective function. The results are put into perspective by discussions of several extensions and by illustrations on common regularized problems., Comment: to appear in Journal of Optimization Theory and Applications (Springer)
- Published
- 2018
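The alternated-inertia scheme can be sketched on a lasso objective: an inertial (extrapolation) step is applied only on even iterations, and every iteration ends with a standard proximal gradient step (the problem data, inertia parameter, and function name are illustrative, not from the paper):

```python
import numpy as np

def alternated_inertia_pg(A, b, lam, n_iters=300):
    """Proximal gradient on 0.5*||Ax - b||^2 + lam*||x||_1 where the
    inertial extrapolation is applied only every other iteration."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    step, beta = 1.0 / L, 0.5
    x_prev = x = np.zeros(A.shape[1])
    vals = []
    for k in range(n_iters):
        # inertia only on even iterations; plain iterate otherwise
        y = x + beta * (x - x_prev) if k % 2 == 0 else x
        z = y - step * A.T @ (A @ y - b)                          # gradient step
        x_prev, x = x, np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        vals.append(0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x)))
    return x, vals

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 8))
b = rng.normal(size=30)
x, vals = alternated_inertia_pg(A, b, lam=0.1)
```

On this well-conditioned instance the recorded objective values settle to the minimum, in line with the monotonicity behavior the abstract highlights for alternated inertia.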
43. Complete descriptions of the tautological rings of the moduli spaces of curves of genus lower or equal to 4 with marked points
- Author
-
Camara, Malick
- Subjects
Mathematics - Algebraic Geometry - Abstract
We study here the tautological rings of the moduli spaces of compact Riemann surfaces of genus 1, 2, 3, and 4 with marked points. The paper presents complete descriptions of these rings by describing the groups in all degrees.
- Published
- 2017
44. Sensitivity Analysis for Mirror-Stratifiable Convex Functions
- Author
-
Fadili, Jalal, Malick, Jérôme, and Peyré, Gabriel
- Subjects
Mathematics - Optimization and Control ,Computer Science - Computer Vision and Pattern Recognition ,Statistics - Machine Learning ,65K05, 65K10, 90C25, 90C31 - Abstract
This paper provides a set of sensitivity analysis and activity identification results for a class of convex functions with a strong geometric structure, which we coined "mirror-stratifiable". These functions are such that there is a bijection between a primal and a dual stratification of the space into partitioning sets, called strata. This pairing is crucial to track the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms. This class of functions encompasses all regularizers routinely used in signal and image processing, machine learning, and statistics. We show that this "mirror-stratifiable" structure enjoys a nice sensitivity theory, allowing us to study stability of solutions of optimization problems to small perturbations, as well as activity identification of first-order proximal splitting-type algorithms. Existing results in the literature typically assume that, under a non-degeneracy condition, the active set associated to a minimizer is stable to small perturbations and is identified in finite time by optimization schemes. In contrast, our results do not require any non-degeneracy assumption: in consequence, the optimal active set is not necessarily stable anymore, but we are able to track precisely the set of identifiable strata. We show that these results have crucial implications when solving challenging ill-posed inverse problems via regularization, a typical scenario where the non-degeneracy condition is not fulfilled. Our theoretical results, illustrated by numerical simulations, allow us to characterize the instability behaviour of the regularized solutions, by locating the set of all low-dimensional strata that can potentially be identified by these solutions.
- Published
- 2017
45. Overlap Coefficients Based on Kullback-Leibler Divergence: Exponential Populations Case
- Author
-
Dhaker, Hamza, Ngom, Papa, and Mbodj, Malick
- Subjects
Statistics - Methodology - Abstract
This article is devoted to the study of overlap measures for the densities of two exponential populations. Various overlapping coefficients are considered, namely Matusita's measure $\rho$, Morisita's measure $\lambda$, and Weitzman's measure $\Delta$. A new overlap measure $\Lambda$ based on the Kullback-Leibler measure is proposed. The invariance property and a method of statistical inference for these coefficients are also presented. Taylor series approximations are used to construct confidence intervals for the overlap measures. The bias and mean square error properties of the estimators are studied through a simulation study.
- Published
- 2017
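For two exponential densities, Matusita's coefficient has a simple closed form obtained by direct integration of the square-root product of the densities, which a numerical check confirms. A sketch (the function name and rate parameters are illustrative):

```python
import numpy as np

def matusita_exponential(l1, l2):
    """Matusita's overlap rho = integral of sqrt(f * g) for two exponential
    densities with rate parameters l1 and l2; integrating
    sqrt(l1 * l2) * exp(-(l1 + l2) * x / 2) over [0, inf) gives the
    closed form 2 * sqrt(l1 * l2) / (l1 + l2)."""
    return 2.0 * np.sqrt(l1 * l2) / (l1 + l2)

# numerical check by trapezoidal integration on a fine grid
x = np.linspace(0.0, 40.0, 400_000)
f = 1.0 * np.exp(-1.0 * x)            # Exp(1) density
g = 3.0 * np.exp(-3.0 * x)            # Exp(3) density
y = np.sqrt(f * g)
dx = x[1] - x[0]
rho_numeric = float(np.sum(0.5 * (y[:-1] + y[1:])) * dx)
```

The coefficient lies in (0, 1], equals 1 exactly when the two rates coincide, and decreases as the populations separate.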
46. Recommendations for quantifying and reducing uncertainty in climate projections of species distributions
- Author
-
Stephanie Brodie, James A. Smith, Barbara A. Muhling, Lewis A. K. Barnett, Gemma Carroll, Paul Fiedler, Steven J. Bograd, Elliott L. Hazen, Michael G. Jacox, Kelly S. Andrews, Cheryl L. Barnes, Lisa G. Crozier, Jerome Fiechter, Alexa Fredston, Melissa A. Haltuch, Chris J. Harvey, Elizabeth Holmes, Melissa A. Karp, Owen R. Liu, Michael J. Malick, Mercedes Pozo Buil, Kate Richerson, Christopher N. Rooper, Jameal Samhouri, Rachel Seary, Rebecca L. Selden, Andrew R. Thompson, Desiree Tommasi, Eric J. Ward, and Isaac C. Kaplan
- Published
- 2022
- Full Text
- View/download PDF
47. Disclosure of violence against women and girls in Senegal
- Author
-
Peterman, Amber, Dione, Malick, Le Port, Agnès, Briaux, Justine, Lamesse, Fatma, and Hidrobo, Melissa
- Published
- 2023
- Full Text
- View/download PDF
48. C’est la vie!: Mixed impacts of an edutainment television series in West Africa
- Author
-
Dione, Malick, Heckert, Jessica, Hidrobo, Melissa, Le Port, Agnès, Peterman, Amber, and Seye, Moustapha
- Published
- 2023
- Full Text
- View/download PDF
49. Locally symmetric submanifolds lift to spectral manifolds
- Author
-
Daniilidis, Aris, Malick, Jerome, and Sendov, Hristo
- Subjects
Mathematics - Optimization and Control ,Mathematics - Differential Geometry - Abstract
In this work we prove that every locally symmetric smooth submanifold gives rise to a naturally defined smooth submanifold of the space of symmetric matrices, called a spectral manifold, consisting of all matrices whose ordered vector of eigenvalues belongs to the locally symmetric manifold. We also present an explicit formula for the dimension of the spectral manifold in terms of the dimension and intrinsic properties of the locally symmetric manifold.
- Published
- 2012
50. Projection methods in conic optimization
- Author
-
Henrion, Didier and Malick, Jérôme
- Subjects
Mathematics - Optimization and Control - Abstract
There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. Those conic projections are in turn the work-horse of a range of algorithms in conic optimization, having a variety of applications in science, finance and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization, and applications in polynomial optimization. This is a presentation of the material of several recent research articles; we aim here at clarifying the ideas, presenting them in a general framework, and pointing out important techniques.
- Published
- 2011