167 results for "Bocquet, Marc"
Search Results
2. Powering AI at the edge: A robust, memristor-based binarized neural network with near-memory computing and miniaturized solar cell
- Author
- Jebali, Fadi, Majumdar, Atreya, Turck, Clément, Harabi, Kamel-Eddine, Faye, Mathieu-Coumba, Muhr, Eloi, Walder, Jean-Pierre, Bilousov, Oleksandr, Michaud, Amadéo, Vianello, Elisa, Hirtzlin, Tifenn, Andrieu, François, Bocquet, Marc, Collin, Stéphane, Querlioz, Damien, and Portal, Jean-Michel
- Published
- 2024
- Full Text
- View/download PDF
3. Parameter sensitivity analysis of a sea ice melt pond parametrisation and its emulation using neural networks
- Author
- Driscoll, Simon, Carrassi, Alberto, Brajard, Julien, Bertino, Laurent, Bocquet, Marc, and Ólason, Einar Örn
- Published
- 2024
- Full Text
- View/download PDF
4. Bayesian inversion of emissions from large urban fire using in situ observations
- Author
- Launay, Emilie, Hergault, Virginie, Bocquet, Marc, Dumont Le Brazidec, Joffrey, and Roustan, Yelva
- Published
- 2024
- Full Text
- View/download PDF
5. A memristor-based Bayesian machine
- Author
- Harabi, Kamel-Eddine, Hirtzlin, Tifenn, Turck, Clément, Vianello, Elisa, Laurent, Raphaël, Droulez, Jacques, Bessière, Pierre, Portal, Jean-Michel, Bocquet, Marc, and Querlioz, Damien
- Published
- 2023
- Full Text
- View/download PDF
6. Morphology and reliability aspects of 40 nm eSTM™ architecture
- Author
- Melul, Franck, Marca, Vincenzo Della, Bocquet, Marc, Akbal, Madjid, Laine, Pierre, Trenteseaux, Frederique, Mantelli, Marc, Hesse, Marjorie, Regnier, Arnaud, Niel, Stephan, and La Rosa, Francesco
- Published
- 2021
- Full Text
- View/download PDF
7. Improving Numerical Dispersion Modelling in Built Environments with Data Assimilation Using the Iterative Ensemble Kalman Smoother
- Author
- Defforge, Cécile L., Carissimo, Bertrand, Bocquet, Marc, Bresson, Raphaël, and Armand, Patrick
- Published
- 2021
- Full Text
- View/download PDF
8. Multivariate state and parameter estimation with data assimilation applied to sea-ice models using a Maxwell elasto-brittle rheology.
- Author
- Chen, Yumeng, Smith, Polly, Carrassi, Alberto, Pasmans, Ivo, Bertino, Laurent, Bocquet, Marc, Finn, Tobias Sebastian, Rampal, Pierre, and Dansereau, Véronique
- Subjects
- PARAMETER estimation, KALMAN filtering, DRAG coefficient, RHEOLOGY
- Abstract
In this study, we investigate the fully multivariate state and parameter estimation through idealised simulations of a dynamics-only model that uses the novel Maxwell elasto-brittle (MEB) sea-ice rheology and in which we estimate not only the sea-ice concentration, thickness and velocity, but also its level of damage, internal stress and cohesion. Specifically, we estimate the air drag coefficient and the so-called damage parameter of the MEB model. Mimicking the realistic observation network with different combinations of observations, we demonstrate that various issues can potentially arise in a complex sea-ice model, especially in instances for which the external forcing dominates the model forecast error growth. Even though further investigation will be needed using an operational (a coupled dynamics–thermodynamics) sea-ice model, we show that, with the current observation network, it is possible to improve both the observed and the unobserved model state forecast and parameter accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
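The joint state-parameter estimation described in the abstract above relies on state augmentation: an uncertain parameter (there, the air drag coefficient or the damage parameter) is appended to the state vector, so that observing the state also updates the parameter through the ensemble cross-covariance. The sketch below is a minimal stochastic ensemble Kalman filter on an invented scalar model, not the authors' sea-ice setup; the dynamics, parameter values, and observation network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented scalar dynamics with an uncertain multiplicative parameter
# theta (a stand-in for, e.g., an air drag coefficient).
theta_true = 0.7
def model(x, theta):
    return theta * x + 1.0          # one forecast step

# Truth run and noisy observations of the state only
obs_err = 0.1
x_true, obs = 0.0, []
for _ in range(50):
    x_true = model(x_true, theta_true)
    obs.append(x_true + obs_err * rng.standard_normal())

# Augmented-state stochastic EnKF: each member carries (x, theta);
# theta persists through the forecast and is updated via cov(theta, x).
N = 100
ens_x = rng.standard_normal(N)
ens_th = 0.5 + 0.2 * rng.standard_normal(N)   # biased parameter prior
for y in obs:
    ens_x = model(ens_x, ens_th)              # forecast
    P = np.cov(np.vstack([ens_x, ens_th]))    # 2x2 ensemble covariance
    S = P[0, 0] + obs_err**2                  # innovation variance
    innov = y + obs_err * rng.standard_normal(N) - ens_x  # perturbed obs
    ens_x += (P[0, 0] / S) * innov
    ens_th += (P[1, 0] / S) * innov

# ens_th.mean() should now be close to theta_true
```

The same mechanism generalises to multivariate sea-ice state vectors: the parameter is never observed directly, and all its information comes from the forecast-induced correlation between the parameter and the observed variables.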
9. MCMC methods applied to the reconstruction of the autumn 2017 Ruthenium-106 atmospheric contamination source
- Author
- Dumont Le Brazidec, Joffrey, Bocquet, Marc, Saunier, Olivier, and Roustan, Yelva
- Published
- 2020
- Full Text
- View/download PDF
10. On Temporal Scale Separation in Coupled Data Assimilation with the Ensemble Kalman Filter
- Author
- Tondeur, Maxime, Carrassi, Alberto, Vannitsem, Stephane, and Bocquet, Marc
- Published
- 2020
- Full Text
- View/download PDF
11. Data-driven surrogate modeling of high-resolution sea-ice thickness in the Arctic.
- Author
- Durand, Charlotte, Finn, Tobias Sebastian, Farchi, Alban, Bocquet, Marc, Boutin, Guillaume, and Ólason, Einar
- Subjects
- SEA ice, DEEP learning, SUPERVISED learning, LEAD time (Supply chain management)
- Abstract
A novel generation of sea-ice models with elasto-brittle rheologies, such as neXtSIM, can represent sea-ice processes with an unprecedented accuracy at the mesoscale for resolutions of around 10 km. As these models are computationally expensive, we introduce supervised deep learning techniques for surrogate modeling of the sea-ice thickness from neXtSIM simulations. We adapt a convolutional U-Net architecture to an Arctic-wide setup by taking the land–sea mask with partial convolutions into account. Trained to emulate the sea-ice thickness at a lead time of 12 h, the neural network can be iteratively applied to predictions for up to 1 year. The improvements of the surrogate model over a persistence forecast persist from 12 h to roughly 1 year, with improvements of up to 50 % in the forecast error. Moreover, the predictability gain for the sea-ice thickness measured against the daily climatology extends to over 6 months. By using atmospheric forcings as additional input, the surrogate model can represent advective and thermodynamical processes which influence the sea-ice thickness and the growth and melting therein. While iterating, the surrogate model experiences diffusive processes which result in a loss of fine-scale structures. However, this smoothing increases the coherence of large-scale features and thereby the stability of the model. Therefore, based on these results, we see huge potential for surrogate modeling of state-of-the-art sea-ice models with neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
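The land-sea-mask handling mentioned in the abstract above uses partial convolutions, which renormalise each convolution window by the number of valid (sea) pixels it contains. Below is a minimal numpy/scipy sketch of that masking idea with a fixed mean filter; the actual surrogate learns its kernels inside a U-Net, so only the masking mechanics are shown, and the toy field is invented.

```python
import numpy as np
from scipy.signal import convolve2d

# "Partial convolution" as a mean filter that ignores masked (land)
# pixels: conv(field * mask) / conv(mask), with zeros where no sea
# pixel falls inside the window.
def partial_mean_filter(field, sea_mask, k=3):
    kernel = np.ones((k, k))
    num = convolve2d(field * sea_mask, kernel, mode="same")
    den = convolve2d(sea_mask.astype(float), kernel, mode="same")
    out = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return out * sea_mask  # land pixels stay zero

# 4x4 toy thickness field; the right half is land
thickness = np.arange(16, dtype=float).reshape(4, 4)
sea = np.zeros((4, 4), bool)
sea[:, :2] = True
smoothed = partial_mean_filter(thickness, sea)
```

Because `den` counts the valid pixels per window, coastal pixels are averaged only over their sea neighbours instead of being dragged towards zero by the land mask.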
12. Deep learning applied to CO2 power plant emissions quantification using simulated satellite images.
- Author
- Dumont Le Brazidec, Joffrey, Vanderbecken, Pierre, Farchi, Alban, Broquet, Grégoire, Kuhlmann, Gerrit, and Bocquet, Marc
- Subjects
- DEEP learning, REMOTE-sensing images, POWER plants, CARBON dioxide, CONVOLUTIONAL neural networks, GREENHOUSE gases, AIR pollutants
- Abstract
The quantification of emissions of greenhouse gases and air pollutants through the inversion of plumes in satellite images remains a complex problem that current methods can only assess with significant uncertainties. The anticipated launch of the CO2M (Copernicus Anthropogenic Carbon Dioxide Monitoring) satellite constellation in 2026 is expected to provide high-resolution images of CO2 (carbon dioxide) column-averaged mole fractions (XCO2), opening up new possibilities. However, the inversion of future CO2 plumes from CO2M will encounter various obstacles. A challenge is the low CO2 plume signal-to-noise ratio due to the variability in the background and instrumental errors in satellite measurements. Moreover, uncertainties in the transport and dispersion processes further complicate the inversion task. To address these challenges, deep learning techniques, such as neural networks, offer promising solutions for retrieving emissions from plumes in XCO2 images. Deep learning models can be trained to identify emissions from plume dynamics simulated using a transport model. It then becomes possible to extract relevant information from new plumes and predict their emissions. In this paper, we develop a strategy employing convolutional neural networks (CNNs) to estimate the emission fluxes from a plume in a pseudo-XCO2 image. Our dataset used to train and test such methods includes pseudo-images based on simulations of hourly XCO2, NO2 (nitrogen dioxide), and wind fields near various power plants in eastern Germany, tracing plumes from anthropogenic and biogenic sources. CNN models are trained to predict emissions from three power plants that exhibit diverse characteristics. The power plants used to assess the deep learning model's performance are not used to train the model.
We find that the CNN model outperforms state-of-the-art plume inversion approaches, achieving highly accurate results with an absolute error about half of that of the cross-sectional flux method and an absolute relative error of ∼ 20 % when only the XCO2 and wind fields are used as inputs. Furthermore, we show that our estimations are only slightly affected by the absence of NO2 fields or a detection mechanism as additional information. Finally, interpretability techniques applied to our models confirm that the CNN automatically learns to identify the XCO2 plume and to assess emissions from the plume concentrations. These promising results suggest a high potential of CNNs in estimating local CO2 emissions from satellite images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
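For context on the baseline mentioned in the abstract above: the cross-sectional flux method estimates emissions as wind speed times the plume enhancement integrated along a transect perpendicular to the wind. The sketch below is schematic only; the conversion factor `ppm_to_kg_per_m2` is a placeholder, not the real ppm-to-column-mass conversion, and the transect selection is trivialised.

```python
import numpy as np

# Cross-sectional flux baseline, schematically:
# emission ~ wind_speed * integral of enhancement along a transect.
def cross_sectional_flux(xco2, background, wind_speed, pixel_width_m,
                         row, ppm_to_kg_per_m2=1.0):
    enhancement = np.clip(xco2[row] - background, 0.0, None)   # ppm
    line_density = (enhancement * ppm_to_kg_per_m2 * pixel_width_m).sum()
    return wind_speed * line_density   # kg/s under the placeholder factor

# Synthetic Gaussian plume cross-section on a flat 400-ppm background
x = np.linspace(-5, 5, 101)
img = 400.0 + 2.0 * np.exp(-0.5 * x**2)[None, :].repeat(3, axis=0)
q = cross_sectional_flux(img, background=400.0, wind_speed=3.0,
                         pixel_width_m=1000.0, row=1)
```

The abstract's point is that the CNN roughly halves the absolute error of this kind of estimate, which is sensitive to the background value, the transect placement, and the assumed wind speed.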
13. Optimising assimilation of hydrographic profiles into isopycnal ocean models with ensemble data assimilation
- Author
- Wang, Yiguo, Counillon, François, Bethke, Ingo, Keenlyside, Noel, Bocquet, Marc, and Shen, Mao-lin
- Published
- 2017
- Full Text
- View/download PDF
14. Bridging classical data assimilation and optimal transport.
- Author
- Bocquet, Marc, Vanderbecken, Pierre J., Farchi, Alban, Dumont Le Brazidec, Joffrey, and Roustan, Yelva
- Subjects
- INTERPOLATION spaces, ANALYSIS of covariance, COVARIANCE matrices, TRUCKLOAD shipping
- Abstract
Because optimal transport acts as displacement interpolation in physical space rather than as interpolation in value space, it can potentially avoid double penalty errors. As such it provides a very attractive metric for the comparison of non-negative physical fields – the Wasserstein distance – which could further be used in data assimilation for the geosciences. The algorithmic and numerical implementations of such a distance are, however, not straightforward. Moreover, its theoretical formulation within typical data assimilation problems faces conceptual challenges, resulting in scarce contributions on the topic in the literature. We formulate the problem in a way that offers a unified view on both classical data assimilation and optimal transport. The resulting OTDA framework accounts for both classical sources of prior error, background and observation, together with a Wasserstein barycentre between states that stand in for the background and the observation. We show that the hybrid OTDA analysis can be decomposed as a simpler OTDA problem involving a single Wasserstein distance, followed by a Wasserstein barycentre problem which ignores the prior errors and can be seen as a McCann interpolant. We also propose a less enlightening but straightforward solution to the full OTDA problem, which includes the derivation of its analysis error covariance matrix. Thanks to these theoretical developments, we are able to extend the classical 3D-Var/BLUE paradigm at the core of most classical data assimilation schemes. The resulting formalism is very flexible and can account for sparse, noisy observations and non-Gaussian error statistics. It is illustrated by simple one- and two-dimensional examples that show the richness of the new types of analysis offered by this unification. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
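The double-penalty issue motivating the abstract above is easy to reproduce in one dimension: a plume displaced in space incurs a large pixel-wise L2 error, while the Wasserstein distance grows only linearly with the displacement. A small illustration using `scipy.stats.wasserstein_distance` (the helper treats the fields as unnormalised 1-D distributions); this is a toy demonstration, not the paper's OTDA formalism.

```python
import numpy as np
from scipy.stats import wasserstein_distance

x = np.linspace(0.0, 10.0, 501)
plume = lambda c: np.exp(-0.5 * ((x - c) / 0.3) ** 2)

a, b = plume(4.0), plume(4.5)           # identical plumes, shifted by 0.5
l2_same = np.linalg.norm(a - plume(4.0))     # 0: same field
l2_shifted = np.linalg.norm(a - b)           # large, despite same shape

# Wasserstein distance: grid positions weighted by field values
w_shifted = wasserstein_distance(x, x, a, b)        # ~ the shift, 0.5
w_double = wasserstein_distance(x, x, a, plume(5.0))  # ~ twice the shift
```

The L2 norm penalises the displaced plume twice (once where it is missing, once where it wrongly appears), whereas the Wasserstein distance measures how far mass must be moved, which is exactly the displacement.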
15. Representation learning with unconditional denoising diffusion models for dynamical systems.
- Author
- Finn, Tobias Sebastian, Disson, Lucas, Farchi, Alban, Bocquet, Marc, and Durand, Charlotte
- Subjects
- ARTIFICIAL neural networks, DYNAMICAL systems, ATTRACTORS (Mathematics), DEEP learning, RANDOM noise theory, RUNNING speed, NONLINEAR dynamical systems
- Abstract
We propose denoising diffusion models for data-driven representation learning of dynamical systems. In this type of generative deep learning, a neural network is trained to denoise and reverse a diffusion process, where Gaussian noise is added to states from the attractor of a dynamical system. Iteratively applied, the neural network can then map samples from isotropic Gaussian noise to the state distribution. We showcase the potential of such neural networks in experiments with the Lorenz 63 system. Trained for state generation, the neural network can produce samples almost indistinguishable from those on the attractor. The model has thereby learned an internal representation of the system, applicable to tasks other than state generation. As a first task, we fine-tune the pre-trained neural network for surrogate modelling by retraining its last layer and keeping the remaining network as a fixed feature extractor. In these low-dimensional settings, such fine-tuned models perform similarly to deep neural networks trained from scratch. As a second task, we apply the pre-trained model to generate an ensemble out of a deterministic run. Diffusing the run, and then iteratively applying the neural network, conditions the state generation, which allows us to sample from the attractor in the run's neighbouring region. To control the resulting ensemble spread and Gaussianity, we tune the diffusion time and, thus, the sampled portion of the attractor. While easier to tune, this proposed ensemble sampler can outperform tuned static covariances in ensemble optimal interpolation. Therefore, these two applications show that denoising diffusion models are a promising way towards representation learning for dynamical systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
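The forward (noising) half of the diffusion process described above can be sketched directly: states from a Lorenz 63 trajectory are blended with Gaussian noise according to a variance schedule, so late diffusion times are nearly isotropic noise while early times stay close to the attractor. The reverse, learned half (the neural network) is what the paper trains and is not reproduced here; the schedule values below are generic DDPM-style defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Lorenz 63 trajectory (simple Euler integration) as "attractor" states
def lorenz63(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

states = lorenz63(2000)[500:]                       # drop the spin-up
states = (states - states.mean(0)) / states.std(0)  # normalise

# Linear beta schedule and cumulative alpha-bar
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
abar = np.cumprod(1.0 - betas)

# Forward diffusion: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
def diffuse(x0, t):
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

late = diffuse(states, T - 1)   # almost pure isotropic Gaussian noise
early = diffuse(states, 10)     # still close to the attractor states
```

Tuning the diffusion time, as the abstract describes for the ensemble sampler, amounts to choosing how far along this forward process the deterministic run is pushed before the learned reverse process pulls samples back to the attractor.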
16. Multivariate state and parameter estimation with data assimilation on sea-ice models using a Maxwell-Elasto-Brittle rheology.
- Author
- Chen, Yumeng, Smith, Polly, Carrassi, Alberto, Pasmans, Ivo, Bertino, Laurent, Bocquet, Marc, Finn, Tobias Sebastian, Rampal, Pierre, and Dansereau, Véronique
- Subjects
- PARAMETER estimation, SEA ice, RHEOLOGY, KALMAN filtering, DRAG coefficient
- Abstract
In this study, we investigate the fully multivariate state and parameter estimation through idealised simulations of a dynamics-only model that uses the novel Maxwell-Elasto-Brittle (MEB) sea ice rheology and in which we estimate not only the sea ice concentration, thickness and velocity, but also its level of damage, internal stress and cohesion. Specifically, we estimate the air drag coefficient and the so-called damage parameter of the MEB model. Mimicking the realistic observation network with different combinations of observations, we demonstrate that various issues can potentially arise in a complex sea ice model, especially in instances for which the external forcing dominates the model forecast error growth. Even though further investigation will be needed using an operational (a coupled dynamics-thermodynamics) sea ice model, we show that, with the current observation network, it is possible to improve both the observed and the unobserved model state forecast and parameter accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Online Model Error Correction With Neural Networks in the Incremental 4D‐Var Framework.
- Author
- Farchi, Alban, Chrust, Marcin, Bocquet, Marc, Laloyaux, Patrick, and Bonavita, Massimo
- Subjects
- LONG-range weather forecasting, NUMERICAL weather forecasting, MACHINE learning, ONLINE education, ERROR correction (Information theory), PRIOR learning, STATISTICAL models
- Abstract
Recent studies have demonstrated that it is possible to combine machine learning with data assimilation to reconstruct the dynamics of a physical model partially and imperfectly observed. The surrogate model can be defined as a hybrid combination where a physical model based on prior knowledge is enhanced with a statistical model estimated by a neural network (NN). The training of the NN is typically done offline, once a large enough data set of model state estimates is available. By contrast, with online approaches the surrogate model is improved each time a new system state estimate is computed. Online approaches naturally fit the sequential framework encountered in geosciences where new observations become available with time. In a recent methodology paper, we have developed a new weak-constraint 4D-Var formulation which can be used to train a NN for online model error correction. In the present article, we develop a simplified version of that method, in the incremental 4D-Var framework adopted by most operational weather centers. The simplified method is implemented in the European Centre for Medium-Range Weather Forecasts (ECMWF) Object-Oriented Prediction System, with the help of a newly developed Fortran NN library, and tested with a two-layer two-dimensional quasi-geostrophic model. The results confirm that online learning is effective and yields a more accurate model error correction than offline learning. Finally, the simplified method is compatible with future applications to state-of-the-art models such as the ECMWF Integrated Forecasting System. Plain Language Summary: We have recently proposed a general framework for combining data assimilation (DA) and machine learning (ML) techniques to train a neural network for online model error correction. In the present article, we develop a simplified version of this online training method, compatible with future applications to more realistic models.
Using numerical illustrations, we show that the new method is effective and yields a more accurate model error correction than the usual offline learning approach. The results show the potential of incorporating DA and ML tightly, and pave the way toward an application to the Integrated Forecasting System used for operational numerical weather prediction at the European Centre for Medium-Range Weather Forecasts. Key Points: (1) variants of weak-constraint 4D-Var can be used to train neural networks for online model error correction; (2) online learning yields a more accurate model error correction than offline learning; (3) the new, simplified method, developed in the incremental 4D-Var framework, can be easily applied in operational weather models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
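As a toy analogue of the online-versus-offline distinction in the abstract above: the sketch below replaces the neural network with a scalar bias corrector that is re-estimated from each new analysis increment, which is the essence of online model error correction. The dynamics, noise levels, and running-mean estimator are invented for illustration and are not the weak-constraint or incremental 4D-Var machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# The "physical model" misses a constant forcing; a corrector (here a
# running mean instead of a neural network) is updated online from the
# analysis increments and added to subsequent forecasts.
truth_step = lambda x: 0.9 * x + 2.0     # true dynamics
model_step = lambda x: 0.9 * x           # biased physical model

x_true, x_model = 0.0, 0.0
correction, n = 0.0, 0
errors = []
for _ in range(200):
    x_true = truth_step(x_true)
    forecast = model_step(x_model) + correction       # corrected forecast
    analysis = x_true + 0.05 * rng.standard_normal()  # near-perfect "DA"
    increment = analysis - forecast
    errors.append(abs(increment))
    n += 1
    correction += increment / n           # online update of the corrector
    x_model = analysis                    # restart from the analysis

# correction should converge towards the missing forcing (2.0)
```

Each assimilation cycle produces one new training sample (the increment), and the corrector improves as soon as it arrives, which is the property the paper exploits, with a neural network in place of the running mean.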
19. Data-driven surrogate modeling of high-resolution sea-ice thickness in the Arctic.
- Author
- Durand, Charlotte, Finn, Tobias Sebastian, Farchi, Alban, Bocquet, Marc, and Ólason, Einar
- Subjects
- SEA ice, DEEP learning, SUPERVISED learning, LEAD time (Supply chain management)
- Abstract
A novel generation of sea-ice models with elasto-brittle rheologies, such as neXtSIM, can represent sea-ice processes with an unprecedented accuracy at the mesoscale, for resolutions of around 10 km. As these models are computationally expensive, we introduce supervised deep learning techniques for surrogate modeling of the sea-ice thickness from neXtSIM simulations. We adapt a convolutional U-Net architecture to an Arctic-wide setup by taking the land-sea mask with partial convolutions into account. Trained to emulate the sea-ice thickness at a lead time of 12 hours, the neural network can be iteratively applied to predictions for up to a year. The improvements of the surrogate model over a persistence forecast persist from 12 hours to roughly a year, with improvements of up to 50 % in the forecast error. The predictability of the sea-ice thickness, measured against a daily climatology, additionally extends to around 8 months. By using atmospheric forcings as additional input, the surrogate model can represent advective and thermodynamical processes which influence the sea-ice thickness and the growth and melting therein. While iterating, the surrogate model experiences diffusive processes which result in a loss of fine-scale structures. However, this smoothing increases the coherence of large-scale features and thereby the stability of the model. Therefore, based on these results, we see a huge potential for surrogate modelling of state-of-the-art sea-ice models with neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. RRAM-based FPGA for “Normally Off, Instantly On” applications
- Author
- Turkyilmaz, Ogun, Onkaraiah, Santhosh, Reyboz, Marina, Clermidy, Fabien, Hraziia, Anghel, Costin, Portal, Jean-Michel, and Bocquet, Marc
- Published
- 2014
- Full Text
- View/download PDF
21. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations
- Author
- Winiarek, Victor, Bocquet, Marc, Duhanyan, Nora, Roustan, Yelva, Saunier, Olivier, and Mathieu, Anne
- Published
- 2014
- Full Text
- View/download PDF
22. Deep learning applied to CO2 power plant emissions quantification using simulated satellite images.
- Author
- Dumont Le Brazidec, Joffrey, Vanderbecken, Pierre, Farchi, Alban, Broquet, Grégoire, Kuhlmann, Gerrit, and Bocquet, Marc
- Subjects
- DEEP learning, REMOTE-sensing images, POWER plants, GREENHOUSE gases, CONVOLUTIONAL neural networks, COAL-fired power plants, AIR pollutants, WIND power plants
- Abstract
The quantification of emissions of greenhouse gases and air pollutants through the inversion of plumes in satellite images remains a complex problem that current methods can only assess with significant uncertainties. The anticipated launch of the CO2M satellite constellation in 2026 is expected to provide high-resolution images of CO2 column-averaged mole fractions (XCO2), opening up new possibilities. However, the inversion of future CO2 plumes from CO2M will encounter various obstacles. A challenge is the low CO2 plume signal-to-noise ratio, due to the variability of the background and instrumental errors in satellite measurements. Moreover, uncertainties in the transport and dispersion processes further complicate the inversion task. To address these challenges, deep learning techniques, such as neural networks, offer promising solutions for retrieving emissions from plumes in XCO2 images. Deep learning models can be trained to identify emissions from plume dynamics simulated using a transport model. It then becomes possible to extract relevant information from new plumes and predict their emissions. In this paper, we employ convolutional neural networks (CNNs) to estimate the emission fluxes from a plume in a pseudo-XCO2 image. Our dataset used to train and test such methods includes pseudo-images based on simulations of hourly XCO2, NO2, and wind fields near various power plants in eastern Germany, tracing plumes from anthropogenic and biogenic sources. CNN models are trained to predict emissions from three power plants that exhibit diverse characteristics. The power plants used to assess the deep learning model's performance are not used to train the model. We find that the CNN model outperforms state-of-the-art plume inversion approaches, achieving highly accurate results with an absolute error about half of that of the cross-sectional flux method. Furthermore, we show that our estimations are only slightly affected by the absence of NO2 fields or a detection mechanism as additional information. Finally, interpretability techniques applied to our models confirm that the CNN automatically learns to identify the XCO2 plume and to assess emissions from the plume concentrations. These promising results suggest a high potential of CNNs in estimating local CO2 emissions from satellite images. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Deep learning subgrid-scale parametrisations for short-term forecasting of sea-ice dynamics with a Maxwell elasto-brittle rheology.
- Author
- Finn, Tobias Sebastian, Durand, Charlotte, Farchi, Alban, Bocquet, Marc, Chen, Yumeng, Carrassi, Alberto, and Dansereau, Véronique
- Subjects
- DEEP learning, RHEOLOGY, SEA ice, WIND pressure, LEAD time (Supply chain management), FORECASTING
- Abstract
We introduce a proof of concept to parametrise the unresolved subgrid scale of sea-ice dynamics with deep learning techniques. Instead of parametrising single processes, a single neural network is trained to correct all model variables at the same time. This data-driven approach is applied to a regional sea-ice model that accounts exclusively for dynamical processes with a Maxwell elasto-brittle rheology. Driven by an external wind forcing in a 40 km × 200 km domain, the model generates examples of sharp transitions between unfractured and fully fractured sea ice. To correct such examples, we propose a convolutional U-Net architecture which extracts features at multiple scales. We test this approach in twin experiments: the neural network learns to correct forecasts from low-resolution simulations towards high-resolution simulations for a lead time of about 10 min. At this lead time, our approach reduces the forecast errors by more than 75 %, averaged over all model variables. As the most important predictors, we identify the dynamics of the model variables. Furthermore, the neural network extracts localised and direction-dependent features, which point towards the shortcomings of the low-resolution simulations. Applied to correct the forecasts every 10 min, the neural network is run together with the sea-ice model. This improves the short-term forecasts for up to an hour. These results consequently show that neural networks can correct model errors from the subgrid scale for sea-ice dynamics. We therefore see this study as an important first step towards hybrid modelling to forecast sea-ice dynamics on an hourly to daily timescale. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Segmentation of XCO2 images with deep learning: application to synthetic plumes from cities and power plants.
- Author
- Dumont Le Brazidec, Joffrey, Vanderbecken, Pierre, Farchi, Alban, Bocquet, Marc, Lian, Jinghui, Broquet, Grégoire, Kuhlmann, Gerrit, Danjou, Alexandre, and Lauvaux, Thomas
- Subjects
- DEEP learning, ATMOSPHERIC carbon dioxide, CONVOLUTIONAL neural networks, POWER plants, URBAN plants, IMAGE segmentation, COAL-fired power plants
- Abstract
Under the Copernicus programme, an operational CO2 Monitoring Verification and Support system (CO2MVS) is being developed and will exploit data from future satellites monitoring the distribution of CO2 within the atmosphere. Methods for estimating CO2 emissions from significant local emitters (hotspots; i.e. cities or power plants) can greatly benefit from the availability of such satellite images that display the atmospheric plumes of CO2. Indeed, local emissions are strongly correlated to the size, shape, and concentration distribution of the corresponding plume, which is a visible consequence of the emission. The estimation of emissions from a given source can therefore directly benefit from the detection of its associated plumes in the satellite image. In this study, we address the problem of plume segmentation (i.e. the problem of finding all pixels in an image that constitute a city or power plant plume). This represents a significant challenge, as the signal from CO2 plumes induced by emissions from cities or power plants is inherently difficult to detect, since it rarely exceeds values of a few parts per million (ppm) and is perturbed by variable regional CO2 background signals and observation errors. To address this key issue, we investigate the potential of deep learning methods and in particular convolutional neural networks to learn to distinguish plume-specific spatial features from background or instrument features. Specifically, a U-Net algorithm, an image-to-image convolutional neural network with a state-of-the-art encoder, is used to transform an XCO2 field into an image representing the positions of the targeted plume. Our models are trained on hourly 1 km simulated XCO2 fields in the regions of Paris, Berlin, and several power plants in Germany.
Each field represents the plume of the hotspot, with the background consisting of the signal of anthropogenic and biogenic CO2 surface fluxes near to or far from the targeted source and the simulated satellite observation errors. The performance of the deep learning method is thereafter evaluated and compared with a plume segmentation technique based on thresholding in two contexts, namely (1) where the model is trained and tested on data from the same region and (2) where the model is trained and tested in two different regions. In both contexts, our method outperforms the usual segmentation technique based on thresholding and demonstrates its ability to generalise in various cases, with respect to city plumes, power plant plumes, and areas with multiple plumes. Although less accurate than in the first context, the ability of the algorithm to extrapolate on new geographical data is conclusive, paving the way to a promising universal segmentation model trained on a well-chosen sample of power plants and cities and able to detect the majority of the plumes from all of them. Finally, the highly accurate results for segmentation suggest the significant potential of convolutional neural networks for estimating local emissions from spaceborne imagery. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
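The baseline that the U-Net is compared against in the abstract above is plume segmentation by thresholding. A minimal sketch (with an invented background estimate, noise level, and synthetic field) shows why it works on a flat background, and hints at why the variable regional backgrounds mentioned in the abstract defeat it:

```python
import numpy as np

# Thresholding baseline: flag pixels whose enhancement over an estimated
# background exceeds a few times the instrument noise level.
def threshold_segmentation(xco2, noise_std, n_sigma=3.0):
    background = np.median(xco2)          # crude flat-background estimate
    return xco2 - background > n_sigma * noise_std

rng = np.random.default_rng(3)
field = 400.0 + 0.7 * rng.standard_normal((64, 64))  # noisy flat background
field[20:30, 20:40] += 5.0                           # synthetic plume
mask = threshold_segmentation(field, noise_std=0.7)
```

With a flat background the plume is recovered almost perfectly; with a spatially varying background, a single median (or any global threshold) either misses weak plume pixels or floods the mask with background pixels, which is the failure mode that motivates a learned segmenter.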
25. Operation and stability analysis of bipolar OxRRAM-based Non-Volatile 8T2R SRAM as solution for information back-up
- Author
- Hraziia, Makosiej, Adam, Palma, Giorgio, Portal, Jean-Michel, Bocquet, Marc, Thomas, Olivier, Clermidy, Fabien, Reyboz, Marina, Onkaraiah, Santhosh, Muller, Christophe, Deleruyelle, Damien, Vladimirescu, Andrei, Amara, Amara, and Anghel, Costin
- Published
- 2013
- Full Text
- View/download PDF
26. Accounting for meteorological biases in simulated plumes using smarter metrics.
- Author
- Vanderbecken, Pierre J., Dumont Le Brazidec, Joffrey, Farchi, Alban, Bocquet, Marc, Roustan, Yelva, Potier, Élise, and Broquet, Grégoire
- Subjects
- GREENHOUSE gases, ATMOSPHERIC transport, ATMOSPHERIC chemistry, ATMOSPHERIC models, CHEMICAL models
- Abstract
In the next few years, numerous satellites with high-resolution instruments dedicated to the imaging of atmospheric gaseous compounds will be launched, to finely monitor emissions of greenhouse gases and pollutants. Processing the resulting images of plumes from cities and industrial plants to infer the emissions of these sources can be challenging. In particular, traditional atmospheric inversion techniques, relying on objective comparisons to simulations with atmospheric chemistry transport models, may poorly fit the observed plume due to modelling errors rather than due to uncertainties in the emissions. The present article discusses how these images can be adequately compared to simulated concentrations to limit the weight of modelling errors due to the meteorology used to analyse the images. For such comparisons, the usual pixel-wise L2 norm may not be suitable, since it does not linearly penalise a displacement between two identical plumes. By definition, such a metric considers a displacement as an accumulation of significant local amplitude discrepancies. This is the so-called double penalty issue. To avoid this issue, we propose three solutions: (i) compensate for position error, due to a displacement, before the local comparison; (ii) use non-local metrics of density distribution comparison; and (iii) use a combination of the first two solutions. All the metrics are evaluated using first a catalogue of analytical plumes and then more realistic plumes simulated with a mesoscale Eulerian atmospheric transport model, with an emphasis on the sensitivity of the metrics to position error and the concentration values within the plumes. As expected, the metrics with the upstream correction are found to be less sensitive to position error in both analytical and realistic conditions. Furthermore, in realistic cases, we evaluate the weight of changes in the norm and the direction of the four-dimensional wind fields in our metric values.
This comparison highlights the link between differences in the synoptic-scale wind direction and position error. Hence the contribution of the latter to our new metrics is reduced, thus limiting misinterpretation. Furthermore, the new metrics also avoid the double penalty issue. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
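As a minimal numerical illustration of the double penalty issue described in the abstract above: a pixel-wise L2 norm penalises a displaced but otherwise identical plume as two large amplitude errors (a miss plus a false alarm). The 1-D Gaussian plumes and all parameters below are invented for illustration, not taken from the article.

```python
import numpy as np

# Two identical 1-D Gaussian plumes, one shifted in position, and one
# co-located plume with a gross (50 %) amplitude error.
x = np.linspace(0.0, 100.0, 501)

def plume(centre, amplitude=1.0, sigma=5.0):
    return amplitude * np.exp(-0.5 * ((x - centre) / sigma) ** 2)

observed = plume(40.0)          # "observed" plume
shifted = plume(55.0)           # identical plume, displaced by 15 units
damped = plume(40.0, 0.5)      # co-located plume, halved amplitude

def l2(a, b):
    # pixel-wise L2 norm of the discrepancy field
    return np.linalg.norm(a - b)

# The displaced-but-identical plume is penalised MORE than the plume with a
# 50 % amplitude error: the motivation for position-aware metrics.
penalty_shift = l2(observed, shifted)
penalty_amp = l2(observed, damped)
```

The comparison `penalty_shift > penalty_amp` holds even though the shifted plume has exactly the right shape and amplitude.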
27. Bayesian transdimensional inverse reconstruction of the Fukushima Daiichi caesium 137 release.
- Author
-
Dumont Le Brazidec, Joffrey, Bocquet, Marc, Saunier, Olivier, and Roustan, Yelva
- Subjects
- *
CESIUM, *MARKOV chain Monte Carlo, *FUKUSHIMA Nuclear Accident, Fukushima, Japan, 2011, *ATMOSPHERIC transport, *ATMOSPHERIC models - Abstract
The accident at the Fukushima Daiichi nuclear power plant (NPP) yielded massive and rapidly varying atmospheric radionuclide releases. The assessment of these releases and of the corresponding uncertainties can be performed using inverse modelling methods that combine an atmospheric transport model with a set of observations and have proven to be very effective for this type of problem. In the case of the Fukushima Daiichi NPP, a Bayesian inversion is particularly suitable because it allows errors to be modelled rigorously and a large number of observations of different natures to be assimilated at the same time. More specifically, one of the major sources of uncertainty in the source assessment of the Fukushima Daiichi NPP releases stems from the temporal representation of the source. To obtain a well time-resolved estimate, we implement a sampling algorithm within a Bayesian framework – the reversible-jump Markov chain Monte Carlo – in order to retrieve the distributions of the magnitude of the Fukushima Daiichi NPP caesium-137 (137Cs) source as well as its temporal discretization. In addition, we develop Bayesian methods that allow us to combine air concentration and deposition measurements as well as to assess the spatio-temporal information of the air concentration observations in the definition of the observation error matrix. These methods are applied to the reconstruction of the posterior distributions of the magnitude and temporal evolution of the 137Cs release. They yield a source estimate between 11 and 24 March as well as an assessment of the uncertainties associated with the observations, the model, and the source estimate. The total reconstructed release activity is estimated to be between 10 and 20 PBq, although it increases when the deposition measurements are taken into account. 
Finally, the variable discretization of the source term yields an almost hourly profile over certain intervals of high temporal variability, signalling identifiable portions of the source term. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
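The Bayesian source inversion described above can be sketched in miniature. The sketch below is a fixed-dimension random-walk Metropolis-Hastings sampler for the release rates of a few time windows; the reversible-jump moves that additionally change the number of windows are omitted, and the source-receptor matrix, observations, and all parameters are synthetic placeholders, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inverse problem: y = H q + noise, with positive release rates q.
H = rng.uniform(0.0, 1.0, size=(50, 4))     # hypothetical source-receptor matrix
true_q = np.array([2.0, 8.0, 5.0, 1.0])     # "true" release rates per time window
y = H @ true_q + rng.normal(0.0, 0.1, 50)   # synthetic observations

def log_post(q, sigma=0.1):
    """Gaussian log-likelihood with a flat positivity prior on q."""
    if np.any(q < 0.0):
        return -np.inf
    r = y - H @ q
    return -0.5 * np.sum(r ** 2) / sigma ** 2

q = np.ones(4)                               # initial guess
lp = log_post(q)
samples = []
for _ in range(20000):
    prop = q + rng.normal(0.0, 0.05, 4)      # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance rule
        q, lp = prop, lp_prop
    samples.append(q.copy())

post_mean = np.mean(samples[10000:], axis=0)  # posterior mean after burn-in
```

With enough observations the posterior mean recovers the synthetic release rates; the posterior spread is what quantifies the uncertainty discussed in the abstract.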
28. Deep learning of subgrid-scale parametrisations for short-term forecasting of sea-ice dynamics with a Maxwell-Elasto-Brittle rheology.
- Author
-
Finn, Tobias Sebastian, Durand, Charlotte, Farchi, Alban, Bocquet, Marc, Chen, Yumeng, Carrassi, Alberto, and Dansereau, Véronique
- Subjects
RHEOLOGY, SEA ice, DEEP learning, ANISOTROPY, GEOPHYSICAL surveys - Abstract
We introduce a scalable approach to parametrise the unresolved subgrid-scale of sea-ice dynamics with deep learning techniques. We apply this data-driven approach to a regional sea-ice model that accounts exclusively for dynamical processes with a Maxwell-Elasto-Brittle rheology. Our channel-like model setup is driven by a wave-like wind forcing, which generates examples of sharp transitions between unfractured and fully fractured sea ice. Using a convolutional U-Net architecture, the parametrising neural network extracts multiscale and anisotropic features and, thus, includes important inductive biases needed for sea-ice dynamics. The neural network is trained to correct all nine model variables at the same time. With the initial and forecast state as input into the neural network, we cast the subgrid-scale parametrisation as model error correction, needed to correct unresolved model dynamics. We test the proposed approach in twin experiments, where forecasts of a low-resolution forecast model are corrected towards high-resolution truth states for a forecast lead time of about 10 min. At this lead time, our approach reduces the forecast errors by more than 75 %, averaged over all model variables. The neural network thereby learns a physically explainable input-to-output relation. Furthermore, cycling the subgrid-scale parametrisation together with the geophysical model improves the short-term forecast up to one hour. We consequently show that neural networks can parametrise the subgrid-scale for sea-ice dynamics. We therefore see this study as an important first step towards hybrid modelling to forecast sea-ice dynamics on an hourly to daily timescale. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. A fast, single-iteration ensemble Kalman smoother for sequential data assimilation.
- Author
-
Grudzien, Colin and Bocquet, Marc
- Subjects
- *
SEQUENTIAL analysis, *KALMAN filtering, *RETROSPECTIVE studies, *METEOROLOGY, *FORECASTING - Abstract
Ensemble variational methods form the basis of the state of the art for nonlinear, scalable data assimilation, yet current designs may not be cost-effective for real-time, short-range forecast systems. We propose a novel estimator in this formalism that is designed for applications in which forecast error dynamics are weakly nonlinear, such as synoptic-scale meteorology. Our method combines the 3D sequential filter analysis and retrospective reanalysis of the classic ensemble Kalman smoother with an iterative ensemble simulation of 4D smoothers. To rigorously derive and contextualize our method, we review related ensemble smoothers in a Bayesian maximum a posteriori narrative. We then develop and intercompare these schemes in the open-source Julia package DataAssimilationBenchmarks.jl, with pseudo-code provided for their implementations. This numerical framework, supporting our mathematical results, produces extensive benchmarks demonstrating the significant performance advantages of our proposed technique. In particular, our single-iteration ensemble Kalman smoother (SIEnKS) is shown to improve prediction/analysis accuracy and to simultaneously reduce the leading-order computational cost of iterative smoothing in a variety of test cases relevant for short-range forecasting. This long work presents our novel SIEnKS and provides a theoretical and computational framework for the further development of ensemble variational Kalman filters and smoothers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
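The core mechanism behind the smoother family discussed above is that the filter's ensemble-space analysis transform can be reapplied to a lagged ensemble, giving a retrospective reanalysis at no extra model-propagation cost. The sketch below implements a standard ETKF analysis with the symmetric square-root transform and reuses it on a lagged ensemble; dimensions and matrices are illustrative placeholders, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, ne = 10, 5, 20                      # state dim, obs dim, ensemble size
H = np.zeros((m, n))
H[np.arange(m), np.arange(m)] = 1.0       # observe the first 5 state variables
R = 0.25 * np.eye(m)                      # observation error covariance

truth = rng.normal(size=n)
y = H @ truth + rng.multivariate_normal(np.zeros(m), R)

E_lag = rng.normal(size=(n, ne))                 # ensemble at an earlier time
E = E_lag + 0.1 * rng.normal(size=(n, ne))       # ensemble at observation time

def etkf_transform(E, y, H, R):
    """Compute the ETKF mean weights w and symmetric square-root transform T."""
    x_mean = E.mean(axis=1, keepdims=True)
    X = (E - x_mean) / np.sqrt(ne - 1)           # normalised anomalies
    L = np.linalg.cholesky(R)
    S = np.linalg.solve(L, H @ X)                # R^{-1/2} H X
    d = np.linalg.solve(L, y - (H @ x_mean).ravel())
    P_tilde = np.linalg.inv(np.eye(ne) + S.T @ S)
    w = P_tilde @ S.T @ d                        # ensemble-space mean update
    vals, vecs = np.linalg.eigh(P_tilde)
    T = vecs @ np.diag(np.sqrt(vals)) @ vecs.T   # symmetric square root
    return w, T

def apply_weights(E, w, T):
    """Apply the filter's weights/transform to any (possibly lagged) ensemble."""
    x_mean = E.mean(axis=1, keepdims=True)
    X = (E - x_mean) / np.sqrt(ne - 1)
    return x_mean + X @ (w[:, None] + np.sqrt(ne - 1) * T)

w, T = etkf_transform(E, y, H, R)
E_a = apply_weights(E, w, T)          # filter analysis at observation time
E_lag_a = apply_weights(E_lag, w, T)  # retrospective smoother reanalysis
```

Reapplying `(w, T)` to `E_lag` is the retrospective reanalysis step; the analysis mean is guaranteed to fit the observations at least as well as the forecast mean.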
30. Segmentation of XCO2 images with deep learning: application to synthetic plumes from cities and power plants.
- Author
-
Le Brazidec, Joffrey Dumont, Vanderbecken, Pierre, Farchi, Alban, Bocquet, Marc, Jinghui Lian, Broquet, Grégoire, Kuhlmann, Gerrit, Danjou, Alexandre, and Lauvaux, Thomas
- Subjects
DEEP learning, URBAN plants, CONVOLUTIONAL neural networks, IMAGE segmentation, REMOTE-sensing images - Abstract
Under the Copernicus programme, an operational CO2 monitoring system (CO2MVS) is being developed and will exploit data from future satellites monitoring the amount of CO2 within the atmosphere. Methods for estimating CO2 emissions from significant local emitters (hotspots, i.e. cities or power plants) can greatly benefit from the availability of such satellite images, displaying atmospheric plumes of CO2. Indeed, local emissions are strongly correlated with the size, shape and concentration distribution of the corresponding plume, the visible consequence of the emission. The estimation of emissions from a given source can therefore directly benefit from the detection of its associated plumes in the satellite image. In this study, we address the problem of plume segmentation, i.e. the problem of finding all pixels in an image that constitute a city or power plant plume. This represents a significant challenge, as the signal from CO2 plumes induced by emissions from cities or power plants is inherently difficult to detect, since it rarely exceeds values of a few ppm and is perturbed by variable regional CO2 background signals and observation errors. To address this key issue, we investigate the potential of deep learning methods and in particular convolutional neural networks to learn to distinguish plume-specific spatial features from background or instrument features. Specifically, a U-net algorithm, an image-to-image convolutional neural network with a state-of-the-art encoder, is used to transform an XCO2 field into an image representing the positions of the targeted plume. Our models are trained on hourly 1 km simulated XCO2 fields in the regions of Paris, Berlin and several German power plants. Each field represents the plume of the hotspot, with the background consisting of the signal of anthropogenic and biogenic CO2 surface fluxes near or far from the targeted source and the simulated satellite observation errors. 
The performance of the deep learning method is thereafter evaluated and compared with a plume segmentation technique based on thresholding in two contexts: the first where the model is trained and tested on data from the same region, and the second where the model is trained and tested in two different regions. In both contexts, our method outperforms the usual segmentation technique based on thresholding and demonstrates its ability to generalise in various cases: city plumes, power plant plumes, and areas with multiple plumes. Although less accurate than in the first context, the ability of the algorithm to extrapolate on new geographical data is conclusive, paving the way to a promising universal segmentation model, trained on a well-chosen sample of power plants and cities, and able to detect the majority of the plumes from all of them. Finally, the highly accurate results for segmentation suggest a significant potential of convolutional neural networks for estimating local emissions from spaceborne imagery. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
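The thresholding baseline that the U-net is compared against in the abstract above can be sketched as follows: flag pixels whose XCO2 enhancement over an estimated background exceeds a multiple of the noise level. The synthetic field, plume shape, noise level, and threshold factor below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 64x64 XCO2 scene: smooth background gradient + elliptical plume
# of a few ppm + Gaussian instrument noise.
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
background = 410.0 + 0.01 * xx                       # ppm, smooth regional signal
plume_mask_true = ((yy - 32) ** 2 / 4 + (xx - 20) ** 2 / 100) < 4.0
plume = np.where(plume_mask_true, 3.0, 0.0)          # few-ppm enhancement
noise = rng.normal(0.0, 0.7, (ny, nx))               # instrument noise
xco2 = background + plume + noise

# Threshold segmentation: background estimated by the scene median, and an
# assumed-known noise level (both crude simplifications).
bg_est = np.median(xco2)
sigma_est = 0.7
segmented = xco2 > bg_est + 2.0 * sigma_est

# Intersection-over-union against the true plume mask
iou = np.logical_and(segmented, plume_mask_true).sum() / \
      np.logical_or(segmented, plume_mask_true).sum()
```

On such a scene the threshold recovers the plume only partially, with false positives wherever the background gradient plus noise crosses the threshold; this is the kind of failure mode a learned segmentation can improve on.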
31. New plume comparison metrics for the inversion of passive gases emissions.
- Author
-
Vanderbecken, Pierre J., Le Brazidec, Joffrey Dumont, Farchi, Alban, Bocquet, Marc, Roustan, Yelva, Potier, Élise, and Broquet, Grégoire
- Subjects
ATMOSPHERIC transport, ATMOSPHERIC chemistry, TRANSPORT theory, ATMOSPHERIC models, CHEMICAL models - Abstract
In the next few years, numerous satellites with high-resolution instruments dedicated to the imaging of atmospheric gaseous compounds will be launched, to finely monitor emissions of greenhouse gases and pollutants. Processing the resulting images of plumes from cities and industrial plants to infer the emissions of these sources can be challenging. In particular, traditional atmospheric inversion techniques, relying on objective comparisons to simulations with atmospheric chemistry transport models, may poorly fit the observed plume due to modelling errors rather than due to uncertainties in the emissions. The present article discusses how these images can be properly compared to simulated concentrations to limit the weight of modelling errors due to the meteorology used to analyse the images. For such comparisons, the usual pixel-wise L2 norm may not be a good option, because it is subject to the double penalty issue inherent to its local definition. This issue is characterised by the conversion of any position shift into significant amplitude discrepancies. To circumvent this issue, we propose to either provide an upstream correction of the position misfit between the observed and simulated plumes in the usual L2 norm or use a non-local metric based on optimal transport theory, such as the Wasserstein distance. All the metrics are evaluated using first a catalogue of analytical plumes and then more realistic plumes simulated with a mesoscale Eulerian atmospheric transport model, with an emphasis on the sensitivity of the metrics to position mismatch and the concentration values within the plumes. As expected, the metrics with the upstream correction are found to be less sensitive to position errors in both analytical and realistic conditions. Furthermore, in realistic cases, we evaluate the weight of changes in the norm and the direction of the four-dimensional wind fields in our metric values. 
This comparison highlights the link between differences in the synoptic-scale wind direction and position error. It is found that discrepancies between two plume images due to wind direction errors in the meteorological conditions are less penalised by our new metrics with the upstream correction than without, thus avoiding the double penalty issue. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
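The Wasserstein distance mentioned in the abstract above has a closed form in one dimension: between two normalised profiles it is the integral of the absolute difference of their cumulative distribution functions, and it grows linearly with a pure displacement, exactly the behaviour that avoids the double penalty issue. The Gaussian plume profiles and grid below are invented for illustration.

```python
import numpy as np

x = np.linspace(0.0, 200.0, 2001)
dx = x[1] - x[0]

def gaussian_plume(centre, sigma=5.0):
    """Normalised 1-D plume profile (integrates to 1 on the grid)."""
    p = np.exp(-0.5 * ((x - centre) / sigma) ** 2)
    return p / (p.sum() * dx)

def wasserstein_1d(a, b):
    """1-D Wasserstein-1 distance via the CDF difference."""
    Fa = np.cumsum(a) * dx
    Fb = np.cumsum(b) * dx
    return np.sum(np.abs(Fa - Fb)) * dx

ref = gaussian_plume(60.0)
w_small = wasserstein_1d(ref, gaussian_plume(70.0))   # 10-unit displacement
w_large = wasserstein_1d(ref, gaussian_plume(90.0))   # 30-unit displacement
```

For identical shifted plumes the distance equals the shift itself (up to discretisation error), so a 30-unit displacement costs exactly three times a 10-unit one, in contrast with the saturating pixel-wise L2 norm.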
32. Diagnosis and impacts of non-Gaussianity of innovations in data assimilation
- Author
-
Pires, Carlos A., Talagrand, Olivier, and Bocquet, Marc
- Published
- 2010
- Full Text
- View/download PDF
33. Voltage-dependent synaptic plasticity: Unsupervised probabilistic Hebbian plasticity rule based on neurons membrane potential.
- Author
-
Garg, Nikhil, Balafrej, Ismael, Stewart, Terrence C., Portal, Jean-Michel, Bocquet, Marc, Querlioz, Damien, Drouin, Dominique, Rouat, Jean, Beilliard, Yann, and Alibart, Fabien
- Subjects
ARTIFICIAL neural networks, MEMBRANE potential, HEBBIAN memory, NEUROPLASTICITY, RECOGNITION (Psychology), NEURONS, GLUTAMATE receptors - Abstract
This study proposes voltage-dependent synaptic plasticity (VDSP), a novel brain-inspired unsupervised local learning rule for the online implementation of Hebb’s plasticity mechanism on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance on the spike of the postsynaptic neuron only, which reduces by a factor of two the number of updates with respect to standard spike-timing-dependent plasticity (STDP). This update is dependent on the membrane potential of the presynaptic neuron, which is readily available as part of the neuron implementation and hence does not require additional memory for storage. Moreover, the update is also regularized on the synaptic weight and prevents explosion or vanishing of weights on repeated stimulation. Rigorous mathematical analysis is performed to draw an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) for the recognition of handwritten digits. We report 85.01 ± 0.76% (Mean ± SD) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves when scaling the network size (89.93 ± 0.41% for 400 output neurons, 90.56 ± 0.27% for 500 neurons), which validates the applicability of the proposed learning rule for spatial pattern recognition tasks. Future work will consider more complicated tasks. Interestingly, the learning rule better adapts than STDP to the frequency of input signal and does not require hand-tuning of hyperparameters. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
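The VDSP mechanism summarised above can be sketched in a few lines: on a postsynaptic spike only, the weight is potentiated or depressed depending on the presynaptic membrane potential (a depolarised potential suggests a recent presynaptic spike), with soft weight bounds providing the regularisation that prevents weight explosion or vanishing. The exact functional form and constants in the paper differ; everything below is a simplified illustration.

```python
def vdsp_update(w, v_pre, v_rest=0.0, lr=0.01, w_max=1.0):
    """One VDSP-style weight update, applied only when the postsynaptic
    neuron spikes. v_pre is the presynaptic membrane potential.
    Soft bounds: potentiation scales with (w_max - w), depression with w."""
    if v_pre > v_rest:
        return w + lr * (v_pre - v_rest) * (w_max - w)   # potentiation
    return w + lr * (v_pre - v_rest) * w                 # depression

w = 0.5
w_pot = vdsp_update(w, v_pre=0.8)    # depolarised presynaptic neuron
w_dep = vdsp_update(w, v_pre=-0.6)   # hyperpolarised presynaptic neuron
```

Because the rule reads only the presynaptic membrane potential at postsynaptic spike time, no per-synapse spike-timing trace needs to be stored, which is the hardware advantage the abstract highlights.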
34. Bayesian transdimensional inverse reconstruction of the 137Cs Fukushima-Daiichi release.
- Author
-
Dumont Le Brazidec, Joffrey, Bocquet, Marc, Saunier, Olivier, and Roustan, Yelva
- Subjects
- *
NUCLEAR power plant accidents, *ATMOSPHERIC transport - Abstract
The accident at the Fukushima-Daiichi nuclear power plant yielded massive and rapidly varying atmospheric radionuclide releases. The assessment of these releases and of the corresponding uncertainties can be performed using inverse modelling methods that combine an atmospheric transport model with a set of observations and have proven to be very effective for this type of problem. In the case of Fukushima-Daiichi, a Bayesian inversion is particularly suitable because it allows errors to be modelled rigorously and a large number of observations of different natures to be assimilated at the same time. More specifically, one of the major sources of uncertainty in the source assessment of the Fukushima-Daiichi releases stems from the temporal representation of the source. To obtain a well time-resolved estimate, we implement an MCMC sampling algorithm within a Bayesian framework, the Reversible-Jump MCMC, in order to retrieve the distributions of the magnitude of the Fukushima-Daiichi 137Cs source as well as its temporal discretisation. In addition, we develop Bayesian methods that allow us to combine air concentration and deposition measurements, as well as to assess the spatio-temporal information of the air concentration observations in the definition of the observation error matrix. These methods are applied to the reconstruction of the posterior distributions of the magnitude and temporal evolution of the 137Cs release. They yield a source estimate between 11 and 24 March, as well as an assessment of the uncertainties associated with the observations, the model and the source estimate. The total reconstructed release activity is estimated to be between 10 and 20 PBq, although it increases when taking into account the deposition measurements. Finally, the variable discretisation of the source term yields an almost hourly profile over certain intervals of high temporal variability, signalling identifiable portions of the source term. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Investigation of hafnium-aluminate alloys in view of integration as interpoly dielectrics of future Flash memories
- Author
-
Molas, Gabriel, Bocquet, Marc, Buckley, Julien, Grampeix, Helen, Gély, Marc, Colonna, Jean-Philippe, Licitra, Christophe, Rochat, Névine, Veyront, Thomas, Garros, Xavier, Martin, François, Brianceau, Pierre, Vidal, Vincent, Bongiorno, Cosimo, Lombardo, Salvatore, Salvo, Barbara De, and Deleonibus, Simon
- Published
- 2007
- Full Text
- View/download PDF
36. Network Models for Chiral Symmetry Classes of Anderson Localisation
- Author
-
Bocquet, Marc and Chalker, J. T.
- Published
- 2003
- Full Text
- View/download PDF
37. A fast, single-iteration ensemble Kalman smoother for sequential data assimilation.
- Author
-
Grudzien, Colin and Bocquet, Marc
- Subjects
- *
KALMAN filtering, *SYNOPTIC meteorology, *FORECASTING - Abstract
Ensemble-variational methods form the basis of the state of the art for nonlinear, scalable data assimilation, yet current designs may not be cost-effective for reducing prediction error in online, short-range forecast systems. We propose a novel, outer-loop optimization of the ensemble-variational formalism for applications in which forecast error dynamics are weakly nonlinear, such as synoptic meteorology. In order to rigorously derive our method and demonstrate its novelty, we review ensemble smoothers that appear throughout the literature in a unified Bayesian maximum-a-posteriori narrative, updating and simplifying some results. After mathematically deriving our technique, we systematically develop and inter-compare all studied schemes in the open-source Julia package DataAssimilationBenchmarks.jl, with pseudo-code provided for these methods. This high-performance numerical framework, supporting our mathematical results, produces extensive benchmarks that demonstrate the significant performance advantages of our proposed technique. In particular, our single-iteration ensemble Kalman smoother is shown both to improve prediction/posterior accuracy and to simultaneously reduce the leading-order cost of iterative, sequential smoothers in a variety of relevant test cases for operational short-range forecasts. This long work is thus intended to present our novel single-iteration ensemble Kalman smoother, and to provide a theoretical and computational framework for the study of sequential, ensemble-variational Kalman filters and smoothers generally. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
38. Model of the Weak Reset Process in HfO x Resistive Memory for Deep Learning Frameworks.
- Author
-
Majumdar, Atreya, Bocquet, Marc, Hirtzlin, Tifenn, Laborieux, Axel, Klein, Jacques-Olivier, Nowak, Etienne, Vianello, Elisa, Portal, Jean-Michel, and Querlioz, Damien
- Subjects
- *
DEEP learning, *NONVOLATILE random-access memory, *MACHINE learning, *HAFNIUM oxide - Abstract
The implementation of current deep learning training algorithms is power-hungry, due to data transfer between memory and logic units. Oxide-based resistive random access memories (RRAMs) are outstanding candidates to implement in-memory computing, which is less power-intensive. Their weak RESET regime is particularly attractive for learning, as it allows tuning the resistance of the devices with remarkable endurance. However, the resistive change behavior in this regime suffers from many fluctuations and is particularly challenging to model, especially in a way compatible with tools used for simulating deep learning. In this work, we present a model of the weak RESET process in hafnium oxide RRAM and integrate this model within the PyTorch deep learning framework. Validated on experiments on a hybrid CMOS/RRAM technology, our model reproduces both the noisy progressive behavior and the device-to-device (D2D) variability. We use this tool to train binarized neural networks (BNNs) for the MNIST handwritten digit recognition task and the CIFAR-10 object classification task. We simulate our model with and without various aspects of device imperfections to understand their impact on the training process and identify that the D2D variability is the most detrimental aspect. The framework can be used in the same manner for other types of memories to identify the device imperfections that cause the most degradation, which can, in turn, be used to optimize the devices to reduce the impact of these imperfections. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
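The device behaviour described above, a noisy, progressive conductance decrease under repeated weak-RESET pulses with device-to-device (D2D) variability, can be sketched with a toy stochastic model. The functional form, decay rates, and noise levels below are illustrative placeholders, not the calibrated PyTorch model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n_devices, n_pulses = 200, 50

# D2D variability: each device gets its own mean relative step size.
step_mean = 0.01 * (1.0 + 0.3 * rng.normal(size=n_devices))
g = np.ones(n_devices)                  # normalised initial conductance

history = np.empty((n_pulses, n_devices))
for k in range(n_pulses):
    # Cycle-to-cycle noise on each pulse, clipped so conductance never grows.
    c2c = 0.005 * rng.normal(size=n_devices)
    step = np.clip(step_mean + c2c, 0.0, None)
    g = np.clip(g - step * g, 0.05, 1.0)  # multiplicative decay with a floor
    history[k] = g

mean_final = history[-1].mean()           # average conductance after 50 pulses
```

Simulating many such devices in parallel is the kind of vectorised model that can be dropped into a training loop to study how cycle-to-cycle noise and D2D variability (reported in the paper as the most detrimental imperfection) affect learning.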
39. Quantification of uncertainties in the assessment of an atmospheric release source applied to the autumn 2017 106Ru event.
- Author
-
Dumont Le Brazidec, Joffrey, Bocquet, Marc, Saunier, Olivier, and Roustan, Yelva
- Subjects
MARKOV chain Monte Carlo, KALMAN filtering, UNCERTAINTY, PREDICATE calculus, DISTRIBUTION (Probability theory), INVERSE problems, COVARIANCE matrices - Abstract
Using a Bayesian framework in the inverse problem of estimating the source of an atmospheric release of a pollutant has proven fruitful in recent years. Through Markov chain Monte Carlo (MCMC) algorithms, the statistical distribution of the release parameters such as the location, the duration, and the magnitude as well as error covariances can be sampled so as to get a complete characterisation of the source. In this study, several approaches are described and applied to better quantify these distributions, and therefore to get a better representation of the uncertainties. First, we propose a method based on ensemble forecasting: physical parameters of both the meteorological fields and the transport model are perturbed to create an enhanced ensemble. In order to account for physical model errors, the importance of each ensemble member is represented by a weight and sampled together with the other variables of the source. Second, once the choice of the statistical likelihood is shown to alter the nuclear source assessment, we suggest several suitable distributions for the errors. Finally, we propose two specific designs of the covariance matrix associated with the observation error. These methods are applied to the source term reconstruction of the 106Ru of unknown origin in Europe in autumn 2017. A posteriori distributions meant to identify the origin of the release, to assess the source term, and to quantify the uncertainties associated with the observations and the model, as well as densities of the weights of the perturbed ensemble, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. Using machine learning to correct model error in data assimilation and forecast applications.
- Author
-
Farchi, Alban, Laloyaux, Patrick, Bonavita, Massimo, and Bocquet, Marc
- Subjects
MACHINE learning, ALGORITHMS, SYSTEM dynamics, DATA modeling, DEEP learning, EARTH sciences - Abstract
The idea of using machine learning (ML) methods to reconstruct the dynamics of a system is the topic of recent studies in the geosciences, in which the key output is a surrogate model meant to emulate the dynamical model. In order to treat sparse and noisy observations in a rigorous way, ML can be combined with data assimilation (DA). This yields a class of iterative methods in which, at each iteration, a DA step assimilates the observations and alternates with a ML step to learn the underlying dynamics of the DA analysis. In this article, we propose to use this method to correct the error of an existing, knowledge‐based model. In practice, the resulting surrogate model is a hybrid model between the original (knowledge‐based) model and the ML model. We demonstrate the feasibility of the method numerically using a two‐layer, two‐dimensional, quasi‐geostrophic channel model. Model error is introduced by means of perturbed parameters. The DA step is performed using the strong‐constraint 4D‐Var algorithm, while the ML step is performed using deep learning tools. The ML models are able to learn a substantial part of the model error and the resulting hybrid surrogate models produce better short‐ to mid‐range forecasts. Furthermore, using the hybrid surrogate models for DA yields a significantly better analysis than using the original model. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
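The hybrid-model idea above, a knowledge-based model complemented by a data-driven correction trained on (state, model error) pairs, can be sketched in one dimension. Here the "ML step" is plain least squares instead of a deep network, and both dynamical systems are contrived illustrations, not the quasi-geostrophic model of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def truth_step(x):
    """Toy 'true' one-step dynamics."""
    return 0.9 * x + 0.5 * np.sin(x)

def physical_model(x):
    """Imperfect knowledge-based model: the sinusoidal term is missing."""
    return 0.9 * x

# Training pairs: states and the one-step model error they induce.
x_train = rng.uniform(-3.0, 3.0, 500)
err = truth_step(x_train) - physical_model(x_train)

# 'ML' surrogate of the error: fit err ~ a*sin(x) + b by least squares.
A = np.column_stack([np.sin(x_train), np.ones_like(x_train)])
coef, *_ = np.linalg.lstsq(A, err, rcond=None)

def hybrid_model(x):
    """Knowledge-based model plus the learned error correction."""
    return physical_model(x) + coef[0] * np.sin(x) + coef[1]

x0 = 1.7
err_phys = abs(truth_step(x0) - physical_model(x0))
err_hyb = abs(truth_step(x0) - hybrid_model(x0))
```

The hybrid forecast error collapses because the learned correction absorbs the missing term, the same structure (at a trivial scale) as learning model error from DA analyses and adding it back to the physical core.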
41. Realization of Hybrid Silicon core/silicon Nitride Shell Nanodots by LPCVD for NVM Application
- Author
-
Colonna, Jean, Molas, Gabriel, Gely, Marc W., Bocquet, Marc, Jalaguier, Eric, Slavo, Barbara De, Grampeix, Helen, Brianceau, Pierre, Yckache, Karim, Papon, Anne-Marie, Auvert, Geoffroy, Bongiorno, Corrado, and Lombardo, Salvatore
- Published
- 2007
- Full Text
- View/download PDF
42. Combining data assimilation and machine learning to infer unresolved scale parametrization.
- Author
-
Brajard, Julien, Carrassi, Alberto, Bocquet, Marc, and Bertino, Laurent
- Subjects
MACHINE learning, KALMAN filtering - Abstract
In recent years, machine learning (ML) has been proposed to devise data-driven parametrizations of unresolved processes in dynamical numerical models. In most cases, the ML training leverages high-resolution simulations to provide a dense, noiseless target state. Our goal is to go beyond the use of high-resolution simulations and train ML-based parametrization using direct data, in the realistic scenario of noisy and sparse observations. The algorithm proposed in this work is a two-step process. First, data assimilation (DA) techniques are applied to estimate the full state of the system from a truncated model. The unresolved part of the truncated model is viewed as a model error in the DA system. In a second step, ML is used to emulate the unresolved part, as a predictor of model error given the state of the system. Finally, the ML-based parametrization model is added to the physical core of the truncated model to produce a hybrid model. The DA component of the proposed method relies on an ensemble Kalman filter while the ML parametrization is represented by a neural network. The approach is applied to the two-scale Lorenz model and to MAOOAM, a reduced-order coupled ocean-atmosphere model. We show that in both cases, the hybrid model yields forecasts with better skill than the truncated model. Moreover, the attractor of the system is significantly better represented by the hybrid model than by the truncated model. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. Quantification of the modelling uncertainties in atmospheric release source assessment and application to the reconstruction of the autumn 2017 Ruthenium 106 source.
- Author
-
Le Brazidec, Joffrey Dumont, Bocquet, Marc, Saunier, Olivier, and Roustan, Yelva
- Abstract
Using a Bayesian framework in the inverse problem of estimating the source of an atmospheric release of a pollutant has proven fruitful in recent years. Through Markov chain Monte Carlo (MCMC) algorithms, the statistical distribution of the release parameters such as the location, the duration, and the magnitude as well as the likelihood covariances can be sampled so as to get a complete characterisation of the source. In this study, several approaches are described and applied to improve on these distributions, and therefore to get a better representation of the uncertainties. First, a method based on ensemble forecasting is proposed: physical parameters of both the meteorological fields and the transport model are perturbed to create an enhanced ensemble. In order to account for model errors, the importance of each ensemble member is represented by a weight and sampled together with the other variables of the source. Secondly, the choice of the statistical likelihood is shown to alter the nuclear source assessment, and several suitable distributions for the errors are recommended. Finally, two advanced designs of the covariance matrix associated with the observation error are proposed. These methods are applied to the case of the detection of Ruthenium 106 of unknown origin in Europe in autumn 2017. A posteriori distributions meant to identify the origin of the release, to assess the source term, to quantify the uncertainties associated with the observations and the model, as well as densities of the weights of the perturbed ensemble, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
44. A Review of Innovation-Based Methods to Jointly Estimate Model and Observation Error Covariance Matrices in Ensemble Data Assimilation.
- Author
-
Tandeo, Pierre, Ailliot, Pierre, Bocquet, Marc, Carrassi, Alberto, Miyoshi, Takemasa, Pulido, Manuel, and Zhen, Yicun
- Subjects
COVARIANCE matrices, RANDOM noise theory, RANDOM variables, MOMENTS method (Statistics), PREDICTION theory, PARAMETER estimation, DATA - Abstract
Data assimilation combines forecasts from a numerical model with observations. Most of the current data assimilation algorithms consider the model and observation error terms as additive Gaussian noise, specified by their covariance matrices Q and R, respectively. These error covariances, and specifically their respective amplitudes, determine the weights given to the background (i.e., the model forecasts) and to the observations in the solution of data assimilation algorithms (i.e., the analysis). Consequently, Q and R matrices significantly impact the accuracy of the analysis. This review aims to present and to discuss, with a unified framework, different methods to jointly estimate the Q and R matrices using ensemble-based data assimilation techniques. Most of the methods developed to date use the innovations, defined as differences between the observations and the projection of the forecasts onto the observation space. These methods are based on two main statistical criteria: 1) the method of moments, in which the theoretical and empirical moments of the innovations are assumed to be equal, and 2) methods that use the likelihood of the observations, themselves contained in the innovations. The reviewed methods assume that innovations are Gaussian random variables, although extension to other distributions is possible for likelihood-based methods. The methods also show some differences in terms of levels of complexity and applicability to high-dimensional systems. The conclusion of the review discusses the key challenges to further develop estimation methods for Q and R. These challenges include taking into account time-varying error covariances, using limited observational coverage, estimating additional deterministic error terms, or accounting for correlated noise. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
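The moment-based criterion reviewed above can be sketched in a few lines: with additive Gaussian errors, the innovation d = y - Hx_b satisfies E[dd^T] = HBH^T + R, so R can be recovered by matching empirical and theoretical innovation moments. The scalar setup and the variance values below are purely illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar setup: background error variance b and observation
# error variance r (values chosen arbitrarily for this sketch).
b_true, r_true = 2.0, 0.5
n = 200_000

# Innovations d = y - H x_b; with additive Gaussian errors their second
# moment satisfies E[d d^T] = H B H^T + R (here, scalars: b + r).
d = rng.normal(0.0, np.sqrt(b_true + r_true), size=n)

# Method of moments: equate the empirical and theoretical moments of the
# innovations, then solve for r assuming b is known.
r_hat = d.var() - b_true
print(r_hat)
```

The likelihood-based methods surveyed in the same review instead maximise the probability of the observations over Q and R, which extends more naturally beyond the Gaussian case.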
45. On the numerical integration of the Lorenz-96 model, with scalar additive noise, for benchmark twin experiments.
- Author
- Grudzien, Colin, Bocquet, Marc, and Carrassi, Alberto
- Subjects
- NUMERICAL integration, NOISE, SYSTEM integration, EXPERIMENTS - Abstract
Relatively little attention has been given to the impact of discretization error on twin experiments in the stochastic form of the Lorenz-96 equations when the dynamics are fully resolved but random. We study a simple form of the stochastically forced Lorenz-96 equations that is amenable to higher-order time-discretization schemes in order to investigate these effects. We provide numerical benchmarks for the overall discretization error, in the strong and weak sense, for several commonly used integration schemes and compare these methods for biases introduced into ensemble-based statistics and filtering performance. We focus on the distinction between strong and weak convergence of the numerical schemes, highlighting which of the two concepts is relevant to the problem at hand. Using the above analysis, we suggest a mathematically consistent framework for the treatment of these discretization errors in ensemble forecasting and data assimilation twin experiments for unbiased and computationally efficient benchmark studies. Pursuant to this, we provide a novel derivation of the order 2.0 strong Taylor scheme for numerically generating the truth twin in the stochastically perturbed Lorenz-96 equations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
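The model studied above can be integrated with a minimal Euler-Maruyama sketch, which is only strong order 0.5 and is exactly the kind of low-order scheme the paper benchmarks against its order 2.0 strong Taylor scheme; step size, noise amplitude, and initial condition below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def l96_drift(x, F=8.0):
    # Lorenz-96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def euler_maruyama_step(x, dt, sigma, rng):
    # One step of dX = f(X) dt + sigma dW with scalar additive noise.
    # Euler-Maruyama has strong order 0.5 only; the paper derives an order
    # 2.0 strong Taylor scheme for generating the truth twin.
    return x + l96_drift(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=x.size)

rng = np.random.default_rng(42)
x = 8.0 + 0.01 * rng.normal(size=40)   # 40 variables, perturbed rest state
for _ in range(2000):                  # integrate 10 model time units
    x = euler_maruyama_step(x, dt=0.005, sigma=0.1, rng=rng)
print(x.shape, np.all(np.isfinite(x)))
```

Because the scheme's strong order is low, a truth twin generated this way carries discretization bias relative to the true stochastic trajectory, which is precisely the effect the benchmarks quantify.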
46. Digital Biologically Plausible Implementation of Binarized Neural Networks With Differential Hafnium Oxide Resistive Memory Arrays.
- Author
- Hirtzlin, Tifenn, Bocquet, Marc, Penkovsky, Bogdan, Klein, Jacques-Olivier, Nowak, Etienne, Vianello, Elisa, Portal, Jean-Michel, and Querlioz, Damien
- Subjects
HAFNIUM oxide, DIGITAL electronics, ERROR-correcting codes, DETECTOR circuits, MEMORY, VISUAL memory - Abstract
The brain performs intelligent tasks with extremely low energy consumption. This work takes its inspiration from two strategies used by the brain to achieve this energy efficiency: the absence of separation between computing and memory functions and reliance on low-precision computation. The emergence of resistive memory technologies indeed provides an opportunity to tightly co-integrate logic and memory in hardware. In parallel, the recently proposed concept of a Binarized Neural Network, where multiplications are replaced by exclusive NOR (XNOR) logic gates, offers a way to implement artificial intelligence using very low precision computation. In this work, we therefore propose a strategy for implementing low-energy Binarized Neural Networks that employs brain-inspired concepts while retaining the energy benefits of digital electronics. We design, fabricate, and test a memory array, including periphery and sensing circuits, that is optimized for this in-memory computing scheme. Our circuit employs hafnium oxide resistive memory integrated in the back end of line of a 130-nm CMOS process, in a two-transistor, two-resistor cell, which allows the exclusive NOR operations of the neural network to be performed directly within the sense amplifiers. We show, based on extensive electrical measurements, that our design allows a reduction in the number of bit errors on the synaptic weights without the use of formal error-correcting codes. We design a whole system using this memory array. We show on standard machine learning tasks (MNIST, CIFAR-10, ImageNet, and an ECG task) that the system has inherent resilience to bit errors. We find that its energy consumption is attractive compared to more standard approaches and that it can use memory devices in regimes where they exhibit particularly low programming energy and high endurance. We conclude the work by discussing how it associates biologically plausible ideas with more traditional digital electronics concepts. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
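The XNOR substitution described in the abstract above can be illustrated in a few lines: with weights and activations in {-1, +1} encoded as bits {0, 1}, the dot product of a binarized neuron reduces to an XNOR followed by a popcount. The function name and the tiny bit vectors below are arbitrary illustrations, not the paper's circuit:

```python
import numpy as np

def binarized_neuron(w_bits, x_bits):
    # Bits encode {-1, +1} as {0, 1}. The +/-1 dot product becomes
    # XNOR + popcount:  dot = 2 * popcount(XNOR(w, x)) - n.
    n = w_bits.size
    matches = np.count_nonzero((w_bits ^ x_bits) == 0)  # XNOR = bit equality
    dot = 2 * matches - n
    return 1 if dot >= 0 else 0  # sign activation, re-encoded as a bit

w = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=np.uint8)
x = np.array([1, 0, 0, 1, 0, 1, 1, 1], dtype=np.uint8)
print(binarized_neuron(w, x))
```

In the fabricated array this XNOR is not computed in software but directly inside the sense amplifiers of the two-transistor, two-resistor memory cells.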
47. On the consistency of the local ensemble square root Kalman filter perturbation update.
- Author
- Bocquet, Marc and Farchi, Alban
- Abstract
We examine the perturbation update step of the ensemble Kalman filters which rely on covariance localisation, and hence have the ability to assimilate non-local observations in geophysical models. We show that the updated perturbations of these ensemble filters are not to be identified with the main empirical orthogonal functions of the analysis covariance matrix, in contrast with the updated perturbations of the local ensemble transform Kalman filter (LETKF). Building on that evidence, we propose a new scheme to update the perturbations of a local ensemble square root Kalman filter (LEnSRF) with the goal to minimise the discrepancy between the analysis covariances and the sample covariances regularised by covariance localisation. The scheme has the potential to be more consistent and to generate updated members closer to the model's attractor (showing fewer imbalances). We show how to solve the corresponding optimisation problem and discuss its numerical complexity. The qualitative properties of the perturbations generated from this new scheme are illustrated using a simple one-dimensional covariance model. Moreover, we demonstrate on the discrete Lorenz–96 and continuous Kuramoto–Sivashinsky one-dimensional low-order models that the new scheme requires significantly less multiplicative inflation, and possibly none at all, to counteract imbalance, compared to the LETKF and the LEnSRF without the new scheme. Finally, we notice a gain in accuracy of the new LEnSRF as measured by the analysis and forecast root mean square errors, despite using well-tuned configurations where such gain is very difficult to obtain. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
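For contrast with the LEnSRF scheme proposed above, the LETKF-style perturbation update that the abstract uses as a reference can be sketched as follows: the analysis anomalies are obtained by right-multiplying the forecast anomalies with the symmetric square root T = (I + S^T R^-1 S)^(-1/2), which reproduces the Kalman analysis covariance exactly. Dimensions, the observation operator, and R below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 5                        # state size, ensemble size (illustrative)
Xf = rng.normal(size=(n, m))       # forecast ensemble
xf = Xf.mean(axis=1, keepdims=True)
Af = (Xf - xf) / np.sqrt(m - 1)    # normalised forecast anomalies

H = np.eye(n)[:4]                  # observe the first 4 variables
R = 0.5 * np.eye(4)                # observation error covariance
S = H @ Af                         # anomalies in observation space

# ETKF/LETKF perturbation update with the symmetric square root
# T = (I + S^T R^-1 S)^(-1/2); the LEnSRF relies on localisation instead.
C = np.eye(m) + S.T @ np.linalg.solve(R, S)
w, V = np.linalg.eigh(C)
T = V @ np.diag(w ** -0.5) @ V.T
Aa = Af @ T

# Consistency check: Aa Aa^T matches the Kalman analysis covariance.
Pa_ens = Aa @ Aa.T
K = Af @ S.T @ np.linalg.inv(S @ S.T + R)
Pa_kf = (np.eye(n) - K @ H) @ (Af @ Af.T)
print(np.allclose(Pa_ens, Pa_kf))
```

This exact identification of the updated perturbations with the analysis covariance is precisely what breaks down once covariance localisation is introduced, which motivates the paper's new optimisation-based update.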
48. Some spectral properties of the one-dimensional disordered Dirac equation
- Author
- Bocquet, Marc
- Published
- 1999
- Full Text
- View/download PDF
50. Diagnosing spatial error structures in CO2 mole fractions and XCO2 column mole fractions from atmospheric transport.
- Author
- Lauvaux, Thomas, Díaz-Isaac, Liza I., Bocquet, Marc, and Bousserez, Nicolas
- Subjects
MOLE fraction, ATMOSPHERIC transport, GREENHOUSE gas analysis, SAMPLING errors, HETEROTROPHIC respiration, ERROR analysis in mathematics, GREENHOUSE gases, WIND speed - Abstract
Atmospheric inversions inform us about the magnitude and variations of greenhouse gas (GHG) sources and sinks from global to local scales. Deployment of observing systems such as spaceborne sensors and ground-based instruments distributed around the globe has started to offer an unprecedented amount of information to estimate surface exchanges of GHG at finer spatial and temporal scales. However, all inversion methods still rely on imperfect atmospheric transport models whose error structures directly affect the inverse estimates of GHG fluxes. The impact of spatial error structures on the retrieved fluxes increases concurrently with the density of the available measurements. In this study, we diagnose the spatial structures due to transport model errors affecting modeled in situ carbon dioxide (CO2) mole fractions and total-column dry air mole fractions of CO2 (XCO2). We implement a cost-effective filtering technique recently developed in the meteorological data assimilation community to describe spatial error structures using a small-size ensemble. This technique can enable ensemble-based error analysis for multiyear inversions of sources and sinks. The removal of noisy structures due to sampling errors in our small-size ensembles is evaluated by comparison with larger-size ensembles. A second filtering approach for error covariances is proposed (Wiener filter), producing similar results over the 1-month simulation period compared to a Schur filter. Based on a comparison to a reference 25-member calibrated ensemble, we demonstrate that error variances and spatial error correlation structures are recoverable from small-size ensembles of about 8 to 10 members, improving the representation of transport errors in mesoscale inversions of CO2 fluxes. Moreover, error variances of in situ near-surface and free-tropospheric CO2 mole fractions differ significantly from total-column XCO2 error variances. We conclude that error variances for remote-sensing observations need to be quantified independently of in situ CO2 mole fractions due to the complexity of spatial error structures at different altitudes. However, we show the potential use of meteorological error structures such as the mean horizontal wind speed, directly available from ensemble prediction systems, to approximate spatial error correlations of in situ CO2 mole fractions, with similarities in seasonal variations and characteristic error length scales. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
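The Schur-filter idea referred to above, damping spurious long-range sample correlations from a small ensemble by an element-wise product with a compactly supported correlation function, can be sketched in a few lines. The Gaspari-Cohn taper is a standard choice for the filter; the grid size, ensemble size, and localisation radius below are arbitrary illustrative values:

```python
import numpy as np

def gaspari_cohn(r):
    # Gaspari-Cohn fifth-order taper: 1 at r = 0, exactly 0 for r >= 2.
    r = np.abs(np.asarray(r, dtype=float))
    f = np.zeros_like(r)
    m = r <= 1
    f[m] = (-0.25 * r[m]**5 + 0.5 * r[m]**4 + 5/8 * r[m]**3
            - 5/3 * r[m]**2 + 1)
    m = (r > 1) & (r < 2)
    f[m] = (1/12 * r[m]**5 - 0.5 * r[m]**4 + 5/8 * r[m]**3
            + 5/3 * r[m]**2 - 5 * r[m] + 4 - 2 / (3 * r[m]))
    return f

rng = np.random.default_rng(1)
n, m_ens = 20, 10
X = rng.normal(size=(m_ens, n))          # small ensemble of model states
A = X - X.mean(axis=0)
P = A.T @ A / (m_ens - 1)                # noisy sample covariance
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
rho = gaspari_cohn(dist / 5.0)           # localisation radius ~5 grid points
P_filtered = rho * P                     # Schur (element-wise) product
print(P_filtered[0, 15], np.allclose(np.diag(P_filtered), np.diag(P)))
```

The taper leaves the diagonal (the error variances) untouched while zeroing correlations beyond twice the localisation radius, which is why variances and short-range correlation structures remain recoverable from ensembles of only 8 to 10 members.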