11 results
Search Results
2. Build TensorFlow Input Pipelines
- Author
-
David Paper
- Subjects
Pipeline transport, Engineering drawing, Sequence, Computer science, Simple (abstract algebra), business.industry, Component (UML), Deep learning, Artificial intelligence, Learning models, business, Abstraction (linguistics) - Abstract
We introduce you to TensorFlow input pipelines with the tf.data API, which enables you to build complex input pipelines from simple, reusable pieces. Input pipelines are the lifeblood of any deep learning experiment because learning models expect data in a TensorFlow-consumable form. It is easy to create high-performance pipelines with the tf.data.Dataset abstraction (a component of the tf.data API) because it represents a sequence of elements from a dataset in a simple format.
- Published
- 2021
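The abstract above describes the tf.data idea of composing a pipeline from simple, reusable pieces. As a framework-agnostic sketch (illustrative only; real code would use `tf.data.Dataset` with its `map` and `batch` transformations), the same map/batch composition can be written with plain Python generators:

```python
# Framework-agnostic sketch of the tf.data pipeline pattern:
# a dataset is a sequence of elements, and lazy transformations
# such as map and batch compose into one reusable input pipeline.

def map_fn(dataset, fn):
    """Apply fn to every element, lazily (like Dataset.map)."""
    for element in dataset:
        yield fn(element)

def batch(dataset, batch_size):
    """Group consecutive elements into lists (like Dataset.batch)."""
    buffer = []
    for element in dataset:
        buffer.append(element)
        if len(buffer) == batch_size:
            yield buffer
            buffer = []
    if buffer:  # emit the final partial batch
        yield buffer

# Compose simple, reusable pieces into one pipeline.
raw = range(10)                         # stand-in for raw records
normalized = map_fn(raw, lambda x: x / 10.0)
batches = list(batch(normalized, 4))
print(batches)  # three batches: two of size 4 and one of size 2
```

Because each stage is a generator, elements flow through the pipeline one at a time, which is the same laziness that lets tf.data handle datasets larger than memory.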
3. Deep Learning with TensorFlow Datasets
- Author
-
David Paper
- Subjects
business.industry, Computer science, Simple (abstract algebra), Deep learning, Artificial intelligence, Machine learning, computer.software_genre, business, computer - Abstract
In the previous chapter, we demonstrated how to work with TFDS objects. In this chapter, we work through two end-to-end deep learning experiments with large and complex TFDS objects. By contrast, the Fashion-MNIST and beans datasets are small, with simple images.
- Published
- 2021
4. Increase the Diversity of Your Dataset with Data Augmentation
- Author
-
David Paper
- Subjects
Training set, business.industry, Computer science, Deep learning, Artificial intelligence, business, Machine learning, computer.software_genre, computer, Diversity (business) - Abstract
We guide you in the creation of augmented data experiments to increase the diversity of a training set by applying random (but realistic) transformations. Data augmentation is very useful for small datasets because deep learning models crave a lot of data to perform well.
- Published
- 2021
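The augmentation idea the abstract describes, applying random but realistic transformations so each epoch sees a slightly different version of every training example, can be sketched in plain Python (illustrative only; real pipelines would use e.g. tf.image ops or Keras preprocessing layers, and the function names here are my own):

```python
import random

def flip_horizontal(img):
    """Mirror each row of a 2-D image (list of lists)."""
    return [row[::-1] for row in img]

def flip_vertical(img):
    """Reverse the row order."""
    return img[::-1]

def augment(img, rng=random):
    """Randomly apply each flip with probability 0.5."""
    if rng.random() < 0.5:
        img = flip_horizontal(img)
    if rng.random() < 0.5:
        img = flip_vertical(img)
    return img

image = [[1, 2],
         [3, 4]]
print(augment(image))  # one of four equally likely variants
```

Applying `augment` independently every time an example is drawn is what multiplies the effective diversity of a small training set.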
5. Build Your First Neural Network with Google Colab
- Author
-
David Paper
- Subjects
World Wide Web, Work (electrical), Artificial neural network, business.industry, Computer science, Deep learning, Cloud computing, Artificial intelligence, Python (programming language), business, computer, computer.programming_language - Abstract
We work through a complete deep learning example with Python’s TensorFlow 2.x library in the Google Colab cloud service. We also demonstrate how to link your Google Drive with the Colab cloud service.
- Published
- 2021
6. Introduction to Tensor Processing Units
- Author
-
David Paper
- Subjects
Tensor processing unit, business.industry, Computer science, Deep learning, Integrated circuit, law.invention, Computer engineering, Application-specific integrated circuit, law, Tensor (intrinsic definition), Code (cryptography), Google Brain, Artificial intelligence, business - Abstract
We introduce you to Tensor Processing Units with code examples. A Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) designed to accelerate ML workloads. The TPUs available in TensorFlow are custom-developed from the ground up by the Google Brain team, drawing on its extensive experience and leadership in the ML community. Google Brain is a deep learning artificial intelligence (AI) research team at Google that researches ways to make machines intelligent in order to improve people's lives.
- Published
- 2021
7. Explaining Deep Learning Models for Speech Enhancement
- Author
-
Sunit Sivasankaran, Emmanuel Vincent, Dominique Fohr, Microsoft Corporation [Redmond], Microsoft Corporation [Redmond, Wash.], Speech Modeling for Facilitating Oral-Based Communication (MULTISPEECH), Inria Nancy - Grand Est, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Department of Natural Language Processing & Knowledge Discovery (LORIA - NLPKD), Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL), This work was made with the support of the French National Research Agency, in the framework of the project VOCADOM 'Robust voice command adapted to the user and to the context for AAL' (ANR-16-CE33-0006). 
Experiments presented in this paper were carried out using the Grid’5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr)., ANR-16-CE33-0006,VOCADOM,Commande vocale robuste adaptée à la personne et au contexte pour l'autonomie à domicile(2016), Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), and Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
Artificial neural network, business.industry, Computer science, Deep learning, Speech recognition, Word error rate, 020206 networking & telecommunications, Context (language use), 02 engineering and technology, explainable AI, Speech enhancement, 030507 speech-language pathology & audiology, 03 medical and health sciences, Noise, feature attribution, [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG], Robustness (computer science), [INFO.INFO-SD]Computer Science [cs]/Sound [cs.SD], 0202 electrical engineering, electronic engineering, information engineering, Feature (machine learning), speech enhancement, Artificial intelligence, 0305 other medical science, business - Abstract
International audience; We consider the problem of explaining why neural networks used to compute time-frequency masks for speech enhancement are robust to mismatched noise conditions. We employ the Deep SHapley Additive exPlanations (DeepSHAP) feature attribution method to quantify the contribution of every time-frequency bin in the input noisy speech signal to every time-frequency bin in the output time-frequency mask. We define an objective metric, referred to as the speech relevance score, that summarizes the obtained SHAP values, and show that it correlates with enhancement performance as measured by the word error rate on the CHiME-4 real evaluation dataset. We use the speech relevance score to explain the generalization ability of three speech enhancement models trained using synthetically generated speech-shaped noise, noise from a professional sound effects library, or real CHiME-4 noise. To the best of our knowledge, this is the first study of neural network explainability in the context of speech enhancement.
- Published
- 2021
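DeepSHAP, used in the paper above, is an efficient network-specific approximation of Shapley values. What it approximates can be shown exactly on a toy model: each feature's attribution is its average marginal contribution over all orderings in which features are "switched on". A brute-force sketch (toy example with my own function names; impractical beyond a handful of features):

```python
from itertools import permutations

def shapley_values(n_features, value_fn):
    """Exact Shapley values by enumerating all feature orderings.

    value_fn maps a set of present feature indices to a model output.
    """
    contrib = [0.0] * n_features
    perms = list(permutations(range(n_features)))
    for order in perms:
        present = set()
        for i in order:
            before = value_fn(present)
            present.add(i)
            # Marginal contribution of feature i in this ordering.
            contrib[i] += value_fn(present) - before
    return [c / len(perms) for c in contrib]

# Toy model: the value of a coalition is the sum of its feature scores.
scores = [1.0, 2.0, 3.0]
v = lambda S: sum(scores[i] for i in S)
print(shapley_values(len(scores), v))  # [1.0, 2.0, 3.0]
```

For an additive model like this, each feature's Shapley value equals its own score, which is the sanity check attribution methods are usually tested against.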
8. Optimizing Motor Intention Detection With Deep Learning: Towards Management of Intraoperative Awareness
- Author
-
Laurent Bougrain, Oleksii Avilov, Anton Popov, Sébastien Rimbert, National Technical University of Ukraine 'Kyiv Polytechnic Institute' [Kiev], Analysis and modeling of neural systems by a system neuroscience approach (NEUROSYS), Inria Nancy - Grand Est, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Department of Complex Systems, Artificial Intelligence & Robotics (LORIA - AIS), Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL), Oleksii Avilov was supported by scholarship from the French Embassy to Ukraine while working on this topic at the NEUROSYS team at LORIA (Université de Lorraine/CNRS/Inria), Nancy, France. 
Experiments presented in this paper were carried out using the Grid’5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (https://www.grid5000.fr)., Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), and Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
0209 industrial biotechnology, Computer science, Biomedical Engineering, motor imagery AAGA: accidental awareness during general anesthesia, 02 engineering and technology, Intention, Electroencephalography, [INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE], Intraoperative Awareness, 020901 industrial engineering & automation, Motor imagery, median nerve stimulation, Deep Learning, [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG], intraoperative awareness during general anesthesia, 0202 electrical engineering, electronic engineering, information engineering, medicine, [INFO.INFO-IM]Computer Science [cs]/Medical Imaging, Functional electrical stimulation, Humans, Brain-computer interface (BCI), Artificial neural network, medicine.diagnostic_test, Median nerve stimulation, business.industry, electroencephalogram (EEG), Deep learning, [SCCO.NEUR]Cognitive science/Neuroscience, Pattern recognition, Linear discriminant analysis, Median nerve, medicine.anatomical_structure, machine learning, Frontal lobe, Brain-Computer Interfaces, Imagination, 020201 artificial intelligence & image processing, Artificial intelligence, business, [SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing, Algorithms, Motor cortex - Abstract
International audience; Objective: This article demonstrates the value of deep learning techniques for detecting motor imagery (MI) from raw electroencephalographic (EEG) signals, with and without functional electrical stimulation. The impacts of electrode montages and bandwidth are also reported. The longer-term goal of this work is to improve the detection of intraoperative awareness during general anesthesia. Methods: Various EEGNet architectures were investigated to optimize MI detection. They were compared to state-of-the-art classifiers in Brain-Computer Interfaces (based on Riemannian geometry and linear discriminant analysis) and to other deep learning architectures (deep convolutional network, shallow convolutional network). EEG data were measured from 22 participants performing motor imagery with and without median nerve stimulation. Results: The proposed EEGNet architecture reaches the best classification accuracy (83.2%) and false-positive rate (FPR, 19.0%) for a setup with only six electrodes over the motor cortex and frontal lobe and an extended 4-38 Hz EEG frequency range while the subject is stimulated via the median nerve. Configurations with a larger number of electrodes yield higher accuracy and lower FPR: 94.5% and 6.1% for 128 electrodes (respectively 88.0% and 12.9% for 13 electrodes). Conclusion: The present work demonstrates that using an extended EEG frequency band and a modified EEGNet deep neural network increases the accuracy of MI detection with as few as six electrodes, including frontal channels. Significance: The proposed method contributes to the development of Brain-Computer Interface systems based on MI detection from EEG.
- Published
- 2021
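The abstract above reports its results as a pair of metrics, classification accuracy and false-positive rate (FPR). As a minimal sketch of how those two numbers are computed from binary labels (1 = motor imagery detected; toy data and function name are my own):

```python
def accuracy_and_fpr(y_true, y_pred):
    """Accuracy and false-positive rate from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    # Accuracy: fraction of all trials classified correctly.
    acc = (tp + tn) / len(y_true)
    # FPR: fraction of negative (no-MI) trials wrongly flagged as MI.
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return acc, fpr

acc, fpr = accuracy_and_fpr([1, 1, 0, 0], [1, 0, 0, 1])
print(acc, fpr)  # 0.5 0.5
```

Reporting FPR alongside accuracy matters here because a false positive (wrongly signaling awareness during anesthesia) has a very different cost from a miss.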
9. Evolution Control for parallel ANN-assisted simulation-based optimization application to Tuberculosis Transmission Control
- Author
-
Mohand-Said Mezmaz, Nouredine Melab, Romain Ragonnet, Guillaume Briffoteaux, Daniel Tuyttens, Optimisation de grande taille et calcul large échelle (BONUS), Inria Lille - Nord Europe, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 (CRIStAL), Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS)-Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS), University of Mons [Belgium] (UMONS), Monash University [Melbourne], Laboratoire d'Informatique Fondamentale de Lille (LIFL), Université de Lille, Sciences et Technologies-Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lille, Sciences Humaines et Sociales-Centre National de la Recherche Scientifique (CNRS), and Experiments presented in this paper were carried out using the Grid’5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr).
- Subjects
Artificial Neural Network, Computer Networks and Communications, Computer science, Monte Carlo method, Context (language use), 02 engineering and technology, [INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE], Simulation-based optimization, Surrogate-assisted Optimization, 0202 electrical engineering, electronic engineering, information engineering, Massively parallel, Dropout (neural networks), Artificial neural network, Evolution Control, business.industry, Deep learning, 020206 networking & telecommunications, [INFO.INFO-RO]Computer Science [cs]/Operations Research [cs.RO], Supercomputer, Computer engineering, Hardware and Architecture, 020201 artificial intelligence & image processing, Artificial intelligence, Massively Parallel Computing, [INFO.INFO-DC]Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC], business, Software, Simulation - Abstract
International audience; In many optimal design searches, the function to optimize is a simulator that is computationally expensive. While current High Performance Computing (HPC) methods cannot solve such problems efficiently on their own, parallelism can be coupled with approximate models (surrogates or meta-models) that imitate the simulator in a timely fashion to achieve better results. This combined approach reduces the number of simulations thanks to the surrogate, while the remaining evaluations are handled by supercomputers. While the surrogates' ability to limit computational time is very attractive, integrating them into the overarching optimization process can be challenging. Indeed, it is critical to address the major trade-off between the quality (precision) and the efficiency (execution time) of the resolution. In this article, we investigate Evolution Controls (ECs), strategies that define the alternation between the simulator and the surrogate within the optimization process. We propose a new EC based on the prediction uncertainty obtained from Monte Carlo Dropout (MCDropout), a technique originally dedicated to quantifying uncertainty in deep learning. Investigations of such uncertainty-aware ECs remain uncommon in surrogate-assisted evolutionary optimization. In addition, we use parallel computing in a complementary way to address the high computational burden. Our new strategy is implemented in a pioneering application to Tuberculosis Transmission Control. The reported results show that the MCDropout-based EC coupled with massively parallel computing outperforms strategies previously proposed in the field of surrogate-assisted optimization.
- Published
- 2020
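The Monte Carlo Dropout idea behind the paper's Evolution Control can be sketched in a few lines: keep dropout active at prediction time, run the model several times, and use the spread of the outputs as an uncertainty estimate that decides whether to trust the surrogate or call the expensive simulator. This is a toy stand-in "model" with names of my own, not the paper's implementation:

```python
import random
import statistics

def predict_with_dropout(x, weights, drop_rate, rng):
    """One stochastic forward pass: each weight is dropped with prob drop_rate."""
    kept = [w for w in weights if rng.random() >= drop_rate]
    return x * sum(kept)

def mc_dropout(x, weights, drop_rate=0.5, passes=100, rng=None):
    """Mean prediction and uncertainty (std dev) over stochastic passes."""
    rng = rng or random.Random(0)
    outs = [predict_with_dropout(x, weights, drop_rate, rng) for _ in range(passes)]
    return statistics.mean(outs), statistics.stdev(outs)

mean, std = mc_dropout(2.0, [0.1, 0.4, 0.5])
# Evolution Control rule (sketch): fall back to the simulator
# whenever the surrogate's uncertainty exceeds a threshold.
use_simulator = std > 0.3
print(mean, std, use_simulator)
```

The attraction of this EC is that the uncertainty estimate comes for free from the surrogate itself, with no extra model to train.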
10. Foreground-Background Ambient Sound Scene Separation
- Author
-
Michel Olvera, Romain Serizel, Emmanuel Vincent, Gilles Gasso, Speech Modeling for Facilitating Oral-Based Communication (MULTISPEECH), Inria Nancy - Grand Est, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Department of Natural Language Processing & Knowledge Discovery (LORIA - NLPKD), Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS), Laboratoire d'Informatique, de Traitement de l'Information et des Systèmes (LITIS), Université Le Havre Normandie (ULH), Normandie Université (NU)-Normandie Université (NU)-Université de Rouen Normandie (UNIROUEN), Normandie Université (NU)-Institut national des sciences appliquées Rouen Normandie (INSA Rouen Normandie), Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Institut National des Sciences Appliquées (INSA), This work was made with the support of the French National Research Agency, in the framework of the project LEAUDS 'Learning to understandaudio scenes' (ANR-18-CE23-0020). 
Experiments presented in this paper were carried out using the Grid’5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr)., GRID5000, ANR-18-CE23-0020,LEAUDS,Apprentissage statistique pour la compréhension de scènes audio(2018), Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL), Institut national des sciences appliquées Rouen Normandie (INSA Rouen Normandie), Normandie Université (NU)-Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Institut National des Sciences Appliquées (INSA)-Université de Rouen Normandie (UNIROUEN), Normandie Université (NU)-Université Le Havre Normandie (ULH), Normandie Université (NU), and ANR-18-CE23-0020,LEAUDS,LEARNING TO UNDERSTAND AUDIO SCENES(2018)
- Subjects
Normalization (statistics), Signal Processing (eess.SP), FOS: Computer and information sciences, Sound (cs.SD), Computer Science - Machine Learning, Computer science, Generalization, Ambient noise level, 02 engineering and technology, Computer Science - Sound, Machine Learning (cs.LG), Signal-to-noise ratio, [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG], [INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing, Audio and Speech Processing (eess.AS), 0202 electrical engineering, electronic engineering, information engineering, ambient sound scenes, FOS: Electrical engineering, electronic engineering, information engineering, Foreground-background, Computer vision, generalization ability, Electrical Engineering and Systems Science - Signal Processing, Sound (geography), Signal processing, geography, geography.geographical_feature_category, business.industry, Deep learning, deep learning, audio source separation, 020206 networking & telecommunications, Feature (computer vision), [INFO.INFO-SD]Computer Science [cs]/Sound [cs.SD], 020201 artificial intelligence & image processing, Artificial intelligence, business, Electrical Engineering and Systems Science - Audio and Speech Processing - Abstract
International audience; Ambient sound scenes typically comprise multiple short events occurring on top of a somewhat stationary background. We consider the task of separating these events from the background, which we call foreground-background ambient sound scene separation. We propose a deep learning-based separation framework with a suitable feature normalization scheme and an optional auxiliary network capturing the background statistics, and we investigate its ability to handle the great variety of sound classes encountered in ambient sound scenes, which have often not been seen in training. To do so, we create single-channel foreground-background mixtures using isolated sounds from the DESED and Audioset datasets, and we conduct extensive experiments with mixtures of seen or unseen sound classes at various signal-to-noise ratios. Our experimental findings demonstrate the generalization ability of the proposed approach.
- Published
- 2020
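The abstract above mentions creating single-channel foreground-background mixtures at various signal-to-noise ratios. A minimal sketch of that mixing step (toy signals and function names of my own; the paper builds such mixtures from DESED and Audioset clips): scale the background so the mixture reaches a target SNR in dB, then add the two signals.

```python
import math

def rms(signal):
    """Root-mean-square level of a mono signal (list of samples)."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def mix_at_snr(foreground, background, snr_db):
    """Scale background so 20*log10(rms(fg) / rms(scaled bg)) == snr_db, then add."""
    gain = rms(foreground) / (rms(background) * 10 ** (snr_db / 20))
    return [f + gain * b for f, b in zip(foreground, background)]

fg = [0.5, -0.5, 0.5, -0.5]   # toy "event"
bg = [0.1, 0.1, -0.1, -0.1]   # toy stationary background
mixture = mix_at_snr(fg, bg, snr_db=0.0)  # 0 dB: equal fg/bg power
print(mixture)
```

Sweeping `snr_db` over a range (e.g. -6 to +6 dB) is how a dataset of mixtures at "various signal-to-noise ratios" is generated.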
11. Towards Portable Online Prediction of Network Utilization using MPI-level Monitoring
- Author
-
Emmanuel Jeannot, Franck Cappello, Shu Mei Tseng, Bogdan Nicolae, Aparna Chandramowlishwaran, George Bosilca, University of California [Irvine] (UCI), University of California, Argonne National Laboratory [Lemont] (ANL), The University of Tennessee [Knoxville], Topology-Aware System-Scale Data Management for High-Performance Computing (TADAAM), Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université de Bordeaux (UB)-Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Université de Bordeaux (UB)-Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This material was based upon work supported by the U.S. Department of Energy, Office of Science, under contract DE-AC02-06CH11357,and by the National Science Foundation under Grant No. #1664142. 
The experiments presented in this paper were carried out using the Grid’5000/ALADDIN-G5K experimental testbed, an initiative of the French Ministry of Research through the ACI GRID incentive action, INRIA, CNRS and RENATER and other contributing partners (see http://www.grid5000.fr/)., University of California [Irvine] (UC Irvine), University of California (UC), and Université de Bordeaux (UB)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Centre National de la Recherche Scientifique (CNRS)-Université de Bordeaux (UB)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Centre National de la Recherche Scientifique (CNRS)-Inria Bordeaux - Sud-Ouest
- Subjects
Artificial neural network, business.industry, Computer science, Network monitoring, Deep learning, Distributed computing, 010103 numerical & computational mathematics, 010501 environmental sciences, Prediction of resource utilization, 01 natural sciences, Timeseries forecasting, Work stealing, Online learning, Leverage (statistics), Artificial intelligence, 0101 mathematics, [INFO.INFO-DC]Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC], business, 0105 earth and related environmental sciences - Abstract
International audience; Stealing network bandwidth helps a variety of HPC runtimes and services run additional operations in the background without negatively affecting the applications. A key ingredient to make this possible is an accurate prediction of future network utilization, enabling the runtime to plan the background operations in advance so as to avoid competing with the application for network bandwidth. In this paper, we propose a portable deep learning predictor that uses only the information available through MPI introspection to construct a recurrent sequence-to-sequence neural network capable of forecasting network utilization. We leverage the fact that most HPC applications exhibit periodic behavior to enable predictions far into the future (at least the length of a period). Our online approach does not have an initial training phase; it continuously improves itself during application execution without incurring significant computational overhead. Experimental results show better accuracy and lower computational overhead compared with the state of the art on two representative applications.
- Published
- 2019
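The periodicity assumption this last abstract exploits can be illustrated with a toy predictor (my own sketch, not the paper's method, which trains a recurrent seq2seq network online): if an application's network utilization repeats with period p, the last observed period is itself a forecast for the next one.

```python
def detect_period(series, max_period):
    """Pick the lag with the smallest mean absolute difference.

    Assumes max_period < len(series) so every lag has data to compare.
    """
    best_p, best_err = 1, float("inf")
    for p in range(1, max_period + 1):
        diffs = [abs(series[i] - series[i - p]) for i in range(p, len(series))]
        err = sum(diffs) / len(diffs)
        if err < best_err:
            best_p, best_err = p, err
    return best_p

def forecast(series, horizon, max_period=8):
    """Repeat the last detected period out to the forecast horizon."""
    p = detect_period(series, max_period)
    return [series[len(series) - p + (i % p)] for i in range(horizon)]

history = [1, 5, 2, 1, 5, 2, 1, 5, 2]   # period-3 utilization trace
print(forecast(history, 6))  # [1, 5, 2, 1, 5, 2]
```

A learned model improves on this baseline by handling drift and noise, but the baseline shows why periodic behavior makes predictions "far into the future (at least the length of a period)" feasible at all.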
Discovery Service for Jio Institute Digital Library