355 results for "equivariance"
Search Results
2. Equi-GSPR: Equivariant SE(3) Graph Network Model for Sparse Point Cloud Registration
- Author
-
Kang, Xueyang, Luan, Zhaoliang, Khoshelham, Kourosh, Wang, Bing, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Learning Temporal Equivariance for Degenerative Disease Progression in OCT by Predicting Future Representations
- Author
-
Emre, Taha, Chakravarty, Arunava, Lachinov, Dmitrii, Rivail, Antoine, Schmidt-Erfurth, Ursula, Bogunović, Hrvoje, Linguraru, Marius George, editor, Dou, Qi, editor, Feragen, Aasa, editor, Giannarou, Stamatia, editor, Glocker, Ben, editor, Lekadir, Karim, editor, and Schnabel, Julia A., editor
- Published
- 2024
- Full Text
- View/download PDF
4. Self Supervised Contrastive Learning Combining Equivariance and Invariance
- Author
-
Yang, Longze, Yang, Yan, Jin, Hu, Zhang, Wenjie, editor, Tung, Anthony, editor, Zheng, Zhonglong, editor, Yang, Zhengyi, editor, Wang, Xiaoyang, editor, and Guo, Hongjie, editor
- Published
- 2024
- Full Text
- View/download PDF
5. Complex-Valued FastICA Estimator with a Weighted Unitary Constraint: A Robust and Equivariant Estimator.
- Author
-
E, Jianwei and Yang, Mingshu
- Subjects
- BLIND source separation, DIGITAL signal processing, COMPLEX numbers, BEHAVIORAL assessment, MISSING data (Statistics)
- Abstract
Independent component analysis (ICA), as a statistical and computational approach, has been successfully applied to digital signal processing. Performance analysis for the ICA approach is perceived as a challenging task. This contribution concerns the complex-valued FastICA algorithm within ICA over the complex number domain, focusing on the robust and equivariant behavior of the complex-valued FastICA estimator. Although the complex-valued FastICA algorithm and its derivatives have been widely used to approach the complex blind signal separation problem, rigorous mathematical treatments of robustness and equivariance for the complex-valued FastICA estimator are still missing. This paper rigorously analyzes the robustness against outliers and the separation performance depending on the global system. We begin by defining the influence function (IF) of the complex-valued FastICA functional and then derive its closed-form expression. We then prove that the complex-valued FastICA algorithm based on the optimized cost function is linear-equivariant, depending only on the source signals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
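The linear-equivariance property analyzed in the abstract above, T(AX) = A T(X) for an invertible mixing matrix A, can be made concrete with a toy stand-in. The sketch below uses the sample mean, which is trivially equivariant under linear maps; it is not the FastICA estimator itself, and the matrix sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# An estimator T is linearly equivariant when T(A @ X) = A @ T(X)
# for any mixing matrix A. The sample mean satisfies this exactly,
# which makes the definition easy to check numerically.
X = rng.normal(size=(3, 500))   # 3 "sources", 500 samples
A = rng.normal(size=(3, 3))     # arbitrary mixing matrix

mean_then_mix = A @ X.mean(axis=1)      # T applied first, then A
mix_then_mean = (A @ X).mean(axis=1)    # A applied first, then T

assert np.allclose(mean_then_mix, mix_then_mean)
```

The FastICA result in the paper is the nontrivial analogue: the separation quality of the estimator depends only on the sources, not on the mixing matrix.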
6. Symmetry-aware Neural Architecture for Embodied Visual Navigation.
- Author
-
Liu, Shuang, Suganuma, Masanori, and Okatani, Takayuki
- Subjects
- DEEP reinforcement learning, NAVIGATION, REINFORCEMENT learning, AERONAUTICAL navigation
- Abstract
The existing methods for visual navigation employ deep reinforcement learning as the standard tool for the task. However, they tend to be vulnerable to statistical shifts between the training and test data, resulting in poor generalization over novel environments that are out-of-distribution from the training data. In this study, we attempt to improve the generalization ability by utilizing the inductive biases available for the task. Employing active neural SLAM, which learns policies with the advantage actor-critic method, as the base framework, we first point out that the mappings represented by the actor and the critic should satisfy specific symmetries. We then propose a network design for the actor and the critic to inherently attain these symmetries. Specifically, we use G-convolution instead of the standard convolution and insert the semi-global polar pooling layer, which we newly design in this study, in the last section of the critic network. Our method can be integrated into existing methods that utilize intermediate goals and 2D occupancy maps. Experimental results show that our method improves generalization ability by a good margin on visual exploration and object goal navigation, the two main embodied visual navigation tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
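The symmetry constraint described above, f(g x) = g f(x) for the actor and critic maps, can be checked numerically in a minimal setting: an ordinary convolution is already equivariant to 90-degree rotations when its kernel is itself rotation-invariant, which is the special case that a G-convolution generalizes to arbitrary kernels. This sketch uses a hypothetical 3x3 kernel and is not the paper's network:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)

# A layer f is equivariant to a rotation g when f(g(x)) == g(f(x)).
# With a kernel that is invariant under np.rot90, plain convolution
# (zero-padded) commutes with 90-degree rotation exactly.
x = rng.normal(size=(8, 8))
k = np.array([[0., 1., 0.],
              [1., 4., 1.],
              [0., 1., 0.]])   # unchanged by 90-degree rotation

rotate_then_conv = convolve(np.rot90(x), k, mode="constant")
conv_then_rotate = np.rot90(convolve(x, k, mode="constant"))

assert np.allclose(rotate_then_conv, conv_then_rotate)
```

A G-convolution achieves the same commutation for kernels that are not rotation-invariant, by carrying rotated copies of the kernel as an extra group dimension.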
7. Boosting deep neural networks with geometrical prior knowledge: a survey.
- Author
-
Rath, Matthias and Condurache, Alexandru Paul
- Abstract
Deep neural networks achieve state-of-the-art results in many different problem settings by exploiting vast amounts of training data. However, collecting, storing and, in the case of supervised learning, labelling the data is expensive and time-consuming. Additionally, assessing the networks' generalization abilities or predicting how the inferred output changes under input transformations is complicated, since the networks are usually treated as a black box. Both of these problems can be mitigated by incorporating prior knowledge into the neural network. One promising approach, inspired by the success of convolutional neural networks in computer vision tasks, is to incorporate knowledge about the geometrical transformations of the problem at hand that affect the output in a predictable way. This promises increased data efficiency and more interpretable network outputs. In this survey, we give a concise overview of different approaches that incorporate geometrical prior knowledge into neural networks. Additionally, we connect those methods to 3D object detection for autonomous driving, where we expect promising results when applying them. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Computational aspects of experimental designs in multiple-group mixed models.
- Author
-
Prus, Maryna and Filová, Lenka
- Abstract
We extend the equivariance and invariance conditions for construction of optimal designs to multiple-group mixed models and, hence, derive the support of optimal designs for first- and second-order models on a symmetric square. Moreover, we provide a tool for computation of D- and L-efficient exact designs in multiple-group mixed models by adapting the algorithm of Harman et al. (Appl Stoch Models Bus Ind, 32:3–17, 2016). We show that this algorithm can be used both for size-constrained problems and also in settings that require multiple resource constraints on the design, such as cost constraints or marginal constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. The Universal Equivariance Properties of Exotic Aromatic B-Series
- Author
-
Laurent, Adrien and Munthe-Kaas, Hans
- Published
- 2024
- Full Text
- View/download PDF
10. ℤ₂ × ℤ₂ Equivariant Quantum Neural Networks: Benchmarking against Classical Neural Networks.
- Author
-
Dong, Zhongtian, Comajoan Cara, Marçal, Dahale, Gopal Ramesh, Forestano, Roy T., Gleyzer, Sergei, Justice, Daniel, Kong, Kyoungchul, Magorsch, Tom, Matchev, Konstantin T., Matcheva, Katia, and Unlu, Eyup B.
- Subjects
- ARTIFICIAL neural networks, LARGE Hadron Collider, NETWORK performance, DEEP learning, TASK performance
- Abstract
This paper presents a comparative analysis of the performance of Equivariant Quantum Neural Networks (EQNNs) and Quantum Neural Networks (QNNs), juxtaposed against their classical counterparts: Equivariant Neural Networks (ENNs) and Deep Neural Networks (DNNs). We evaluate the performance of each network with three two-dimensional toy examples for a binary classification task, focusing on model complexity (measured by the number of parameters) and the size of the training dataset. Our results show that the ℤ₂ × ℤ₂ EQNN and the QNN provide superior performance for smaller parameter sets and modest training data samples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. A Comparison between Invariant and Equivariant Classical and Quantum Graph Neural Networks.
- Author
-
Forestano, Roy T., Comajoan Cara, Marçal, Dahale, Gopal Ramesh, Dong, Zhongtian, Gleyzer, Sergei, Justice, Daniel, Kong, Kyoungchul, Magorsch, Tom, Matchev, Konstantin T., Matcheva, Katia, and Unlu, Eyup B.
- Subjects
- GRAPH neural networks, QUANTUM graph theory, MACHINE learning, LARGE Hadron Collider, COLLISIONS (Nuclear physics)
- Abstract
Machine learning algorithms are heavily relied on to understand the vast amounts of data from high-energy particle collisions at the CERN Large Hadron Collider (LHC). The data from such collision events can naturally be represented with graph structures. Therefore, deep geometric methods, such as graph neural networks (GNNs), have been leveraged for various data analysis tasks in high-energy physics. One typical task is jet tagging, where jets are viewed as point clouds with distinct features and edge connections between their constituent particles. The increasing size and complexity of the LHC particle datasets, as well as the computational models used for their analysis, have greatly motivated the development of alternative fast and efficient computational paradigms such as quantum computation. In addition, to enhance the validity and robustness of deep networks, we can leverage the fundamental symmetries present in the data through the use of invariant inputs and equivariant layers. In this paper, we provide a fair and comprehensive comparison of classical graph neural networks (GNNs) and equivariant graph neural networks (EGNNs) and their quantum counterparts: quantum graph neural networks (QGNNs) and equivariant quantum graph neural networks (EQGNN). The four architectures were benchmarked on a binary classification task to classify the parton-level particle initiating the jet. Based on their area under the curve (AUC) scores, the quantum networks were found to outperform the classical networks. However, seeing the computational advantage of quantum networks in practice may have to wait for the further development of quantum technology and its associated application programming interfaces (APIs). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Rotation-equivariant spherical vector networks for objects recognition with unknown poses.
- Author
-
Chen, Hao, Zhao, Jieyu, and Zhang, Qiang
- Subjects
- CONVOLUTIONAL neural networks, OBJECT recognition (Computer vision), ROUTING algorithms, POSE estimation (Computer vision)
- Abstract
Analyzing 3D objects without pose priors using neural networks is challenging. Spherical convolutional networks lack a part–whole hierarchy with rotation equivariance for recognizing 3D objects with unknown poses, so the rotation-equivariant features of the whole cannot be effectively preserved. To address this shortcoming, a rotation-equivariant part–whole-hierarchy spherical vector network is proposed in this paper. In our experiments, we map a 3D object onto the unit sphere, construct an ordered list of vectors from the convolutional layers of the rotation-equivariant spherical convolutional network, and then build a part–whole hierarchy to classify the 3D object using the proposed rotation-equivariant routing algorithm. The experimental results show that the proposed method improves the recognition of 3D objects with both known and unknown poses compared to previous spherical convolutional neural networks, validating the construction of the rotation-equivariant part–whole hierarchy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Randomization Tests for Peer Effects in Group Formation Experiments.
- Author
-
Basse, Guillaume, Ding, Peng, Feller, Avi, and Toulis, Panos
- Subjects
- GROUP formation, PEERS, PERMUTATION groups, EDUCATIONAL background, CAUSAL inference
- Abstract
Measuring the effect of peers on individuals' outcomes is a challenging problem, in part because individuals often select peers who are similar in both observable and unobservable ways. Group formation experiments avoid this problem by randomly assigning individuals to groups and observing their responses; for example, do first‐year students have better grades when they are randomly assigned roommates who have stronger academic backgrounds? In this paper, we propose randomization‐based permutation tests for group formation experiments, extending classical Fisher Randomization Tests to this setting. The proposed tests are justified by the randomization itself, require relatively few assumptions, and are exact in finite samples. This approach can also complement existing strategies, such as linear‐in‐means models, by using a regression coefficient as the test statistic. We apply the proposed tests to two recent group formation experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
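The classical Fisher Randomization Test that the paper extends can be sketched in a few lines: under the sharp null of no effect, re-randomizing the assignment vector generates the exact null distribution of any test statistic. The outcomes and assignment below are made-up illustration data, and this is the plain two-group version, not the paper's group-formation extension:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up outcomes and a made-up binary treatment assignment.
outcomes = np.array([3.1, 2.4, 4.0, 3.8, 1.9, 2.2, 3.5, 2.8])
treated = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)

def stat(assign):
    """Difference in mean outcome between treated and control."""
    return outcomes[assign].mean() - outcomes[~assign].mean()

observed = stat(treated)

# Re-randomize: permute the assignment many times to build the
# null distribution of the statistic under the sharp null.
draws = [stat(rng.permutation(treated)) for _ in range(10_000)]
p_value = np.mean([abs(d) >= abs(observed) for d in draws])
print(round(p_value, 3))
```

As the abstract notes, the test is justified by the randomization itself and is exact in finite samples up to the Monte Carlo approximation of enumerating all permutations.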
14. Galois equivariant functions on Galois orbits in large p-adic fields.
- Author
-
Alexandru, Victor and Vâjâitu, Marian
- Subjects
- ORBITS (Astronomy), ANALYTIC functions, ALGEBRAIC fields, ORTHONORMAL basis, DIFFERENTIABLE functions, P-adic analysis
- Abstract
Given a prime number p, let C_p be the topological completion of the algebraic closure of the field of p-adic numbers. Let O(T) be the Galois orbit of a transcendental element T of C_p with respect to the absolute Galois group. Our aim is to study the class of Galois equivariant functions defined on O(T) with values in C_p. We show that each function from this class is continuous, and we characterize the class of Lipschitz functions, respectively the class of differentiable functions, with respect to a new orthonormal basis. Then we discuss some aspects related to analytic continuation for the functions of this class. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Modeling Barrett’s Esophagus Progression Using Geometric Variational Autoencoders
- Author
-
van Veldhuizen, Vivien, Vadgama, Sharvaree, de Boer, Onno, Meijer, Sybren, Bekkers, Erik J., Ali, Sharib, editor, van der Sommen, Fons, editor, van Eijnatten, Maureen, editor, Papież, Bartłomiej W., editor, Jin, Yueming, editor, and Kolenbrander, Iris, editor
- Published
- 2023
- Full Text
- View/download PDF
16. Regular SE(3) Group Convolutions for Volumetric Medical Image Analysis
- Author
-
Kuipers, Thijs P., Bekkers, Erik J., Greenspan, Hayit, editor, Madabhushi, Anant, editor, Mousavi, Parvin, editor, Salcudean, Septimiu, editor, Duncan, James, editor, Syeda-Mahmood, Tanveer, editor, and Taylor, Russell, editor
- Published
- 2023
- Full Text
- View/download PDF
17. Equivariant Representation Learning in the Presence of Stabilizers
- Author
-
Pérez Rey, Luis Armando, Marchetti, Giovanni Luca, Kragic, Danica, Jarnikov, Dmitri, Holenderski, Mike, Koutra, Danai, editor, Plant, Claudia, editor, Gomez Rodriguez, Manuel, editor, Baralis, Elena, editor, and Bonchi, Francesco, editor
- Published
- 2023
- Full Text
- View/download PDF
18. Learning Geometric Representations of Objects via Interaction
- Author
-
Reichlin, Alfredo, Marchetti, Giovanni Luca, Yin, Hang, Varava, Anastasiia, Kragic, Danica, Koutra, Danai, editor, Plant, Claudia, editor, Gomez Rodriguez, Manuel, editor, Baralis, Elena, editor, and Bonchi, Francesco, editor
- Published
- 2023
- Full Text
- View/download PDF
19. Learning Lagrangian Fluid Mechanics with E(3)-Equivariant Graph Neural Networks
- Author
-
Toshev, Artur P., Galletti, Gianluca, Brandstetter, Johannes, Adami, Stefan, Adams, Nikolaus A., Nielsen, Frank, editor, and Barbaresco, Frédéric, editor
- Published
- 2023
- Full Text
- View/download PDF
20. Continuous Kendall Shape Variational Autoencoders
- Author
-
Vadgama, Sharvaree, Tomczak, Jakub M., Bekkers, Erik, Nielsen, Frank, editor, and Barbaresco, Frédéric, editor
- Published
- 2023
- Full Text
- View/download PDF
21. Group Equivariant Sparse Coding
- Author
-
Shewmake, Christian, Miolane, Nina, Olshausen, Bruno, Nielsen, Frank, editor, and Barbaresco, Frédéric, editor
- Published
- 2023
- Full Text
- View/download PDF
22. Convolution Filter Equivariance/Invariance in Convolutional Neural Networks: A Survey
- Author
-
Habte, Sinshaw Bekele, Ibenthal, Achim, Bekele, Ephrem Tehsale, Debelee, Taye Girma, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Girma Debelee, Taye, editor, Ibenthal, Achim, editor, and Schwenker, Friedhelm, editor
- Published
- 2023
- Full Text
- View/download PDF
23. Online Class-Incremental Learning for Multi-Pose Point Cloud Objects (面向多姿态点云目标的在线类增量学习).
- Author
-
张润江, 郭杰龙, 俞 辉, 兰 海, 王希豪, and 魏 宪
- Subjects
- POINT cloud, ONLINE education, CLASSIFICATION
- Published
- 2023
- Full Text
- View/download PDF
24. Complex-Valued FastICA Estimator with a Weighted Unitary Constraint: A Robust and Equivariant Estimator
- Author
-
Jianwei E and Mingshu Yang
- Subjects
- complex-valued independent component analysis, FastICA algorithm, robustness, equivariance, Mathematics
- Abstract
Independent component analysis (ICA), as a statistical and computational approach, has been successfully applied to digital signal processing. Performance analysis for the ICA approach is perceived as a challenging task. This contribution concerns the complex-valued FastICA algorithm within ICA over the complex number domain, focusing on the robust and equivariant behavior of the complex-valued FastICA estimator. Although the complex-valued FastICA algorithm and its derivatives have been widely used to approach the complex blind signal separation problem, rigorous mathematical treatments of robustness and equivariance for the complex-valued FastICA estimator are still missing. This paper rigorously analyzes the robustness against outliers and the separation performance depending on the global system. We begin by defining the influence function (IF) of the complex-valued FastICA functional and then derive its closed-form expression. We then prove that the complex-valued FastICA algorithm based on the optimized cost function is linear-equivariant, depending only on the source signals.
- Published
- 2024
- Full Text
- View/download PDF
25. Towards Feasible Capsule Network for Vision Tasks.
- Author
-
Vu, Dang Thanh, An, Le Bao Thai, Kim, Jin Young, and Yu, Gwang Hyun
- Subjects
- CAPSULE neural networks, COMPUTER networks, COMPUTATIONAL complexity, NETWORK PC (Computer)
- Abstract
Capsule networks exhibit the potential to enhance computer vision tasks through their use of equivariance for capturing spatial relationships. However, their broader adoption has been impeded by the computational complexity of the routing mechanism and their shallow backbone models. To address these challenges, this paper introduces a hybrid architecture that seamlessly integrates a pretrained backbone model with a task-specific capsule head (CapsHead). Our methodology is extensively evaluated across a range of classification and segmentation tasks encompassing diverse datasets. The empirical findings robustly underscore the efficacy and practical feasibility of our proposed approach in real-world vision applications. Notably, our approach yields substantial enhancements of 3.45% and 6.24% in linear evaluation on the CIFAR10 dataset and segmentation on the VOC2012 dataset, respectively, compared to baselines that do not incorporate the capsule head. This research offers a noteworthy contribution by not only advancing the application of capsule networks but also mitigating their computational complexities. The results substantiate the feasibility of our hybrid architecture, paving the way for wider integration of capsule networks into various computer vision tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Use of the bias-corrected parametric bootstrap in sensitivity testing/analysis to construct confidence bounds with accurate levels of coverage.
- Author
-
Thomas, Edward V.
- Subjects
- MAXIMUM likelihood statistics, PARAMETER estimation, CONFIDENCE, PROBIT analysis
- Abstract
Sensitivity testing often involves sequential design strategies in small-sample settings that provide binary data which are then used to develop generalized linear models. Model parameters are usually estimated via maximum likelihood methods. Often, confidence bounds relating to model parameters and quantiles are based on the likelihood ratio. In this paper, it is demonstrated how the bias-corrected parametric bootstrap, in conjunction with approximate pivotal quantities, can provide an alternative means of constructing bounds when using a location-scale model. In small-sample settings, the coverage of bounds based on the likelihood ratio is often anticonservative due to bias in estimating the scale parameter. In contrast, bounds produced by the bias-corrected parametric bootstrap can provide accurate levels of coverage in such settings when both the sequential strategy and the method of parameter estimation effectively adapt (are approximately equivariant) to the location and scale. A series of simulations illustrates this contrasting behavior in a small-sample setting when assuming a normal/probit model in conjunction with a popular sequential design strategy. In addition, it is shown how a high-fidelity assessment of performance can be attained with reduced computational effort by using the nonparametric bootstrap to resample pivotal quantities obtained from a small-scale set of parametric bootstrap simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
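The bias-corrected percentile bootstrap mentioned above can be sketched in a simplified setting. The paper works with binary sensitivity-test data and a location-scale model; the toy below swaps in a normal mean so the bias-correction constant z0 is easy to see, and all numbers are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Illustrative data from an assumed normal model.
data = rng.normal(loc=10.0, scale=2.0, size=25)
theta_hat = data.mean()

# Parametric bootstrap: resample from the fitted model, not the data.
B = 4000
boot = np.empty(B)
for b in range(B):
    sample = rng.normal(loc=theta_hat, scale=data.std(ddof=1),
                        size=data.size)
    boot[b] = sample.mean()

# Bias-correction constant z0: measures how far the bootstrap
# distribution is off-center relative to the original estimate.
z0 = norm.ppf(np.mean(boot < theta_hat))

# Corrected lower tail level for a one-sided 95% lower bound.
alpha = 0.05
adj = norm.cdf(2 * z0 + norm.ppf(alpha))
lower = np.quantile(boot, adj)
print(round(float(lower), 2))
```

When the bootstrap distribution is centered on the estimate, z0 is near zero and the correction reduces to the plain percentile bound; the paper's point is that the correction matters when the scale estimate is biased.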
27. Data Symmetries and Learning in Fully Connected Neural Networks
- Author
-
Fabio Anselmi, Luca Manzoni, Alberto D'onofrio, Alex Rodriguez, Giulio Caravagna, Luca Bortolussi, and Francesca Cairoli
- Subjects
- Artificial neural networks, symmetry invariance, equivariance, Electrical engineering. Electronics. Nuclear engineering
- Abstract
How symmetries in the data constrain the learned weights of modern deep networks is still an open problem. In this work we study the simple case of fully connected shallow non-linear neural networks and consider two types of symmetries: full dataset symmetries, where the dataset $X$ is mapped into itself by any transformation $g$, i.e. $gX = X$, and single-data-point symmetries, where $gx = x$ for $x \in X$. We prove and experimentally confirm that symmetries in the data are directly inherited at the level of the network's learned weights, and we relate these findings to the common practice of data augmentation in modern machine learning. Finally, we show how symmetry constraints have a profound impact on the spectrum of the learned weights, an aspect of the so-called network implicit bias.
- Published
- 2023
- Full Text
- View/download PDF
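The two symmetry types defined in the abstract, full dataset symmetries gX = X and single-data-point symmetries gx = x, can be illustrated with a coordinate swap in the plane; a toy construction, not the paper's experimental setup:

```python
import numpy as np

# g swaps the two coordinates of a point in the plane.
g = np.array([[0., 1.],
              [1., 0.]])

# Dataset-level symmetry: X is mapped into itself as a *set*
# because it contains every point together with its swap.
X = np.array([[1., 2.], [2., 1.], [0., 3.], [3., 0.]])
gX = X @ g.T
dataset_fixed = {tuple(p) for p in gX} == {tuple(p) for p in X}

# Point-level symmetry: points on the diagonal satisfy gx = x
# individually, not just as members of a symmetric set.
x = np.array([2., 2.])
point_fixed = bool(np.allclose(g @ x, x))

print(dataset_fixed, point_fixed)
```

The paper's result is that either kind of symmetry leaves a corresponding imprint on the trained weights, which is why data augmentation (enforcing gX = X artificially) constrains the learned model.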
28. 3D Equivariant Graph Implicit Functions
- Author
-
Chen, Yunlu, Fernando, Basura, Bilen, Hakan, Nießner, Matthias, Gavves, Efstratios, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
- Full Text
- View/download PDF
29. Shape-Pose Disentanglement Using SE(3)-Equivariant Vector Neurons
- Author
-
Katzir, Oren, Lischinski, Dani, Cohen-Or, Daniel, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
- Full Text
- View/download PDF
30. DEVIANT: Depth EquiVarIAnt NeTwork for Monocular 3D Object Detection
- Author
-
Kumar, Abhinav, Brazil, Garrick, Corona, Enrique, Parchami, Armin, Liu, Xiaoming, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
- Full Text
- View/download PDF
31. Equivariance and Invariance Inductive Bias for Learning from Insufficient Data
- Author
-
Wad, Tan, Sun, Qianru, Pranata, Sugiri, Jayashree, Karlekar, Zhang, Hanwang, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
- Full Text
- View/download PDF
32. Utility of Equivariant Message Passing in Cortical Mesh Segmentation
- Author
-
Unyi, Dániel, Insalata, Ferdinando, Veličković, Petar, Gyires-Tóth, Bálint, Yang, Guang, editor, Aviles-Rivero, Angelica, editor, Roberts, Michael, editor, and Schönlieb, Carola-Bibiane, editor
- Published
- 2022
- Full Text
- View/download PDF
33. Iterative Collaborative Routing among Equivariant Capsules for Transformation-Robust Capsule Networks
- Author
-
Sai Raam Venkataraman, S. Balasubramanian, and Raghunatha Sarma
- Subjects
- equivariance, transformation robustness, capsule network, image classification, Telecommunication, Computer applications to medicine. Medical informatics
- Abstract
Transformation-robustness is an important property for machine learning models that perform image classification. Many methods aim to bestow this property on models through data augmentation strategies, while more formal guarantees are obtained via equivariant models. We recognise that compositional, or part-whole, structure is also an important aspect of images that must be considered when building transformation-robust models. Thus, we propose a capsule network model that is at once equivariant and compositionality-aware. Equivariance of our capsule network model comes from the use of equivariant convolutions in a carefully chosen novel architecture. The awareness of compositionality comes from our proposed novel, iterative, graph-based routing algorithm, termed Iterative Collaborative Routing (ICR). ICR, the core of our contribution, weights the predictions made for capsules based on an iteratively averaged score of the degree-centralities of their nearest neighbours. Experiments on transformed image classification on FashionMNIST, CIFAR-10, and CIFAR-100 show that our model using ICR outperforms convolutional and capsule baselines, achieving state-of-the-art performance.
- Published
- 2022
- Full Text
- View/download PDF
34. Evolution groups and reversibility.
- Author
-
Barreira, Luís and Valls, Claudia
- Abstract
We give a complete description of the correspondence between the reversibility and equivariance properties of evolution families (so for dynamical systems with continuous time, possibly nonautonomous) and their evolution groups. To the best of our knowledge, no similar result appeared before in the literature for dynamical systems with continuous time, even autonomous. Moreover, based on these results, we describe the faithful correspondence between the reversibility properties of stable and unstable invariant manifolds of evolution families and of their associated evolution groups. Finally, we construct center invariant manifolds for an equation that has a line of equilibria generated by a group of symmetries for which the equation is equivariant. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. ℤ₂ × ℤ₂ Equivariant Quantum Neural Networks: Benchmarking against Classical Neural Networks
- Author
-
Zhongtian Dong, Marçal Comajoan Cara, Gopal Ramesh Dahale, Roy T. Forestano, Sergei Gleyzer, Daniel Justice, Kyoungchul Kong, Tom Magorsch, Konstantin T. Matchev, Katia Matcheva, and Eyup B. Unlu
- Subjects
- quantum computing, deep learning, quantum machine learning, equivariance, invariance, supervised learning, Mathematics
- Abstract
This paper presents a comparative analysis of the performance of Equivariant Quantum Neural Networks (EQNNs) and Quantum Neural Networks (QNNs), juxtaposed against their classical counterparts: Equivariant Neural Networks (ENNs) and Deep Neural Networks (DNNs). We evaluate the performance of each network with three two-dimensional toy examples for a binary classification task, focusing on model complexity (measured by the number of parameters) and the size of the training dataset. Our results show that the ℤ₂ × ℤ₂ EQNN and the QNN provide superior performance for smaller parameter sets and modest training data samples.
- Published
- 2024
- Full Text
- View/download PDF
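The ℤ2 × ℤ2 symmetry in this record can be illustrated classically: the group acts on a 2D input by independent sign flips of each coordinate, and a model built only on features unchanged by those flips is invariant by construction. A minimal numpy sketch (not the paper's quantum model; the linear read-out and weights are hypothetical):

```python
import numpy as np

# The group Z2 x Z2 acts on points (x, y) by independent sign flips.
GROUP = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def invariant_features(p):
    # |x| and |y| are unchanged by any sign flip, so any classifier
    # built on them is Z2 x Z2 invariant by construction.
    return np.abs(p)

def toy_classifier(p, w=np.array([1.0, -0.5])):
    # Hypothetical linear read-out on the invariant features.
    return float(invariant_features(p) @ w)

p = np.array([0.3, -1.2])
scores = [toy_classifier(p * np.array(g)) for g in GROUP]
# The score is identical for every group element applied to the input.
assert all(np.isclose(s, scores[0]) for s in scores)
```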
36. A Comparison between Invariant and Equivariant Classical and Quantum Graph Neural Networks
- Author
-
Roy T. Forestano, Marçal Comajoan Cara, Gopal Ramesh Dahale, Zhongtian Dong, Sergei Gleyzer, Daniel Justice, Kyoungchul Kong, Tom Magorsch, Konstantin T. Matchev, Katia Matcheva, and Eyup B. Unlu
- Subjects
quantum computing ,deep learning ,quantum machine learning ,equivariance ,invariance ,supervised learning ,Mathematics ,QA1-939 - Abstract
Machine learning algorithms are heavily relied on to understand the vast amounts of data from high-energy particle collisions at the CERN Large Hadron Collider (LHC). The data from such collision events can naturally be represented with graph structures. Therefore, deep geometric methods, such as graph neural networks (GNNs), have been leveraged for various data analysis tasks in high-energy physics. One typical task is jet tagging, where jets are viewed as point clouds with distinct features and edge connections between their constituent particles. The increasing size and complexity of the LHC particle datasets, as well as the computational models used for their analysis, have greatly motivated the development of alternative fast and efficient computational paradigms such as quantum computation. In addition, to enhance the validity and robustness of deep networks, we can leverage the fundamental symmetries present in the data through the use of invariant inputs and equivariant layers. In this paper, we provide a fair and comprehensive comparison of classical graph neural networks (GNNs) and equivariant graph neural networks (EGNNs) and their quantum counterparts: quantum graph neural networks (QGNNs) and equivariant quantum graph neural networks (EQGNN). The four architectures were benchmarked on a binary classification task to classify the parton-level particle initiating the jet. Based on their area under the curve (AUC) scores, the quantum networks were found to outperform the classical networks. However, seeing the computational advantage of quantum networks in practice may have to wait for the further development of quantum technology and its associated application programming interfaces (APIs).
- Published
- 2024
- Full Text
- View/download PDF
37. Equivariant tensor network potentials
- Author
-
M Hodapp and A Shapeev
- Subjects
machine learning ,interatomic potential ,tensor network ,equivariance ,Computer engineering. Computer hardware ,TK7885-7895 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Machine-learning interatomic potentials (MLIPs) have made a significant contribution to the recent progress in the fields of computational materials and chemistry due to the MLIPs’ ability to accurately approximate energy landscapes of quantum-mechanical models while being orders of magnitude more computationally efficient. However, the computational cost and number of parameters of many state-of-the-art MLIPs increase exponentially with the number of atomic features. Tensor (non-neural) networks, based on low-rank representations of high-dimensional tensors, have been a way to reduce the number of parameters in approximating multidimensional functions; however, it is often not easy to encode the model symmetries into them. In this work we develop a formalism for rank-efficient equivariant tensor networks (ETNs), i.e. tensor networks that remain invariant under actions of SO(3) upon contraction. All the key algorithms of tensor networks, like orthogonalization of cores and DMRG-based algorithms, carry over to our equivariant case. Moreover, we show that many elements of modern neural network architectures, like message passing, pooling, or attention mechanisms, can in some form be implemented in ETNs. Based on ETNs, we develop a new class of polynomial-based MLIPs that demonstrate superior performance over existing MLIPs for multicomponent systems.
- Published
- 2024
- Full Text
- View/download PDF
38. On the universality of Sn-equivariant k-body gates
- Author
-
Sujay Kazi, Martín Larocca, and M Cerezo
- Subjects
quantum neural networks ,equivariance ,quantum computing ,universality ,Science ,Physics ,QC1-999 - Abstract
The importance of symmetries has recently been recognized in quantum machine learning from the simple motto: if a task exhibits a symmetry (given by a group $\mathfrak{G}$), the learning model should respect said symmetry. This can be instantiated via $\mathfrak{G}$-equivariant quantum neural networks (QNNs), i.e. parametrized quantum circuits whose gates are generated by operators commuting with a given representation of $\mathfrak{G}$. In practice, however, there might be additional restrictions on the types of gates one can use, such as being able to act on at most $k$ qubits. In this work we study how the interplay between symmetry and $k$-bodyness in the QNN generators affects its expressiveness for the special case of $\mathfrak{G} = S_n$, the symmetric group. Our results show that if the QNN is generated by one- and two-body $S_n$-equivariant gates, the QNN is semi-universal but not universal. That is, the QNN can generate any arbitrary special unitary matrix in the invariant subspaces, but has no control over the relative phases between them. Then, we show that in order to reach universality one needs to include $n$-body generators (if $n$ is even) or $(n-1)$-body generators (if $n$ is odd). As such, our results bring us a step closer to better understanding the capabilities and limitations of equivariant QNNs.
- Published
- 2024
- Full Text
- View/download PDF
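The $S_n$-equivariance discussed in this abstract has a simple classical analogue: a DeepSets-style layer that transforms every set element identically, plus a term depending only on the permutation-invariant mean of the set. A minimal sketch (illustrative only, not the paper's quantum gates; the layer and weights are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sn_equivariant_layer(X, W, V):
    # A classical S_n-equivariant map on a set of n feature vectors:
    # each element is transformed identically (X @ W), plus a term that
    # depends only on the permutation-invariant mean of the set.
    return X @ W + np.ones((X.shape[0], 1)) * (X.mean(axis=0) @ V)

n, d = 5, 3
X = rng.normal(size=(n, d))
W, V = rng.normal(size=(d, d)), rng.normal(size=(d, d))
perm = rng.permutation(n)

# Permuting the input rows permutes the output rows the same way.
assert np.allclose(sn_equivariant_layer(X[perm], W, V),
                   sn_equivariant_layer(X, W, V)[perm])
```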
39. Generalization in Deep RL for TSP Problems via Equivariance and Local Search
- Author
-
Ouyang, Wenbin, Wang, Yisen, Weng, Paul, and Han, Shaochen
- Published
- 2024
- Full Text
- View/download PDF
40. ROBUSTCAPS: A TRANSFORMATION-ROBUST CAPSULE NETWORK FOR IMAGE CLASSIFICATION.
- Author
-
Venkataraman, Sai Raam, Balasubramanian, S., and Raghunatha Sarma, R.
- Subjects
CAPSULE neural networks ,ARTIFICIAL neural networks ,CONVOLUTIONAL neural networks ,IMAGE recognition (Computer vision) ,MACHINE learning - Abstract
Geometric transformations of the training data as well as the test data present challenges to the use of deep neural networks in vision-based learning tasks. To address this issue, we present a deep neural network model that exhibits the desirable property of transformation-robustness. Our model, termed RobustCaps, uses group-equivariant convolutions in an improved capsule network model. RobustCaps uses a global context-normalised procedure in its routing algorithm to learn transformation-invariant part-whole relationships within image data. Learning such relationships allows our model to outperform both capsule and convolutional neural network baselines on transformation-robust classification tasks. Specifically, RobustCaps achieves state-of-the-art accuracies on CIFAR-10, FashionMNIST, and CIFAR-100 when the images in these datasets are subjected to train- and test-time rotations and translations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. Sensing Theorems for Unsupervised Learning in Linear Inverse Problems.
- Author
-
Tachella, Julián, Dongdong Chen, and Davies, Mike
- Subjects
- *
ARTIFICIAL neural networks , *MACHINE learning , *INVERSE problems - Abstract
Solving an ill-posed linear inverse problem requires knowledge about the underlying signal model. In many applications, this model is a priori unknown and has to be learned from data. However, it is impossible to learn the model using observations obtained via a single incomplete measurement operator, as there is no information about the signal model in the nullspace of the operator, resulting in a chicken-and-egg problem: to learn the model we need reconstructed signals, but to reconstruct the signals we need to know the model. Two ways to overcome this limitation are using multiple measurement operators or assuming that the signal model is invariant to a certain group action. In this paper, we present necessary and sufficient sensing conditions for learning the signal model from measurement data alone, which depend only on the dimension of the model and on the number of operators or the properties of the group action that the model is invariant to. As our results are agnostic of the learning algorithm, they shed light on the fundamental limitations of learning from incomplete data and have implications for a wide range of practical algorithms, such as dictionary learning, matrix completion and deep neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
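The nullspace obstruction at the heart of this abstract can be demonstrated in a few lines: two signals that differ by a nullspace vector of an incomplete measurement operator produce identical measurements, so no algorithm can distinguish them from the data alone. A minimal sketch with a hypothetical 2×3 operator:

```python
import numpy as np

# An incomplete measurement operator: 2 measurements of a 3-dim signal.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# A vector in the nullspace of A: A @ v == 0.
v = np.array([1.0, 1.0, -1.0])
assert np.allclose(A @ v, 0)

x1 = np.array([2.0, -1.0, 0.5])
x2 = x1 + v  # differs from x1 only inside the nullspace

# Both signals produce identical measurements, so no learning
# algorithm can tell them apart from the measurements alone.
assert np.allclose(A @ x1, A @ x2)
```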
42. Weisfeiler and Leman go Machine Learning: The Story so far.
- Author
-
Morris, Christopher, Lipman, Yaron, Maron, Haggai, Rieck, Bastian, Kriege, Nils M., Grohe, Martin, Fey, Matthias, and Borgwardt, Karsten
- Subjects
- *
MACHINE learning , *GRAPH neural networks , *REPRESENTATIONS of graphs , *GRAPH algorithms , *SUPERVISED learning , *ISOMORPHISM (Mathematics) - Abstract
In recent years, algorithms and neural architectures based on the Weisfeiler-Leman algorithm, a well-known heuristic for the graph isomorphism problem, have emerged as a powerful tool for machine learning with graphs and relational data. Here, we give a comprehensive overview of the algorithm's use in a machine-learning setting, focusing on the supervised regime. We discuss the theoretical background, show how to use it for supervised graph and node representation learning, discuss recent extensions, and outline the algorithm's connection to (permutation-)equivariant neural architectures. Moreover, we give an overview of current applications and future directions to stimulate further research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
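The Weisfeiler-Leman heuristic this survey builds on is short enough to sketch directly: iteratively recolour each node by combining its own colour with the multiset of its neighbours' colours, then compare the resulting colour histograms as a graph invariant. A minimal 1-WL sketch (the hash is replaced by a relabelling table for determinism):

```python
from collections import Counter

def weisfeiler_leman(adj, rounds=3):
    """1-dimensional Weisfeiler-Leman colour refinement.

    adj: adjacency list {node: [neighbours]}.
    Returns the multiset of final colours, a graph invariant.
    """
    colours = {v: 0 for v in adj}  # start with a uniform colouring
    for _ in range(rounds):
        # New colour = (own colour, sorted multiset of neighbour colours),
        # compressed to small integers via a relabelling table.
        signatures = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                      for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colours = {v: palette[signatures[v]] for v in adj}
    return Counter(colours.values())

# A triangle and a path on 3 nodes get different colour histograms.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
assert weisfeiler_leman(triangle) != weisfeiler_leman(path)
```

Note that 1-WL can fail to separate some non-isomorphic graphs, which is exactly the expressiveness limitation the survey connects to graph neural networks.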
43. Equivariant graph convolutional neural networks for the representation of homogenized anisotropic microstructural mechanical response.
- Author
-
Patel, Ravi, Safta, Cosmin, and Jones, Reese E.
- Subjects
- *
CONVOLUTIONAL neural networks , *STRUCTURAL optimization , *MATERIAL plasticity , *COMPOSITE materials , *MICROSTRUCTURE - Abstract
Composite materials with different microstructural material symmetries are common in engineering applications where grain structure, alloying and particle/fiber packing are optimized via controlled manufacturing. In fact, these microstructural tunings can be done throughout a part to achieve functional gradation and optimization at a structural level. To predict the performance of a particular microstructural configuration, and thereby overall performance, constitutive models of materials with microstructure are needed. In this work we provide neural network architectures that serve as effective homogenization models of materials with anisotropic components. These models satisfy equivariance and material symmetry principles inherently through a combination of equivariant and tensor basis operations. We demonstrate them on datasets of stochastic volume elements with different textures and phases where the material undergoes elastic and plastic deformation, and show that these network architectures provide significant performance improvements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. UIR-ES: An unsupervised underwater image restoration framework with equivariance and stein unbiased risk estimator.
- Author
-
Zhu, Jiacheng, Wen, Junjie, Hong, Duanqin, Lin, Zhanpeng, and Hong, Wenxing
- Subjects
- *
ARTIFICIAL neural networks , *IMAGE reconstruction , *LEARNING strategies , *PRIOR learning , *SELF-efficacy - Abstract
Underwater imaging faces challenges in enhancing object visibility and restoring true colors due to the absorptive and scattering characteristics of water. Underwater image restoration (UIR) seeks solutions to restore clean images from degraded ones, providing significant utility in downstream tasks. Recently, data-driven UIR has garnered much attention due to the potent expressive capabilities of deep neural networks (DNNs). These DNNs are supervised, relying on a large amount of labeled training samples. However, acquiring such data is expensive or even impossible in real-world underwater scenarios. While recent research suggests that unsupervised learning is effective in UIR, none of these frameworks consider physical signal priors. In this work, we present a novel physics-inspired unsupervised UIR framework empowered by equivariance and unbiased estimation techniques. Specifically, equivariance stems from invariances inherent in natural signals and enables data-efficient learning. Given that degraded images invariably contain noise, we propose a noise-tolerant loss for unsupervised UIR based on the Stein unbiased risk estimator (SURE) to achieve an accurate estimation of the data consistency. Extensive experiments on benchmark UIR datasets, including the UIEB and RUIE datasets, validate the superiority of the proposed method in terms of quantitative scores, visual outcomes, and generalization ability, compared to state-of-the-art counterparts. Moreover, our method demonstrates performance comparable even with the supervised model. • Introduction of UIR-ES, a fully unsupervised learning framework for UIR. • Proposing scene radiance equivariant learning and observation equivariant learning strategies. • Using SURE to estimate data consistency for improved performance. • Superior UIR performance over previous unsupervised learning methods on two UIR datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. A geometric approach to robust medical image segmentation.
- Author
-
Santhirasekaram, Ainkaran, Winkler, Mathias, Rockall, Andrea, and Glocker, Ben
- Subjects
- *
CARDIAC magnetic resonance imaging , *MAGNETIC resonance imaging , *DEEP learning , *IMAGE segmentation , *GROUP theory - Abstract
Robustness of deep learning segmentation models is crucial for their safe incorporation into clinical practice. However, these models can falter when faced with distributional changes. This challenge is evident in magnetic resonance imaging (MRI) scans due to the diverse acquisition protocols across various domains, leading to differences in image characteristics such as textural appearances. We posit that the restricted anatomical differences between subjects could be harnessed to refine the latent space into a set of shape components. The learned set then aims to encompass the relevant anatomical shape variation found within the patient population. We explore this by utilising multiple MRI sequences to learn texture-invariant and shape-equivariant features which are used to construct a shape dictionary using vector quantisation. We investigate shape equivariance to a number of different types of groups. We hypothesise and prove that the greater the group order, i.e., the denser the constraint, the better the model's robustness becomes. We achieve shape equivariance either with a contrastive-based approach or by imposing equivariant constraints on the convolutional kernels. The resulting shape-equivariant dictionary is then sampled to compose the segmentation output. Our method achieves state-of-the-art performance for the task of single domain generalisation for prostate and cardiac MRI segmentation. Code is available at https://github.com/AinkaranSanthi/A_Geometric_Perspective_For_Robust_Segmentation. • Geometric constraints in the latent space of a deep learning model for robust segmentation. • We hypothesise and prove that group equivariant constraints in the latent space improve robustness. • A discrete equivariant shape latent space is sampled to construct the segmentation map. • Method demonstrated on the task of single domain generalisation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
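The shape dictionary via vector quantisation mentioned in the abstract reduces, at its core, to nearest-codebook assignment of latent vectors. A minimal sketch (the codebook size and dimensions are illustrative, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def vector_quantise(z, codebook):
    # Replace each latent vector by its nearest codebook entry
    # (the basic operation behind a learned "shape dictionary").
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

codebook = rng.normal(size=(8, 4))   # 8 hypothetical shape components, dim 4
z = rng.normal(size=(10, 4))         # 10 latent vectors from an encoder

zq, idx = vector_quantise(z, codebook)
assert zq.shape == z.shape
# Every quantised vector is an exact row of the codebook.
assert all(np.allclose(zq[i], codebook[idx[i]]) for i in range(len(z)))
```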
46. Iterative SE(3)-Transformers
- Author
-
Fuchs, Fabian B., Wagstaff, Edward, Dauparas, Justas, Posner, Ingmar, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nielsen, Frank, editor, and Barbaresco, Frédéric, editor
- Published
- 2021
- Full Text
- View/download PDF
47. Augmented Equivariant Attention Networks for Microscopy Image Transformation.
- Author
-
Xie, Yaochen, Ding, Yu, and Ji, Shuiwang
- Subjects
- *
MICROSCOPY , *IMAGE denoising , *FLUORESCENCE microscopy , *DEEP learning , *MACHINE learning , *ELECTRON microscopy - Abstract
It is time-consuming and expensive to take high-quality or high-resolution electron microscopy (EM) and fluorescence microscopy (FM) images. Taking these images can even be invasive to samples and may damage certain subtleties in the samples after long or intense exposures, often necessary for achieving high quality or high resolution in the first place. Advances in deep learning enable us to perform various types of microscopy image-to-image transformation tasks, such as image denoising, super-resolution, and segmentation, that computationally produce high-quality images from the physically acquired low-quality ones. When training image-to-image transformation models on pairs of experimentally acquired microscopy images, prior models suffer from performance loss due to their inability to capture inter-image dependencies and common features shared among images. Existing methods that take advantage of shared features in image classification tasks cannot be properly applied to image transformation tasks because they fail to preserve the equivariance property under spatial permutations, something essential in image-to-image transformation. To address these limitations, we propose the augmented equivariant attention networks (AEANets), with better capability to capture inter-image dependencies while preserving the equivariance property. The proposed AEANets capture inter-image dependencies and shared features via two augmentations of the attention mechanism: shared references and batch-aware attention during training. We theoretically derive the equivariance property of the proposed augmented attention model and experimentally demonstrate its consistent superiority in both quantitative and visual results over the baseline methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
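The equivariance under spatial permutations that this abstract emphasises is a property that plain self-attention (without positional encodings) already satisfies: permuting the input rows permutes the output rows identically. A minimal numpy check (not the authors' augmented attention; weights are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Plain self-attention with no positional encoding.
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(X.shape[1])
    return softmax(scores) @ (X @ Wv)

n, d = 6, 4
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
perm = rng.permutation(n)

# Attention is equivariant under permutations of its inputs:
# permuting the rows of X permutes the output rows identically.
assert np.allclose(self_attention(X[perm], Wq, Wk, Wv),
                   self_attention(X, Wq, Wk, Wv)[perm])
```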
48. ITERATIVE COLLABORATIVE ROUTING AMONG EQUIVARIANT CAPSULES FOR TRANSFORMATION-ROBUST CAPSULE NETWORKS.
- Author
-
Venkataraman, Sai Raam, Balasubramanian, S., and Sarma, R. Raghunatha
- Subjects
CAPSULE neural networks ,IMAGE recognition (Computer vision) ,ROUTING algorithms ,DATA augmentation ,MACHINE learning ,DEEP learning ,TANNER graphs - Abstract
Transformation-robustness is an important feature for machine learning models that perform image classification. Many methods aim to bestow this property on models by the use of data augmentation strategies, while more formal guarantees are obtained via the use of equivariant models. We recognise that compositional, or part-whole, structure is also an important aspect of images that has to be considered for building transformation-robust models. Thus, we propose a capsule network model that is, at once, equivariant and compositionality-aware. Equivariance of our capsule network model comes from the use of equivariant convolutions in a carefully-chosen novel architecture. The awareness of compositionality comes from the use of our proposed novel, iterative, graph-based routing algorithm, termed iterative collaborative routing (ICR). ICR, the core of our contribution, weights the predictions made for capsules based on an iteratively averaged score of the degree-centralities of its nearest neighbours. Experiments on transformed image classification on FashionMNIST, CIFAR-10, and CIFAR-100 show that our model using ICR outperforms convolutional and capsule baselines to achieve state-of-the-art performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
49. On Symmetries and Metrics in Geometric Inference
- Author
-
Marchetti, Giovanni Luca
- Abstract
Spaces of data naturally carry intrinsic geometry. Statistics and machine learning can leverage on this rich structure in order to achieve efficiency and semantic generalization. Extracting geometry from data is therefore a fundamental challenge which by itself defines a statistical, computational and unsupervised learning problem. To this end, symmetries and metrics are two fundamental objects which are ubiquitous in continuous and discrete geometry. Both are suitable for data-driven approaches since symmetries arise as interactions and are thus collectable in practice while metrics can be induced locally from the ambient space. In this thesis, we address the question of extracting geometry from data by leveraging on symmetries and metrics. Additionally, we explore methods for statistical inference exploiting the extracted geometric structure. On the metric side, we focus on Voronoi tessellations and Delaunay triangulations, which are classical tools in computational geometry. Based on them, we propose novel non-parametric methods for machine learning and statistics, focusing on theoretical and computational aspects. These methods include an active version of the nearest neighbor regressor as well as two high-dimensional density estimators. All of them possess convergence guarantees due to the adaptiveness of Voronoi cells. On the symmetry side, we focus on representation learning in the context of data acted upon by a group. Specifically, we propose a method for learning equivariant representations which are guaranteed to be isomorphic to the data space, even in the presence of symmetries stabilizing data. We additionally explore applications of such representations in a robotics context, where symmetries correspond to actions performed by an agent. Lastly, we provide a theoretical analysis of invariant neural networks and show how the group-theoretical Fourier transform emerges in their weights. 
This addresses the problem of symmetry discovery in a self-supervised manner.
- Published
- 2024
50. On Color and Symmetries for Data Efficient Deep Learning
- Author
-
Lengyel, A. (author)
- Abstract
Computer vision algorithms are getting more advanced by the day and slowly approach human-like capabilities, such as detecting objects in cluttered scenes and recognizing facial expressions. Yet, computers learn to perform these tasks very differently from humans. Where humans can generalize between different lighting conditions or geometric orientations with ease, computers require vast amounts of training data to adapt from day to night images, or even to recognize a cat hanging upside-down. This requires additional data, annotations and compute power, increasing the development costs of useful computer vision models. This thesis is therefore concerned with reducing the data and compute hunger of computer vision algorithms by incorporating prior knowledge into the model architecture. Knowledge that is built in no longer needs to be learned from data. This thesis considers various knowledge priors. To improve the robustness of deep learning models to changes in illumination, we make use of color invariant representations derived from physics-based reflection models. We find that a color invariant input layer effectively normalizes the feature map activations throughout the entire network, thereby reducing the distribution shift that normally occurs between day and night images. Equivariance has proven to be a useful network property for improving data efficiency. We introduce the color equivariant convolution, where spatial features are explicitly shared between different colors. This improves generalization to out-of-distribution colors, and therefore reduces the amount of required training data. We subsequently investigate Group Equivariant Convolutions (GConvs). First, we discover that GConv filters learn redundant symmetries, which can be hard-coded using separable convolutions. This preserves equivariance to rotation and mirroring, and improves data and compute efficiency. 
We also explore the notion of approximate equivariance in GConvs.
- Published
- 2024
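The colour-invariant input layer described in this abstract can be illustrated with the classical normalised-rgb (chromaticity) representation, which is unchanged when the illumination intensity rescales all channels by the same factor. A minimal sketch (this invariant is a textbook example used for illustration, not necessarily the exact one from the thesis):

```python
import numpy as np

def normalised_rgb(img):
    # Chromaticity (r, g, b) / (r + g + b): a classical colour-invariant
    # representation, unchanged when the illumination intensity scales
    # all channels by the same factor.
    s = img.sum(axis=-1, keepdims=True)
    return img / np.maximum(s, 1e-8)

rng = np.random.default_rng(3)
day = rng.uniform(0.1, 1.0, size=(4, 4, 3))   # hypothetical "day" image
night = 0.2 * day                              # same scene, dimmer light

# The invariant representation is identical for both lighting conditions.
assert np.allclose(normalised_rgb(day), normalised_rgb(night))
```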