73 results for "Müller, Henning"
Search Results
2. Elektronische Aktenführung am Beispiel der Sozialverwaltung [Electronic records management, using social administration as an example].
- Author
- Müller, Henning
- Published
- 2024
3. Fortgeschrittenenklausur Strafrecht: »Guter Rat ist nicht teuer« [Advanced criminal law examination: "Good advice is not expensive"].
- Author
- Müller, Henning Ernst and Mansouri, Yusef
- Published
- 2023
4. On dynamic crack propagation in a lattice Boltzmann method for elastodynamics in 2D.
- Author
- Müller, Henning, Schlüter, Alexander, Faust, Erik, and Müller, Ralf
- Subjects
- LATTICE Boltzmann methods, BALANCE laws (Mechanics), SHEAR (Mechanics), FRACTURE mechanics, ELASTODYNAMICS
- Abstract
In recent years, the development of lattice Boltzmann methods (LBMs) for solids has gained attention. Fracture mechanics as a viable application for these methods has been presented before, albeit for mode III cracks in the context of an LBM for antiplane shear deformation. The performance of the LBM itself is promising, while the usage of a regular lattice simplifies the modelling of fractures significantly. Recent advancements in LBMs for solids, especially the description of Dirichlet‐ and Neumann‐type boundary conditions, now make it possible to extend the LBM simulation of crack propagation to the plane strain case with modes I and II crack opening, including growth with non‐uniform speed in arbitrary directions. For this, the configurational force acting on a crack tip is utilised. The definition of the moments of the LBM, which are based on the balance laws of continuum mechanics, renders the evaluation of macroscopic fields in the configuration straightforward. In this work, the general in‐plane case of dynamic crack propagation is shown and necessary considerations for the implementation are discussed. Lastly, numerical examples showcase the capabilities of the proposed method to model dynamic fractures and establish a proof‐of‐concept. [ABSTRACT FROM AUTHOR]
- Published
- 2023
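Entry 4 builds on the standard lattice Boltzmann stream-and-collide structure, with macroscopic fields recovered as moments of the distribution functions. The sketch below shows only that generic structure on a D2Q9 lattice with BGK collision; the equilibrium, the solid-mechanics interpretation of the moments, and the boundary and crack handling of the paper are not reproduced here.

```python
# A minimal, generic lattice Boltzmann (BGK) update loop in 2D, shown only to
# illustrate the stream-collide structure and the moment evaluation referred
# to above. It is NOT the elastodynamic scheme of Müller et al.; their
# equilibrium, moments, and boundary handling differ.
import numpy as np

# D2Q9 lattice: discrete velocities and weights (standard choices)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Standard second-order equilibrium distribution."""
    cu = np.einsum('qa,xya->xyq', c, u)                  # c_q . u
    usq = np.einsum('xya,xya->xy', u, u)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau):
    """One BGK collision + streaming step; returns updated populations."""
    rho = f.sum(axis=-1)                                 # zeroth moment
    u = np.einsum('xyq,qa->xya', f, c) / rho[..., None]  # first moment
    f += -(f - equilibrium(rho, u)) / tau                # collide (relaxation)
    for q in range(9):                                   # stream along lattice links
        f[..., q] = np.roll(f[..., q], shift=c[q], axis=(0, 1))
    return f

# usage: initialise at rest and run a few steps
f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))
for _ in range(100):
    f = step(f, tau=0.8)
```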
5. Dynamic propagation of mode III cracks in a Lattice Boltzmann method for solids.
- Author
- Müller, Henning, Touil, Ali, Schlüter, Alexander, and Müller, Ralf
- Subjects
- LATTICE Boltzmann methods, SHEAR (Mechanics), SOLID mechanics, ELASTIC solids, CRACK propagation (Fracture mechanics), FRACTURE mechanics
- Abstract
This work presents concepts and algorithms for the simulation of dynamic fractures with a Lattice Boltzmann method (LBM) for linear elastic solids. This LBM has been presented previously and solves the wave equation, which is interpreted as the governing equation for antiplane shear deformation. Besides the steady growth of a crack at a prescribed crack velocity, a fracture criterion based on stress intensity factors has been implemented. This is the first time that crack propagation with a mechanically relevant criterion has been considered in the context of LBMs. Numerical results are examined to validate the proposed method. The concepts of crack propagation introduced here are not limited to mode III cracks or the simplified deformation assumption of antiplane shear. By introducing a rather simple processing step into the existing LBM at the level of individual lattice sites, the overall performance of the LBM is maintained. Our findings underline the validity of the LBM as a numerical tool to simulate solids in general as well as dynamic fractures in particular. [ABSTRACT FROM AUTHOR]
- Published
- 2023
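The fracture criterion in entry 5 (advance the crack once a stress intensity factor reaches a critical value) can be sketched in a few lines. Everything below is illustrative: the `CrackTip` container and `maybe_advance` helper are invented names, and the estimation of K_III from the simulated fields, the heart of the actual method, is omitted.

```python
# Sketch of a stress-intensity-based growth check: grow the crack by one
# lattice spacing when K_III >= K_IIIc. Not the authors' implementation.
from dataclasses import dataclass

@dataclass
class CrackTip:
    x: float      # tip position along the crack line (lattice units)
    k3: float     # current mode III stress intensity factor estimate

def maybe_advance(tip: CrackTip, k3c: float, dx: float) -> CrackTip:
    """Advance the tip by dx if the fracture criterion K_III >= K_IIIc holds."""
    if tip.k3 >= k3c:
        return CrackTip(x=tip.x + dx, k3=tip.k3)
    return tip

tip = CrackTip(x=10.0, k3=1.2)
tip = maybe_advance(tip, k3c=1.0, dx=1.0)   # criterion met: the crack grows
print(tip.x)                                # 11.0
```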
6. Dynamic Crack Propagation in a Lattice Boltzmann Method for Solid Mechanics.
- Author
- Müller, Henning, Schlüter, Alexander, and Müller, Ralf
- Subjects
- LATTICE Boltzmann methods, CRACK propagation (Fracture mechanics), SHEAR (Mechanics), FRACTURE mechanics, SOLID mechanics, WAVE equation
- Abstract
In recent years, Lattice Boltzmann methods (LBMs) have been adapted and developed to simulate the behavior of solids. They have already been applied to fractures as well. However, our previous work has been restricted to stationary cracks. In this work, we consider the reduced 2D case of anti‐plane shear deformation with mode III crack opening. The wave equation is the governing equation for this problem, which is solved via an LBM. The main contribution of this work is the introduction of an algorithm to handle crack growth in an LBM for solids. The underlying scheme is based on geometric assumptions, which makes it well suited to the regular lattice used by the LBM. A fracture criterion based on the stress intensity factor is implemented and illustrated by a numerical example. [ABSTRACT FROM AUTHOR]
- Published
- 2023
7. Lattice Boltzmann method for antiplane shear deformation: non-lattice-conforming boundary conditions.
- Author
- Schlüter, Alexander, Müller, Henning, and Müller, Ralf
- Subjects
- LATTICE Boltzmann methods, SHEAR (Mechanics), DISTRIBUTION (Probability theory), FRACTURE mechanics
- Abstract
In this work, two different approaches to treat boundary conditions in a lattice Boltzmann method (LBM) for the wave equation are presented. We interpret the wave equation as the governing equation of the displacement field of a solid under simplified deformation assumptions, but the algorithms are not limited to this interpretation. A feature of both algorithms is that the boundary does not need to conform with the discretization, i.e., the regular lattice. This allows for greater flexibility regarding the geometries that can be handled by the LBM. The first algorithm aims at determining the missing distribution functions at boundary lattice points in such a way that a desired macroscopic boundary condition is fulfilled. The second algorithm is only available for Neumann-type boundary conditions and considers a balance of momentum for control volumes on the mesoscopic scale, i.e., at the scale of the lattice spacing. Numerical examples demonstrate that the new algorithms indeed improve the accuracy of the LBM compared to previous results and that they are able to model boundary conditions for complex geometries that do not conform with the lattice. [ABSTRACT FROM AUTHOR]
- Published
- 2022
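The first algorithm in entry 7 reconstructs the distribution functions that are missing at a boundary lattice point so that a prescribed macroscopic value is met. The toy D1Q3 sketch below illustrates the idea with a classical anti-bounce-back closure; it is a generic construction, not the non-lattice-conforming algorithm of the paper.

```python
# At a boundary lattice point, populations that would have streamed in from
# outside the domain are missing. This generic anti-bounce-back closure sets
# each missing population from its opposite so the zeroth moment approaches a
# prescribed target value (exact only to first order).
import numpy as np

def reconstruct_missing(f, missing, opposite, target, weights):
    """Set each missing population f_q = -f_opp + 2 w_q * target."""
    f = f.copy()
    for q in missing:
        f[q] = -f[opposite[q]] + 2.0 * weights[q] * target
    return f

# D1Q3 toy example: populations (rest, +x, -x); the wall sits on the -x side
w = np.array([2/3, 1/6, 1/6])
opp = {2: 1}                      # the -x population is missing; opposite is +x
f = np.array([0.6, 0.2, 0.0])     # f[2] unknown after streaming
f = reconstruct_missing(f, missing=[2], opposite=opp, target=1.0, weights=w)
print(f.sum())                    # zeroth moment pushed toward the target value
```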
8. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging.
- Author
- Champendal, Mélanie, Müller, Henning, Prior, John O., and dos Reis, Cláudia Sá
- Subjects
- COMPUTER-assisted image analysis (Medicine), DIAGNOSTIC imaging, ARTIFICIAL intelligence, ALZHEIMER'S disease, MAGNETIC resonance imaging
- Abstract
To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, BioRxiv, MedRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened abstracts, titles and full text, resolving differences through discussion. 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, Covid-19 (n = 48), Alzheimer's disease (n = 25), brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The most frequently provided explanations were local (78.1 %), 5.7 % were global, and 16.2 % combined both local and global approaches. Post-hoc approaches were predominantly employed. The terminology varied, sometimes using explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3) interchangeably. The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably. Future XAI development should consider user needs and perspectives. [ABSTRACT FROM AUTHOR]
- Published
- 2023
9. Dirichlet and Neumann boundary conditions in a lattice Boltzmann method for elastodynamics.
- Author
- Faust, Erik, Schlüter, Alexander, Müller, Henning, Steinmetz, Felix, and Müller, Ralf
- Subjects
- LATTICE Boltzmann methods, NEUMANN boundary conditions, TRANSIENTS (Dynamics), ELASTODYNAMICS, ELASTIC wave propagation
- Abstract
Recently, Murthy et al. (Commun Comput Phys 2:23, 2017. http://dx.doi.org/10.4208/cicp.OA-2016-0259) and Escande et al. (Lattice Boltzmann method for wave propagation in elastic solids with a regular lattice: theoretical analysis and validation, 2020. arXiv:2009.06404. https://doi.org/10.48550/arXiv.2009.06404) adopted the Lattice Boltzmann Method (LBM) to model the linear elastodynamic behaviour of isotropic solids. The LBM is attractive as an elastodynamic solver because it can be parallelised readily and lends itself to finely discretised simulations of dynamic effects in continua, allowing transient phenomena such as wave propagation to be modelled efficiently. This work proposes simple local boundary rules which approximate the behaviour of Dirichlet and Neumann boundary conditions with an LBM for elastic solids. The boundary rules are shown to be consistent with the target boundary values to first order. An empirical convergence study is performed for the transient tension loading of a rectangular plate, with a Finite Element (FE) simulation being used as a reference. Additionally, we compare results produced by the LBM for the sudden loading of a stationary crack with an analytical solution from Freund (Dynamic fracture mechanics. Cambridge Monographs on Mechanics. Cambridge University Press, Cambridge, 1990. https://doi.org/10.1017/CBO9780511546761). [ABSTRACT FROM AUTHOR]
- Published
- 2024
10. Darlegungsanforderungen bei standardisierter biostatistischer Wahrscheinlichkeitsberechnung: BGH, Beschl. v. 28. 8. 2018 – 5 StR 50/17 –, für BGHSt bestimmt [Requirements for presenting standardized biostatistical probability calculations: Federal Court of Justice, decision of 28 August 2018 – 5 StR 50/17 –, designated for publication in BGHSt].
- Author
- Müller, Henning Ernst and Eisenberg, Ulrich
- Published
- 2019
11. Rotation-covariant tissue analysis for interstitial lung diseases using learned steerable filters: Performance evaluation and relevance for diagnostic aid.
- Author
- Joyseeree, Ranveer, Müller, Henning, and Depeursinge, Adrien
- Subjects
- TISSUE analysis, INTERSTITIAL lung diseases, RADIAL basis functions, SUPPORT vector machines, PROBABILITY theory
- Abstract
A novel method to detect and classify several classes of diseased and healthy lung tissue of interstitial lung diseases is presented, as these diseases are hard to diagnose and differentiate. Local organizations of image directions at several scales drive the process of creating discriminative lung tissue texture signatures using spatial and Fourier domain information extracted from the images. The signatures are generated for four diseased tissue classes and healthy tissue, all of which appear in the Interstitial Lung Disease (ILD) database, using a novel one-versus-one approach for learning discriminative filter signatures. A multiclass tissue classification accuracy of 80.31% is observed using Radial Basis Function (RBF) Support Vector Machines (SVMs). The presented method compares well against a variety of state-of-the-art approaches. Another strong feature of our approach is the ability to access the individual class probabilities before a final classification decision is made. This enables an analysis of the causes of misclassification in this paper. We also make the case against total reliance on the accuracy of the ground truth given that the ILD database only contains a single label for a specific region and sometimes more than one pattern can be present, particularly for regions classified as healthy tissue. Measures to address misclassifications in this context are also proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
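Entry 11 classifies tissue with one-versus-one RBF SVMs and inspects the per-class probabilities before the final decision. A minimal scikit-learn analogue is sketched below on synthetic features; the learned steerable-filter texture signatures of the paper are replaced by random stand-ins.

```python
# One-versus-one multiclass RBF SVM with access to per-class probabilities,
# as a generic illustration of the classification stage described above.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                 # stand-in texture features
y = rng.integers(0, 5, size=500)               # 5 tissue classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", probability=True, decision_function_shape="ovo")
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)                # per-class probabilities
print(proba[0], proba[0].argmax())             # inspect before deciding
```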
12. Besondere Schwere der Schuld bei einem Heranwachsenden [Particular gravity of guilt in the case of a young adult].
- Author
- Müller, Henning Ernst
- Published
- 2017
13. Boundary Conditions in a Lattice Boltzmann Method for Plane Strain Problems.
- Author
- Schlüter, Alexander, Müller, Henning, and Müller, Ralf
- Subjects
- LATTICE Boltzmann methods, NUMERICAL solutions to partial differential equations, FLUID mechanics, NEUMANN boundary conditions, COMPUTATIONAL mechanics, ELASTIC solids, STRAINS & stresses (Mechanics), SOLID mechanics
- Abstract
The Lattice Boltzmann Method (LBM), e.g. in [1] and [2], can be interpreted as an alternative method for the numerical solution of certain partial differential equations that is not restricted to its origin in computational fluid mechanics. The interpretation of the LBM as a general numerical tool allows the LBM to be extended to solid mechanics as well, see e.g. [3], which is concerned with the simulation of elastic solids under simplified deformation assumptions, and [4] as well as [5], which propose LBMs for the general plane strain case. In previous works on an LBM for plane strain, such as [5], the treatment of practically relevant boundary conditions like Neumann and Dirichlet type boundary conditions is not the main focus, and thus periodic conditions or absorbing layers are specified to simulate numerical examples. In this work, we show how Neumann and Dirichlet type boundary conditions are implemented in our LBM for plane strain from [4]. [ABSTRACT FROM AUTHOR]
- Published
- 2021
14. Lattice Boltzmann Method for Antiplane Shear with Non‐Mesh Conforming Boundary Conditions.
- Author
- Müller, Henning, Schlüter, Alexander, and Müller, Ralf
- Subjects
- LATTICE Boltzmann methods, SHEAR (Mechanics), FRACTURE mechanics, PARTIAL differential equations
- Abstract
Lattice Boltzmann methods [1] have been extended beyond their initial usage in transport problems, and can be used to solve a broader range of partial differential equations, e.g. the wave equation [2]. Thereby they can be utilized for fracture mechanics [3]. In the context of antiplane shear deformation we previously examined a stationary crack [4, 5] with a finite width. In this work we present two implementation strategies for non‐mesh conforming boundary conditions, for which the bounding geometry does not need to adhere to the underlying lattice. This rectifies problems in modeling the crack. A numerical example shows the improvement compared to the previous results. [ABSTRACT FROM AUTHOR]
- Published
- 2021
15. Spatial and Temporal Muscle Synergies Provide a Dual Characterization of Low-dimensional and Intermittent Control of Upper-limb Movements.
- Author
- Brambilla, Cristina, Atzori, Manfredo, Müller, Henning, d'Avella, Andrea, and Scano, Alessandro
- Subjects
- MATRIX decomposition, NONNEGATIVE matrices, CENTRAL nervous system, MOTOR ability
- Abstract
• Non-negative matrix factorization allows the extraction of spatial and temporal synergies. • Spatial and temporal synergies were extracted from two upper-limb datasets. • We showed the existence of EMG spatial and temporal structure with surrogate data. • Spatial and temporal synergies may capture different hierarchical levels of motor control. • The structure of temporal synergies may be related to intermittent control. Muscle synergy analysis investigates the neurophysiological mechanisms that the central nervous system employs to coordinate muscles. Several models have been developed to decompose electromyographic (EMG) signals into spatial and temporal synergies. However, using multiple approaches can complicate the interpretation of results. Spatial synergies represent invariant muscle weights modulated with variant temporal coefficients; temporal synergies are invariant temporal profiles that coordinate variant muscle weights. While non-negative matrix factorization allows the extraction of both spatial and temporal synergies, the comparison between the two approaches has rarely been investigated for a large set of multi-joint upper-limb movements. Spatial and temporal synergies were extracted from two datasets with proximal (16 subjects, 10M, 6F) and distal upper-limb movements (30 subjects, 21M, 9F), focusing on their differences in reconstruction accuracy and inter-individual variability. We showed the existence of both spatial and temporal structure in the EMG data, comparing synergies with those from a surrogate dataset in which the phases were shuffled preserving the frequency content of the original data. The two models provide a compact characterization of motor coordination at the spatial or temporal level, respectively. However, fewer temporal synergies are needed to achieve the same reconstruction R²: spatial and temporal synergies may capture different hierarchical levels of motor control and are dual approaches to the characterization of low-dimensional coordination of the upper-limb. Lastly, a detailed characterization of the structure of the temporal synergies suggested that they can be related to intermittent control of the movement, allowing high flexibility and dexterity. These results improve neurophysiology understanding in several fields such as motor control, rehabilitation, and prosthetics. [ABSTRACT FROM AUTHOR]
- Published
- 2023
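Both synergy models in entry 15 rest on non-negative matrix factorization of an EMG envelope matrix: spatial synergies come from factoring a muscles-by-time matrix into invariant muscle weights and variant temporal coefficients, and the temporal model factors the transpose. A minimal sketch with scikit-learn and synthetic data follows (no claim to match the paper's preprocessing or model selection).

```python
# Spatial-synergy extraction via NMF on a synthetic EMG envelope matrix;
# the temporal model would factor emg.T instead.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
emg = np.abs(rng.normal(size=(16, 1000)))   # 16 muscles x 1000 samples, non-negative

n_syn = 4
model = NMF(n_components=n_syn, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(emg)                # (16, 4): invariant muscle weights
H = model.components_                       # (4, 1000): variant temporal coefficients

recon = W @ H
r2 = 1 - np.sum((emg - recon) ** 2) / np.sum((emg - emg.mean()) ** 2)
print(f"reconstruction R^2 = {r2:.3f}")     # compare across numbers of synergies
```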
16. HT 2012: Archivische Ressourcen -- Didaktische Chancen. Kompetenzorientiertes Lernen im Archiv [Archival resources -- didactic opportunities: competency-oriented learning in the archive].
- Author
- Müller-Henning, Markus
- Subjects
- ARCHIVES & education, CULTURAL programs in archives, HISTORY education, SCHOOL contests, CONFERENCES & conventions
- Abstract
The article reports on a panel on competency-based history education in archives, held during the Deutscher Historikertag (German Historians Conference) in Mainz, Germany, from September 25-28, 2012. Topics of discussion included methods and philosophies for teaching history, outreach programs in archives, and a student essay competition organized by teachers in Freiburg im Breisgau, Germany.
- Published
- 2012
17. The CLEF 2005 Automatic Medical Image Annotation Task.
- Author
- Deselaers, Thomas, Müller, Henning, Clough, Paul, Ney, Hermann, and Lehmann, Thomas
- Subjects
- DIAGNOSTIC imaging, MEDICAL imaging systems, MULTIMEDIA systems, CROSS-language information retrieval, COMPUTATIONAL linguistics, IMAGE retrieval
- Abstract
In this paper, the automatic annotation task of the 2005 CLEF cross-language image retrieval campaign (ImageCLEF) is described. This paper focuses on the database used, the task setup, and the plans for further medical image annotation tasks in the context of ImageCLEF. Furthermore, a short summary of the results of 2005 is given. The automatic annotation task was added to ImageCLEF in 2005 and provides the first international evaluation of state-of-the-art methods for completely automatic annotation of medical images based on visual properties. The aim of this task is to explore and promote the use of automatic annotation techniques to allow for extracting semantic information from little-annotated medical images. A database of 10,000 images was established and annotated by experienced physicians, resulting in 57 classes, each with at least 10 images. Detailed analysis is done regarding the (i) image representation, (ii) classification method, and (iii) learning method. Based on the strong participation in the 2005 campaign, future benchmarks are planned. [ABSTRACT FROM AUTHOR]
- Published
- 2007
18. Image Retrieval in Medicine: The ImageCLEF Medical Image Retrieval Evaluation.
- Author
- Hersh, William and Müller, Henning
- Subjects
- INFORMATION retrieval, MEDICAL imaging systems, MEDICINE, RESEARCH teams, INFORMATION resources, INFORMATION science, MULTIMEDIA systems, DATABASES, CROSS-language information retrieval
- Abstract
The article focuses on the evaluation of the ImageCLEF medical image retrieval services. The cross-language medical image-retrieval track is part of the Cross-Language Evaluation Forum that functions on an annual cycle through test collections. It has also produced a huge test collection and pulled in research groups aiming for better image retrieval processes. Key information regarding the expansion of the ImageCLEF in terms of image collection, experimentation on real systems and topic development are also discussed.
- Published
- 2007
19. Assessment of Internet-based tele-medicine in Africa (the RAFT project)
- Author
- Bagayoko, Cheick Oumar, Müller, Henning, and Geissbuhler, Antoine
- Subjects
- INTERNET, MEDICAL care, SOCIAL systems, DISTANCE education
- Abstract
Abstract: The objectives of this paper on the Réseau Afrique Francophone de Télémédecine (RAFT) project are the evaluation of feasibility, potential, problems and risks of an Internet-based tele-medicine network in developing countries of Africa. The RAFT project was started in Western African countries 5 years ago and has now extended to other regions of Africa as well (i.e. Madagascar, Rwanda). A project for the development of a national tele-medicine network in Mali was initiated in 2001, extended to Mauritania in 2002 and to Morocco in 2003. By 2006, a total of nine countries were connected. The entire technical infrastructure is based on Internet technologies for medical distance learning and tele-consultations. The results are a tele-medicine network that has been in productive use for over 5 years and has enabled various collaboration channels, including North-to-South (from Europe to Africa), South-to-South (within Africa), and South-to-North (from Africa to Europe) distance learning and tele-consultations, plus many personal exchanges between the participating hospitals and Universities. It has also unveiled a set of potential problems: (a) the limited importance of North-to-South collaborations when there are major differences in the available resources or the socio-cultural contexts between the collaborating parties; (b) the risk of an induced digital divide if the periphery of the health system in developing countries is not involved in the development of the network; and (c) the need for the development of local medical content management skills. Point (c) in particular is improved through the collaboration between the various countries, as professionals from the medical and the computer science fields are sharing courses and resources. Personal exchanges between partners in the project are frequent, and several persons received an education at one of the partner Universities. In conclusion, the identified risks have to be taken into account when designing large-scale tele-medicine projects in developing countries. These problems can be mitigated by fostering South–South collaboration channels, by the use of satellite-based Internet connectivity in remote areas, and by the appreciation of local knowledge and its publication on-line. The availability of such an infrastructure also facilitates the development of other projects, courses, and local content creation. [Copyright Elsevier]
- Published
- 2006
20. A reference data set for the evaluation of medical image retrieval systems
- Author
- Müller, Henning, Rosset, Antoine, Vallée, Jean-Paul, Terrier, François, and Geissbuhler, Antoine
- Subjects
- MEDICAL imaging systems, IMAGE retrieval, EVALUATION, DATABASES
- Abstract
Content-based image retrieval is starting to become an increasingly important factor in medical imaging research and image management systems. Several retrieval systems and methodologies exist and are used in a large variety of applications from automatic labelling of images to diagnostic aid and image classification. Still, it is very hard to compare the performance of these systems as the databases used often contain copyrighted or private images and are thus not interchangeable between research groups, partly for reasons of patient privacy. Most of the currently used databases for evaluating systems are also fairly small, which is partly due to the high cost in obtaining a gold standard or ground truth that is necessary for evaluation. Several large image databases, though without a gold standard, start to be available publicly, for example from the NIH (National Institutes of Health). This article describes the creation of a large medical image database that is used in a teaching file containing more than 8700 varied medical images. The images are anonymised and can be exchanged free of charge and copyright. Ground truth (a gold standard) has been obtained for a set of 26 images selected as query topics for content-based query by image example. To reduce the time for the generation of ground truth, pooling methods well known from the text or information retrieval field have been used. Such a database is a good starting point for comparing the current image retrieval systems and measuring the retrieval quality, especially within the context of teaching files, image case databases and the support of teaching. For a comparison of retrieval systems for diagnostic aid, specialised image databases, including the diagnosis and a case description, will need to be made available as well, including gold standards for a proper system evaluation. A first evaluation event for image retrieval is foreseen at the 2004 CLEF conference (Cross Language Evaluation Forum) to compare text- and content-based access mechanisms to images. [Copyright Elsevier]
- Published
- 2004
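Entry 20 uses pooling, borrowed from text retrieval, to keep the cost of relevance judgments manageable: only the union of the top-k results returned by the participating systems is judged by experts. A minimal sketch of that step (function and data names are illustrative):

```python
# Pooling for ground-truth creation: only the pooled candidates are judged,
# instead of the whole collection.
def build_pool(runs, k=50):
    """runs: {system_name: ranked list of image ids}; returns the set to judge."""
    pool = set()
    for ranking in runs.values():
        pool.update(ranking[:k])
    return pool

runs = {
    "system_a": ["img3", "img7", "img1", "img9"],
    "system_b": ["img7", "img2", "img3", "img4"],
}
print(sorted(build_pool(runs, k=3)))   # only these candidates get relevance judgments
```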
21. A review of content-based image retrieval systems in medical applications—clinical benefits and future directions
- Author
- Müller, Henning, Michoux, Nicolas, Bandon, David, and Geissbuhler, Antoine
- Subjects
- INFORMATION retrieval, INFORMATION storage & retrieval systems, COMPUTER vision, MEDICAL imaging systems, DIAGNOSTIC imaging, IMAGE processing
- Abstract
Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on the visual or audio content and to help browsing large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors or objective image interpretations remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization (∼1800 exams per year containing almost 2000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set and patient information can be stored with the actual image(s), although a few problems still remain with respect to standardization. In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data, and scenarios for the integration of content-based access methods into picture archiving and communication systems (PACS) have been created. This article gives an overview of available literature in the field of content-based access to medical image data and on the technologies used in the field. Section 1 gives an introduction to generic content-based image retrieval and the technologies used. Section 2 explains the propositions for the use of image retrieval in medical practice and the various approaches. Example systems and application areas are described. Section 3 describes the techniques used in the implemented systems, their datasets and evaluations. Section 4 identifies possible clinical benefits of image retrieval systems in clinical practice as well as in research and education. New research directions are being defined that can prove to be useful. This article also identifies explanations for some of the outlined problems in the field, as many propositions for systems are made from the medical domain and research prototypes are developed in computer science departments using medical datasets. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment but to complement them with visual search tools. [Copyright Elsevier]
- Published
- 2004
22. Learning from User Behavior in Image Retrieval: Application of Market Basket Analysis.
- Author
- Müller, Henning, Pun, Thierry, and Squire, David
- Subjects
- INFORMATION retrieval, INFORMATION storage & retrieval systems, QUERY (Information retrieval system), IMAGE retrieval, ABSTRACTING & indexing services, ONLINE information services, ONLINE data processing
- Abstract
This article describes an approach to learn feature weights for content-based image retrieval (CBIR) from user interaction log files. These usage log files are analyzed for images marked together by a user in the same query step. The problem is somewhat similar to one of the traditional data mining problems, the market basket analysis problem, where items bought together in a supermarket are analyzed. This paper outlines similarities and differences between the two fields and explains how to use the interaction data for deriving a better feature weighting. Experiments with existing log files are carried out, and a significant improvement in performance is reached with a feature weighting calculated from the information contained in the log files. Even with several steps of relevance feedback the results remain much better than without the learning, which means that not only is information from feedback taken into account earlier, but a better quality of retrieval is reached in all steps. [ABSTRACT FROM AUTHOR]
- Published
- 2004
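The analogy in entry 22 treats images marked together in one query step like items bought together in a supermarket basket. The sketch below shows only the co-occurrence counting and support computation; the paper's actual mapping from such statistics to feature weights is more involved and is not reproduced here.

```python
# Market-basket-style counting over interaction logs: each query step is a
# "basket" of images the user marked together.
from collections import Counter
from itertools import combinations

query_steps = [                       # images marked relevant together, per step
    {"img1", "img5", "img9"},
    {"img1", "img5"},
    {"img2", "img9"},
]

pair_counts = Counter()
for basket in query_steps:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

# support of a pair = fraction of steps in which both images were marked
support = {p: n / len(query_steps) for p, n in pair_counts.items()}
print(support[("img1", "img5")])      # 2/3: a frequently co-marked pair
```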
23. DeepHistReg: Unsupervised Deep Learning Registration Framework for Differently Stained Histology Samples.
- Author
- Wodzinski, Marek and Müller, Henning
- Subjects
- IMAGE registration, RECORDING & registration, DEEP learning, HISTOLOGY, FREEWARE (Computer software), ALGORITHMS
- Abstract
• This article presents a deep learning-based framework dedicated to registration of differently stained high-resolution histology samples. • The framework consists of the background removal, initial rotation search, affine registration, and nonrigid registration. • The average registration time, including data loading, preprocessing, and all registration steps, is below 3 seconds. • The framework is evaluated and compared to other methods using the ANHIR dataset. The results are comparable to other state-of-the-art methods; however, the registration is orders of magnitude faster. The use of several stains during histology sample preparation can be useful for fusing complementary information about different tissue structures. It reveals distinct tissue properties that combined may be useful for grading, classification, or 3-D reconstruction. Nevertheless, since the slide preparation is different for each stain and the procedure uses consecutive slices, the tissue undergoes complex and possibly large deformations. Therefore, a nonrigid registration is required before further processing. The nonrigid registration of differently stained histology images is a challenging task because: (i) the registration must be fully automatic, (ii) the histology images are extremely high-resolution, (iii) the registration should be as fast as possible, (iv) there are significant differences in the tissue appearance, and (v) there are not many unique features due to a repetitive texture. In this article, we propose a deep learning-based solution to the histology registration. We describe a registration framework dedicated to high-resolution histology images that can perform the registration in real-time. The framework consists of an automatic background segmentation, iterative initial rotation search and learning-based affine/nonrigid registration. We evaluate our approach using an open dataset provided for the Automatic Non-rigid Histological Image Registration (ANHIR) challenge organized jointly with the IEEE ISBI 2019 conference. We compare our solution to the challenge participants using a server-side evaluation tool provided by the challenge organizers. Following the challenge evaluation criteria, we use the target registration error (TRE) as the evaluation metric. Our algorithm provides registration accuracy close to the best scoring teams (median rTRE 0.19% of the image diagonal) while being significantly faster (the average registration time is about 2 seconds). The proposed framework provides results, in terms of the TRE, comparable to the best-performing state-of-the-art methods. However, it is significantly faster, thus potentially more useful in clinical practice where a large number of histology images are being processed. The proposed method is of particular interest to researchers requiring an accurate, real-time, nonrigid registration of high-resolution histology images for whom the processing time of traditional, iterative methods is unacceptable. We provide free access to the software implementation of the method, including training and inference code, as well as pretrained models. Since the ANHIR dataset is open, this makes the results fully and easily reproducible. [ABSTRACT FROM AUTHOR]
- Published
- 2021
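Entry 23 describes a four-stage pipeline: background removal, initial rotation search, affine registration, and nonrigid registration. The skeleton below mirrors only that control flow; every stage is a crude or identity placeholder, not the DeepHistReg implementation, and all function names are invented.

```python
# Pipeline skeleton mirroring the stage order described above. The affine and
# nonrigid stages are identity placeholders; in the paper they are learned.
import numpy as np

def remove_background(img):
    return np.where(img > img.mean(), img, 0.0)          # crude foreground mask

def best_initial_rotation(src, tgt):
    """Coarse exhaustive search over 90-degree rotations (a stand-in for the
    paper's denser iterative angle search)."""
    scores = {k: -np.sum((np.rot90(src, k) - tgt) ** 2) for k in range(4)}
    return max(scores, key=scores.get)

def affine_register(src, tgt):
    return src                                           # placeholder: identity

def nonrigid_register(src, tgt):
    return src                                           # placeholder: identity

def register(src, tgt):
    src, tgt = remove_background(src), remove_background(tgt)
    src = np.rot90(src, best_initial_rotation(src, tgt))
    src = affine_register(src, tgt)
    return nonrigid_register(src, tgt)

moving, fixed = np.random.rand(64, 64), np.random.rand(64, 64)
warped = register(moving, fixed)
```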
24. Medical imaging and telemedicine – from medical data production, to processing, storing, and sharing: A short outlook
- Author
- Müller, Henning, Gao, Xiaohong, Lin, Qiang, Lehmann, Thomas M., Thom, Simon, Inchingolo, Paolo, Chen, Jyh-Cheng, and Clark, John
- Published
- 2006
25. Improving the classification of veterinary thoracic radiographs through inter-species and inter-pathology self-supervised pre-training of deep learning models.
- Author
- Celniak, Weronika, Wodziński, Marek, Jurgas, Artur, Burti, Silvia, Zotti, Alessandro, Atzori, Manfredo, Müller, Henning, and Banzato, Tommaso
- Subjects
- DEEP learning, SUPERVISED learning, VISUAL learning, RADIOGRAPHS, SPACE exploration, DATABASES, CLASSIFICATION
- Abstract
The analysis of veterinary radiographic imaging data is an essential step in the diagnosis of many thoracic lesions. Given the limited time that physicians can devote to a single patient, it would be valuable to implement an automated system to help clinicians make faster but still accurate diagnoses. Currently, most such systems are based on supervised deep learning approaches. However, the problem with these solutions is that they need a large database of labeled data. Access to such data is often limited, as it requires a great investment of both time and money. Therefore, in this work we present a solution that allows higher classification scores to be obtained using knowledge transfer from inter-species and inter-pathology self-supervised learning methods. Before training the network for classification, pretraining of the model was performed using self-supervised learning approaches on publicly available unlabeled radiographic data of human and dog images, which substantially increased the number of images available for this phase. The self-supervised learning approaches included the Beta Variational Autoencoder, the Soft-Introspective Variational Autoencoder, and a Simple Framework for Contrastive Learning of Visual Representations. After the initial pretraining, fine-tuning was performed for the collected veterinary dataset using 20% of the available data. Next, a latent space exploration was performed for each model, after which the encoding part of the model was fine-tuned again, this time in a supervised manner for classification. The Simple Framework for Contrastive Learning of Visual Representations proved to be the most beneficial pretraining method. Therefore, it was for this method that experiments with various fine-tuning methods were carried out. We achieved a mean ROC AUC score of 0.77 and 0.66, respectively, for the laterolateral and dorsoventral projection datasets. The results show significant improvement compared to using the model without any pretraining approach. [ABSTRACT FROM AUTHOR]
- Published
- 2023
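The most beneficial pretraining in entry 25 is the Simple Framework for Contrastive Learning of Visual Representations (SimCLR), whose core is the NT-Xent loss over two augmented views of each image. A compact, generic PyTorch rendition is sketched below (batch layout assumption: rows i and i+N hold the two views of image i); it is not the authors' training code.

```python
# Generic NT-Xent (normalized temperature-scaled cross-entropy) loss, the
# contrastive objective at the heart of SimCLR-style pretraining.
import torch
import torch.nn.functional as F

def nt_xent(z, temperature=0.5):
    """z: (2N, d) embeddings; rows i and i+N are two views of the same image."""
    n = z.shape[0] // 2
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                      # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))         # exclude self-pairs
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)               # positive = the other view

z = torch.randn(8, 128, requires_grad=True)            # 4 images, 2 views each
nt_xent(z).backward()                                  # differentiable end to end
```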
26. Fusing learned representations from Riesz Filters and Deep CNN for lung tissue classification.
- Author
- Joyseeree, Ranveer, Otálora, Sebastian, Müller, Henning, and Depeursinge, Adrien
- Subjects
- INTERSTITIAL lung diseases, LUNGS, TISSUES, DEEP learning, IMAGE fusion
- Abstract
• A novel method to detect and classify several classes of diseased and healthy lung tissue in CT (Computed Tomography) images based on the fusion of Riesz and deep learning features is presented. • First, discriminative parametric lung tissue texture signatures are learned through Riesz representations using a one-versus-one approach. • Second, features from deep Convolutional Neural Networks (CNN) are computed by fine-tuning the GoogLeNet architecture using an augmented version of the same ILD dataset. • The two learned representations are combined in a joint softmax model for final classification, where early and late feature fusion schemes are compared. • The experimental results show that a late fusion of the independent probabilities leads to significant improvements in classification performance when compared to each of the separate feature representations. A novel method to detect and classify several classes of diseased and healthy lung tissue in CT (Computed Tomography), based on the fusion of Riesz and deep learning features, is presented. First, discriminative parametric lung tissue texture signatures are learned from Riesz representations using a one-versus-one approach. The signatures are generated for four diseased tissue types and a healthy tissue class, all of which frequently appear in the publicly available Interstitial Lung Diseases (ILD) dataset used in this article. Because the Riesz wavelets are steerable, they can easily be made invariant to local image rotations, a property that is desirable when analyzing lung tissue micro-architectures in CT images. Second, features from deep Convolutional Neural Networks (CNN) are computed by fine-tuning the Inception V3 architecture using an augmented version of the same ILD dataset. Because CNN features are both deep and non-parametric, they can accurately model virtually any pattern that is useful for tissue discrimination, and they are the de facto standard for many medical imaging tasks. However, invariance to local image rotations is not explicitly implemented and can only be approximated with rotation-based data augmentation. This motivates the fusion of Riesz and deep CNN features, as the two techniques are very complementary. The two learned representations are combined in a joint softmax model for final classification, where early and late feature fusion schemes are compared. The experimental results show that a late fusion of the independent probabilities leads to significant improvements in classification performance when compared to each of the separate feature representations and also compared to an ensemble of deep learning approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2019
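Entry 26 finds that a late fusion of the independent class probabilities from the Riesz model and the CNN performs best. In its simplest form, late fusion is a weighted average of the two probability vectors, sketched here with made-up numbers (the paper's joint softmax model is not reproduced):

```python
# Late fusion of per-class probabilities from two independent classifiers.
import numpy as np

p_riesz = np.array([0.70, 0.10, 0.05, 0.10, 0.05])   # P(class | Riesz signature)
p_cnn   = np.array([0.40, 0.35, 0.10, 0.10, 0.05])   # P(class | fine-tuned CNN)

alpha = 0.5                                          # fusion weight (tunable)
p_fused = alpha * p_riesz + (1 - alpha) * p_cnn
print(p_fused, p_fused.argmax())                     # fused decision: class 0
```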
27. A novel Siamese deep hashing model for histopathology image retrieval.
- Author
- Mohammad Alizadeh, Seyed, Sadegh Helfroush, Mohammad, and Müller, Henning
- Subjects
- IMAGE databases, IMAGE retrieval, HISTOPATHOLOGY, CONTENT-based image retrieval, HAMMING distance, BINARY codes
- Abstract
Content-based histopathology image retrieval can be a useful technique for helping to diagnose various diseases. The process of retrieving images is often time-consuming and challenging due to the need for high-dimensional features when trying to model complex content. Hashing methods can therefore be employed to resolve the challenge by producing binary codes of different lengths. Deep hashing methods are frequently superior to traditional machine learning approaches but are affected by the size of training sets. In addition, back-propagation learning can further complicate the generation of binary values. Hence, this paper proposes a novel Siamese deep hashing model, named histopathology Siamese deep hashing (HSDH), for histopathology image retrieval. Two designed deep hashing models with shared weights and structures are used to generate hash codes. A Hamming distance layer is then applied to evaluate the similarity of the generated values. A highly effective loss function is also introduced that incorporates a modified version of the standard contrastive loss function with an error estimation term to improve both the training and retrieval phases. In the retrieval phase, the trained model compares a query image with all the training images and ranks the most similar images. According to the experimental results on two publicly available databases, BreakHis and Kather, the HSDH model outperforms other state-of-the-art hashing-based methods in histopathology image retrieval. [ABSTRACT FROM AUTHOR]
- Published
- 2023
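A minimal Siamese-hashing sketch in the spirit of entry 27: one weight-shared encoder produces relaxed (tanh) hash codes, a soft Hamming distance compares them, and a contrastive loss pulls similar pairs together. The encoder architecture is a toy, and the HSDH loss's error-estimation term is deliberately omitted; all names are illustrative.

```python
# Weight sharing comes for free by reusing the same module for both inputs.
import torch
import torch.nn as nn

class Hasher(nn.Module):
    def __init__(self, d_in=64, n_bits=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, n_bits))
    def forward(self, x):
        return torch.tanh(self.net(x))        # soft codes in (-1, 1); sign() at retrieval

def soft_hamming(a, b):
    return 0.5 * (a.shape[1] - (a * b).sum(dim=1))   # = Hamming distance for +-1 codes

def contrastive(d, y, margin=8.0):
    # y = 1 for similar pairs (pull together), y = 0 for dissimilar (push apart)
    return (y * d ** 2 + (1 - y) * torch.clamp(margin - d, min=0) ** 2).mean()

hasher = Hasher()
x1, x2 = torch.randn(10, 64), torch.randn(10, 64)
y = torch.randint(0, 2, (10,)).float()
loss = contrastive(soft_hamming(hasher(x1), hasher(x2)), y)
loss.backward()
```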
28. 3D‐printed iodine‐ink CT phantom for radiomics feature extraction ‐ advantages and challenges.
- Author
- Bach, Michael, Aberle, Christoph, Depeursinge, Adrien, Jimenez‐del‐Toro, Oscar, Schaer, Roger, Flouris, Kyriakos, Konukoglu, Ender, Müller, Henning, Stieltjes, Bram, and Obmann, Markus M.
- Subjects
- IMAGING phantoms, FEATURE extraction, RADIOMICS, COMPUTED tomography, NO-tillage, DISTRIBUTION (Probability theory)
- Abstract
Background: To test and validate novel CT techniques, such as texture analysis in radiomics, repeat measurements are required. Current anthropomorphic phantoms lack fine texture and true anatomic representation. 3D‐printing of iodinated ink on paper is a promising phantom manufacturing technique. Previously acquired or artificially created CT data can be used to generate realistic phantoms. Purpose: To present the design process of an anthropomorphic 3D‐printed iodine ink phantom, highlighting the different advantages and pitfalls in its use. To analyze the phantom's X‐ray attenuation properties, and the influences of the printing process on the imaging characteristics, by comparing it to the original input dataset. Methods: Two patient CT scans and artificially generated test patterns were combined in a single dataset for phantom printing and cropped to a size of 26 × 19 × 30 cm³. This DICOM dataset was printed on paper using iodinated ink. The phantom was CT‐scanned and compared to the original image dataset used for printing the phantom. The water‐equivalent diameter of the phantom was compared to that of a patient cohort (N = 104). Iodine concentrations in the phantom were measured using dual‐energy CT. 86 radiomics features were extracted from 10 repeat phantom scans and the input dataset. Features were compared using a histogram analysis and a PCA individually and overall, respectively. The frequency content was compared using the normalized spectrum modulus. Results: Low density structures are depicted incorrectly, while soft tissue structures show excellent visual accordance with the input dataset. Maximum deviations of around 30 HU between the original dataset and phantom HU values were observed. The phantom has X‐ray attenuation properties comparable to a lightweight adult patient (∼54 kg, BMI 19 kg/m²). Iodine concentrations in the phantom varied between 0 and 50 mg/ml. PCA of radiomics features shows that different tissue types separate in similar areas of the PCA representation in the phantom scans as in the input dataset. Individual feature analysis revealed a systematic shift of first order radiomics features compared to the original dataset, while some higher order radiomics features did not shift. The normalized frequency modulus |f(ω)| of the phantom data agrees well with the original data. However, all frequencies systematically occur more frequently in the phantom, relative to the maximum of the spectrum modulus, than in the original data set, especially for mid‐frequencies (e.g., for ω = 0.3942 mm⁻¹, |f(ω)|_original = 0.09·|f_max|_original and |f(ω)|_phantom = 0.12·|f_max|_phantom). Conclusions: 3D‐iodine‐ink‐printing technology can be used to print anthropomorphic phantoms with a water‐equivalent diameter of a lightweight adult patient. Challenges include small residual air enclosures and the fidelity of HU values. For soft tissue, there is a good agreement between the HU values of the phantom and input data set. Radiomics texture features of the phantom scans are similar to the input data set, but systematic shifts of first order radiomics features, due to differences in HU values, need to be considered. The paper substrate influences the spatial frequency distribution of the phantom scans. This phantom type is of very limited use for dual‐energy CT analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
29. From medical imaging to medical informatics
- Author
- Müller, Henning, Gao, Xiaohong, and Luo, Shuqian
- Published
- 2008
30. CheckList for EvaluAtion of Radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII.
- Author
- Kocak, Burak, Baessler, Bettina, Bakas, Spyridon, Cuocolo, Renato, Fedorov, Andrey, Maier-Hein, Lena, Mercaldo, Nathaniel, Müller, Henning, Orlhac, Fanny, Pinto dos Santos, Daniel, Stanzione, Arnaldo, Ugga, Lorenzo, and Zwanenburg, Alex
- Subjects
- RADIOMICS, DELPHI method, DOCUMENTATION standards, TEXTURE analysis (Image processing), RESEARCH evaluation
- Abstract
Even though radiomics can hold great potential for supporting clinical decision-making, its current use is mostly limited to academic research, without applications in routine clinical practice. The workflow of radiomics is complex due to several methodological steps and nuances, which often leads to inadequate reporting and evaluation, and poor reproducibility. Available reporting guidelines and checklists for artificial intelligence and predictive modeling include relevant good practices, but they are not tailored to radiomic research. There is a clear need for a complete radiomics checklist for study planning, manuscript writing, and evaluation during the review process to facilitate the repeatability and reproducibility of studies. We here present a documentation standard for radiomic research that can guide authors and reviewers. Our motivation is to improve the quality and reliability and, in turn, the reproducibility of radiomic research. We name the checklist CLEAR (CheckList for EvaluAtion of Radiomics research), to convey the idea of being more transparent. With its 58 items, the CLEAR checklist should be considered a standardization tool providing the minimum requirements for presenting clinical radiomics research. In addition to a dynamic online version of the checklist, a public repository has also been set up to allow the radiomics community to comment on the checklist items and adapt the checklist for future versions. Prepared and revised by an international group of experts using a modified Delphi method, we hope the CLEAR checklist will serve well as a single and complete scientific documentation tool for authors and reviewers to improve the radiomics literature. Key points: The workflow of radiomics is complex with several methodological steps and nuances, which often leads to inadequate reproducibility, reporting, and evaluation. The CLEAR checklist proposes a single documentation standard for radiomics research that can guide authors, providing the minimum requirements for presenting clinical radiomics research. The CLEAR checklist aims to include all necessary items to support reviewer evaluation of radiomics-related manuscripts. [ABSTRACT FROM AUTHOR]
- Published
- 2023
31. Modelling transient stresses in dynamically loaded elastic solids using the Lattice Boltzmann Method.
- Author
- Faust, Erik, Steinmetz, Felix, Schlüter, Alexander, Müller, Henning, and Müller, Ralf
- Subjects
- LATTICE Boltzmann methods, ELASTIC solids, DISTRIBUTION (Probability theory), CONSERVATION laws (Physics), SPATIAL resolution, IMPACT loads, MINE safety
- Abstract
In solids subjected to transient loading, inertial effects and S‐ or P‐wave superposition can give rise to stresses which significantly exceed those predicted by quasi‐static models. It pays to accurately predict such stresses – and the failures induced by them – in fields from mining to automotive safety and biomechanics. This, however, requires costly simulations with fine spatial and temporal resolutions. The Lattice Boltzmann Method (LBM) can be used as an explicit numerical solver for certain appropriately formulated conservation laws [1]. It encodes information about the field variables to be simulated in distribution functions, which are modified locally and propagated across a regular lattice. As the LBM lends itself to finely discretised simulations and is easy to parallelise [2, p.55], it is an intriguing candidate as a solver for dynamic continuum problems. Recently, Murthy et al. [3] and Escande et al. [4] adopted LBM algorithms to model isotropic, linear elastic solids. We extended these algorithms using local boundary rules that allow us to model arbitrary‐valued Dirichlet and Neumann boundaries. Here, we illustrate applications of the LBM for solids and the proposed additions by way of a simple numerical example – a glass pane subject to a sudden impact load. [ABSTRACT FROM AUTHOR]
- Published
- 2023
32. Large-scale retrieval for medical image analytics: A comprehensive review.
- Author
- Li, Zhongyu, Zhang, Xiaofan, Müller, Henning, and Zhang, Shaoting
- Subjects
- DIAGNOSTIC imaging, INFORMATION retrieval, DIGITAL images, IMAGE quality analysis, COMPUTER vision, MACHINE learning
- Abstract
Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics on a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
33. Shangri-La: A medical case-based retrieval tool.
- Author
- Seco de Herrera, Alba G., Schaer, Roger, and Müller, Henning
- Subjects
- DIAGNOSTIC imaging, EXPERIMENTAL design, INFORMATION retrieval, MEDICAL literature, USER interfaces, WORLD Wide Web, DECISION making in clinical medicine
- Abstract
Large amounts of medical visual data are produced in hospitals daily and made available continuously via publications in the scientific literature, representing the medical knowledge. However, it is not always easy to find the desired information and in clinical routine the time to fulfil an information need is often very limited. Information retrieval systems are a useful tool to provide access to these documents/images in the biomedical literature related to information needs of medical professionals. Shangri-La is a medical retrieval system that can potentially help clinicians to make decisions on difficult cases. It retrieves articles from the biomedical literature when querying a case description and attached images. The system is based on a multimodal retrieval approach with a focus on the integration of visual information connected to text. The approach includes a query-adaptive multimodal fusion criterion that analyses if visual features are suitable to be fused with text for the retrieval. Furthermore, image modality information is integrated in the retrieval step. The approach is evaluated using the ImageCLEFmed 2013 medical retrieval benchmark and can thus be compared to other approaches. Results show that the final approach outperforms the best multimodal approach submitted to ImageCLEFmed 2013. [ABSTRACT FROM AUTHOR]
- Published
- 2017
34. Data-driven color augmentation for H&E stained images in computational pathology.
- Author
- Marini, Niccolò, Otalora, Sebastian, Wodzinski, Marek, Tomassini, Selene, Dragoni, Aldo Franco, Marchand-Maillet, Stephane, Dominguez Morales, Juan Pedro, Duran-Lopez, Lourdes, Vatrano, Simona, Müller, Henning, and Atzori, Manfredo
- Subjects
- HEMATOXYLIN & eosin staining, CONVOLUTIONAL neural networks, COLOR, TUMOR classification, COLON cancer, COLORING matter in food
- Abstract
Computational pathology targets the automatic analysis of Whole Slide Images (WSI). WSIs are high-resolution digitized histopathology images, stained with chemical reagents to highlight specific tissue structures and scanned via whole slide scanners. The application of different parameters during WSI acquisition may lead to stain color heterogeneity, especially considering samples collected from several medical centers. Stain color heterogeneity often limits the robustness of methods developed to analyze WSIs, in particular Convolutional Neural Networks (CNN), the state-of-the-art algorithm for most computational pathology tasks. Stain color heterogeneity is still an unsolved problem, although several methods have been developed to alleviate it, such as Hue-Saturation-Contrast (HSC) color augmentation and stain augmentation methods. The goal of this paper is to present Data-Driven Color Augmentation (DDCA), a method to improve the efficiency of color augmentation methods by increasing the reliability of the samples used for training computational pathology models. During CNN training, a database including over 2 million H&E color variations collected from private and public datasets is used as a reference to discard augmented data with color distributions that do not correspond to realistic data. DDCA is applied to HSC color augmentation, stain augmentation and H&E-adversarial networks in colon and prostate cancer classification tasks. DDCA is then compared with 11 state-of-the-art baseline methods to handle color heterogeneity, showing that it can substantially improve classification performance on unseen data including heterogeneous color variations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
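To illustrate the data-driven filtering idea behind DDCA described above, here is a hedged sketch: the color statistics of each augmented patch are checked against bounds derived from a reference database of real H&E patches, and unrealistic augmentations are re-sampled. The per-channel mean statistic and the percentile bounds are simplifying assumptions, not the paper's exact acceptance criterion.

```python
import numpy as np

def reference_bounds(reference_patches, lo_pct=1.0, hi_pct=99.0):
    """Per-channel [lo, hi] percentile box over the mean colors of the
    reference database (a list of HxWx3 image arrays)."""
    means = np.stack([p.reshape(-1, 3).mean(axis=0) for p in reference_patches])
    return (np.percentile(means, lo_pct, axis=0),
            np.percentile(means, hi_pct, axis=0))

def is_realistic(patch, bounds):
    """Accept the augmented patch if its mean color lies inside the box."""
    m = patch.reshape(-1, 3).mean(axis=0)
    lo, hi = bounds
    return bool(np.all(m >= lo) and np.all(m <= hi))

def ddca_augment(patch, augment_fn, bounds, max_tries=10):
    """Re-sample the color augmentation until the result looks realistic."""
    for _ in range(max_tries):
        aug = augment_fn(patch)
        if is_realistic(aug, bounds):
            return aug
    return patch  # fall back to the unaugmented patch
```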
35. Modelling digital health data: The ExaMode ontology for computational pathology.
- Author
-
Menotti, Laura, Silvello, Gianmaria, Atzori, Manfredo, Boytcheva, Svetla, Ciompi, Francesco, Di Nunzio, Giorgio Maria, Fraggetta, Filippo, Giachelle, Fabio, Irrera, Ornella, Marchesin, Stefano, Marini, Niccolò, Müller, Henning, and Primov, Todor
- Subjects
- *
DIGITAL health , *ONTOLOGIES (Information retrieval) , *ONTOLOGY , *RDF (Document markup language) , *PATHOLOGY , *CELIAC disease , *DATA integration - Abstract
Computational pathology can significantly benefit from ontologies to standardize the employed nomenclature and help with knowledge extraction processes for high-quality annotated image datasets. The end goal is to reach a shared model for digital pathology to overcome data variability and integration problems. Indeed, data annotation in such a specific domain is still an unsolved challenge and datasets cannot be readily reused in diverse contexts due to heterogeneity issues of the adopted labels, multilingualism, and different clinical practices. Material and methods: This paper presents the ExaMode ontology, modeling the histopathology process by considering 3 key cancer diseases (colon, cervical, and lung tumors) and celiac disease. The ExaMode ontology has been designed bottom-up in an iterative fashion with continuous feedback and validation from pathologists and clinicians. The ontology is organized into 5 semantic areas that define an ontological template to model any disease of interest in histopathology. Results: The ExaMode ontology is currently being used as a common semantic layer in: (i) an entity linking tool for the automatic annotation of medical records; (ii) a web-based collaborative annotation tool for histopathology text reports; and (iii) a software platform for building holistic solutions integrating multimodal histopathology data. Discussion: The ExaMode ontology is a key means of storing data in a graph database according to the RDF data model. The creation of an RDF dataset can help develop more accurate algorithms for image analysis, especially in the field of digital pathology. This approach allows for seamless data integration and a unified query access point, from which we can extract relevant clinical insights about the considered diseases using SPARQL queries. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
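The unified SPARQL access point mentioned in the discussion above can be illustrated with a short rdflib sketch. The file name, namespace, and class/property names below are hypothetical placeholders, not the actual ExaMode vocabulary.

```python
from rdflib import Graph

# Load a (hypothetical) RDF export and extract clinical insights
# with SPARQL, mirroring the unified query access point described.
g = Graph()
g.parse("examode_records.ttl", format="turtle")  # placeholder file

query = """
PREFIX exa: <http://example.org/examode#>
SELECT ?report ?diagnosis
WHERE {
    ?report a exa:HistopathologyReport ;
            exa:hasDiagnosis ?diagnosis ;
            exa:concernsDisease exa:ColonCancer .
}
LIMIT 10
"""
for report, diagnosis in g.query(query):
    print(report, diagnosis)
```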
36. Erratum to “A review of content-based image retrieval systems in medical applications—Clinical benefits and future directions” [Int. J. Med. Inform. 73 (1) (2004) 1–23]
- Author
-
Müller, Henning, Michoux, Nicolas, Bandon, David, and Geissbuhler, Antoine
- Published
- 2009
- Full Text
- View/download PDF
37. Editorial to the Special Issue on Medical Image Annotation in ImageCLEF 2007
- Author
-
Deselaers, Thomas, Müller, Henning, and Deserno, Thomas M.
- Published
- 2008
- Full Text
- View/download PDF
38. How users search and what they search for in the medical domain.
- Author
-
Palotti, João, Hanbury, Allan, Müller, Henning, and Kahn, Charles
- Subjects
- *
INTERNET searching , *MEDICAL informatics , *INTERNET content , *INFORMATION retrieval - Abstract
The internet is an important source of medical knowledge for everyone, from laypeople to medical professionals. We investigate how these two extremes, in terms of user groups, have distinct needs and exhibit significantly different search behaviour. We make use of query logs in order to study various aspects of these two kinds of users. The logs from America Online, Health on the Net, Turning Research Into Practice and American Roentgen Ray Society (ARRS) GoldMiner were divided into three sets: (1) laypeople, (2) medical professionals (such as physicians or nurses) searching for health content and (3) users not seeking health advice. Several analyses are made, focusing on discovering how users search and what they are most interested in. Based on this analysis, we built a classifier to infer user expertise; we present its results and analyse the feature set used. We conclude that medical experts are more persistent, interacting more with the search engine. Also, our study reveals that, contrary to what is stated in much of the literature, the main focus of users, both laypeople and professionals, is on disease rather than symptoms. The results of this article, in particular the classifier, could be used to detect specific user groups and then adapt search results to the user group. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
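As an illustration of the expertise classifier mentioned in the abstract above, the sketch below trains a toy model on simple query-log features. The three features and the tiny hand-made dataset are assumptions for demonstration only; the paper's feature set is richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [terms_per_query, clicks_per_session, medical_term_ratio]
X = np.array([[2.1, 1.0, 0.10],   # layperson-like sessions
              [2.4, 1.5, 0.15],
              [4.8, 3.5, 0.60],   # expert-like sessions: longer queries,
              [5.2, 4.0, 0.75]])  # more interaction, technical vocabulary
y = np.array([0, 0, 1, 1])        # 0 = layperson, 1 = professional

clf = LogisticRegression().fit(X, y)
print(clf.predict([[4.5, 3.0, 0.5]]))  # -> likely "professional"
```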
39. Evaluating multimodal relevance feedback techniques for medical image retrieval.
- Author
-
Markonis, Dimitrios, Schaer, Roger, and Müller, Henning
- Subjects
- *
IMAGE retrieval , *COMBINED modality therapy , *INFORMATION retrieval research , *MEDICAL informatics , *RESEARCH in information science - Abstract
Medical image retrieval can assist physicians in finding information supporting their diagnosis and fulfilling information needs. Systems that allow searching for medical images need to provide tools for quick and easy navigation and query refinement, as the time available for information search is often short. Relevance feedback is a powerful tool in information retrieval. This study evaluates relevance feedback techniques with regard to the content they use. A novel relevance feedback technique that uses both text and visual information of the results is proposed. The two information modalities from the image examples are fused either at the feature level using the Rocchio algorithm or at the query list fusion step using a common late fusion rule. Results using the ImageCLEF 2012 benchmark database for medical image retrieval show the potential of relevance feedback techniques in medical image retrieval. The mean average precision (mAP) is used as the evaluation metric and the proposed method outperforms commonly used methods. The baseline without feedback reached 16%, whereas relevance feedback with 20 images reached up to 26.35% after three steps and, when using 100 images, up to 34.87% after four steps. Most improvements occur in the first two steps of relevance feedback and then results start to become relatively flat. This might also be due to only using positive feedback, as negative feedback often improves results after more steps. The effect of relevance feedback on automatically spell-corrected and translated queries is investigated as well. Queries without spelling mistakes performed better than spell-corrected queries, but spelling correction more than doubled the results compared with non-corrected retrieval. Multimodal relevance feedback has been shown to help visual medical information retrieval. Next steps include integrating semantics into relevance feedback techniques to benefit from the structured knowledge of ontologies and experimenting on the fusion of text and visual information. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
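The feature-level fusion with the Rocchio algorithm mentioned above can be sketched compactly: the query vector is shifted toward the centroid of the results marked relevant (and optionally away from non-relevant ones). The alpha/beta/gamma weights below are conventional textbook defaults, an assumption here.

```python
import numpy as np

def rocchio(query, relevant, non_relevant=(), alpha=1.0, beta=0.75, gamma=0.15):
    """query: 1-D feature vector; relevant/non_relevant: lists of vectors.
    The same update applies to text (tf/idf) and visual feature vectors,
    which is what makes feature-level fusion possible."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        q -= gamma * np.mean(non_relevant, axis=0)
    return np.clip(q, 0.0, None)  # keep weights non-negative, as in tf/idf

q0 = np.array([0.2, 0.0, 0.5])
q1 = rocchio(q0, relevant=[np.array([0.6, 0.3, 0.4])])
print(q1)
```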
40. A systematic comparison of deep learning methods for Gleason grading and scoring.
- Author
-
Dominguez-Morales, Juan P., Duran-Lopez, Lourdes, Marini, Niccolò, Vicente-Diaz, Saturnino, Linares-Barranco, Alejandro, Atzori, Manfredo, and Müller, Henning
- Subjects
- *
ARTIFICIAL neural networks , *GLEASON grading system , *DEEP learning , *SUPERVISED learning , *CANCER patients - Abstract
Prostate cancer is the second most frequent cancer in men worldwide after lung cancer. Its diagnosis is based on the identification of the Gleason score, which evaluates the abnormality of cells in glands through the analysis of the different Gleason patterns within tissue samples. Recent advancements in computational pathology, a domain aiming at developing algorithms to automatically analyze digitized histopathology images, have led to a large variety and availability of datasets and algorithms for Gleason grading and scoring. However, there is no clear consensus on which methods are best suited for each problem in relation to the characteristics of data and labels. This paper provides a systematic comparison on nine datasets of state-of-the-art training approaches for deep neural networks (including fully-supervised learning, weakly-supervised learning, semi-supervised learning, Additive-MIL, Attention-Based MIL, Dual-Stream MIL, TransMIL and CLAM) applied to Gleason grading and scoring tasks. The nine datasets are collected from pathology institutes and openly accessible repositories. The results show that the best methods for the Gleason grading and Gleason scoring tasks are fully supervised learning and CLAM, respectively, guiding researchers to the best practice to adopt depending on the task to solve and the labels that are available. • We perform a systematic comparison of 12 training approaches on Gleason grading and scoring. • 9 highly heterogeneous datasets were used, allowing evaluation of the performance and generalization of the methods across many datasets. • Self-supervision improves performance compared to using pre-trained weights. • Full supervision shows the highest performance in patch-level classification tasks. • CLAM obtains the highest performance in image-level classification tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
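One of the weakly supervised approaches compared in the paper above, attention-based MIL, can be sketched as follows: the patch embeddings of a slide form a bag, an attention branch weights each patch, and the weighted average is classified at slide level. The dimensions and the tiny architecture are illustrative assumptions, not any compared method's exact configuration.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based MIL pooling (in the spirit of Ilse et al., 2018)."""
    def __init__(self, in_dim=512, attn_dim=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(in_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1))
        self.head = nn.Linear(in_dim, n_classes)

    def forward(self, bag):                        # bag: (n_patches, in_dim)
        a = torch.softmax(self.attn(bag), dim=0)   # attention weight per patch
        slide = (a * bag).sum(dim=0)               # attention-weighted average
        return self.head(slide), a.squeeze(-1)

model = AttentionMIL()
logits, attention = model(torch.randn(100, 512))   # 100 patches, one slide
print(logits.shape, attention.shape)
```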
41. RegWSI: Whole slide image registration using combined deep feature- and intensity-based methods: Winner of the ACROBAT 2023 challenge.
- Author
-
Wodzinski, Marek, Marini, Niccolò, Atzori, Manfredo, and Müller, Henning
- Abstract
The automatic registration of differently stained whole slide images (WSIs) is crucial for improving diagnosis and prognosis by fusing complementary information emerging from different visible structures. It is also useful to quickly transfer annotations between consecutive or restained slides, thus significantly reducing the annotation time and associated costs. Nevertheless, the slide preparation is different for each stain and the tissue undergoes complex and large deformations. Therefore, a robust, efficient, and accurate registration method is highly desired by the scientific community and hospitals specializing in digital pathology. We propose a two-step hybrid method consisting of (i) a deep feature-based initial alignment algorithm and (ii) an intensity-based nonrigid registration using instance optimization. The proposed method does not require any fine-tuning to a particular dataset and can be used directly for any desired tissue type and stain. The registration time is low, allowing one to perform efficient registration even for large datasets. The method was proposed for the ACROBAT 2023 challenge organized during the MICCAI 2023 conference and scored 1st place. The method is released as open-source software. The proposed method is evaluated using three open datasets: (i) Automatic Nonrigid Histological Image Registration Dataset (ANHIR), (ii) Automatic Registration of Breast Cancer Tissue Dataset (ACROBAT), and (iii) Hybrid Restained and Consecutive Histological Serial Sections Dataset (HyReCo). The target registration error (TRE) is used as the evaluation metric. We compare the proposed algorithm to other state-of-the-art solutions, showing considerable improvement. Additionally, we perform several ablation studies concerning the resolution used for registration and the robustness and stability of the initial alignment. The method achieves the most accurate results on the ACROBAT dataset, reaches cell-level registration accuracy for the restained slides from the HyReCo dataset, and is among the best methods evaluated on the ANHIR dataset. The article presents an automatic and robust registration method that outperforms other state-of-the-art solutions. The method does not require any fine-tuning to a particular dataset and can be used out-of-the-box for numerous types of microscopic images. The method is incorporated into the DeeperHistReg framework, allowing others to directly use it to register, transform, and save the WSIs at any desired pyramid level (resolution up to 220k × 220k). We provide free access to the software. The results are fully and easily reproducible. The proposed method is a significant contribution to improving the WSI registration quality, thus advancing the field of digital pathology. • The article introduces a new method dedicated to automatic registration of whole slide images (WSIs). • The proposed method has superior generalizability and does not require any re-training or fine-tuning to a particular dataset. • The quantitative results are very accurate; the algorithm is the best registration method on the ACROBAT and HyReCo datasets. • The source code is released and included in the DeeperHistReg framework, allowing end-users to use it in their research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
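The instance-optimization idea in step (ii) above can be illustrated in a few lines: rather than a trained network predicting the deformation, the transform parameters are optimized directly for each image pair. For brevity this sketch optimizes an affine transform with an MSE loss; the actual method is nonrigid and uses a more robust similarity, so this only shows the principle.

```python
import torch
import torch.nn.functional as F

def register_affine(moving, fixed, iters=200, lr=1e-2):
    """moving, fixed: (1, 1, H, W) float tensors on the same grid.
    Optimizes a 2x3 affine matrix per image pair (instance optimization)."""
    theta = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(iters):
        grid = F.affine_grid(theta, fixed.shape, align_corners=False)
        warped = F.grid_sample(moving, grid, align_corners=False)
        loss = F.mse_loss(warped, fixed)
        opt.zero_grad(); loss.backward(); opt.step()
    return theta.detach()

fixed = torch.rand(1, 1, 64, 64)
moving = torch.roll(fixed, shifts=3, dims=-1)   # shifted copy to recover
print(register_affine(moving, fixed))
```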
42. Assessing radiomics feature stability with simulated CT acquisitions.
- Author
-
Flouris, Kyriakos, Jimenez-del-Toro, Oscar, Aberle, Christoph, Bach, Michael, Schaer, Roger, Obmann, Markus M., Stieltjes, Bram, Müller, Henning, Depeursinge, Adrien, and Konukoglu, Ender
- Subjects
- *
RADIOMICS , *COMPUTED tomography , *MACHINE learning , *DIAGNOSTIC imaging - Abstract
The usefulness of quantitative medical imaging features in clinical studies was once disputed. Nowadays, advancements in analysis techniques, for instance through machine learning, have made quantitative features progressively useful in diagnosis and research. Tissue characterisation is improved via "radiomics" features, whose extraction can be automated. Despite the advances, the stability of quantitative features remains an important open problem. As features can be highly sensitive to variations in acquisition details, it is not trivial to quantify stability and efficiently select stable features. In this work, we develop and validate a Computed Tomography (CT) simulator environment based on the publicly available ASTRA toolbox (www.astra-toolbox.com). We show that the variability, stability and discriminative power of the radiomics features extracted from the virtual phantom images generated by the simulator are similar to those observed in a tandem phantom study. Additionally, we show that the variability is matched between a multi-center phantom study and simulated results. Consequently, we demonstrate that the simulator can be utilised to assess the stability and discriminative power of radiomics features. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
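A hedged sketch of the stability screening that such a simulator enables: features are extracted from many simulated acquisitions of the same phantom, and features whose coefficient of variation (CV) across acquisition settings exceeds a threshold are discarded. The 10% cut-off and the synthetic numbers are illustrative assumptions, not the paper's criterion.

```python
import numpy as np

def stable_features(feature_matrix, names, max_cv=0.10):
    """feature_matrix: (n_simulated_acquisitions, n_features).
    Returns the names of features whose CV stays below the threshold."""
    mean = feature_matrix.mean(axis=0)
    cv = feature_matrix.std(axis=0) / np.where(mean == 0, 1.0, np.abs(mean))
    return [n for n, c in zip(names, cv) if c <= max_cv]

rng = np.random.default_rng(0)
# e.g. 20 simulated CT acquisitions of one phantom, 3 radiomics features
feats = np.column_stack([
    rng.normal(100, 2,   20),   # stable feature (CV ~ 2%)
    rng.normal(50,  15,  20),   # unstable feature (CV ~ 30%)
    rng.normal(10,  0.5, 20),   # stable feature (CV ~ 5%)
])
print(stable_features(feats, ["energy", "entropy", "contrast"]))
```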
43. Hierarchical classification using a frequency-based weighting and simple visual features
- Author
-
Zhou, Xin, Depeursinge, Adrien, and Müller, Henning
- Subjects
- *
MEDICAL imaging systems , *DIAGNOSTIC imaging , *MEDICAL radiography , *MEDICAL care - Abstract
Abstract: This article describes the use of a frequency-based weighting scheme using low-level visual features developed for image retrieval to perform a hierarchical classification of medical images. The techniques are based on a classical tf/idf (term frequency, inverse document frequency) weighting scheme of the GIFT (GNU Image Finding Tool), and perform classification based on kNN (k-Nearest Neighbors) and voting-based approaches. The features used by the GIFT are very simple, giving a global description of the images and local information on fixed regions, both for colors and textures. We reused a technique similar to previous years of ImageCLEF to provide a baseline for retrieval performance over the three years of the medical image annotation task. This shows the clear increase in quality of participating research systems over the years. Subsequently, we optimized the retrieval results based on the simple technology used by varying the feature space and the classification method (varying number of neighbors, various voting schemes) and by adding new information such as aspect ratio, which has been shown to work well in the past. The results show that the techniques we use have several problems that could not be fully solved through the applied optimizations. Still, the optimizations improved results enormously, from an error value of 228 to below 150. As a baseline to show the progress of techniques over the years it also works well. Aspect ratio proves to be an important factor in improving results. Performing classification on an axis level performs better than using the entire hierarchy code or not taking hierarchy into account at all. To further improve results, the use of more suitable visual features such as patch histograms or salient point features seems necessary. Small distortions of images of the same class have to be taken into account for very good results. Still, without using any learning technique or high-level visual features, the approach performs reasonably well. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
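The classification scheme described in the record above (tf/idf weighting, kNN, voting) can be sketched directly; GIFT's actual color and texture feature extraction is omitted here and replaced by random "visual term" counts, an assumption for demonstration.

```python
import numpy as np
from collections import Counter

def tfidf(counts):
    """counts: (n_images, n_features) raw 'visual term' frequencies."""
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)
    idf = np.log(len(counts) / np.maximum(df, 1))
    return tf * idf

def knn_vote(train_x, train_y, query, k=5):
    """k nearest neighbours by cosine similarity, then majority vote."""
    sims = train_x @ query / (
        np.linalg.norm(train_x, axis=1) * np.linalg.norm(query) + 1e-12)
    top = np.argsort(-sims)[:k]
    return Counter(train_y[i] for i in top).most_common(1)[0][0]

X = tfidf(np.random.default_rng(1).integers(0, 5, size=(30, 16)))
y = np.array([0] * 15 + [1] * 15)
print(knn_vote(X, y, X[0], k=5))   # nearest neighbours include X[0] itself
```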
44. Automatic medical image annotation in ImageCLEF 2007: Overview, results, and discussion
- Author
-
Deselaers, Thomas, Deserno, Thomas M., and Müller, Henning
- Subjects
- *
MEDICAL imaging systems , *DIAGNOSTIC imaging , *IMAGE processing , *MEDICAL care - Abstract
Abstract: In this paper, the automatic medical annotation task of the 2007 CLEF cross-language image retrieval campaign (ImageCLEF) is described. The paper focuses on the images used, the task setup, and the results obtained in the evaluation campaign. Since 2005, the medical automatic image annotation task has been part of ImageCLEF, with increasing complexity, to evaluate the performance of state-of-the-art methods for completely automatic annotation of medical images based on visual properties. The paper also describes the evolution of the task from its origin in 2005 through 2007. The 2007 task, comprising 11,000 fully annotated training images and 1000 test images to be annotated, is a realistic task with a large number of possible classes at different levels of detail. A detailed analysis of the methods across participating groups is presented with respect to (i) image representation, (ii) classification method, and (iii) use of the class hierarchy. The results show that methods which build on local image descriptors and discriminative models are able to provide good predictions of the image classes, mostly by using techniques that were originally developed in the machine learning and computer vision domain for object recognition in non-medical images. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
45. Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.
- Author
-
Marini, Niccolò, Otálora, Sebastian, Müller, Henning, and Atzori, Manfredo
- Subjects
- *
GLEASON grading system , *CONVOLUTIONAL neural networks , *DEEP learning , *PROSTATE , *PHYSICIANS , *COMPUTER vision , *HISTOPATHOLOGY - Abstract
• Improved classification performance on several prostate datasets using pseudo-labels • Generalization of the models across several highly heterogeneous datasets • Few locally annotated data used to generate a large amount of pseudo-labels • Overfitting in transfer learning limited despite data coming from several sources • Three training variants that combine strongly and weakly annotated data are proposed Convolutional neural networks (CNNs) are state-of-the-art computer vision techniques for various tasks, particularly for image classification. However, there are domains where the training of classification models that generalize on several datasets is still an open challenge because of the highly heterogeneous data and the lack of large datasets with local annotations of the regions of interest, such as histopathology image analysis. Histopathology concerns the microscopic analysis of tissue specimens processed in glass slides to identify diseases such as cancer. Digital pathology concerns the acquisition, management and automatic analysis of digitized histopathology images, which are large, on the order of 100,000 × 100,000 pixels per image. Digital histopathology images are highly heterogeneous due to the variability of the image acquisition procedures. Creating locally labeled regions (required for the training) is time-consuming and often expensive in the medical field, as physicians usually have to annotate the data. Despite the advances in deep learning, leveraging strongly and weakly annotated datasets to train classification models is still an unsolved problem, mainly when data are very heterogeneous. Large amounts of data are needed to create models that generalize well. This paper presents a novel approach to train CNNs that generalize to heterogeneous datasets originating from various sources and without local annotations. The data analysis pipeline targets Gleason grading on prostate images and includes two models in sequence, following a teacher/student training paradigm. The teacher model (a high-capacity neural network) automatically annotates a set of pseudo-labeled patches used to train the student model (a smaller network). The two models are trained with two different teacher/student approaches: semi-supervised learning and semi-weakly supervised learning. For each of the two approaches, three student training variants are presented. The baseline is provided by training the student model only with the strongly annotated data. Classification performance is evaluated on the student model at the patch level (using the local annotations of the Tissue Micro-Arrays Zurich dataset) and at the global level (using the TCGA-PRAD, The Cancer Genome Atlas-PRostate ADenocarcinoma, whole slide image Gleason score). The teacher/student paradigm allows the models to better generalize on both datasets, despite the inter-dataset heterogeneity and the small number of local annotations used. The classification performance is improved at the patch level (up to κ = 0.6127 ± 0.0133 from κ = 0.5667 ± 0.0285), at the TMA core level (Gleason score) (up to κ = 0.7645 ± 0.0231 from κ = 0.7186 ± 0.0306) and at the WSI level (Gleason score) (up to κ = 0.4529 ± 0.0512 from κ = 0.2293 ± 0.1350). The results show that with the teacher/student paradigm, it is possible to train models that generalize on datasets from entirely different sources, despite the inter-dataset heterogeneity and the lack of large datasets with local annotations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
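A compact sketch of the teacher/student pseudo-labeling pipeline described in the record above, using scikit-learn models as stand-ins for the two CNNs: the teacher is trained on the few strongly annotated samples, pseudo-labels the unlabeled pool, and only confident pseudo-labels are added to the student's training set. The 0.9 confidence threshold and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled, y_labeled = rng.normal(size=(40, 8)), rng.integers(0, 2, 40)
X_unlabeled = rng.normal(size=(500, 8))

# Teacher: high-capacity model trained on the strongly annotated data.
teacher = RandomForestClassifier(n_estimators=100).fit(X_labeled, y_labeled)

# Pseudo-label the unlabeled pool; keep only confident predictions.
proba = teacher.predict_proba(X_unlabeled)
confident = proba.max(axis=1) >= 0.9
X_pseudo, y_pseudo = X_unlabeled[confident], proba[confident].argmax(axis=1)

# Student: smaller model trained on real labels plus pseudo-labels.
student = LogisticRegression().fit(
    np.vstack([X_labeled, X_pseudo]),
    np.concatenate([y_labeled, y_pseudo]))
print(f"{confident.sum()} pseudo-labeled samples added")
```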
46. Expression of CDX2 and MUC2 in Barrett's mucosa
- Author
-
Steininger, Helmuth, Pfofe, Denis A., Müller, Henning, Haag-Sunjic, Gabriele, and Fratianu, Veronica
- Subjects
- *
ESOPHAGEAL cancer , *HYDROGEN-ion concentration , *EPITHELIUM , *ENDOTHELIUM - Abstract
Abstract: Barrett's mucosa is a risk factor for esophageal adenocarcinoma and should be detected at an early stage. It is defined by the presence of columnar epithelium with goblet cells in the lower esophagus, but histologic diagnosis can be uncertain in the absence of distinct goblet cells. We investigated 55 biopsies from 48 patients with endoscopically plain Barrett's esophagus and performed immunohistochemistry for CDX2 and MUC2. In addition, alcian blue (pH 2.5)/PAS staining was done. In histologically unequivocal Barrett's mucosa, nuclear expression of CDX2 in goblet cells and many columnar cells, as well as cytoplasmic positivity for MUC2 in goblet cells, could be observed. Alcian blue (pH 2.5)/PAS stained acidic mucins in goblet cells and in some non-goblet columnar cells. In six cases, no definite Barrett's mucosa was present, and no expression of MUC2 could be observed. In these biopsies, there was granular cytoplasmic and/or focal nuclear staining for CDX2 in non-goblet columnar epithelial cells, indicating their intestinal differentiation. We suggest that this peculiar mucosa is the precursor of unequivocal Barrett's mucosa and would designate it early Barrett's mucosa. Alcian blue for acidic mucins is inconsistent in this epithelium and does not reliably indicate early intestinal differentiation. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
47. A comparative study on wearables and single-camera video for upper-limb out-of-the-lab activity recognition with different deep learning architectures.
- Author
-
Zarzuela, Mario Martínez, González-Ortega, David, Antón-Rodríguez, Míriam, Díaz-Pernas, Francisco Javier, Müller, Henning, and Simón-Martínez, Cristina
- Subjects
- *
WEARABLE cameras , *DEEP learning , *PHYSICAL activity , *GAIT in humans , *CEREBRAL palsy , *WALKING , *MUSCLE strength - Abstract
The use of a wide range of computer vision solutions and, more recently, high-end Inertial Measurement Units (IMU) has become increasingly popular for assessing human physical activity in clinical and research settings [1]. Nevertheless, to increase the feasibility of patient tracking in out-of-the-lab settings, it is necessary to use a reduced number of devices for movement acquisition. Promising solutions in this context are IMU-based wearables and single-camera systems [2]. Additionally, the development of machine learning systems able to recognize and digest clinically relevant data in-the-wild is needed, and therefore determining the ideal input to those is crucial [3]. For upper-limb activity recognition out of the lab, do wearables or a single camera offer better performance? Recordings from 16 healthy subjects performing 8 upper-limb activities from the VIDIMU dataset [4] were used. For wearable recordings, the subjects wore 5 IMU-based wearables and adopted a neutral pose (N-pose) for calibration; joint angles were estimated with inverse kinematics algorithms in OpenSense [5]. Single-camera video recordings occurred simultaneously, and the subject's pose was estimated with DeepStream [6]. We compared various deep learning architectures (DNN, CNN, CNN-LSTM, LSTM-CNN, LSTM, LSTM-AE) for recognizing daily living activities. The input to the different neural architectures consisted of a 2-second time series containing the estimated joint angles and their 2D FFT. Every network was trained using 2 subjects for validation, a batch size of 20, Adam as the optimizer, and a combination of early stopping and other regularization techniques. Performance metrics were extracted from 4-fold cross-validation experiments. In all neural networks, performance was higher with IMU-based wearables data compared to video. The best network was an LSTM AutoEncoder (6 layers, 700 K parameters; wearable data accuracy: 0.985, F1-score: 0.936 (Fig. 1); video data accuracy: 0.962, F1-score: 0.842). Remarkably, when using video as input there were no significant differences in the performance metrics obtained among different architectures. On the contrary, the F1 scores using IMU data varied significantly (DNN: 0.849, CNN: 0.889, CNN-LSTM: 0.879, LSTM-CNN: 0.904, LSTM: 0.920, LSTM-AE: 0.936). Wearables and video present advantages and disadvantages. While IMUs can provide accurate information about the orientation and acceleration of body parts, body-to-segment calibration and drift can affect data reliability. Similarly, a single camera can easily track the position of different body joints, but the recorded data does not yet reliably represent the movement with all degrees of freedom. Our experiments confirm that despite the current limitations of wearables, with a very simple N-pose calibration, IMU data provides more discriminative features for upper-limb activity recognition. Our results are consistent with previous studies that have shown the advantages of IMUs for movement recognition [7]. In the future, we will estimate how these data compare to gold-standard systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
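The input construction described in the abstract above (2-second windows of joint angles concatenated with their 2D FFT) can be sketched as follows; the sampling rate and window hop are assumptions, as the abstract does not state them.

```python
import numpy as np

def make_windows(joint_angles, fs=50, win_s=2.0, hop_s=1.0):
    """joint_angles: (n_samples, n_joints) time series of angles [deg].
    Returns (n_windows, win_len, 2 * n_joints): angles + |2D FFT|."""
    win, hop = int(fs * win_s), int(fs * hop_s)
    out = []
    for start in range(0, len(joint_angles) - win + 1, hop):
        chunk = joint_angles[start:start + win]          # (win, n_joints)
        spec = np.abs(np.fft.fft2(chunk))                # same shape
        out.append(np.concatenate([chunk, spec], axis=1))
    return np.stack(out)

angles = np.random.default_rng(2).normal(size=(500, 14))  # 10 s, 14 angles
print(make_windows(angles).shape)   # -> (9, 100, 28)
```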
48. On-cloud decision-support system for non-small cell lung cancer histology characterization from thorax computed tomography scans.
- Author
-
Tomassini, Selene, Falcionelli, Nicola, Bruschi, Giulia, Sbrollini, Agnese, Marini, Niccolò, Sernani, Paolo, Morettini, Micaela, Müller, Henning, Dragoni, Aldo Franco, and Burattini, Laura
- Subjects
- *
NON-small-cell lung carcinoma , *LUNGS , *RECEIVER operating characteristic curves , *SQUAMOUS cell carcinoma , *HISTOLOGY , *HISTOLOGICAL techniques - Abstract
Non-Small Cell Lung Cancer (NSCLC) accounts for about 85% of all lung cancers. Developing non-invasive techniques for NSCLC histology characterization may not only help clinicians make targeted therapeutic decisions but also spare subjects lung biopsy, which is challenging and can have clinical implications. The motivation behind the study presented here is to develop an advanced on-cloud decision-support system, named LUCY, for non-small cell LUng Cancer histologY characterization directly from thorax Computed Tomography (CT) scans. This aim was pursued by selecting thorax CT scans of 182 LUng ADenocarcinoma (LUAD) and 186 LUng Squamous Cell carcinoma (LUSC) subjects from four openly accessible data collections (NSCLC-Radiomics, NSCLC-Radiogenomics, NSCLC-Radiomics-Genomics and TCGA-LUAD); implementing and comparing two end-to-end neural networks (the core layer of which is a convolutional long short-term memory layer); evaluating performance on the test dataset (NSCLC-Radiomics-Genomics) from a subject-level perspective in relation to NSCLC histological subtype location and grade; and dynamically interpreting the results visually by producing and analyzing one heatmap video for each scan. LUCY reached test Area Under the receiver operating characteristic Curve (AUC) values above 77% in all NSCLC histological subtype location and grade groups, and a best AUC value of 97% on the entire dataset reserved for testing, proving high generalizability to heterogeneous data and robustness. Thus, LUCY is a clinically useful decision-support system able to timely, non-invasively and reliably provide visually understandable predictions on LUAD and LUSC subjects in relation to clinically relevant information. • In this study, a decision-support system is developed entirely in the cloud. • This system non-invasively characterizes non-small cell lung cancer histology. • This system analyzes the spatial information of thorax computed tomography scans. • This system is lung mass segmentation free, machine independent and interpretable. • This system uses a challenging evaluation protocol in a highly demanding task. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
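A minimal sketch of an end-to-end network whose core is a convolutional LSTM layer, as the abstract above describes: the CT volume is treated as a sequence of axial slices. The input size, filter count and classification head are illustrative assumptions, not LUCY's actual architecture.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # (n_slices, height, width, channels): a sequence of axial CT slices
    tf.keras.Input(shape=(40, 64, 64, 1)),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=(3, 3), padding="same",
                               return_sequences=False),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # LUAD vs. LUSC
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```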
49. Variability of Muscle Synergies in Hand Grasps: Analysis of Intra- and Inter-Session Data.
- Author
-
Pale, Una, Atzori, Manfredo, Müller, Henning, and Scano, Alessandro
- Subjects
- *
MUSCLES , *CENTRAL nervous system , *HAND - Abstract
Background. Muscle synergy analysis is an approach to understand the neurophysiological mechanisms behind the hypothesized ability of the Central Nervous System (CNS) to reduce the dimensionality of muscle control. The muscle synergy approach is also used to evaluate motor recovery and the evolution of the patients' motor performance, both in single-session and longitudinal studies. Synergy-based assessments are subject to various sources of variability: natural trial-by-trial variability of performed movements, intrinsic characteristics of subjects that change over time (e.g., recovery, adaptation, exercise, etc.), as well as experimental factors such as different electrode positioning. These sources of variability need to be quantified in order to resolve challenges for the application of muscle synergies in clinical environments. The objective of this study is to analyze the stability and similarity of extracted muscle synergies under the effect of factors that may induce variability, including intra- and inter-session variability within subjects as well as variability between subjects. The analysis was performed using the comprehensive, publicly available hand grasp NinaPro Database, featuring surface electromyography (EMG) measures from two EMG electrode bracelets. Methods. Intra-session, inter-session, and inter-subject synergy stability was analyzed using the following measures: variance accounted for (VAF) and number of synergies (NoS) as measures of reconstruction stability quality, and cosine similarity for comparison of the spatial composition of extracted synergies. Moreover, an approach based on virtual electrode repositioning was applied to shed light on the influence of electrode position on inter-session synergy similarity. Results. Inter-session synergy similarity was significantly lower than intra-session similarity, both considering the coefficient of variation of VAF (approximately 0.2–15% for inter vs. approximately 0.1–2.5% for intra, depending on NoS) and the coefficient of variation of NoS (approximately 6.5–14.5% for inter vs. approximately 3–3.5% for intra, depending on VAF), as well as synergy similarity (approximately 74–77% for inter vs. approximately 88–94% for intra, depending on the selected VAF). Virtual electrode repositioning revealed that a slightly different electrode position can lower the similarity of synergies from the same session and can increase similarity between sessions. Finally, the similarity of inter-subject synergies does not differ significantly from that of inter-session synergies (both on average approximately 84–90% depending on the selected VAF). Conclusion. Synergy similarity was lower in inter-session conditions than in intra-session conditions. This finding should be considered when interpreting results from multi-session assessments. Lastly, electrode positioning might play an important role in the lower similarity of synergies over different sessions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
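The quantities discussed in the abstract above (synergy extraction, VAF, cosine similarity of synergy vectors) are commonly computed with non-negative matrix factorization; the sketch below follows that common usage, which is an assumption here rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_synergies(emg, n_syn):
    """emg: (n_muscles, n_samples) non-negative EMG envelopes.
    Returns W (n_muscles, n_syn), H (n_syn, n_samples), and VAF."""
    model = NMF(n_components=n_syn, init="nndsvda", max_iter=500)
    w = model.fit_transform(emg)
    h = model.components_
    vaf = 1.0 - np.sum((emg - w @ h) ** 2) / np.sum(emg ** 2)
    return w, h, vaf

def cosine_similarity(w1, w2):
    """Similarity of matched synergy vectors (columns of W)."""
    n1 = w1 / np.linalg.norm(w1, axis=0)
    n2 = w2 / np.linalg.norm(w2, axis=0)
    return np.diag(n1.T @ n2)    # assumes synergies are already matched

emg = np.abs(np.random.default_rng(3).normal(size=(10, 1000)))
w, h, vaf = extract_synergies(emg, n_syn=3)
print(f"VAF = {vaf:.3f}", cosine_similarity(w, w))
```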
50. Classification of diabetes-related retinal diseases using a deep learning approach in optical coherence tomography.
- Author
-
Perdomo, Oscar, Rios, Hernán, Rodríguez, Francisco J., Otálora, Sebastián, Meriaudeau, Fabrice, Müller, Henning, and González, Fabio A.
- Subjects
- *
RETINAL imaging , *OPTICAL coherence tomography , *RETINAL diseases , *DEEP learning , *RETINAL degeneration , *DIABETIC retinopathy , *AUTOMATIC classification - Abstract
• An end-to-end deep learning-based method for automatic classification of B-scans inside a volume for three retinal diseases. • The proposed model includes a feedback stage that highlights the areas of the scans to support the interpretation of the results. This information is potentially useful for a medical specialist while assessing the prediction produced by the model. • The proposed model, tested on the SERI+CUHK data set with healthy, DME and DR-DME patients, obtained a precision of 0.93 and an AUC of 0.86. On the A2A SD-OCT data set, the model outperformed state-of-the-art methods with an AUC of 0.99 for AMD diagnosis. Background and objectives: Spectral Domain Optical Coherence Tomography (SD-OCT) is a volumetric imaging technique that allows measuring patterns between layers such as small amounts of fluid. Since 2012, automatic medical image analysis performance has steadily increased through the use of deep learning models that automatically learn relevant features for specific tasks, instead of relying on manually designed visual features. Nevertheless, providing insights and interpretation of the predictions made by the model is still a challenge. This paper describes a deep learning model able to detect medically interpretable information in relevant images from a volume to classify diabetes-related retinal diseases. Methods: This article presents a new deep learning model, OCT-NET, which is a customized convolutional neural network for processing scans extracted from optical coherence tomography volumes. OCT-NET is applied to the classification of three conditions seen in SD-OCT volumes. Additionally, the proposed model includes a feedback stage that highlights the areas of the scans to support the interpretation of the results. This information is potentially useful for a medical specialist while assessing the prediction produced by the model. Results: The proposed model was tested on the public SERI+CUHK and A2A SD-OCT data sets containing healthy, diabetic retinopathy, diabetic macular edema and age-related macular degeneration cases. The experimental evaluation shows that the proposed method outperforms conventional convolutional deep learning models from the state of the art reported on the SERI+CUHK and A2A SD-OCT data sets, with a precision of 93% and an area under the ROC curve (AUC) of 0.99, respectively. Conclusions: The proposed method is able to classify the three studied retinal diseases with high accuracy. One advantage of the method is its ability to produce interpretable clinical information in the form of highlighting the regions of the image that most contribute to the classifier decision. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
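The feedback stage that highlights image regions, described in the record above, can be illustrated with a class-activation-style sketch: with global average pooling before the classifier, the class weights can be projected back onto the last convolutional feature maps to show which regions drive the prediction. The tiny backbone is an illustrative stand-in, not OCT-NET itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCAMNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)                       # (B, 32, H', W')
        logits = self.fc(fmap.mean(dim=(2, 3)))       # GAP, then classify
        return logits, fmap

    def cam(self, fmap, class_idx):
        w = self.fc.weight[class_idx]                 # (32,)
        heat = torch.einsum("c,bchw->bhw", w, fmap)   # weighted map sum
        return F.relu(heat)                           # keep positive evidence

net = TinyCAMNet()
logits, fmap = net(torch.randn(1, 1, 128, 128))
heatmap = net.cam(fmap, class_idx=logits.argmax(dim=1).item())
print(heatmap.shape)    # (1, 64, 64): upsample to overlay on the scan
```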