9 results for "Dunnhofer, M."
Search Results
2. Deep Learning-Based Femoral Cartilage Automatic Segmentation in Ultrasound Imaging for Guidance in Robotic Knee Arthroscopy
- Author
Antico, M., Sasazawa, F., Dunnhofer, M., Camps, S.M., Jaiprakash, A.T., Pandey, A.K., Crawford, R., Carneiro, G., and Fontanarosa, D.
- Published
- 2020
- Full Text
- View/download PDF
3. Automatic Quality Assessment of Transperineal Ultrasound Images of the Male Pelvic Region, Using Deep Learning
- Author
Camps, S. M., Houben, T., Carneiro, G., Edwards, C., Antico, M., Dunnhofer, M., Martens, E. G. H. J., Baeza, J. A., Vanneste, B. G. L., van Limbergen, E. J., de With, P. H. N., Verhaegen, F., and Fontanarosa, D.
- Abstract
Ultrasound guidance is not in widespread use in prostate cancer radiotherapy workflows. This can be partially attributed to the need for image interpretation by a trained operator during ultrasound image acquisition. In this work, a one-class regressor, based on DenseNet and Gaussian processes, was implemented to automatically assess the quality of transperineal ultrasound images of the male pelvic region. The implemented deep learning approach was tested on 300 transperineal ultrasound images and it achieved a scoring accuracy of 94%, a specificity of 95% and a sensitivity of 92% with respect to the majority vote of 3 experts, which was comparable with the results of these experts. This is the first step toward a fully automatic workflow, which could potentially remove the need for ultrasound image interpretation and make real-time volumetric organ tracking in the radiotherapy environment using ultrasound more appealing. (C) 2019 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved. (An illustrative computation of these quality metrics follows this record.)
- Published
- 2020
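The abstract above reports a scoring accuracy of 94%, a specificity of 95% and a sensitivity of 92% with respect to an expert majority vote. As a minimal illustrative sketch (not the authors' implementation; the function name and the convention that 1 marks an acceptable-quality image are assumptions), these three figures can be computed from binary labels as follows:

    def quality_metrics(predicted, reference):
        """Accuracy, sensitivity and specificity of binary quality labels
        (1 = acceptable image, 0 = unacceptable) against a reference such as
        the majority vote of three experts."""
        tp = sum(p == 1 and r == 1 for p, r in zip(predicted, reference))
        tn = sum(p == 0 and r == 0 for p, r in zip(predicted, reference))
        fp = sum(p == 1 and r == 0 for p, r in zip(predicted, reference))
        fn = sum(p == 0 and r == 1 for p, r in zip(predicted, reference))
        accuracy = (tp + tn) / len(reference)
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        return accuracy, sensitivity, specificity

    # Example: quality_metrics([1, 0, 1, 1], [1, 0, 0, 1]) returns (0.75, 1.0, 0.5)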
4. The Eighth Visual Object Tracking VOT2020 Challenge Results
- Author
Kristan, M., Leonardis, A., Matas, J., Felsberg, Michael, Pflugfelder, R., Kämäräinen, J.-K., Danelljan, M., Zajc, L.C., Lukežic, A., Drbohlav, O., He, Linbo, Zhang, Yushan, Yan, S., Yang, J., Fernández, G., Hauptmann, A., Memarmoghadam, A., García-Martín, Á., Robinson, Andreas, Varfolomieiev, A., Gebrehiwot, A.H., Uzun, B., Yan, B., Li, B., Qian, C., Tsai, C.-Y., Micheloni, C., Wang, D., Wang, F., Xie, F., Järemo-Lawin, Felix, Gustafsson, F., Foresti, G.L., Bhat, G., Chen, G., Ling, H., Zhang, H., Cevikalp, H., Zhao, H., Bai, H., Kuchibhotla, H.C., Saribas, H., Fan, H., Ghanei-Yakhdan, H., Li, H., Peng, H., Lu, H., Khaghani, J., Bescos, J., Li, J., Fu, J., Yu, J., Xu, J., Kittler, J., Yin, J., Lee, J., Yu, K., Liu, K., Yang, K., Dai, K., Cheng, L., Zhang, L., Wang, L., Van Gool, L., Bertinetto, L., Dunnhofer, M., Cheng, M., Dasari, M.M., Wang, N., Zhang, P., Torr, P.H.S., Wang, Q., Timofte, R., Gorthi, R.K.S., Choi, S., Marvasti-Zadeh, S.M., Zhao, S., Kasaei, S., Qiu, S., Chen, S., Schön, T.B., Xu, T., Lu, W., Hu, W., Zhou, W., Qiu, X., Ke, X., Wu, X.-J., Zhang, X., Yang, X., Zhu, X., Jiang, Y., Wang, Y., Chen, Y., Ye, Y., Li, Y., Yao, Y., Lee, Y., Gu, Y., Wang, Z., Tang, Z., Feng, Z.-H., Mai, Z., Zhang, Z., Wu, Z., and Ma, Z.
- Abstract
The Visual Object Tracking challenge VOT2020 is the eighth annual tracker benchmarking activity organized by the VOT initiative. Results of 58 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The VOT2020 challenge was composed of five sub-challenges focusing on different tracking domains: (i) the VOT-ST2020 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2020 challenge focused on “real-time” short-term tracking in RGB, (iii) VOT-LT2020 focused on long-term tracking, namely coping with target disappearance and reappearance, (iv) the VOT-RGBT2020 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2020 challenge focused on long-term tracking in RGB and depth imagery. Only the VOT-ST2020 datasets were refreshed. A significant novelty is the introduction of a new VOT short-term tracking evaluation methodology and the introduction of segmentation ground truth in the VOT-ST2020 challenge; bounding boxes will no longer be used in the VOT-ST challenges. A new VOT Python toolkit that implements all these novelties was introduced. The performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net). (An illustrative mask-overlap sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
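The abstract of result 4 above notes that segmentation masks replace bounding boxes as ground truth in the VOT-ST2020 challenge. Below is a minimal sketch of the mask-overlap (intersection-over-union) measure that such a segmentation-based evaluation builds on; it is illustrative only and is not code from the official VOT Python toolkit linked above.

    import numpy as np

    def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
        """Intersection-over-union of two boolean masks of the same shape."""
        pred = pred_mask.astype(bool)
        gt = gt_mask.astype(bool)
        union = np.logical_or(pred, gt).sum()
        if union == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        intersection = np.logical_and(pred, gt).sum()
        return float(intersection / union)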
5. The Seventh Visual Object Tracking VOT2019 Challenge Results
- Author
Kristan, M., Matas, J., Leonardis, A., Felsberg, M., Pflugfelder, R., Kämäräinen, J.-K., Čehovin Zajc, L., Drbohlav, O., Lukežič, A., Berg, A., Eldesokey, A., Käpylä, J., Fernández, G., Gonzalez-Garcia, A., Memarmoghadam, A., Lu, A., He, A., Varfolomieiev, A., Chan, A., Shekhar Tripathi, A., Smeulders, A., Suraj Pedasingu, B., Chen, B.X., Zhang, B., Wu, B., Li, B., He, B., Yan, B., Bai, B., Kim, B.H., Ma, C., Fang, C., Qian, C., Chen, C., Li, C., Zhang, C., Tsai, C.-Y., Luo, C., Micheloni, C., Tao, D., Gupta, D., Song, D., Wang, D., Gavves, E., Yi, E., Khan, F.S., Zhang, F., Wang, F., Zhao, F., De Ath, G., Bhat, G., Chen, G., Wang, G., Li, G., Cevikalp, H., Du, H., Zhao, H., Saribas, H., Jung, H.M., Bai, H., Hu, H., Peng, H., Lu, H., Li, H., Li, J., Fu, J., Chen, J., Gao, J., Zhao, J., Tang, J., Wu, J., Liu, J., Wang, J., Qi, J., Zhang, J., Tsotsos, J.K., Lee, J.H., van de Weijer, J., Kittler, J., Zhuang, J., Zhang, K., Wang, K., Dai, K., Chen, L., Liu, L., Guo, L., Zhang, L., Wang, L., Zhou, L., Zheng, L., Rout, L., Van Gool, L., Bertinetto, L., Danelljan, M., Dunnhofer, M., Ni, M., Kim, M.Y., Tang, M., Yang, M.-H., Paluru, N., Martinel, N., Xu, P., Zhang, P., Zheng, P., Torr, P.H.S., Zhang, Q., Wang, Q., Guo, Q., Timofte, R., Gorthi, R.K., Everson, R., Han, R., Zhang, R., You, S., Zhao, S.-C., Zhao, S., Li, S., Ge, S., Bai, S., Guan, S., Xing, T., Xu, T., Yang, T., Zhang, T., Vojíř, T., Feng, W., Hu, W., Wang, W., Tang, W., Zeng, W., Liu, W., Chen, X., Qiu, X., Bai, X., Wu, X.-J., Yang, X., Li, X., Sun, X., Tian, X., Tang, X., Zhu, X.-F., Huang, Y., Chen, Y., Lian, Y., Gu, Y., Liu, Y., Zhang, Y., Xu, Y., Wang, Y., Li, Y., Zhou, Y., Dong, Y., Wang, Z., Luo, Z., Zhang, Z., Feng, Z.-H., He, Z., Song, Z., Chen, Z., Wu, Z., Xiong, Z., Huang, Z., Teng, Z., Ni, Z., and Intelligent Sensory Information Systems (IVI, FNWI)
- Subjects
- Source code, Computer science, Object tracking, Performance evaluation, VOT challenge, Visualization, Robustness (computer science), Video tracking, RGB color model, Computer vision, Artificial intelligence, Computer Vision and Robotics (Autonomous Systems)
- Abstract
The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, (iii) VOT-LT2019 focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking and tracking with multi-channel imagery. The performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website. Funding Agencies: Slovenian Research Agency - Slovenia [J2-8175, P2-0214, P2-0094]; Czech Science Foundation Project GACR [P103/12/G084]; MURI project funded by MoD/Dstl; Engineering & Physical Sciences Research Council (EPSRC) [EP/N019415/1]; WASP; VR (ELLIIT, LAST, and NCNN); SSF (SymbiCloud); AIT Strategic Research Programme; Faculty of Computer Science, University of Ljubljana, Slovenia. (A simplified accuracy/robustness sketch follows this record.)
- Published
- 2019
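The short-term tracking analysis summarized in the abstract of result 5 above assesses trackers both on how well their predictions overlap the target and on how often they fail. The sketch below gives a simplified accuracy/robustness-style summary over a sequence of per-frame overlaps; the failure threshold and the naming are assumptions, and the official protocol is the one implemented in the VOT toolkit, not this code.

    def summarize_sequence(overlaps, failure_threshold=0.0):
        """Simplified accuracy/robustness-style summary: frames whose overlap
        is at or below the threshold count as failures and are excluded from
        the mean-overlap (accuracy) computation."""
        failures = sum(1 for o in overlaps if o <= failure_threshold)
        valid = [o for o in overlaps if o > failure_threshold]
        accuracy = sum(valid) / len(valid) if valid else 0.0
        return accuracy, failures

    # Example: summarize_sequence([0.8, 0.7, 0.0, 0.6]) returns roughly (0.7, 1)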
6. Automatic deep learning based quality assessment of transperineal ultrasound guided prostate radiotherapy
- Author
Camps, S.M., Houben, T., Carneiro, G., Edwards, C., Antico, M., Dunnhofer, M., Martens, E.G.H.J., Baeza, J.A., Vanneste, B.G.L., van Limbergen, E.J., de With, P.H.N., Verhaegen, F., and Fontanarosa, D.
- Abstract
Ultrasound (US) is one of the imaging modalities that can be used for image‐guided radiotherapy (RT) workflows of prostate cancer patients. It allows real‐time volumetric tracking during the course of the RT treatment, which could potentially improve the precision of radiation dose delivery. However, intra‐fraction motion management using US image guidance is not yet widespread. This can be partially attributed to the need for image interpretation by a trained operator during or after US image acquisition.
- Published
- 2019
7. Visual Object Tracking in First Person Vision.
- Author
Dunnhofer M, Furnari A, Farinella GM, and Micheloni C
- Abstract
The understanding of human-object interactions is fundamental in First Person Vision (FPV). Visual tracking algorithms which follow the objects manipulated by the camera wearer can provide useful information to effectively model such interactions. In recent years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used "off-the-shelf" or whether more domain-specific investigations should be carried out. This paper aims to provide answers to such questions. We present the first systematic investigation of single object tracking in FPV. Our study extensively analyses the performance of 42 algorithms including generic object trackers and baseline FPV-specific trackers. The analysis is carried out by focusing on different aspects of the FPV setting, introducing new performance measures, and in relation to FPV-specific tasks. The study is made possible through the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behavior and point out possible research directions. Despite their difficulties, we prove that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated. Supplementary Information: The online version contains supplementary material available at 10.1007/s11263-022-01694-6. (© The Author(s) 2022.) (A generic one-pass evaluation sketch follows this record.)
- Published
- 2023
- Full Text
- View/download PDF
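The abstract of result 7 above benchmarks 42 tracking algorithms on the densely annotated sequences of TREK-150. As a minimal illustration of how a single-object tracker is typically run over an annotated sequence, the sketch below performs a one-pass evaluation with a bounding-box overlap measure; the Tracker interface (init/update), the (x, y, w, h) box format and the 0.5 success threshold are assumptions rather than the protocol of the paper.

    def box_iou(a, b):
        """Overlap of two axis-aligned boxes given as (x, y, w, h)."""
        ax2, ay2 = a[0] + a[2], a[1] + a[3]
        bx2, by2 = b[0] + b[2], b[1] + b[3]
        iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
        ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
        inter = iw * ih
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    def one_pass_evaluation(tracker, frames, annotations, threshold=0.5):
        """Initialize the tracker on the first frame, track the remaining ones
        and report the fraction of frames whose overlap exceeds the threshold."""
        tracker.init(frames[0], annotations[0])
        overlaps = [box_iou(tracker.update(frame), gt_box)
                    for frame, gt_box in zip(frames[1:], annotations[1:])]
        success_rate = sum(o >= threshold for o in overlaps) / len(overlaps)
        return success_rate, overlaps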
8. Deep convolutional feature details for better knee disorder diagnoses in magnetic resonance images.
- Author
Dunnhofer M, Martinel N, and Micheloni C
- Subjects
- Magnetic Resonance Imaging, Neural Networks, Computer
- Abstract
Convolutional neural networks (CNNs) applied to magnetic resonance imaging (MRI) have demonstrated their ability in the automatic diagnosis of knee injuries. Despite the promising results, the currently available solutions do not take into account the particular anatomy of knee disorders. Existing works have shown that injuries are localized in small-sized knee regions near the center of MRI scans. Based on such insights, we propose MRPyrNet, a CNN architecture capable of extracting more relevant features from these regions. Our solution is composed of a Feature Pyramid Network with Pyramidal Detail Pooling, and can be plugged into any existing CNN-based diagnostic pipeline. The first module aims to enhance the CNN intermediate features to better detect the small-sized appearance of disorders, while the second one captures such kind of evidence by maintaining its detailed information. An extensive evaluation campaign is conducted to understand in depth the potential of the proposed solution. The experimental results demonstrate that the application of MRPyrNet to baseline methodologies improves their diagnostic capability, especially in the case of anterior cruciate ligament tear and meniscal tear, because of MRPyrNet's ability to exploit the relevant appearance features of such disorders. Code is available at https://github.com/matteo-dunnhofer/MRPyrNet. Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (Copyright © 2022 Elsevier Ltd. All rights reserved.) (An illustrative feature-pyramid sketch follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
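The abstract of result 8 above describes MRPyrNet as a Feature Pyramid Network combined with Pyramidal Detail Pooling that can be plugged into an existing CNN-based diagnostic pipeline. The PyTorch sketch below shows a generic FPN-style module that merges multi-scale backbone features in a top-down fashion; it is illustrative only, the channel sizes and layer choices are assumptions, and the authors' actual implementation is the one published at https://github.com/matteo-dunnhofer/MRPyrNet.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyFeaturePyramid(nn.Module):
        """Generic FPN-style fusion of multi-scale feature maps."""

        def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
            super().__init__()
            # 1x1 lateral convolutions project each level to a common width
            self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
            # 3x3 convolutions smooth the merged maps
            self.smooth = nn.ModuleList(
                nn.Conv2d(out_channels, out_channels, 3, padding=1) for _ in in_channels)

        def forward(self, features):
            # features: maps ordered from high to low spatial resolution
            laterals = [lat(f) for lat, f in zip(self.lateral, features)]
            # top-down pathway: upsample the coarser map and add it to the finer one
            for i in range(len(laterals) - 1, 0, -1):
                laterals[i - 1] = laterals[i - 1] + F.interpolate(
                    laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
            return [sm(lat) for sm, lat in zip(self.smooth, laterals)]

    # Example with dummy feature maps:
    # feats = [torch.randn(1, 256, 64, 64), torch.randn(1, 512, 32, 32), torch.randn(1, 1024, 16, 16)]
    # TinyFeaturePyramid()(feats) yields three maps with 256 channels each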
9. Siam-U-Net: encoder-decoder siamese network for knee cartilage tracking in ultrasound images.
- Author
Dunnhofer M, Antico M, Sasazawa F, Takeda Y, Camps S, Martinel N, Micheloni C, Carneiro G, and Fontanarosa D
- Subjects
- Arthroscopy, Deep Learning, Female, Healthy Volunteers, Humans, Imaging, Three-Dimensional, Male, Cartilage, Articular diagnostic imaging, Image Processing, Computer-Assisted methods, Knee Joint diagnostic imaging, Neural Networks, Computer, Ultrasonography, Interventional methods
- Abstract
The tracking of the knee femoral condyle cartilage during ultrasound-guided minimally invasive procedures is important to avoid damaging this structure during such interventions. In this study, we propose a new deep learning method to track, accurately and efficiently, the femoral condyle cartilage in ultrasound sequences, which were acquired under several clinical conditions mimicking realistic surgical setups. Our solution, which we name Siam-U-Net, requires minimal user initialization and combines a deep learning segmentation method with a siamese framework for tracking the cartilage in temporal and spatio-temporal sequences of 2D ultrasound images. Through extensive performance validation based on the Dice Similarity Coefficient, we demonstrate that our algorithm is able to track the femoral condyle cartilage with an accuracy comparable to that of experienced surgeons. It is additionally shown that the proposed method outperforms state-of-the-art segmentation models and trackers in the localization of the cartilage. We claim that the proposed solution has the potential for ultrasound guidance in minimally invasive knee procedures. (Crown Copyright © 2019. Published by Elsevier B.V. All rights reserved.) (A generic Dice coefficient sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
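The abstract of result 9 above validates tracking quality with the Dice Similarity Coefficient. The sketch below gives the generic definition of that coefficient for a pair of binary masks; it is not the authors' validation code, and the boolean NumPy mask representation is an assumption.

    import numpy as np

    def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
        """Dice Similarity Coefficient between two boolean masks of equal shape."""
        pred = pred_mask.astype(bool)
        gt = gt_mask.astype(bool)
        total = pred.sum() + gt.sum()
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        intersection = np.logical_and(pred, gt).sum()
        return float(2.0 * intersection / total)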