1. 2018 Robotic Scene Segmentation Challenge
- Authors
Allan, Max, Kondo, Satoshi, Bodenstedt, Sebastian, Leger, Stefan, Kadkhodamohammadi, Rahim, Luengo, Imanol, Fuentes, Felix, Flouty, Evangello, Mohammed, Ahmed, Pedersen, Marius, Kori, Avinash, Alex, Varghese, Krishnamurthi, Ganapathy, Rauber, David, Mendel, Robert, Palm, Christoph, Bano, Sophia, Saibro, Guinther, Shih, Chi-Sheng, Chiang, Hsun-An, Zhuang, Juntang, Yang, Junlin, Iglovikov, Vladimir, Dobrenkii, Anton, Reddiboina, Madhu, Reddy, Anubhav, Liu, Xingtong, Gao, Cong, Unberath, Mathias, Kim, Myeonghyeon, Kim, Chanho, Kim, Chaewon, Kim, Hyejin, Lee, Gyeongmin, Ullah, Ihsan, Luna, Miguel, Park, Sang Hyun, Azizian, Mahdi, Stoyanov, Danail, Maier-Hein, Lena, and Speidel, Stefanie
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Robotics
- Abstract
In 2015, we began a sub-challenge at the EndoVis workshop at MICCAI in Munich using endoscope images of ex-vivo tissue with automatically generated annotations from robot forward kinematics and instrument CAD models. However, the limited background variation and simple motion rendered the dataset uninformative for learning which techniques would be suitable for segmentation in real surgery. In 2017, at the same workshop in Quebec, we introduced the robotic instrument segmentation dataset, with 10 teams participating in the challenge to perform binary, articulating-parts, and type segmentation of da Vinci instruments. This challenge included realistic instrument motion and more complex porcine tissue as background, and was widely addressed with modifications of U-Nets and other popular CNN architectures. In 2018, we added to the complexity by introducing a set of anatomical objects and medical devices to the segmented classes. To avoid over-complicating the challenge, we continued with porcine data, which is dramatically simpler than human tissue due to the lack of fatty tissue occluding many organs.
- Published
- 2020