
FoodSAM: Any Food Segmentation

Authors :
Lan, Xing
Lyu, Jiayi
Jiang, Hanyu
Dong, Kun
Niu, Zehai
Zhang, Yi
Xue, Jian
Publication Year :
2023

Abstract

In this paper, we explore the zero-shot capability of the Segment Anything Model (SAM) for food image segmentation. To address the lack of class-specific information in SAM-generated masks, we propose a novel framework called FoodSAM. This approach integrates the coarse semantic mask with SAM-generated masks to enhance semantic segmentation quality. Moreover, since the ingredients in food can be treated as independent individuals, we also perform instance segmentation on food images. Furthermore, FoodSAM extends its zero-shot capability to panoptic segmentation by incorporating an object detector, enabling FoodSAM to effectively capture non-food object information. Drawing inspiration from the recent success of promptable segmentation, we also extend FoodSAM to promptable segmentation, supporting various prompt variants. Consequently, FoodSAM emerges as an all-encompassing solution capable of segmenting food items at multiple levels of granularity. Remarkably, this pioneering framework is the first work to achieve instance, panoptic, and promptable segmentation on food images. Extensive experiments demonstrate the feasibility and impressive performance of FoodSAM, validating SAM's potential as a prominent and influential tool within the domain of food image segmentation. We release our code at https://github.com/jamesjg/FoodSAM.

Comment: Code is available at https://github.com/jamesjg/FoodSAM
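The abstract describes integrating a coarse semantic mask with class-agnostic SAM-generated masks to add category information. One plausible way to realize this (a minimal sketch, not the authors' exact method; the function name `refine_semantic_with_sam` and the majority-vote rule are assumptions for illustration) is to assign each SAM binary mask the majority class it covers in the coarse semantic mask:

```python
import numpy as np

def refine_semantic_with_sam(semantic_mask, sam_masks):
    """Hypothetical sketch: label each class-agnostic SAM mask with the
    majority class it overlaps in the coarse semantic mask.

    semantic_mask: (H, W) int array of per-pixel class ids.
    sam_masks: iterable of (H, W) boolean arrays from SAM.
    Returns a refined (H, W) int array with sharper region boundaries.
    """
    refined = semantic_mask.copy()
    for mask in sam_masks:
        labels = semantic_mask[mask]          # classes under this SAM region
        if labels.size == 0:
            continue
        majority = np.bincount(labels).argmax()  # most frequent class id
        refined[mask] = majority              # paint the whole region uniformly
    return refined
```

Under this scheme, SAM supplies precise object boundaries while the coarse semantic model supplies the food-category labels, which also makes instance-level output natural: each labeled SAM mask is one instance.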

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.05938
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/TMM.2023.3330047