FruitBin: a tunable large-scale dataset for advancing 6D pose estimation in fruit bin-picking automation
Abstract
Bin picking, essential in various industries, depends on accurate object segmentation and 6D pose estimation for successful grasping and manipulation. Existing datasets for deep learning methods often involve simple scenarios with single objects or minimal clutter, reducing their effectiveness for benchmarking bin-picking scenarios. To address this, we introduce FruitBin, a dataset featuring over 1 million images and 40 million 6D poses in challenging fruit bin scenarios. FruitBin encompasses all the main challenges, such as symmetric and asymmetric fruits, textured and non-textured objects, and varied lighting conditions. We demonstrate its versatility by creating customizable benchmarks for novel-scene and novel-camera-viewpoint generalization, each divided into four occlusion levels to study occlusion robustness. Evaluating three 6D pose estimation models (PVNet, DenseFusion, and GDRNPP) highlights the limitations of current state-of-the-art models and quantitatively shows the impact of occlusion. Additionally, FruitBin is integrated within a robotic software framework, enabling direct testing and benchmarking of vision models for robot learning and grasping. The associated code and dataset can be found at: https://gitlab.liris.cnrs.fr/gduret/fruitbin.
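As a rough illustration of how benchmark splits divided by occlusion level could be derived, the minimal Python sketch below groups per-object pose annotations into four occlusion bins. The field names (`occlusion_rate`, `image_id`, `fruit`) and the bin boundaries are illustrative assumptions, not the dataset's actual schema or thresholds.

```python
# Hypothetical sketch: partitioning pose annotations into occlusion-level
# benchmark splits. Field names and bin boundaries are assumptions for
# illustration only; they do not reflect FruitBin's real schema.
from typing import Dict, List

# Illustrative occlusion bins (fraction of the object hidden).
OCCLUSION_BINS = {
    "low":       (0.0, 0.3),
    "medium":    (0.3, 0.5),
    "high":      (0.5, 0.7),
    "very_high": (0.7, 1.0),
}


def split_by_occlusion(annotations: List[dict]) -> Dict[str, List[dict]]:
    """Group 6D pose annotations by occlusion level for benchmarking."""
    splits = {name: [] for name in OCCLUSION_BINS}
    for ann in annotations:
        rate = ann["occlusion_rate"]
        for name, (lo, hi) in OCCLUSION_BINS.items():
            # Include the upper bound only in the last bin so rate == 1.0 is kept.
            if lo <= rate < hi or (name == "very_high" and rate == 1.0):
                splits[name].append(ann)
                break
    return splits


if __name__ == "__main__":
    demo = [
        {"image_id": 0, "fruit": "apple", "occlusion_rate": 0.10},
        {"image_id": 1, "fruit": "pear",  "occlusion_rate": 0.65},
    ]
    for level, anns in split_by_occlusion(demo).items():
        print(level, [a["image_id"] for a in anns])
```

In practice, such splits would be built from the dataset's own occlusion annotations; the sketch only shows the binning logic.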