Towards 3D Colored Mesh Saliency: Database and Benchmarks

Xiaoying Ding1,2 Zhao Chen2 Weisi Lin3 Zhenzhong Chen*2
1School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan, China
2School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
3School of Computer Science and Engineering, Nanyang Technological University, Singapore
Teaser
We conducted an eye-tracking experiment and collected eye-movement data to construct a novel 3D colored mesh saliency database (3DCMS). This database can help investigate human visual behavior towards color features on 3D surfaces and can also serve as ground truth for evaluating the performance of different 3D colored mesh saliency detection algorithms.

Abstract

While saliency detection for 3D meshes has been extensively studied over the past decades, little work considers color information, and most existing 3D mesh saliency databases are collected using meshes without color. The lack of a publicly available 3D colored mesh saliency database hinders research progress in 3D colored mesh saliency detection. In this article, we establish a novel 3D colored mesh saliency database (3DCMS) based on an eye-tracking experiment and investigate subjects’ visual attention behavior towards 3D colored meshes. Based on these investigations, we propose a novel 3D colored mesh saliency detection framework that takes both color and geometric features into consideration. To evaluate the performance of the proposed algorithm, we compare it with several relevant methods and apply it to the 3D mesh simplification task. The quantitative and qualitative evaluation results demonstrate the superior performance of the proposed framework. The proposed 3DCMS database will be made publicly available.

Paper & Data

Paper
Original colored 3D models - 17 high-quality 3D colored meshes.
Fixations - 204 files (MAT format), covering 17 3D colored models under 12 viewing orientations each.
Code - Coming soon.
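As a rough sketch of how one might index the fixation data before loading it: the database holds one MAT file per (model, orientation) pair, i.e. 17 × 12 = 204 files. The file naming scheme below is a placeholder assumption (the actual naming is not documented on this page); each real file could then be read with `scipy.io.loadmat`.

```python
from itertools import product

# 3DCMS fixation data: 17 colored meshes x 12 viewing orientations
# = 204 MAT files, per the database description above.
NUM_MODELS = 17
NUM_ORIENTATIONS = 12

def fixation_file_names(num_models=NUM_MODELS, num_orients=NUM_ORIENTATIONS):
    """Enumerate placeholder file names, one per (model, orientation) pair.

    The 'modelXX_viewYY.mat' pattern is hypothetical -- substitute the
    database's actual naming convention once the files are downloaded.
    """
    return [
        f"model{m:02d}_view{v:02d}.mat"
        for m, v in product(range(1, num_models + 1),
                            range(1, num_orients + 1))
    ]

names = fixation_file_names()
print(len(names))  # 204, matching the number of fixation files
```

Each enumerated file could then be loaded with `scipy.io.loadmat(name)`, which returns a dict of the MAT variables; the variable names inside the files are not specified here and would need to be inspected after download.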

Acknowledgements

We are deeply grateful to Anass Nouri, Christophe Charrier and Olivier Lézoray for providing high-quality 3D colored models. Moreover, we would like to express our sincere appreciation to all subjects for their active involvement in the experiment.
This work was supported in part by the Natural Science Foundation of Hubei Province, China under Grant 2022CFB984, in part by the China Postdoctoral Science Foundation under Grant 2022M713503, in part by the National Natural Science Foundation of China under Grant 62036005, and in part by the Fundamental Research Funds for the Central Universities, Zhongnan University of Economics and Law.