Lorenceau J, Benmussa F, Paradis A-L, 2011, "OB3D: A 3D object database for studying visuo-cognitive functions" Perception 40 ECVP Abstract Supplement, page 64
OB3D: A 3D object database for studying visuo-cognitive functions
J Lorenceau, F Benmussa, A-L Paradis
We shall present a database of laser-scanned natural objects (toys and small objects), available online (http://ob3d.risc.cnrs.fr/), for studying visuo-cognitive processes in healthy people and patients. Objects are versatile 3D clouds of X, Y, Z coordinates (and normals) allowing multiple transformations, including motions (zooming, rotation, translation, deformation), mixing, morphing, partial viewing, scrambling, etc. These 3D clouds can easily be imported into dedicated software for texturing, inclusion in virtual environments, etc. The objects can be downloaded at the cost of providing feedback (new or derived objects, data, articles, etc.) that will be included as meta-data with each object. The aim of the OB3D project is thus to provide researchers not only with stimuli but also with a large data set from different disciplines (eg psychology, neurology, psychiatry, physiology, psycholinguistics) using different methodologies (eg psychophysics, imaging techniques, electrophysiology, modeling) in different populations (children, adults, elderly, humans, animals, artifacts) to address issues related to object processing, categorization, recognition, identification, form/motion interactions, etc. We shall present demonstrations of the way these objects were used in MEG/fMRI experiments together with frequency-tagging protocols (see also Benmussa et al, this ECVP Supplement). Although the database is still modest, we are willing, depending on demand and expressions of interest, to develop it further.
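As a minimal sketch of the kind of transformation the point-cloud representation affords, the following Python snippet rotates a cloud of points-with-normals about the Z axis. The `(x, y, z, nx, ny, nz)` tuple layout is an assumption for illustration, not the OB3D file format; the key point is that positions and normals transform with the same rotation matrix, while normals ignore any translation.

```python
import math

def rotate_z(points, angle):
    """Rotate a cloud of (x, y, z, nx, ny, nz) tuples about the Z axis.

    Both the position and the normal are multiplied by the same
    2D rotation in the XY plane; z components are unchanged.
    (Tuple layout is a hypothetical illustration, not the OB3D format.)
    """
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for x, y, z, nx, ny, nz in points:
        out.append((c * x - s * y, s * x + c * y, z,
                    c * nx - s * ny, s * nx + c * ny, nz))
    return out

# One point on the +X axis, normal also pointing along +X.
cloud = [(1.0, 0.0, 0.0, 1.0, 0.0, 0.0)]
rotated = rotate_z(cloud, math.pi / 2)  # quarter turn maps +X onto +Y
```

Translation, zooming (uniform scaling), and scrambling follow the same per-point pattern, which is why a raw coordinate cloud is such a flexible stimulus format.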
[Publisher's note: The abstracts in this year's ECVP supplement have been published with virtually no copy editing by Pion, thus the standards of grammar and style may not match those of regular Perception articles.]