Pandora Dataset
Head and Shoulder Pose Estimation
Pandora has been specifically created for head center localization, head pose and shoulder pose estimation, and is inspired by the automotive context. A fixed frontal device acquires the upper body of the subjects, simulating the point of view of a camera placed inside the dashboard. Subjects also perform driving-like actions, such as grasping the steering wheel, looking at the rear-view or side mirrors, shifting gears and so on.
In addition to the head pose annotation, Pandora contains the ground truth data of the shoulder pose expressed as yaw, pitch and roll. Subjects perform wide head (±70° roll, ±100° pitch and ±125° yaw) and shoulder (±70° roll, ±60° pitch and ±60° yaw) movements.
Garments and various objects are worn or used by the subjects to create head and/or shoulder occlusions. For example, people wear prescription glasses, sunglasses, scarves and caps, and handle smartphones, tablets or plastic bottles.
Deep-learning oriented: Pandora contains more than 250k full-resolution RGB (1920x1080 pixels) and depth (512x424 pixels) frames with the corresponding annotations: 110 annotated sequences acquired from 10 male and 12 female actors, each recorded five times.
Time-of-Flight data: a Microsoft Kinect One device is used to acquire the depth data, yielding better quality than datasets acquired with the first Kinect version, as reported in the paper.
Data: each frame of the dataset consists of the RGB appearance image, the corresponding depth map, and the 3D coordinates of the upper-body skeleton joints, including the head center and the shoulder positions. For convenience, the 2D coordinates of the joints on both the color and depth frames are also provided, as well as the head and shoulder pose angles with respect to the camera reference frame. Shoulder angles are obtained by converting the corresponding rotation matrix to Euler angles.
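As a rough illustration of that last step, the sketch below converts a 3x3 rotation matrix to yaw, pitch and roll angles. The axis convention used for the actual Pandora annotations is not specified here, so the ZYX (yaw-pitch-roll) ordering in this sketch is an assumption; it only relies on NumPy.

```python
import numpy as np

def rotation_matrix_to_euler(R):
    """Convert a 3x3 rotation matrix to (yaw, pitch, roll) in degrees.

    Assumes R = Rz(yaw) @ Ry(pitch) @ Rx(roll) (ZYX convention); the
    convention used for the Pandora annotations may differ.
    """
    # For the ZYX convention, R[2, 0] = -sin(pitch); clip to guard against
    # numerical drift outside [-1, 1].
    pitch = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    if np.isclose(np.cos(pitch), 0.0):
        # Gimbal lock: yaw and roll are coupled, so fix roll to zero.
        yaw = np.arctan2(-R[0, 1], R[1, 1])
        roll = 0.0
    else:
        yaw = np.arctan2(R[1, 0], R[0, 0])
        roll = np.arctan2(R[2, 1], R[2, 2])
    return np.degrees([yaw, pitch, roll])

# Example: a pure 30-degree yaw rotation should come back as (30, 0, 0).
theta = np.radians(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
print(rotation_matrix_to_euler(Rz))  # ~[30.  0.  0.]
```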
The paper "POSEidon: Face-from-Depth for Driver Pose Estimation " has been accepted in CVPR 2017, that will take place at the Hawaii Convention Center from July 21 to July 26, 2017 in Honolulu, Hawaii.
To download the dataset, please complete the form below with an email address; we will send the download link there. This also helps us keep in touch in case errors are found or updates become available.
The dataset download and additional material (about 300 GB) are provided through Google Drive.
By submitting the form below, you agree to the following statement:
New! We now also release pre-processed Pandora data. In particular, you can download the following sets:
1. Cropped faces (depth 8-bit images, 100x100 pixels)
2. Cropped faces (depth 16-bit images, 100x100 pixels)
3. Cropped faces (gray-level images, 100x100 pixels)
4. Cropped faces (RGB images, 100x100 pixels)
5. Cropped faces (Optical Flow images, 100x100 pixels)
6. Cropped shoulders (depth 8-bit images, 100x100 pixels)
7. Original depth images (depth 8-bit images, 512x424 pixels)
8. Cropped faces of Biwi dataset (depth 8-bit images, 100x100 pixels)
9. Cropped faces of Biwi dataset (RGB images, 100x100 pixels)
10. Cropped faces of Biwi dataset (Optical Flow images, 100x100 pixels)
Every subset contains 100 folders (20 subjects x 5 sequences each; 2 subjects have been discarded).
In each directory you will find the file angles.txt with the dataset annotations; a minimal loading sketch is given below.
Faces are cropped through the steps reported in the CVPR paper and resized to 100x100 pixels.
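The sketch below shows how one of these subsets might be read. The subset folder name, the image file extension, and the column layout of angles.txt are all assumptions (please inspect the files in your download); it only relies on NumPy and Pillow.

```python
import os
import numpy as np
from PIL import Image

# Hypothetical path to one downloaded subset, e.g. the 8-bit cropped faces.
ROOT = "pandora_cropped_faces_depth8"

for seq_dir in sorted(os.listdir(ROOT)):          # 100 sequence folders expected
    seq_path = os.path.join(ROOT, seq_dir)
    if not os.path.isdir(seq_path):
        continue

    # angles.txt holds the per-sequence annotations; the exact column layout
    # (e.g. frame id, roll, pitch, yaw) is an assumption -- inspect the file first.
    annotations = np.loadtxt(os.path.join(seq_path, "angles.txt"))

    for img_name in sorted(f for f in os.listdir(seq_path) if f.endswith(".png")):
        face = np.array(Image.open(os.path.join(seq_path, img_name)))
        assert face.shape[:2] == (100, 100)  # faces are already cropped and resized
        # ... pair `face` with the matching row of `annotations` here
```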
We hope these subsets are helpful!
Click here to view details about POSEidon: Face-from-Depth for Driver Pose Estimation.
[Work in progress]
Dipartimento di Ingegneria Enzo Ferrari (DIEF)
Via Pietro Vivarelli, 10
guido.borghi[at]unimore.it
Special thanks to: Francesca, Chiara, Claretta, Sara, Silvia, Rebecca, Giorgia, Patrizia, Marcella, Chiara, Silvia, Giulia, Riccardo, Elia, Lorenzo, Niccolo, Andrea, Roberto, Fabrizio and Federico
We believe in open research and we are happy if you find this data useful.
If you use it, please cite our work.
@inproceedings{borghi2017poseidon,
  title={{POSEidon}: Face-from-Depth for Driver Pose Estimation},
  author={Borghi, Guido and Venturelli, Marco and Vezzani, Roberto and Cucchiara, Rita},
  booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={5494--5503},
  year={2017},
  organization={IEEE}
}