Pandora Dataset

Head and Shoulder Pose Estimation

The dataset

Pandora has been specifically created for head center localization, head pose and shoulder pose estimation, and is inspired by the automotive context. A fixed frontal device acquires the upper body of the subjects, simulating the point of view of a camera placed inside the dashboard. Subjects also perform driving-like actions, such as grasping the steering wheel, looking at the rear-view or lateral mirrors, shifting gears and so on.

Shoulder and Head angles

In addition to the head pose annotation, Pandora contains the ground truth data of the shoulder pose, expressed as yaw, pitch and roll. Subjects perform wide head (70° roll, 100° pitch and 125° yaw) and shoulder (70° roll, 60° pitch and 60° yaw) movements.

Challenging camouflage

Garments as well as various objects are worn or used by the subjects to create head and/or shoulder occlusions. For example, people wear prescription glasses, sun glasses, scarves, caps, and manipulate smartphones, tablets or plastic bottles.



Pandora features



Deep-learning oriented: Pandora contains more than 250k full-resolution RGB (1920x1080 pixels) and depth (512x424 pixels) images with the corresponding annotations: 110 annotated sequences acquired from 10 male and 12 female actors, each recorded five times.

Time-of-Flight data: a Microsoft Kinect One device is used to acquire the depth data, with better quality than datasets created with the first Kinect version, as reported in the paper.

Data: each frame of the dataset is composed of the RGB appearance image, the corresponding depth map, and the 3D coordinates of the upper-body skeleton joints, including the head center and the shoulder positions. For convenience, the 2D coordinates of the joints on both the color and depth frames are provided, as well as the head and shoulder pose angles with respect to the camera reference frame. Shoulder angles are obtained by converting the corresponding rotation matrix to Euler angles.
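For reference, a minimal Python sketch of such a rotation-matrix-to-Euler conversion is shown below. The decomposition order (yaw about the vertical axis, then pitch, then roll, i.e. a ZYX convention) and the function name are assumptions made only for illustration; the exact convention used to annotate Pandora is the one described in the paper.

import numpy as np

def rotation_matrix_to_euler(R):
    """Decompose a 3x3 rotation matrix into (yaw, pitch, roll), in degrees.

    Assumes R = Rz(yaw) @ Ry(pitch) @ Rx(roll); this ZYX convention is an
    illustrative assumption, not necessarily the one used for Pandora.
    """
    # Clamp to [-1, 1] to guard arcsin against numerical drift.
    pitch = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    if np.isclose(np.cos(pitch), 0.0):
        # Gimbal lock (pitch = +/-90 deg): yaw and roll are coupled,
        # so roll is conventionally fixed to zero.
        roll = 0.0
        yaw = np.arctan2(-R[0, 1], R[1, 1])
    else:
        roll = np.arctan2(R[2, 1], R[2, 2])
        yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([yaw, pitch, roll])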

Click here to see the readme file about the Pandora data.

Accepted at CVPR 2017!


The paper "POSEidon: Face-from-Depth for Driver Pose Estimation " has been accepted in CVPR 2017, that will take place at the Hawaii Convention Center from July 21 to July 26, 2017 in Honolulu, Hawaii.

DOWNLOAD

To download the dataset, we ask you to complete the form below. This will help us keep in touch in case errors are found or updates become available.
The dataset and additional material (about 300 GB) are provided through Google Drive.


By submitting the form below, you agree to the following statement:


You are hereby given permission to copy this data in electronic or hardcopy form for your own scientific use and to distribute it for scientific use to colleagues within your research group. Inclusion of rendered images or video made from this data in a scholarly publication (printed or electronic) is also permitted. In this case, credit must be given to the publication. However, the data may not be included in the electronic version of a publication, nor placed on the Internet. These restrictions apply to any representations (other than images or video) derived from the data, including but not limited to simplifications, remeshing, and the fitting of smooth surfaces. The making of physical replicas of this data is prohibited, and the data may not be distributed to students in connection with a class. For any other use, including distribution outside your research group, written permission is required. Any commercial use of the data is prohibited. Commercial use includes but is not limited to sale of the data, derivatives, replicas, images, or video, inclusion in a product for sale, or inclusion in advertisements (printed or electronic), on commercially-oriented web sites, or in trade shows.

An email will be sent to you with instructions on how to get the dataset.

New! We now also release pre-processed Pandora data. In particular, you can download the following subsets:

    1. Cropped faces (depth 8-bit images, 100x100 pixels)
    2. Cropped faces (depth 16-bit images, 100x100 pixels)
    3. Cropped faces (gray-level images, 100x100 pixels)
    4. Cropped faces (RGB images, 100x100 pixels)
    5. Cropped faces (Optical Flow images, 100x100 pixels)
    6. Cropped shoulders (depth 8-bit images, 100x100 pixels)
    7. Original depth images (depth 8-bit images, 512x424 pixels)
    8. Cropped faces of Biwi dataset (depth 8-bit images, 100x100 pixels)
    9. Cropped faces of Biwi dataset (RGB images, 100x100 pixels)
  10. Cropped faces of Biwi dataset (Optical Flow images, 100x100 pixels)

Every subset contains 100 folders (20 subjects x 5 sequences each; 2 subjects have been discarded).
In each directory you will find the file angles.txt with the dataset annotations.
Faces are cropped through the steps reported in the CVPR paper and resized to 100x100 pixels.
We hope these subsets will be helpful!
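As a rough illustration of how one of these subsets can be traversed, here is a minimal Python sketch. The folder layout follows the description above, while the subset folder name (pandora_faces_depth8), the image extension (.png) and the column layout of angles.txt are assumptions made only for this example.

import glob
import os

import cv2  # OpenCV, used only to read the 100x100 crops
import numpy as np

SUBSET_ROOT = "pandora_faces_depth8"  # hypothetical path to one downloaded subset

for seq_dir in sorted(glob.glob(os.path.join(SUBSET_ROOT, "*"))):
    if not os.path.isdir(seq_dir):
        continue
    # angles.txt holds the pose annotations of the sequence; a plain
    # whitespace-separated numeric layout, one row per frame, is assumed here.
    angles = np.atleast_2d(np.loadtxt(os.path.join(seq_dir, "angles.txt")))
    crops = sorted(glob.glob(os.path.join(seq_dir, "*.png")))
    print(f"{os.path.basename(seq_dir)}: {len(crops)} crops, "
          f"{angles.shape[0]} annotation rows")

    if crops:
        # IMREAD_UNCHANGED preserves the original bit depth (8- or 16-bit).
        face = cv2.imread(crops[0], cv2.IMREAD_UNCHANGED)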






POSEidon Framework


Click here to view details about POSEidon: Face-from-Depth for Driver Pose Estimation.

[Work in progress]


THANKS FOR YOUR SUPPORT




Contact Information

   Dipartimento di Ingegneria Enzo Ferrari (DIEF)
   Via Pietro Vivarelli, 10
   guido.borghi[at]unimore.it

Acknowledgments

Special thanks to: Francesca, Chiara, Claretta, Sara, Silvia, Rebecca, Giorgia, Patrizia, Marcella, Chiara, Silvia, Giulia, Riccardo, Elia, Lorenzo, Niccolo, Andrea, Roberto, Fabrizio and Federico

Citations

We believe in open research and we are happy if you find this data useful.
If you use it, please cite our work.

@inproceedings{borghi2017poseidon,
  title={{POSEidon}: Face-from-Depth for Driver Pose Estimation},
  author={Borghi, Guido and Venturelli, Marco and Vezzani, Roberto and Cucchiara, Rita},
  booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={5494--5503},
  year={2017},
  organization={IEEE}
}
							

Created by AImagelab at the University of Modena and Reggio Emilia, Italy.