HICO & HICO-DET
Benchmarks for Recognizing Human-Object Interactions in Images


Introduction


We introduce two new benchmarks for classifying and detecting human-object interactions (HOI) in images:

  1. HICO (Humans Interacting with Common Objects)
  2. HICO-DET

Key features:

  1. A diverse set of interactions with common object categories
  2. A list of well-defined, sense-based HOI categories (see the sketch after this list)
  3. An exhaustive labeling of co-occurring interactions with an object category in each image
  4. The annotation of each HOI instance (i.e., a human bounding box and an object bounding box joined by an interaction class label) in all images
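
To make item 2 concrete, each HOI category pairs an action (a verb in a specific sense) with an object category. The following minimal Python sketch is illustrative only; the four pairs mirror the sample annotations shown further down this page.

    # Illustrative only: an HOI category is a (verb sense, object) pair.
    hoi_categories = [
        ("ride", "horse"),
        ("feed", "horse"),
        ("eat", "apple"),
        ("cut", "apple"),
    ]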

Tasks


Task 1: HOI Classification

The input is an image and the output is a set of binary labels, each representing the presence or absence of an HOI class, as sketched below.


[Figure: sample annotations in the HICO benchmark]
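
The following is a minimal Python sketch of this classification contract. The function `model` is a hypothetical stand-in for any scorer that maps an image to one confidence per HOI category, and the threshold is illustrative.

    import numpy as np

    NUM_HOI_CLASSES = 600  # HICO defines 600 HOI categories

    def classify_hoi(image, model, threshold=0.5):
        """Multi-label HOI classification: one binary decision per category."""
        scores = np.asarray(model(image))         # shape: (NUM_HOI_CLASSES,)
        assert scores.shape == (NUM_HOI_CLASSES,)
        return (scores >= threshold).astype(int)  # 1 = present, 0 = absent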

Task 2: HOI Detection

The input is an image and the output is a set of bounding box pairs, each of which localizes one human and one object and carries a predicted HOI class label (see the sketch after the sample annotations below).


[Figure: sample annotations in the HICO-DET benchmark (riding a horse, feeding a horse, eating an apple, cutting an apple)]
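
Each detection is thus a triplet of a human box, an object box, and an HOI class. The Python sketch below shows this output format together with a matching rule in the style of the paper's evaluation, where a detection counts as correct only if the class agrees and both boxes overlap the ground truth with IoU of at least 0.5; the type and function names are illustrative.

    from dataclasses import dataclass
    from typing import Tuple

    Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

    @dataclass
    class HOIDetection:
        """One HOI instance: a human box, an object box, a class, a score."""
        human_box: Box
        object_box: Box
        hoi_class: int
        score: float

    def iou(a: Box, b: Box) -> float:
        """Intersection over union of two axis-aligned boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def matches(det: HOIDetection, gt: HOIDetection, thresh: float = 0.5) -> bool:
        """True positive test: class agrees and both boxes pass the IoU test."""
        return (det.hoi_class == gt.hoi_class
                and min(iou(det.human_box, gt.human_box),
                        iou(det.object_box, gt.object_box)) >= thresh)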

Paper


Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, and Jia Deng.
HICO: A Benchmark for Recognizing Human-Object Interactions in Images.
Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.
[pdf] [supplementary material] [poster] [bibtex]


Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng.
Learning to Detect Human-Object Interactions.
arXiv preprint arXiv:1702.05448, 2017.
[pdf] [bibtex]

Dataset


HICO version 20150920 (7.5 GB)
Images and annotations for the HOI classification task.

HICO-DET version 20160224 (7.5 GB)
Images and annotations for the HOI detection task.

Note:

  1. HICO and HICO-DET share the same set of HOI categories.
  2. HICO-DET is a superset of HICO in terms of the image set.
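
As a starting point, here is a minimal Python sketch for inspecting the annotations, assuming they are distributed as MATLAB .mat files inside the archives; the file name below is an assumption for illustration, so check each archive for the actual file and field names.

    from scipy.io import loadmat

    # Hypothetical file name; inspect the downloaded archive for the
    # actual annotation files.
    anno = loadmat('anno.mat', squeeze_me=True)

    # List the available fields (MATLAB variables) to discover the layout.
    print(sorted(k for k in anno if not k.startswith('__')))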

Source Code


hico_benchmark
Source code for reproducing the empirical results in the ICCV 2015 paper.

ho-rcnn (coming soon)
Source code for reproducing the empirical results in the arXiv paper.

Contact


Send any comments or questions to Yu-Wei Chao: ywchao@umich.edu.


Last updated on 2017/02/19