ABOT 2.0
Robot Social Interaction Potential Database
❓ What is ABOT 2.0?
ABOT 2.0 is an updated and extended version of the original ABOT (Anthropomorphic roBOT) Database, developed by Phillips, Zhao, Ullman, and Malle (2018, HRI). While the original ABOT focused on categorizing the human-likeness of robot appearance through visual features, ABOT 2.0 shifts the emphasis toward a more functionally relevant question: how a robot's appearance shapes users' beliefs about its social interaction capabilities.
The new version includes 300 robot images: the original 251 robots from ABOT (pre-2018) and 49 newly collected robots (mostly 2023+) with a broader variety of forms, functionalities, and contexts. Building on the concept of social affordances—visual features that communicate potential for social interaction—we developed the RoSIP (Robot Social Interaction Potential) scale. RoSIP captures how users infer a robot's social interaction potential from its appearance-based affordances, measuring two key dimensions: Perceptual Potential (can it receive information?) and Behavioral Potential (can it respond to me?).
🔍 What is the difference between ABOT and ABOT 2.0?
| Aspect | ABOT | ABOT 2.0 |
|---|---|---|
| Primary Goal | Understand and quantify human-likeness in robot appearance | Evaluate perceived social interaction potential from appearance |
| Data Set | 251 robot images | 300 robot images (251 original + 49 new) |
| Evaluation Focus | Appearance-based anthropomorphism | Appearance-based social interaction potential |
| Dimensionality | 4 PCA-derived dimensions: ① Body-Manipulators (torso, arms, legs, etc.) ② Surface Look (skin, hair, nose, etc.) ③ Facial Features (eyes, mouth, face, etc.) ④ Mechanical Locomotion (wheels, treads) | 2 affordance-based dimensions: ① Perceptual Potential (user's belief that the robot can receive information) ② Behavioral Potential (user's belief that the robot can respond to them) |
| Measurement Tool | PCA on human-likeness ratings | RoSIP scale based on user judgments of social potential |
🎯 RoSIP (Robot Social Interaction Potential) Scale
🔍 Perceptual Potential
Refers to users' belief that the robot can receive information. This includes the perceived ability of the robot to see, feel, or smell. This dimension is further divided into two subscales:
- Perceptual Reception: The extent to which users believe the robot can autonomously detect external stimuli.
- Perceptual Expression: The extent to which users believe the robot can visibly express that it has detected external stimuli.
🤝 Behavioral Potential
Reflects users' belief that the robot can interact with them. Rather than emphasizing autonomous action planning or agency, this dimension focuses on the robot's capability to provide interactive feedback to human users.
The full RoSIP scale includes 10 items: 6 measuring Perceptual Potential and 4 measuring Behavioral Potential.
You can download the full scale below (available in both English and Chinese versions).
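As a minimal sketch of how the 10-item structure could be scored in practice, the snippet below averages item ratings into the two dimensions and two subscales. The item keys (`P1`–`P6`, `B1`–`B4`) and the even split of Perception items between the Reception and Expression subscales are illustrative assumptions, not the published scoring procedure; consult the downloadable scale for the actual items.

```python
# Hypothetical scoring sketch for RoSIP ratings.
# Item keys and the subscale split are assumptions for illustration only.

def score_rosip(ratings):
    """ratings: dict mapping item keys 'P1'..'P6' and 'B1'..'B4' to numeric ratings."""
    reception = [ratings[k] for k in ("P1", "P2", "P3")]    # Perceptual Reception (assumed items)
    expression = [ratings[k] for k in ("P4", "P5", "P6")]   # Perceptual Expression (assumed items)
    behavior = [ratings[k] for k in ("B1", "B2", "B3", "B4")]

    mean = lambda xs: sum(xs) / len(xs)
    return {
        "perceptual_reception": mean(reception),
        "perceptual_expression": mean(expression),
        "perceptual_potential": mean(reception + expression),
        "behavioral_potential": mean(behavior),
    }

scores = score_rosip({
    "P1": 5, "P2": 4, "P3": 5, "P4": 3, "P5": 4, "P6": 4,
    "B1": 4, "B2": 5, "B3": 3, "B4": 4,
})
```

Averaging is only one plausible aggregation; the scale authors may prescribe a different procedure (e.g., summing items or weighting subscales).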
📚 Citation
We welcome the use of the RoSIP scale and database for research purposes. If you use them in your work, please cite our paper:
Hu, X., Hu, Q., Yu, T., Shen, M., & Zhou, J. (2026, March). RoSIP: A Scale for Measuring Appearance-Based Social Interaction Potential in Robots. In Companion of the 2026 ACM/IEEE International Conference on Human-Robot Interaction.
*The formal citation with DOI will be updated upon publication.*