This project aims to develop emotion recognition technologies and algorithms for applications in social robotics. First, we will advance human pose estimation methods to generate more accurate human mesh representations. This is particularly important for our task, since more accurate estimates let us identify the subtle body movements that convey information about a person's emotional state. Second, because emotion datasets are small and emotion annotations are inherently subjective, we will build on prior semi-supervised methods for training with noisy labels. Finally, we will explore the role social context plays in emotion understanding through two currently unexplored tasks: emotion localization and interaction classification. Together, these three research directions will be integrated into a complete emotion recognition pipeline, which we will demonstrate at Amazon's Annual Robotics Symposium.