My primary research interests are in machine learning, computer vision, and machine perception of social behavior. Within these areas, my work focuses on developing novel strategies to accurately sense and interpret human social signals and social context.
Studies suggest that in human-human interaction more than half of the messages exchanged are conveyed by the way people move (e.g., posture, facial expressions, and gestures). During social interactions, this nonverbal behaviour carries a continuous flow of signals about a person's feelings, mental state, personality, and other traits. Machines, however, have a poor understanding of these nonverbal cues.
My current research focuses on applying deep learning to tease out the structure of the elaborate code behind social interactions (human-human and human-robot), making it possible for machines to read and write human body language. Next-generation computing needs to incorporate the essence of social intelligence in order to become more effective, and possibly to understand facets of our communication better than we do ourselves.
We are working on analyzing and understanding dynamic scenes, using deep learning techniques to efficiently detect the poses of multiple people. Long short-term memory (LSTM) networks then model socially relevant cues, such as posture and gestures, to infer whether a person is open to an interaction, whether they are paying attention to the robot, and other social signals that we use in our daily conversations.
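To make the pipeline concrete, the following is a minimal sketch of the second stage: an LSTM that consumes a sequence of per-frame pose features and emits an "open to interaction" probability. The details are illustrative assumptions, not our actual system: it assumes 17 COCO-style 2D keypoints flattened to a 34-dimensional vector per frame, uses randomly initialized weights in place of trained ones, and implements a single LSTM cell by hand in NumPy rather than using a deep learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 17 COCO-style keypoints x 2 coordinates per frame,
# a 16-unit hidden state, and a 30-frame observation window.
D_IN, D_HID, T = 34, 16, 30

# Randomly initialized parameters; a real system would learn these from
# annotated interaction data. W stacks the input/forget/cell/output gates.
W = rng.standard_normal((4 * D_HID, D_IN + D_HID)) * 0.1
b = np.zeros(4 * D_HID)
w_out = rng.standard_normal(D_HID) * 0.1  # linear classifier head

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_engagement_score(pose_seq):
    """Run an LSTM over a (T, D_IN) sequence of flattened keypoints and
    return a scalar probability that the person is open to interaction."""
    h = np.zeros(D_HID)
    c = np.zeros(D_HID)
    for x_t in pose_seq:
        z = W @ np.concatenate([x_t, h]) + b
        i, f, g, o = np.split(z, 4)                    # the four LSTM gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell-state update
        h = sigmoid(o) * np.tanh(c)                    # hidden-state update
    return float(sigmoid(w_out @ h))                   # probability in (0, 1)

# A synthetic pose sequence stands in for a pose estimator's output.
score = lstm_engagement_score(rng.standard_normal((T, D_IN)))
print(score)
```

In practice the per-frame features would come from a multi-person pose estimator, and the same recurrent backbone could feed several heads, one per social signal (attention to the robot, openness to interaction, and so on).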