My primary research interests are in the area of machine learning, computer vision, social signal processing and machine perception of social behavior. Within these areas, my work focuses on developing novel strategies to accurately sense and interpret human social signals and social context.
Studies suggest that in human-human interaction, more than half of the messages exchanged are conveyed by the way people move (e.g., posture, facial expressions, and gestures). Machines, however, have a poor understanding of these nonverbal cues. During social interactions, nonverbal behavior conveys a continuous flow of signals about a person's feelings, mental state, personality, and other traits.
My current research focuses on applying deep learning to tease out the structure of the elaborate code behind social interactions (Human-Human & Human-Robot), making it possible for machines to read and write human body language. Next-generation computing needs to incorporate the essence of social intelligence in order to become more effective and, possibly, to understand some facets of our communication better than we do ourselves.
Within the EPSRC Socially Competent Robots project, we are studying the benefits of social robots for adults with Autism Spectrum Disorder (ASD). Social robots can help people with autism by identifying social cues and emotional reactions that they may struggle to express or interpret themselves. One of the main goals is the development of a Robot Training Buddy that helps people with ASD socialize more regularly by identifying their mood and reacting appropriately.
Currently, we are analyzing and understanding dynamic scenes using deep learning techniques that efficiently detect the poses of multiple people. Long short-term memory (LSTM) networks are then used to model socially relevant factors, such as posture and gestures, to infer whether a person is open to an interaction, whether they are paying attention to the robot, and other social signals that we use in our daily conversations.
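The pipeline just described can be sketched in a few lines: sequences of detected body keypoints are fed frame by frame through an LSTM, whose final hidden state is read out as a score for a social signal such as "open to interaction". The sketch below is purely illustrative, not the project's actual implementation: the keypoint count, dimensions, and weights are hypothetical and untrained, and a deployed system would learn the parameters from labeled interaction recordings.

```python
import numpy as np

# Hypothetical setup: each frame gives K = 18 2-D body keypoints per person
# (as a multi-person pose estimator might produce); a sequence of frames is
# fed to an LSTM whose final state scores "openness to interaction".
K, D_IN, D_H = 18, 36, 16          # keypoints, input dim (K * 2), hidden dim
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialized parameters (illustrative only; a real system would
# train these on labeled recordings of social interactions).
Wx = rng.standard_normal((4 * D_H, D_IN)) * 0.1   # input weights for i, f, g, o gates
Wh = rng.standard_normal((4 * D_H, D_H)) * 0.1    # recurrent weights
b = np.zeros(4 * D_H)
w_out = rng.standard_normal(D_H) * 0.1            # linear read-out
b_out = 0.0

def lstm_score(pose_seq):
    """pose_seq: (T, K, 2) array of keypoint coordinates for one person.
    Returns a probability in [0, 1] that the person is open to interaction."""
    h = np.zeros(D_H)
    c = np.zeros(D_H)
    for frame in pose_seq:
        x = frame.reshape(-1)                     # flatten K x 2 keypoints
        z = Wx @ x + Wh @ h + b
        i, f, g, o = np.split(z, 4)               # the four LSTM gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return sigmoid(w_out @ h + b_out)

# Usage: score a 30-frame sequence of (random, stand-in) poses.
seq = rng.standard_normal((30, K, 2))
print(f"P(open to interaction) = {lstm_score(seq):.3f}")
```

In practice the raw keypoints would be normalized (e.g., centered on the torso and scaled by body size) before entering the recurrent model, and one read-out head per social signal can share the same LSTM backbone.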