Daniel Rakita

I am a first-year PhD student working in both the HCI lab and the visual computing lab on robotics and animation. My work attempts to bridge the gap between humans and robots through a synthesis of animation, machine learning, computer vision, and HCI principles, creating robots that can learn a human's tendencies through motion cues and, in turn, better adapt, react, and communicate intent back to the human collaborator. My current research involves capturing human arm motion with a vision-based motion capture system and a motion data glove, allowing a robot to be controlled using natural arm motions. Previous work includes using trajectory optimization techniques to generate motions that more clearly indicate where a robot arm will set an object down, and a gaze-inference system that analyzes human motion capture data and a 3D environment to discern which objects the actor was looking at, at particular times.

Back to Robotics Contributors