Trevor Ablett

Ph.D. Student

Machine learning, and specifically deep learning, has driven remarkable advances in robotics over the last 5-10 years. However, deep learning techniques are extremely data hungry, often requiring millions of training examples to achieve reasonable performance. In contrast, humans typically need very little training or practice to acquire many skills that remain highly nontrivial for a robot.


This conundrum naturally gives rise to a number of questions. How can we combine the representational power of deep, non-linear models with the strong innate manipulation capabilities of humans? Can humans “teach” a robot to complete complex tasks where traditional, purely model-based approaches to robotics tend to fail? Can this be done while minimizing the amount of data required? And, finally, can we do this while still ensuring safety by monitoring our model’s uncertainty about its own capabilities?


Trevor’s current research attempts to chip away at some of these difficult questions by combining Imitation Learning techniques with existing planners and controllers, by applying Bayesian principles, and by bridging the gap between Imitation Learning and Reinforcement Learning. He has also previously worked on techniques for improving the versatility and usability of mobile manipulators, such as self-calibration.


Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction


Our contact-based self-calibration procedure uses only the robot's immediate environment and on-board sensors.

Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction
Oliver Limoyo, Trevor Ablett, Filip Marić, Luke Volpatti and Jonathan Kelly
ICRA (2018).