Archives

Valentin Peretroukhin

The deep learning revolution has led to significant advances in the state of the art in computer vision and natural language processing. For mobile robotics to benefit from the fruits of this research, roboticists must ensure that these predictive algorithms are not only accurate in dynamic environments, inclement weather, and adverse lighting conditions, but that they also provide a consistent measure of uncertainty. In many cases, what is sufficient in a computer vision context is significantly deficient for mobile robotics, and vice versa.

 

For example, an object classification algorithm with an accuracy of 95% may be sufficient to reach the state of the art on some computer vision datasets, yet be completely unusable for safety-critical mobile autonomy applications. Conversely, an algorithm with an accuracy of 30% may be deemed unsatisfactory for many computer vision tasks, but may be more than sufficient for mobile vehicles if it operates at high frequency and produces consistent uncertainty estimates that can be used to discard poor classifications.

 

Valentin’s research focuses on bridging the gap between classical probabilistic state estimation and modern machine learning. He has worked on several projects including:

DPC-Net: Deep Pose Correction for Visual Localization


DPC-Net corrects classical VO estimates by learning SE(3) corrections from input images.

DPC-Net: Deep Pose Correction for Visual Localization
Valentin Peretroukhin and Jonathan Kelly
IEEE Robotics and Automation Letters (RA-L) and ICRA (2018).
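The core idea is that the corrected motion estimate is the composition of a small, learned SE(3) corrective transform with the classical VO output. The following is only a minimal numpy illustration with made-up numbers (the network itself, and whether the correction is left- or right-composed, are details of the paper not reproduced here):

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: map a rotation vector to a 3x3 rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def se3_matrix(phi, rho):
    """Build a 4x4 SE(3) transform from rotation vector phi and translation rho."""
    T = np.eye(4)
    T[:3, :3] = so3_exp(phi)
    T[:3, 3] = rho
    return T

# Classical VO estimate of the frame-to-frame motion (illustrative values).
T_vo = se3_matrix(np.array([0.0, 0.0, 0.05]), np.array([1.0, 0.0, 0.0]))

# A small corrective transform, standing in for a network prediction.
T_corr = se3_matrix(np.array([0.0, 0.0, -0.01]), np.array([-0.02, 0.0, 0.0]))

# The corrected estimate is the composition of the two.
T_hat = T_corr @ T_vo
```

Because the correction is applied on the group (as a transform) rather than by naively adding pose parameters, the corrected estimate remains a valid rigid-body transform.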

Sun-BCNN: Sun sensing through Bayesian CNNs


Sun-BCNN regresses the 3D direction of the sun to improve stereo VO.

Inferring sun direction to improve visual odometry: A deep learning approach
Valentin Peretroukhin, Lee Clement, and Jonathan Kelly
IJRR 2018.
Reducing Drift in Visual Odometry by Inferring Sun Direction using a Bayesian Convolutional Neural Network
Valentin Peretroukhin, Lee Clement, and Jonathan Kelly
ICRA 2017. Singapore.
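Conceptually, the regressed sun direction acts as an absolute orientation cue: the direction predicted in the camera frame, rotated into the world frame by the current orientation estimate, should agree with the ephemeris-derived sun direction. A minimal sketch of such a residual (illustrative values only, not the paper's estimator):

```python
import numpy as np

def sun_residual(R_wc, s_cam, s_world):
    """Angular discrepancy (radians) between the sun direction observed in the
    camera frame, mapped through the orientation estimate R_wc, and the
    ephemeris-derived sun direction in the world frame."""
    s_pred = R_wc @ (s_cam / np.linalg.norm(s_cam))
    s_ref = s_world / np.linalg.norm(s_world)
    return np.arccos(np.clip(s_pred @ s_ref, -1.0, 1.0))

# With a perfect orientation estimate, the residual is zero.
err = sun_residual(np.eye(3),
                   np.array([0.0, 0.0, 1.0]),
                   np.array([0.0, 0.0, 1.0]))
```

Penalizing this residual in the VO back-end constrains orientation drift, which is the dominant source of long-range translational error.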

Predictive Robust Estimation


PROBE maps visual landmarks into a prediction space.

PROBE-GK: Predictive Robust Estimation using Generalized Kernels
Valentin Peretroukhin, William Vega-Brown, Nicholas Roy, and Jonathan Kelly
ICRA 2016. Stockholm, Sweden.
PROBE: Predictive Robust Estimation for visual-inertial navigation
Valentin Peretroukhin, Lee Clement, Matthew Giamou, and Jonathan Kelly
IROS 2015. Hamburg, Germany.

Lee Clement

As robotics enters the “robust perception age”, a major focus of modern robotics research will be the design of perception systems capable of operating over extended periods of time in a broad range of environments. Visual perception in particular holds great promise in this area due to the wealth of information available from standard colour cameras. Indeed, we humans rely heavily on vision for navigating our daily lives. But how can we use vision to build persistent maps and localize against them when the appearance of the world is always changing?

 

Lee’s research focuses on developing ways for robots to reason about more than just the geometry of their environment by incorporating information about illumination and appearance into the mapping and localization problem. In particular, he is interested in using machine learning algorithms to create robust data-driven models of visual appearance, and using these models as an enabler of long-term visual navigation.

 

Projects he has worked on include:

Appearance Modelling for Long-term Visual Localization


CAT-Net learns to transform images to correspond to a previously-seen reference appearance.

Visual Sun Sensing


Sun-BCNN regresses the 3D direction of the sun to improve stereo VO.

Inferring sun direction to improve visual odometry: A deep learning approach
Valentin Peretroukhin, Lee Clement, and Jonathan Kelly
IJRR 2018.
Reducing Drift in Visual Odometry by Inferring Sun Direction using a Bayesian Convolutional Neural Network
Valentin Peretroukhin, Lee Clement, and Jonathan Kelly
ICRA 2017. Singapore.
Improving the Accuracy of Stereo Visual Odometry Using Visual Illumination Estimation
Lee Clement, Valentin Peretroukhin, and Jonathan Kelly
ISER 2016. Tokyo, Japan.

Monocular Visual Teach & Repeat


MonoVT&R is capable of retracing human-taught routes with centimetre accuracy using only a monocular camera.

Robust Monocular Visual Teach and Repeat Aided by Local Ground Planarity and Colour-Constant Imagery
Lee Clement, Jonathan Kelly, and Timothy D. Barfoot
JFR 2017.
Monocular Visual Teach and Repeat Aided by Local Ground Planarity
Lee Clement, Jonathan Kelly, and Timothy D. Barfoot
FSR 2015. Toronto, Canada.

Brandon Wagstaff

Brandon is interested in using low-cost sensors (such as cameras or inertial sensors) for localization. In particular, he is investigating methods that use machine learning to extract useful information from complex data, which can then be integrated into classical state estimation techniques. Ultimately, this work is intended to produce localization algorithms that are able to operate in challenging environments where classical algorithms are prone to failure.
 

For example, classical algorithms commonly rely on parameter tuning or calibration that is highly sensitive to the agent’s motion or to the environment in which the agent operates. One of his goals is to obviate the need for calibration or parameter tuning by replacing the sensitive components of the system with more robust learning-based models. Such systems could then be deployed without time-consuming calibration and would be able to operate in continuously changing environments over long periods of time.
 
 

Foot-Mounted Inertial Navigation for Indoor Localization

LSTM-based zero-velocity detection for robust inertial navigation
Brandon Wagstaff and Jonathan Kelly
International Conference on Indoor Positioning and Indoor Navigation (IPIN) 2018.
Improving foot-mounted inertial navigation through real-time motion classification
Brandon Wagstaff, Valentin Peretroukhin, and Jonathan Kelly
International Conference on Indoor Positioning and Indoor Navigation (IPIN) 2017.
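Both papers hinge on detecting the mid-stance phase of walking, when the foot is momentarily stationary and a zero-velocity pseudo-measurement can be used to correct the inertial navigation system. A classical variance-based stance detector, of the kind the LSTM work learns to replace, might look like this sketch (the window size and threshold are invented for illustration):

```python
import numpy as np

def zero_velocity_candidates(accel, window=5, gravity=9.81, thresh=0.5):
    """Flag samples where the windowed variance of the gravity-compensated
    accelerometer magnitude is below a threshold. A hand-tuned heuristic;
    learned detectors aim to remove this motion-sensitive tuning."""
    mag = np.linalg.norm(accel, axis=1) - gravity
    flags = np.zeros(len(mag), dtype=bool)
    for i in range(window, len(mag)):
        flags[i] = np.var(mag[i - window:i]) < thresh
    return flags
```

The fixed threshold is exactly the kind of parameter that must be re-tuned per user and gait, which motivates replacing the detector with a learned model.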

Trevor Ablett

Machine learning, and specifically deep learning, has driven incredible advances in robotics in the last 5-10 years. However, deep learning techniques are extremely data-hungry, often requiring millions of training examples to achieve reasonable results. In contrast, humans typically require very little training or practice to acquire many skills that would be highly nontrivial for a robot.

 

This conundrum naturally gives rise to a number of questions. How can we combine the representational power of the non-linear models used in deep learning with the strong innate manipulation capabilities of humans? Can humans “teach” a robot to complete complex tasks at which traditional, purely model-based approaches to robotics tend to fail? Can this be done while minimizing the amount of data required? And, finally, can we do this while still ensuring safety by monitoring our model’s uncertainty about its capabilities?

 

Trevor’s current research attempts to chip away at some of these difficult questions by combining Imitation Learning techniques with existing planners and controllers, by using Bayesian principles, and by attempting to bridge the gap between Imitation Learning and Reinforcement Learning. He has also worked previously on techniques for improving the versatility and usability of mobile manipulators, such as self-calibration.

 

Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction


Our contact-based self-calibration procedure uses only the robot’s immediate environment and on-board sensors.

Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction
Oliver Limoyo, Trevor Ablett, Filip Marić, Luke Volpatti and Jonathan Kelly
ICRA (2018).

 

Oliver Limoyo

Reinforcement learning offers a promising framework for developing algorithms that can reproduce hard-to-model behaviours in robotics. Recently, there have been many success stories in which reinforcement learning has solved problems previously considered prohibitively difficult for traditional AI techniques. Unfortunately, it is still not clear how to transfer these methods to robotic systems, whose problems involve high-dimensional, continuous state and action spaces that are often only partially observable and far from noise-free.

 

Oliver is interested in investigating how robotic platforms can successfully reason and act in response to noisy sensor readings by learning useful representations of perceptual data. Specifically, he is interested in developing methods which learn to integrate multiple perception modalities, including underused modalities such as contact or force sensing, within reinforcement learning frameworks.

 

Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction


Our contact-based self-calibration procedure uses only the robot’s immediate environment and on-board sensors.

Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction
Oliver Limoyo, Trevor Ablett, Filip Marić, Luke Volpatti and Jonathan Kelly
ICRA (2018).

Filip Marić

Motion planning is one of the key challenges in robotics today. When determining how a robot should perform a task, both environmental and performance factors must be considered. For example, obstacles in the environment should be avoided to prevent collisions, while sensor measurement uncertainty and energy consumption restrictions are also taken into account. Currently, there are many different approaches to this problem, ranging from classical optimization to deep learning.

 

Filip is developing motion planning algorithms with a focus on mobile manipulators, whose functionality encompasses a wide range of tasks. Currently, he is exploring the connection between state estimation and motion planning in collaboration with the LAMoR group at the University of Zagreb.

Matthew Giamou

What performance guarantees exist for algorithms running on complex robot systems that operate in dynamic environments shared with humans and other autonomous agents? This critical question motivates Matt’s work on safe robotic estimation and planning. Matt completed his Master’s degree in aeronautical engineering at MIT, where he researched resource-efficient simultaneous localization and mapping (SLAM) with the Aerospace Controls Laboratory. His work focused on optimal communication and computation for multi-robot systems using SLAM in challenging missions like wilderness search and rescue.

 

Currently, Matt is applying global polynomial optimization techniques to various estimation and planning problems involving 3D position and orientation. This will lead to robots that are able to verify the quality of their model of the world and take action to correct any shortcomings. Matt is also interested in deriving bounds on measurement noise that ensure observability and fast, globally optimal solutions to key robotic estimation problems. These optimization methods, when combined with state-of-the-art learning-based solutions to problems, will form a high-performance and provably safe architecture for mobile autonomous systems. Matt is a Vector Institute Post-Graduate Affiliate, and the recipient of a 2019 Royal Bank of Canada Fellowship. Matt has worked on several projects including:

 

Global Polynomial Optimization for Robot Kinematics



Convex relaxations for polynomial formulations of inverse kinematics.

Inverse Kinematics for Serial Kinematic Chains via Sum of Squares Optimization
Filip Marić, Matthew Giamou, Soroush Khoubyarian, Ivan Petrović, Jonathan Kelly
arXiv preprint.

Certifiably Globally Optimal Estimation via Convex Relaxations



Dual SDP relaxation for extrinsic calibration.

Sparse Bounded Degree Sum of Squares Optimization for Certifiably Globally Optimal Rotation Averaging
Matthew Giamou, Filip Marić, Valentin Peretroukhin, Jonathan Kelly
arXiv preprint.
Certifiably Globally Optimal Extrinsic Calibration from Per-Sensor Egomotion
Matthew Giamou, Ziye Ma, Valentin Peretroukhin, Jonathan Kelly
IEEE RA-L 2019.
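The flavour of these certifiably optimal problems can be seen in their simplest instance: averaging a set of rotations under the chordal (Frobenius) cost, whose global optimum is known in closed form as the projection of the Euclidean mean onto SO(3). A numpy sketch of that special case (the papers above tackle far harder multi-rotation and calibration variants via semidefinite relaxations):

```python
import numpy as np

def project_to_so3(M):
    """Closest rotation matrix to M in the Frobenius norm, via SVD."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        # Flip the sign of the smallest singular direction to stay in SO(3).
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def chordal_mean(rotations):
    """Single-rotation averaging under the chordal cost: the global optimum
    is the SO(3) projection of the Euclidean mean of the rotation matrices."""
    return project_to_so3(sum(rotations) / len(rotations))
```

Closed-form global optima like this are the exception; for the general problems above, convex (SDP) relaxations provide the certificate of global optimality instead.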

Sensor Calibration for Robotic Systems



Self-calibration between sensors.

Entropy-Based Calibration of 2D Lidars to Egomotion Sensors
Jacob Lambert, Lee Clement, Matthew Giamou, Jonathan Kelly
MFI 2016. Baden-Baden, Germany.
Certifiably Globally Optimal Extrinsic Calibration from Per-Sensor Egomotion
Matthew Giamou, Ziye Ma, Valentin Peretroukhin, Jonathan Kelly
IEEE RA-L 2019.

Resource-Efficient Communication for Multi-Robot SLAM



Measurement exchange graph for multi-robot SLAM.

Talk Resource-Efficiently to Me: Optimal Communication Planning for Distributed SLAM Front-Ends
Matthew Giamou, Kasra Khosoussi, Jonathan How
ICRA 2018. Brisbane.
Near-Optimal Budgeted Data Exchange for Distributed Loop Closure Detection
Yulun Tian, Kasra Khosoussi, Matthew Giamou, Jonathan How, Jonathan Kelly
RSS 2018. Pittsburgh.

Emmett Wise

Modern, reliable autonomous navigation requires fusing data from multiple sensors to ensure that a vehicle’s positioning error remains bounded. The choice of on-board sensors has yet to be settled, leaving the door open for cutting-edge fusion techniques to dramatically improve navigation accuracy. At present, vehicle sensor packages typically include cameras, LIDARs, and GNSS-INS systems. However, these sensor packages are often not sufficiently accurate or robust in inclement weather, such as snow or rain. Developing robust solutions is imperative for autonomous vehicles operating in Canada, where navigation in harsh weather conditions is a necessity.

 

Emmett has a passion for investigating the application of novel sensors in the field of mobile robotics. Currently, he is examining the use of ground penetrating radar (GPR) to estimate the pose of a vehicle in inclement weather. Emmett’s goal is to improve upon the positional accuracy of state-of-the-art GPR localization algorithms, while performing a rigorous evaluation of GPR’s suitability as an all-weather solution for vehicle localization.

 

Localization with Ground Penetrating Radar


The ground penetrating radar antenna panels are mounted in the cavity. Calibration is soon to follow!

This project aims to leverage ground penetrating radar’s robustness to inclement weather for vehicle localization.

Olivier Lamarre

Mobility in extra-terrestrial environments, such as on the surface of Mars, permits humanity to actively extend our reach beyond low Earth orbit, and will play a key role in future robot-robot and robot-human collaborative missions. Navigation in such distant environments, however, is subject to severe operational and energy constraints.

 

Olivier works on optimal long-distance mobility planning for solar-powered rovers in Martian environments. More specifically, his research focuses on methods for finding global paths that minimize the navigation-related energy consumption of solar-powered rovers. This work primarily makes use of orbital data to operate in a predictive regime, a necessity for optimal management of energy resources over the long term.

 

Energy-Aware Planning for Planetary Navigation


Orbital imagery and elevation model of the Canadian Space Agency’s Analogue Terrain.

Overcoming the Challenges of Solar Rover Autonomy: Enabling Long-Duration Planetary Navigation
Olivier Lamarre and Jonathan Kelly
International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS) 2018.
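At its core, the global planning step can be pictured as a shortest-path search over a grid derived from orbital data, with costs given by predicted energy expenditure. The toy sketch below uses Dijkstra's algorithm on a 4-connected grid with invented per-cell costs; the actual planner and its solar-energy model are considerably more involved:

```python
import heapq

def min_energy_path(energy_cost, start, goal):
    """Dijkstra over a 4-connected grid, where energy_cost[r][c] is the
    (illustrative) energy needed to drive into cell (r, c)."""
    rows, cols = len(energy_cost), len(energy_cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # Stale queue entry; a cheaper route was already found.
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + energy_cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

In a predictive regime, the per-cell costs would themselves be forecast from orbital imagery, terrain slope, and expected solar power, rather than fixed constants.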