Mission Statement

Dr. Kelly unboxing our mobile manipulator.

We envision a future in which robotic systems are pervasive, persistent, and perceptive:

  • pervasive: widely deployed in assistive roles across the spectrum of human activity (robots everywhere!)
  • persistent: able to operate reliably and independently for long durations, on the order of days, weeks, or more, and
  • perceptive: aware of their environment, and capable of acting intelligently when confronted with new situations and experiences.

Towards this end, the STARS Laboratory carries out research at the nexus of sensing, planning, and control, with a focus on the study of fundamental problems related to perception, representation, and understanding of the world. Our goal is to enable robots to carry out their tasks safely in challenging, dynamic environments, for example, in homes and offices, on road networks, underground, underwater, in space, and on remote planetary surfaces. We work to develop power-on-and-go machines that are able to function from day one without a human in the loop.

To make long-term autonomy possible, we design probabilistic algorithms that are able to deal with uncertainty, about both the environment and the robot’s own internal state over time. We use tools from estimation theory, learning, and optimization to enable perception for efficient planning and control. Our research relies on the integration of multiple sensors and sensing modalities – we believe that rich sensing is a necessary component for the construction of truly robust and reliable autonomous systems. An important aspect of our research is the extensive experimental validation of our theoretical results, to ensure that our work is useful in the real world. We are committed to robotics as a science, and emphasize open source contributions and reproducible experimentation.

Research Directions

The Laboratory is actively involved in a variety of research projects and collaborative undertakings with our industrial partners. New projects are always in development. Details on several active projects are provided below. If a project interests you, please consider joining us!

  1. A learned Canonical Appearance Transformation improves visual localization under illumination change.

    Appearance Modelling for Long-term Visual Localization
    Environmental appearance change presents a significant impediment to long-term visual localization, whether due to illumination variation over the course of a day, changes in weather conditions, or seasonal appearance variations. We are exploring the use of deep learning to help solve the difficult problem of establishing appearance-robust geometric correspondences, while still retaining the accuracy and generality of classical model-based localization algorithms. One approach is to learn a Canonical Appearance Transformation (CAT) that transforms images to correspond to a consistent canonical appearance, such as a previously seen reference appearance. The transformed images can then be used in combination with existing localization techniques. We are investigating several formulations of this problem and their usefulness for long-term autonomy applications; a simplified, illustrative sketch of the image-to-image idea appears after the project list below.
  2. A Deep Pose Correction network corrects a classical localization algorithm.

    Deep Pose Correction for Visual Localization
    We are working on ways to fuse the representational power of deep networks with classical model-based probabilistic localization algorithms. In contrast to methods that completely replace a classical visual estimator with a deep network, we propose an approach that uses deep neural networks to learn difficult-to-model corrections to the estimator from ground-truth training data. We name this type of network Deep Pose Correction (DPC-Net) and train it to predict corrections for a particular estimator, sensor, and environment. To facilitate this training, we are exploring novel loss functions for learning SE(3) corrections through matrix Lie groups and different network structures for probabilistic regression on constrained surfaces; an illustrative sketch of an SE(3) correction and its associated loss appears after the project list below.
  3. Orbital imagery and elevation model of the Canadian Space Agency’s Analogue Terrain.

    Energy-Aware Planning for Planetary Navigation
    Over the coming decade and beyond, solar-powered rovers sent to Mars will be required to drive long distances in short amounts of time. Since energy availability has always been an important constraint in planetary exploration, proper energy management and clever navigation planning will be essential to the success of these missions. At present, high-resolution orbital imagery and topography data of the Martian surface are primarily interpreted by human operators as part of long-term activity planning. We are investigating how such high-resolution data can be used to automatically plan long-distance traverses that minimize navigation energy consumption. This research aims to enable global, energy-efficient path planning that better informs tactical, precise navigation on the planetary surface; an illustrative sketch of energy-based planning over an elevation grid appears after the project list below.
  4. Ground penetrating radar design.

    Localization with Ground Penetrating Radar
    Recent developments in the field of autonomous vehicle navigation point to a future where humans are no longer behind the wheel (there may not be a wheel). However, current autonomous vehicle navigation sensor packages are not robust to inclement weather, including driving rain and snow. This shortcoming prevents widespread commercialization in regions where such weather occurs frequently, including Canada. In collaboration with the Reconfigurable Antenna Laboratory, we are constructing a ground penetrating radar (GPR) array for autonomous vehicle localization. Due to the operational wavelength of GPR, the array is capable of collecting accurate navigation data in weather conditions that render current sensor packages inoperable. We are evaluating the stability of GPR maps across freeze-thaw cycles to explore the technology's potential role in autonomous vehicle navigation.
  5. Sun-BCNN uses deep learning to infer the direction of the sun.

    Visual Sun Sensing
    Observing the direction of the sun can help mobile robots orient themselves in unknown environments. Although it is possible to deploy hardware sun sensors for this task, we are interested in developing machine learning techniques that can infer the direction of the sun from a single RGB image. With such an approach, any platform that already uses RGB images for visual localization (e.g., for visual odometry) can extract global orientation information without any additional hardware. Towards this end, we have published three papers on classical and modern deep-learning-based methods that reliably extract the direction of the sun and reduce dead-reckoning error during visual localization; an illustrative sketch of how a predicted sun direction constrains global orientation appears after the project list below.
  6. PROBE maps visual landmarks into a prediction space.

    Machine Learning for Predictive Noise Modelling
    Many robotic algorithms for tasks such as perception and mapping treat all measurements as being equally informative. This is typically done for reasons of convenience or ignorance – it can be very difficult to model noise sources and sensor degradation, both of which may depend on the environment in which the robot is deployed. In contrast, we are developing a suite of techniques (under the moniker PROBE, for Predictive RObust Estimation) that intelligently weight observations based on a predicted measure of informativeness. We learn a model of information content from a training dataset collected within a prototypical environment and under relevant operating conditions; the learning algorithm can make use of ground truth or rely on expectation maximization. The result is a principled method for taking advantage of all relevant sensor data; an illustrative sketch of informativeness-weighted estimation appears after the project list below.
  7. Autonomous mobility device navigating indoors.

    Low-cost Navigation Systems for Near-Term Assistive Applications
    Simultaneous localization and mapping (SLAM) has been intensively studied by the robotics community for more than 20 years, and yet there are few commercially deployed SLAM systems operating in real-world environments. We are interested in deploying low-cost, robust SLAM solutions for near-term consumer and industrial applications; there are numerous challenges involved in building reliable systems under severe cost constraints. Our initial focus is on assistive devices for wheelchair navigation, with the goal of dramatically improving the mobility of users with, e.g., spinal cord injuries. There are significant opportunities to positively affect the lives of thousands of individuals, while at the same time creating advanced robotic technology.
  8. Self-calibration between sensors.

    Robot Self-Calibration for Power-On-and-Go Operation
    Multisensor systems offer a variety of compelling benefits, including improved task accuracy and robustness. However, correct data fusion requires precise calibration of the sensors involved, which is typically time-consuming and difficult. We have designed and are continuing to research intrinsic and extrinsic spatial and temporal self-calibration algorithms for various combinations of sensors (e.g., LIDAR-IMU, LIDAR-camera, camera-IMU, camera-manipulator). The aim of this work is to achieve fully automatic calibration of multisensor systems in arbitrary environments, removing the burden of manual calibration and enabling long-term operation; an illustrative sketch of one small ingredient, temporal offset estimation, appears after the project list below.
  9. The Clearpath Ridgeback base with the UR10 arm.

    Collaborative Mobile Manipulation in Dynamic Environments
    Cobots, or collaborative robots, are a class of robots intended to physically interact with humans in a shared workspace. We have recently begun exploring research problems related to various tasks for cobots, including collaborative manipulation, transport, and assembly. We are working to develop high-performance, tightly coupled perception-action loops for these tasks, making use of rich multimodal sensing. Experiments are carried out on an advanced mobile manipulation platform based on a Clearpath Ridgeback omnidirectional mobile base and a Universal Robots UR10 arm.

    Our manipulator is a state-of-the-art platform, which is unique in Canada at present. Much of our testing takes place in an on-site Vicon motion capture facility, which allows for ground-truth evaluation of our algorithms. We have also invested substantial effort in developing methods to self-calibrate these lower-cost platforms automatically, with an eye towards long-term deployment in a wide range of environments.
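
The sketches below illustrate, in deliberately simplified form, a few of the ideas described above. None of them reproduce our published implementations; the architectures, models, parameter values, and function names are assumptions made purely for illustration.

The first sketch relates to appearance modelling (project 1). A small encoder-decoder network stands in for the actual CAT model and is trained with a pixel-wise loss to map a query image toward a canonical reference appearance, after which any existing feature-based localizer could be run on the transformed output.

    # Illustrative only: a tiny encoder-decoder standing in for a Canonical
    # Appearance Transformation. The real CAT network and training procedure
    # differ. Requires PyTorch.
    import torch
    import torch.nn as nn

    class TinyCAT(nn.Module):
        """Map an RGB image toward a canonical reference appearance."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # One illustrative training step on a (query, canonical reference) image pair.
    model = TinyCAT()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()                      # pixel-wise reconstruction loss
    query = torch.rand(1, 3, 64, 64)           # placeholder image under new lighting
    canonical = torch.rand(1, 3, 64, 64)       # placeholder canonical appearance
    optimizer.zero_grad()
    loss = loss_fn(model(query), canonical)
    loss.backward()
    optimizer.step()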
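
The second sketch relates to Deep Pose Correction (project 2). It shows how a predicted correction pose can be composed with a classical estimator's output, and how a training loss can be formed from the translation error together with the SO(3) log of the rotation error – a simple stand-in for a full Lie-algebra loss (boundary cases of the log map are not handled).

    # Illustrative only: composing a learned SE(3) correction with a classical
    # pose estimate and penalizing the residual in a tangent-space-like
    # parameterization (translation error plus rotation log).
    import numpy as np

    def so3_log(R):
        """Rotation matrix -> axis-angle vector (log map of SO(3))."""
        cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
        theta = np.arccos(cos_theta)
        if theta < 1e-8:
            return np.zeros(3)
        w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
        return theta / (2.0 * np.sin(theta)) * w

    def pose_error(T_est, T_gt):
        """6-vector error between two 4x4 homogeneous transformation matrices."""
        dT = np.linalg.inv(T_gt) @ T_est
        return np.concatenate([dT[:3, 3], so3_log(dT[:3, :3])])

    def correction_loss(T_correction, T_classical, T_gt):
        """Penalize the corrected estimate's deviation from ground truth."""
        T_corrected = T_correction @ T_classical   # apply the predicted correction
        e = pose_error(T_corrected, T_gt)
        return float(e @ e)                        # squared error norm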
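
The third sketch relates to energy-aware planning (project 3). It runs Dijkstra's algorithm over a digital elevation map, with edge costs given by a deliberately crude rover energy model: rolling resistance plus any gain in gravitational potential energy. The mass, rolling-resistance coefficient, and grid resolution are made-up values.

    # Illustrative only: minimum-energy planning on an 8-connected elevation grid.
    import heapq
    import numpy as np

    G_MARS = 3.71       # m/s^2, Mars surface gravity
    MASS = 900.0        # kg, assumed rover mass
    C_RR = 0.15         # assumed rolling-resistance coefficient
    CELL = 1.0          # m, assumed grid resolution

    def edge_energy(elev, a, b):
        """Energy (J) to drive from cell a to adjacent cell b."""
        horiz = CELL * np.hypot(a[0] - b[0], a[1] - b[1])
        climb = elev[b] - elev[a]
        rolling = C_RR * MASS * G_MARS * horiz
        # Going uphill costs potential energy; assume no recovery downhill.
        potential = max(0.0, MASS * G_MARS * climb)
        return rolling + potential

    def plan(elev, start, goal):
        """Dijkstra over the elevation grid; returns the minimum energy to goal."""
        best = {start: 0.0}
        frontier = [(0.0, start)]
        while frontier:
            cost, cell = heapq.heappop(frontier)
            if cell == goal:
                return cost
            if cost > best.get(cell, np.inf):
                continue                            # stale queue entry
            r, c = cell
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nb = (r + dr, c + dc)
                    if (dr, dc) == (0, 0) or not (0 <= nb[0] < elev.shape[0]
                                                  and 0 <= nb[1] < elev.shape[1]):
                        continue
                    new_cost = cost + edge_energy(elev, cell, nb)
                    if new_cost < best.get(nb, np.inf):
                        best[nb] = new_cost
                        heapq.heappush(frontier, (new_cost, nb))
        return np.inf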
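
The fourth sketch relates to visual sun sensing (project 5). A single predicted sun direction, rotated from the camera frame into the world frame using the current orientation estimate, is compared against the known sun direction from a solar ephemeris; the resulting angular residual can be fed to a localization back end as an orientation constraint.

    # Illustrative only: turning a predicted sun direction into an orientation residual.
    import numpy as np

    def sun_residual(R_world_cam, s_cam_pred, s_world_ephemeris):
        """Angle (rad) between the predicted sun direction, expressed in the
        world frame via the current orientation estimate R_world_cam, and the
        known sun direction from an ephemeris."""
        s_pred_world = R_world_cam @ (s_cam_pred / np.linalg.norm(s_cam_pred))
        s_ref = s_world_ephemeris / np.linalg.norm(s_world_ephemeris)
        return float(np.arccos(np.clip(s_pred_world @ s_ref, -1.0, 1.0)))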
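
The fifth sketch relates to predictive noise modelling (project 6). A placeholder "learned" model maps per-measurement features to scalar weights, which then enter an ordinary weighted least-squares estimator. In PROBE the weights come from a model trained on real data; here the mapping is a fixed, hand-written stand-in.

    # Illustrative only: weighting measurements by a predicted informativeness
    # score inside a weighted linear least-squares estimator.
    import numpy as np

    def weighted_estimate(H, z, weights):
        """Solve x* = argmin_x || W^(1/2) (H x - z) ||^2 with W = diag(weights)."""
        W = np.diag(weights)
        return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

    class PlaceholderWeightModel:
        """Stand-in for a model learned from training data; larger feature
        values (e.g., image blur or optical-flow magnitude) give lower weight."""
        def __init__(self, scale):
            self.scale = scale

        def predict(self, features):
            return 1.0 / (1.0 + self.scale * np.asarray(features, dtype=float))

    # Usage: weights from the placeholder model plug directly into the estimator.
    model = PlaceholderWeightModel(scale=0.5)
    H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    z = np.array([1.0, 2.0, 10.0])          # third measurement is an outlier
    w = model.predict([0.1, 0.2, 20.0])     # unreliable-looking feature -> low weight
    x_hat = weighted_estimate(H, z, w)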
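
The final sketch relates to self-calibration (project 8). One small ingredient of temporal calibration between a camera and an IMU is estimating their relative time offset; a common approach, shown here with made-up interfaces, is to cross-correlate the rotation-rate magnitude recovered from visual odometry against the gyroscope rate magnitude and take the best-scoring shift.

    # Illustrative only: estimating a camera-to-IMU time offset by
    # cross-correlating rotation-rate magnitudes resampled onto a common grid.
    import numpy as np

    def estimate_time_offset(cam_rates, imu_rates, dt, max_shift=200):
        """Return the offset (s) that best aligns the two rate signals.
        Both inputs are 1-D arrays sampled with spacing dt; a circular shift is
        used for brevity, so the signals should be much longer than the largest
        expected offset."""
        cam = (cam_rates - cam_rates.mean()) / (cam_rates.std() + 1e-9)
        imu = (imu_rates - imu_rates.mean()) / (imu_rates.std() + 1e-9)
        shifts = np.arange(-max_shift, max_shift + 1)
        scores = [np.dot(cam, np.roll(imu, s)) for s in shifts]
        return dt * shifts[int(np.argmax(scores))]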

Sponsors
We gratefully acknowledge support from the following organizations: