Tobias completed his Master’s at STARS lab on exchange from ETH Zurich. He worked on human-robot cooperative manipulation.
In any multi-sensor system, correct data fusion requires calibration of quantities such as the 6-DOF inter-sensor spatial transforms. This calibration permits the individual sensors' data to be combined in a single common reference frame. Manual measurement of the inter-sensor transforms is inaccurate for two reasons. First, obstructions, including the sensor mounts and other hardware, may be in the way. Second, the true origin of a sensor's reference frame within its enclosure may be unknown.
For these reasons, calibration methods that make use of the sensors' own data are typically more accurate. The canonical calibration methods almost always rely on a specific calibration target, such as a checkerboard or several planar surfaces. A method that works in an arbitrary environment, absent any specific calibration target, allows for re-calibration of a long-lifespan platform in the field, for example, a rover on the surface of Mars, or a UAV in a forest environment.
Jordan's novel calibration routine estimates the extrinsic calibration and the time delay between sensor clocks for a suite containing at least one planar lidar and one egomotion sensor. The calibration parameters are found by minimizing the Rényi Quadratic Entropy (RQE) of the lidar point cloud. RQE has a number of attractive properties, the foremost of which is that it allows calibration in non-planar environments and with non-overlapping sensor fields of view, as in the photo on the left.
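To give a sense of the objective being minimized: under a Gaussian kernel density estimate, the RQE of a point cloud is low when the points form a crisp, self-consistent surface and high when miscalibration blurs them apart. The sketch below computes this quantity with NumPy; the kernel bandwidth `sigma` and the function name are illustrative assumptions, not part of the published routine.

```python
import numpy as np

def rqe(points, sigma=0.1):
    """Rényi Quadratic Entropy of a point cloud under a Gaussian kernel
    density estimate: H2 = -log( (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2*I) ).
    'sigma' is an assumed kernel bandwidth, chosen for illustration."""
    n, d = points.shape
    diffs = points[:, None, :] - points[None, :, :]   # (n, n, d) pairwise differences
    sq_dists = np.sum(diffs ** 2, axis=-1)            # squared pairwise distances
    var = 2.0 * sigma ** 2                            # variance of the convolved kernels
    norm = (2.0 * np.pi * var) ** (-d / 2.0)          # Gaussian normalization constant
    info_potential = np.mean(norm * np.exp(-sq_dists / (2.0 * var)))
    return -np.log(info_potential)                    # lower entropy = crisper cloud
```

A calibration search would then vary the candidate extrinsic transform and time delay, re-project the lidar points through each candidate, and keep the parameters that minimize this entropy.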
David worked on autonomous docking of our open-source tail-sitter aerial vehicle. Now an M.A.Sc. student at ETH Zurich.
Jason worked on computer vision algorithms for the autonomous tail-sitter aerial vehicle.
Jenni worked on modifying our deep network for sun detection, Sun-BCNN. Now at Realtime Robotics in Boston, Massachusetts.
Luke worked on improving and expanding our suite of sensor calibration tools and updating our demo software.
Xinyi built navigation software for our self-driving wheelchair. Last at Zhejiang University.
Lingzhu’s research focused on visual navigation and its application in dynamic environments. Last at Google in Mountain View, California.
Currently a Ph.D. student in Prof. Aaron Ames' AMBER lab at Caltech in Pasadena, California.
Last at Shanghai Jiao Tong University.