Team PLUTO at the SubT Tunnel Circuit, Part 1

Anthony Cowley, 2019-09-13

Team PLUTO at SubT

The GRASP Lab at the University of Pennsylvania, Exyn Technologies, and Ghost Robotics together formed Team PLUTO at the DARPA Subterranean Challenge (SubT) Tunnel Circuit event in Pittsburgh, PA, in August 2019. This article introduces my perspective on the challenge and some of our team’s preparations.

The Challenge

The great value of DARPA challenges (perhaps the most famous being the Grand Challenge) is the way they can provide a reality check on research. Researchers’ efforts to highlight their accomplishments can obscure more objective measures of progress. (You will sometimes see a video of the one time a system worked properly. A viewer might extrapolate that the proposed design “works”, though this would be overly optimistic. Is this a lie on the part of whoever published the video? No. It is valuable to know that a design can work: it means that no aspect of the design is impossible.) But objective measures intended to be representative of real needs must be frequently re-imagined, or else clever engineers will optimize for the test, sometimes to the detriment of more general performance. DARPA challenges are generally designed to showcase not what robots can do, but what they will do when faced with a challenge their designers could not specifically plan for.

The Subterranean (SubT) Challenge’s particular angle is that it takes place… under ground. (This Popular Mechanics article provides a nice overview of the contest and how it all went.) That general setting means that GPS is unavailable and radio communications are limited. Not wanting to spoil the details of the courses the robots would encounter, DARPA informed teams that possibilities included wet, uneven terrain at multiple heights, and openings as small as 1m for the robots to squeeze through. Performance would be evaluated along multiple axes, but the most important was detecting artifacts in a kind of scavenger hunt: a collection of objects, identified beforehand, that would be placed along the courses.

backpack.gif This red backpack is an example of an artifact the robots had to search for.

A detection submitted to the contest server would score points for the team if it correctly identified the object and located it within 5m of its true location as determined by DARPA’s own careful surveying of the course.
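The rule itself is simple enough to state in a few lines of code. Here is a hypothetical version of the check (the scores_point name and its signature are invented for illustration; DARPA’s actual scoring server is of course more involved):

import numpy as np

def scores_point(label, reported_pos, true_label, true_pos, tol=5.0):
    """A detection earns a point only if the artifact class matches and
    the reported position is within tol meters of the surveyed one."""
    error = np.linalg.norm(np.asarray(reported_pos) - np.asarray(true_pos))
    return label == true_label and error <= tol

# 4.9 m of position error still scores; 5.2 m does not.
print(scores_point("backpack", (10.0, 0.0, 0.0), "backpack", (14.9, 0.0, 0.0)))  # True
print(scores_point("backpack", (10.0, 0.0, 0.0), "backpack", (15.2, 0.0, 0.0)))  # False

Simple enough. Staying inside that 5m ball is the hard part.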

Sounds easy

Suppose you are walking along a long, straight corridor. Estimating how far you’ve walked is tough; you’ll probably be wrong by at least a couple meters by the time you’re 100m from the entrance. Still, 2% error, say, seems pretty good, right? Suppose also that your estimate of the slope of the ground is wrong by 1\(^\circ\); now your estimate of your change in height from where you started is off by almost 2m. That means that 100m in, when you see the scavenger hunt fire extinguisher in a corner behind a truck, if your estimate of where it is with respect to you is off by another 1m, you might not receive any points. And that’s just the first 100m of courses promised to be over 1km long.
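To make the height figure concrete (a back-of-the-envelope model, not our team’s actual error budget): a 1\(^\circ\) pitch error sustained over a 100m walk produces a height error of

\[ \Delta h \approx d \sin(\delta\theta) = 100\,\mathrm{m} \times \sin(1^\circ) \approx 1.75\,\mathrm{m}. \]

Stack that on top of the 2m of along-track drift and the 1m of artifact-relative error, and in the worst case you have already spent nearly all of the 5m scoring tolerance.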

These back-of-the-envelope guesstimates show that there is not a lot of wiggle room. On top of that, you have pathological scenarios to worry about. The problem of motion estimation in general is beset on all sides: a robot that relies on GPS will have no idea how it is moving under ground; a robot that relies on odometry – usually via rotary encoders on wheel axles – might struggle with muddy ground on which its wheels slip; a robot that relies on cameras to track color and texture features will get lost if it is placed in an environment with walls all painted the same color; and a robot relying on laser range measurements will suffer if there is not enough geometric detail for it to detect differences in its surroundings from one position to the next.

Back in the long smooth corridor of our example, you might use a laser range finder to measure distances to the walls. Those measurements let you know precisely where you are with respect to the center-line of the tunnel. But now I give you another set of measurements and ask how far along the tunnel you have moved. You might receive measurements that place you in exactly the same location relative to the center-line, but your motion along the center-line is ambiguous.

smooth-tunnel.gif A robot at position \(p_1\) in a smooth tunnel might obtain the set of range measurements shown in red. When that same robot moves to position \(p_2\), it obtains a new set of range measurements, shown in green, that are identical to the first. These measurements alone are insufficient for the robot to estimate its motion.
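This degeneracy takes only a few lines to demonstrate. The sketch below (a toy model; the corridor_scan helper and its geometry are invented for illustration, not taken from our pipeline) raycasts a 2D scan inside an infinite straight corridor and confirms that scans taken 3m apart along the tunnel are indistinguishable:

import numpy as np

def corridor_scan(x, y, half_width=2.0, n_beams=360, max_range=100.0):
    """Simulate a 2D range scan in an infinitely long, featureless
    corridor with walls at y = +/- half_width. Note that the
    along-tunnel coordinate x never enters the computation."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_beams, endpoint=False)
    ranges = np.full(n_beams, max_range)  # beams parallel to the walls
    for i, a in enumerate(angles):
        s = np.sin(a)
        if s > 1e-9:     # beam heads toward the wall at y = +half_width
            ranges[i] = min((half_width - y) / s, max_range)
        elif s < -1e-9:  # beam heads toward the wall at y = -half_width
            ranges[i] = min((-half_width - y) / s, max_range)
    return ranges

scan_p1 = corridor_scan(x=0.0, y=0.5)  # robot at p1
scan_p2 = corridor_scan(x=3.0, y=0.5)  # robot at p2, 3 m down the tunnel
print(np.allclose(scan_p1, scan_p2))   # True: the scans are identical

A scan matcher fed these two scans can recover the robot’s offset from the center-line, but its translation along the tunnel axis is unobservable.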

This is a perennial concern for roboticists: you can find pathological scenarios for machine perception all around you. For instance, a particularly boring hallway in an office environment can sometimes be enough to introduce error in motion estimation. In this situation, the wily researcher will glue a sign to the wall to give their cameras something to look at, or place an unassuming cardboard box against a wall to introduce some geometric handhold for a laser to track. The reality is that robots are easy to flummox, and you can’t let that flummox you.

If you can’t engineer the environment, you have to see more. Add more cameras pointed in different directions – always with sufficient lighting – so that if there is any texture detail available, you will capture it. Add more lasers, or mount your laser on a rotating mount, so that if there is any geometric detail available, you will be able to take advantage of it. But these additions are not free. They cost battery life, they cost CPU and GPU time, and their complexity can introduce errors you didn’t have before (e.g., more difficult calibration). To find the right balance, you have to run tests.

What’s mine is yours

Our team’s efforts were greatly aided by the wonderful folks at the Number 9 Coal Mine in Lansford, PA. This is a terrific environment for testing robots intended to deal with rough terrain: loose rocks, water dripping through the ceiling, dust, mud, trenches with running water (tip: wear a hood and a hat). The mine has a large elevator room with a high ceiling, moderately sized primary tunnels, and narrow, jagged tunnels linking everything into a nice circuit.

mine-walls.jpg Mine walls show signs of blasting and rough construction.

We collected a lot of camera and LiDAR data here to crunch on back at the lab. The images served to train the machine learning system tasked with object detection; the LiDAR data kept me up all night with its variety of scales and the ambiguity of what we want to classify as a tunnel-like environment, as opposed to a hole in the wall or ceiling, or just an oddly shaped, elongated room. The robot was continuously adjusting to balance on the rocks; no surfaces were flat; nothing was straight. It’s pretty glorious from a geometric point of view. Next time we’ll look at some of the particularly challenging scenarios we found.