Last week, the team that took second place in a DARPA competition called the Subterranean Challenge released two videos recorded during last year's runs through the Louisville Mega Cavern. These videos, recorded as part of the competition to find, identify, and map underground spaces, offer a glimpse into the world of machine-generated mapping of three-dimensional spaces.
Each video also looks like how we might imagine the world as a robot perceives it. Rooms are mapped in dots of color, real spaces assembled the way a computer senses them. It is the visual equivalent of running a hand along stucco: texture rendered into form.
For the Subterranean Challenge, which held its final event in late September 2021, an international array of teams built and then fielded robots to explore underground obstacle courses. These courses, built by DARPA in advance (with the exact parameters concealed from participants until the competition began), simulated the kind of rescue and military work robots may undertake in the future.
For the final challenge, robots had to navigate environments simulating tunnels, natural caves, and built underground urban spaces. As DARPA described it, the “Challenge seeks novel approaches to rapidly map, navigate, and search underground environments during time-sensitive combat operations or disaster response scenarios.”
The second-place team in the challenge, CSIRO Data61, is part of Australia’s national science agency, a reminder that the question of how robots can better explore underground spaces is a truly global one.
What that means in practice is not just robots that can explore an underground space, but robots that can render a useful map for the humans who will follow the robots into the dark.
Robots see in lasers and light, and sometimes employ other sensors, like radar, too. To make this information useful to human observers, a robot must convert that raw sensor data back into something visual, rendering a map and a model of its immediate surroundings.
One way of converting that data into a map is Simultaneous Localization and Mapping, or SLAM: a process whereby a robot builds a map of an unknown space while simultaneously keeping track of its own position within that map.
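The article doesn't detail how CSIRO's system works internally, but the skeleton of a lidar SLAM loop is two alternating steps: refine the robot's pose by matching the newest scan against the map so far, then fold that scan into the map using the refined pose. Here is a minimal 2D sketch in Python, with the scan-matching step (in real systems, something like ICP) left as a placeholder:

```python
import numpy as np

def pose_to_matrix(x, y, theta):
    """Build a 2D rigid transform (rotation + translation) from a pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def slam_step(world_map, pose, scan_points, estimate_motion):
    """One SLAM iteration: localize against the map, then extend the map.

    world_map:       (N, 2) points mapped so far, in the world frame
    pose:            (x, y, theta) current best estimate of the robot pose
    scan_points:     (M, 2) new lidar returns, in the robot's own frame
    estimate_motion: placeholder scan-matching routine (e.g. ICP) that
                     refines the pose by aligning the scan to the map
    """
    # Localization: refine where the robot is, relative to its surroundings.
    pose = estimate_motion(world_map, scan_points, pose)

    # Mapping: transform the scan into the world frame and append it.
    T = pose_to_matrix(*pose)
    homogeneous = np.hstack([scan_points, np.ones((len(scan_points), 1))])
    scan_in_world = (T @ homogeneous.T).T[:, :2]
    world_map = np.vstack([world_map, scan_in_world])
    return world_map, pose
```

The two steps feed each other: a better pose estimate makes the map more accurate, and a more accurate map makes the next pose estimate better, which is why the "simultaneous" in the name matters.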
“Our fleet uses a common sensing, mapping and navigation system across all robots, built around our Wildcat SLAM technology,” Navinda Kottege of CSIRO told IEEE Spectrum. “This is what enables coordination between robots, and provides the accuracy required to locate detected objects. This has allowed us to easily integrate different robot platforms into our fleet.”
In the SLAM fly-through, data from four different robots is stitched together into one coherent whole. Rooms, hallways, and obstacles are all revealed in a pointillist pattern of laser scans from the lidar units mounted on the robots. It feels like an archeological excavation, which is a common use for lidar technology.
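The videos don't show how the four feeds are combined, but in principle the merge comes down to expressing every robot's points in one shared coordinate frame. A minimal sketch, assuming each robot's trajectory has already been registered to a common origin (the hard coordination problem that Wildcat SLAM addresses):

```python
import numpy as np

def merge_robot_maps(robot_maps, robot_to_world_transforms):
    """Fuse per-robot point clouds into one map.

    robot_maps: list of (N_i, 3) point arrays, one per robot, each in
                that robot's own coordinate frame
    robot_to_world_transforms: list of 4x4 rigid transforms registering
                each robot's frame to the shared world frame
    """
    merged = []
    for points, T in zip(robot_maps, robot_to_world_transforms):
        # Lift to homogeneous coordinates, transform, drop back to 3D.
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        merged.append((T @ homogeneous.T).T[:, :3])
    return np.vstack(merged)  # one coherent (sum of N_i, 3) cloud
```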
Using a processing technique called PaintCloud, a more realistically colored map presentation can be built on top of the existing lidar point clouds.
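PaintCloud's internals aren't described in the piece, so the sketch below is only the textbook recipe for painting a point cloud: project each lidar point into a time-synchronized camera image through a pinhole model and sample the pixel it lands on. The intrinsics matrix `K` and the lidar-to-camera transform are assumed calibration inputs, not details from CSIRO:

```python
import numpy as np

def paint_point_cloud(points, image, K, lidar_to_camera):
    """Assign each lidar point the color of the camera pixel it projects to.

    points:          (N, 3) lidar points in the lidar frame
    image:           (H, W, 3) RGB camera frame captured at the same time
    K:               3x3 camera intrinsics matrix (from calibration)
    lidar_to_camera: 4x4 rigid transform from the lidar to the camera frame
    """
    # Move the points into the camera frame.
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    cam = (lidar_to_camera @ homogeneous.T).T[:, :3]

    # Keep only points in front of the camera.
    cam = cam[cam[:, 2] > 0]

    # Pinhole projection to pixel coordinates.
    pixels = (K @ cam.T).T
    u = (pixels[:, 0] / pixels[:, 2]).astype(int)
    v = (pixels[:, 1] / pixels[:, 2]).astype(int)

    # Discard points that fall outside the image bounds.
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    colors = image[v[valid], u[valid]]  # (M, 3) RGB color per point
    return cam[valid], colors
```

Each point keeps its lidar-measured position; only its color comes from the camera, which is why the painted map below still has the pointillist structure of the raw scan.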
In the PaintCloud version, the lidar structure of the map is coated in the browns and grays of the actual physical space. Lights stand out more clearly, while objects hauntingly merge with their surroundings. In one section, a thermal mannequin propped against a wall stands out in the paint, with its bright yellow-green high-visibility jacket and blue head. Yet because this is a paint scheme applied to a lidar model, the mannequin’s form is made of points of reflected light. It blends uncomfortably with the cavern wall; sharp delineation between body and surroundings is still beyond the mapping tool’s capabilities.
For humans sent in after a robot, both techniques create a usable picture and guide. Even with the structural weirdness of putting color on lidar-measured distance points, a human following along should be able to use the map to identify and, hopefully, rescue a person in a high-vis jacket.
As a bonus, because the map renders the cave in such an uncanny way, the humans going in may find the real thing a less haunting sight than the one already revealed by the robots.
Watch the PaintCloud video below: