Although it sometimes appears as though machines are smarter than humans, there are a few things we still do better. Navigation, for example, is much easier for humans than it is for computers, largely because of the way we learn to navigate. If you rely on landmarks, such as buildings, you can typically find your way around, and by reversing course you can use those same landmarks to find your way back.
Computers, however, don’t process information that way – that type of non-linear “thinking” is simply not easy to program. Typically, computers must be given specifics: Point A to Point B. Thanks to a new study, though, robotic navigation has taken a huge leap forward. A team of engineers at MIT has come up with a unique way to help computers navigate.
Julian Straub, a graduate student in electrical engineering and computer science at MIT, is the lead author of the research paper. Other authors include John Fisher, a senior research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory, and John Leonard, a professor of mechanical and ocean engineering, as well as Oren Freifeld and Guy Rosman, both postdocs in Fisher’s Sensing, Learning, and Inference Group.
At the IEEE Conference on Computer Vision and Pattern Recognition in June, these researchers will present a new algorithm that could change the way computers are programmed for navigation.
“Most of classical statistics is based on linearity and Euclidean distances, so you can take two points, you can sum them, divide by two, and this will give you the average,” Oren Freifeld says. “But once you are working in spaces that are nonlinear, when you do this averaging, you can fall outside the space.”
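Freifeld’s point can be illustrated with a quick sketch (a minimal example using NumPy; the specific vectors and the renormalization step are illustrative, not taken from the team’s paper):

```python
import numpy as np

# Two unit vectors: points on the sphere, 90 degrees apart.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

# Naive Euclidean average: sum the points and divide by two.
mean = (a + b) / 2.0
print(np.linalg.norm(mean))  # ~0.707 -- the "average" has fallen inside the sphere

# One simple fix: project the average back onto the sphere.
spherical_mean = mean / np.linalg.norm(mean)
print(np.linalg.norm(spherical_mean))  # 1.0 -- back on the sphere
```

The Euclidean average of two points on a sphere lies inside the sphere, not on it, which is exactly the “falling outside the space” problem that statistics on nonlinear spaces has to handle.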
“Think about how you navigate a room,” says John Fisher. “You’re not building a precise model of your environment. You’re sort of capturing loose statistics that allow you to complete your task in a way that you don’t stumble over a chair or something like that.”
The new algorithm works on 3-D data, much like the kind captured by the Microsoft Kinect and similar devices. The software identifies specific points in that 3-D data and maps them onto a sphere. A city (the team is starting with Manhattan) can then be laid out on this sphere, and the algorithm helps the computer navigate along those points and find its way back.
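A rough sketch of the mapping step might look like the following (Python with NumPy; the synthetic direction vectors and the function name are hypothetical stand-ins for directions extracted from real Kinect-style depth data):

```python
import numpy as np

def project_to_sphere(vectors):
    """Scale each 3-D direction vector to unit length, placing it on the sphere."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / norms

# Synthetic stand-ins for directions extracted from 3-D depth data
# (a real system would estimate these from a point cloud of a scene).
raw = np.array([
    [2.0, 0.0, 0.1],   # roughly the x-axis: e.g. one wall of a street grid
    [0.0, 3.0, -0.2],  # roughly the y-axis: a perpendicular wall
    [0.1, 0.0, 5.0],   # roughly the z-axis: the up direction
])

on_sphere = project_to_sphere(raw)
# Every point now lies on the unit sphere, so sphere-aware statistics
# (rather than plain Euclidean averaging) can be applied to them.
print(np.round(np.linalg.norm(on_sphere, axis=1), 6))  # [1. 1. 1.]
```

Once the data live on the sphere, clusters of directions (the dominant orientations of a grid-like city such as Manhattan) become compact groups of nearby points that the algorithm can reason about.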
This is a huge accomplishment for engineering and offers a fascinating look at the way we can program machines to “think.” For students of engineering, this study shows just how much can be accomplished with a simple idea and a plan to come up with a better way.
Want more information about exciting careers in the field of engineering? The experts at Solopoint Solutions are here to help!