Researchers from the Massachusetts Institute of Technology (MIT) developed a team of three robots that serve beer in a makeshift bar, and the underlying technology could extend beyond bars and restaurants to robotic systems working in hospitals and hazardous environments.
The team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) tackled a longstanding obstacle: robots struggle to adapt to everyday human interaction because their sensors are not very accurate. For instance, these “bartenders” could not handle unpredictable situations, such as dropped items, and often failed to communicate with one another because of noise or distance.
However, the team came up with a new approach to “teach” the robots to perceive their environment more the way humans do. They described these enhancements as “macro-actions”: the idea is that a robot carries out a general task, whatever its outcome, without needing to be supervised every step of the way.
In practice, a macro-action means programming a robot to complete one common, general task that bundles multiple lower-level actions. For example, when a “waiter” robot enters the bar, it needs to be prepared for the situation where the bartender robot is already serving another waiter robot.
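The idea can be illustrated with a toy sketch. Everything below, the class and the primitive steps, is a hypothetical illustration rather than CSAIL's actual code: a macro-action groups several primitives (go to the bar, wait if the bartender is busy, pick up a drink) into one task the robot completes without step-by-step supervision.

```python
class MacroAction:
    """A named sequence of primitive actions with a completion check."""

    def __init__(self, name, primitives, is_done):
        self.name = name
        self.primitives = primitives  # ordered low-level steps
        self.is_done = is_done        # predicate on the robot's local state

    def execute(self, robot_state):
        # Run each primitive in order; the robot needs no outside help.
        log = [step(robot_state) for step in self.primitives]
        assert self.is_done(robot_state), f"{self.name} did not finish"
        return log


# Hypothetical primitives for a waiter robot fetching a drink.
def go_to_bar(state):
    state["location"] = "bar"
    return "moved to bar"


def wait_if_bartender_busy(state):
    # The waiter must cope with the bartender already serving another robot.
    if state.get("bartender_busy"):
        return "waited for bartender"
    return "bartender free"


def pick_up_drink(state):
    state["holding"] = "drink"
    return "picked up drink"


get_drink = MacroAction(
    "get_drink",
    [go_to_bar, wait_if_bartender_busy, pick_up_drink],
    is_done=lambda s: s.get("holding") == "drink",
)

state = {"location": "room", "bartender_busy": True}
print(get_drink.execute(state))
```

The point of the abstraction is that a planner only has to choose among a handful of macro-actions like `get_drink`, instead of micromanaging every motion.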
Ariel Anders, an MIT graduate student, explained that the goal is to be able to tell one robot to go to one room and another robot to fetch the drinks, without monitoring their every move along the way.
The team put the new approach to use by turning their workspace into a temporary bar. Two Turtlebots (wheeled robots) fitted with coolers took orders from the eager scientists. A researcher would press a button on a robot to request a drink, sending it back to the bar, where a PR2 robot was waiting to handle the order.
Some limitations emerged: the bartender robot could serve only one waiter robot at a time, and the robots could not communicate with one another unless they were within range.
These constraints, Anders said, forced the team to devise more sophisticated algorithms that let the robots reason at a higher level about their behavior, location, and status.
Chris Amato, lead author of the research paper, concluded that almost every real-world problem involves some form of uncertainty.
Photo credits: i.ytimg.com