Speaker
Description
Alice Violaine Saletta, Gustavo Hernan Diaz, Shreya Santra and Kazuya Yoshida
Department of Aerospace Engineering, Tohoku University, Japan
“Teaching by showing” is a topic that has been widely explored throughout the history of robotics research. The idea of a system that can visually understand an action performed by a human and enable a robot to reproduce it is indeed an appealing approach. While research in past decades was limited by the technologies available at the time, our purpose is to exploit current state-of-the-art AI algorithms, together with up-to-date computer vision systems, to take this approach to the next level. Our main goal is to apply the “teaching by showing” technique, in an AI setting, to the assembly of space structures, benefitting from the advanced state of today’s tools.
The most important aspect is to ensure that the robot does not merely repeat the movement it sees; it must understand the action it is performing. In an assembly task, for example, the robot should complete the task regardless of the initial configuration of the individual pieces or the position of the final assembled structure. It should act as closely as possible to a human brain, recognizing the individual pieces and understanding the correct way to join them. What is required, then, is a semantic and logical understanding of the connections.
For this purpose, we will develop an AI system responsible for identifying the action performed in a demonstration. We will then integrate its output commands into our controller and motion planner so that the robot can achieve the assembly task.
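The two-stage pipeline described above, a recognition stage producing a semantic command and a planning stage grounding it in the current scene, can be sketched as follows. This is a minimal illustration only: all names (`AssemblyCommand`, `recognize_action`, `plan_motion`) and the toy rule-based recognizer are hypothetical stand-ins; a real system would use a learned vision model and a full motion planner.

```python
from dataclasses import dataclass

@dataclass
class AssemblyCommand:
    """High-level semantic command inferred from a demonstration."""
    action: str   # e.g. "attach"
    part: str     # the piece to manipulate
    target: str   # the piece or frame to join it to

def recognize_action(demonstration):
    """Stand-in for the AI/vision stage: map an observed demonstration
    to a semantic command (here, a toy lookup instead of a learned model)."""
    return AssemblyCommand(action="attach",
                           part=demonstration["part"],
                           target=demonstration["target"])

def plan_motion(command, part_poses):
    """Stand-in for the controller/motion-planner stage: ground the
    semantic command in the *current* part poses, so the same command
    works regardless of the initial configuration of the pieces."""
    return {"grasp_at": part_poses[command.part],
            "place_at": part_poses[command.target]}

# The same demonstration yields a valid plan for any initial layout.
demo = {"part": "truss_beam", "target": "node_A"}
poses = {"truss_beam": (0.2, 0.1, 0.0), "node_A": (0.5, 0.3, 0.1)}
cmd = recognize_action(demo)
plan = plan_motion(cmd, poses)
```

The key design point is the separation of concerns: the recognizer outputs only what to do (a semantic command), while the planner decides how to do it from the observed poses, which is what makes the behavior independent of the pieces’ initial conditions.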