4th WCSET-2015, Japan
Computer Science and Electrical Engineering:
Title:
Multimodal Human Robot Interaction
Authors:
Vivek Singh Sikarwar
Abstract:
Multimodal human-robot interaction is an open
problem in robotics research. Various techniques have
been proposed by different researchers for human-robot
interaction, but none of them makes the interaction as
efficient as human-human interaction. This work is an
effort to enhance human-robot interaction to some extent
through multimodality. The paper describes a new model
that lets a robot learn additively from multiple inputs,
e.g. speech, vision, and gestures, to identify objects
and persons. Owing to the complexity of natural language
and vision, it is very difficult for a machine to
interact with a human as another human being would in a
real-world environment without human feedback.
Mathematical tools, lexical tools, and probabilistic
graphical models are used to learn features from the
multiple types of input provided by humans. The robot
also acquires an affiliation framework that grounds word
descriptions in visual observations, allowing for more
natural human-robot interaction. Moreover, the robot
applies the multimodal framework to describe new objects
and augments object descriptions by posing
natural-language queries for manual feedback.
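
For concreteness, the following is a minimal sketch (not taken from the paper) of the kind of multimodal fusion the abstract describes: independent recognizers for speech, vision, and gestures each emit a distribution over object labels, and a naive-Bayes-style product combines them, as one simple instance of a probabilistic graphical model. The label vocabulary, probabilities, and the fuse function are all illustrative assumptions.

    # Hypothetical sketch: late fusion of per-modality label distributions.
    from collections import defaultdict

    LABELS = ["cup", "book", "phone"]  # illustrative object vocabulary

    def fuse(modalities):
        """Combine per-modality label distributions by a normalized product,
        assuming the modalities are conditionally independent given the label
        (the naive Bayes factorization)."""
        scores = defaultdict(lambda: 1.0)
        for dist in modalities.values():
            for label in LABELS:
                scores[label] *= dist.get(label, 1e-6)  # smooth missing labels
        total = sum(scores[label] for label in LABELS)
        return {label: scores[label] / total for label in LABELS}

    if __name__ == "__main__":
        # Illustrative outputs of three independent recognizers.
        observation = {
            "speech":  {"cup": 0.6, "book": 0.3, "phone": 0.1},
            "vision":  {"cup": 0.7, "book": 0.2, "phone": 0.1},
            "gesture": {"cup": 0.5, "book": 0.4, "phone": 0.1},
        }
        posterior = fuse(observation)
        best = max(posterior, key=posterior.get)
        print(posterior, "->", best)
        # If the fused confidence were low, the robot could pose a
        # natural-language query for manual feedback, as the abstract suggests.

Under this assumed scheme, agreement across modalities sharpens the fused posterior, while disagreement flattens it, which is what would trigger the clarification queries the abstract mentions.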
Keywords:
Multimodal Human-Robot Interaction, Social Robotics, Human Assistant Robot
Pages:
339-342