For robots to perform tasks in the real world, they need to understand our
natural-language commands. Although much past research has addressed language
parsing, those systems often require instructions to be spelled out in full
detail, which makes them difficult to use in real-world situations. Our goal is
to enable a robot to take even an ill-specified instruction as generic as "Make
a cup of coffee" and figure out, depending on the environment, whether to fill a
cup with milk or use one that already contains it.
You can find details of our research, publications, and videos in the
research & video section. A demo of our robot working on the VEIL-200 dataset
can be found here. We look forward to your support in producing more data by
playing with our simulator, which will help make our robots more accurate.