The focus of my research is to design and investigate effective and efficient methods for human-swarm interaction. Communicating with a robot or a machine has been an intriguing field of research since Nikola Tesla engineered the first remote-controlled motorboat. Since then, researchers have devised many interaction devices, methods, and schemes. Now imagine that, instead of one robot, a human is instructed to interact with a swarm of robots: would this be as simple or intuitive as interacting with a single robot? Are the traditional interfaces used for a single robot sufficient for a swarm? Is it more efficient to interact with each member of the swarm individually, with the swarm as a whole, or through some other mode of interaction? To answer these questions, I am investigating and designing interfaces and methods for interacting with a robot swarm.

During my graduate studies, I have also completed projects on object detection using computer vision, robot navigation and localization, robot interfaces, and robotic simulation platforms. My M.S. thesis involved designing an indoor localization and navigation system for a smart wheelchair. You can check out my YouTube Channel for some research videos.

Key Projects

  • Jan 2016 – Present

    Human-Swarm Interaction for Disaster Management

    This project studies interaction methods and devices for human-swarm interaction, including EMG-band, augmented-reality, and point-and-click interfaces. Collective transport is used as the test task for the framework. (VIDEO)

  • Jun 2016 – Dec 2016

    Author Age and Gender Prediction from Written Samples

    Implemented and evaluated prediction of an author's age and gender from written samples using K-Nearest Neighbors, Decision Tree, and Support Vector Machine classifiers.

  • July 2015 – May 2016

    Gesture-Based Navigation and Localization of a Smart Wheelchair using Fiducial Markers (MS Thesis)

    The project revolved around designing an EMG interface for navigating a smart wheelchair for people with disabilities. The project also included an AprilTag-based indoor localization technique for the wheelchair to ensure the user's safety. (VIDEO)

  • July 2015 – Dec 2015

    Gesture-based sensor fusion SLAM

    Fused EMG and camera data to localize the robot while generating a map of the environment. (VIDEO)

  • Jan 2015 – May 2015

    ROS-GAZEBO Simulation of Neurosurgery Robot

    Created a Gazebo-based simulator for testing and analyzing a neurosurgery robot. The simulator enabled the robot's developers to plan its trajectory inside an MRI machine while avoiding obstacles. (VIDEO)

  • Jan 2012 – May 2013

    Automated Reliable Effective and Intelligent Security System (B.Engg. Final Project)

    Implemented a security system that locks down doors and windows on an intruder alert, ensuring the safety of everyone in the house. The system can also lock or unlock doors and windows from anywhere in the world with a simple phone call. A patent application has been filed for the project. (VIDEO)
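The K-Nearest Neighbors approach used in the author-profiling project above can be illustrated with a minimal sketch. This is not the project's actual pipeline; the stylometric features and labels below are purely illustrative assumptions.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to each sample
    nearest = np.argsort(dists)[:k]              # indices of the k closest samples
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]             # majority label among neighbors

# Toy stylometric features per text: [avg sentence length, avg word length]
X = np.array([[12.0, 4.1], [11.5, 4.3], [22.0, 5.2], [21.0, 5.0]])
y = np.array(["young", "young", "older", "older"])

print(knn_predict(X, y, np.array([20.5, 5.1])))  # → older
```

In practice the features would come from the written samples themselves (e.g., word or character n-gram counts), and the same train/predict interface applies to the Decision Tree and SVM baselines.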


Publications

  • Submitted to ICRA 2020

    Improving Human Performance Using Mixed Granularity of Control in Multi-Human Multi-Robot Interaction

    Due to the potentially large number of units involved, the interaction with a multi-robot system is likely to exceed the limits of the span of apprehension of any individual human operator. In previous work, we studied how this issue can be tackled by interacting with the robots in two modalities --- environment-oriented and robot-oriented. In this paper, we study how this concept can be applied to the case in which multiple human operators perform supervisory control on a multi-robot system. While the presence of extra operators suggests that more complex tasks could be accomplished, little research exists on how this could be achieved efficiently. In particular, one challenge arises --- the out-of-the-loop performance problem caused by a lack of engagement in the task, awareness of its state, and trust in the system and in the other operators. Through a user study involving 28 human operators and 8 real robots, we study how the concept of mixed granularity in multi-human multi-robot interaction affects user engagement, awareness, and trust while balancing the workload between multiple operators. Paper Link.

  • ICRA 2019

    Mixed-Granularity Human-Swarm Interaction

    We present an augmented reality human-swarm interface that combines two modalities of interaction: environment-oriented and robot-oriented. The environment-oriented modality allows the user to modify the environment (either virtual or physical) to indicate a goal to attain for the robot swarm. The robot-oriented modality makes it possible to select individual robots to reassign them to other tasks to increase performance or remedy failures. Previous research has concluded that environment-oriented interaction might prove more difficult to grasp for untrained users. In this paper, we report a user study which indicates that, at least in collective transport, environment-oriented interaction is more effective than purely robot-oriented interaction, and that the two combined achieve remarkable efficacy. Paper Link. VIDEO.