Nathan Tsoi


To advance robot learning, I research machine learning methods and integrated systems that enable embodied social agents to cooperate with people and communicate with them across multiple modalities. My work in machine learning focuses on training neural networks to optimize what users actually care about [NeurIPS'22, CVPR'19]. I have developed algorithms and systems for learning social context [RA-L'22] in the domain of social navigation [THRI]. To align robot behaviors with users' desired goals, I have also worked on methods for collecting human feedback about robot behaviors at scale [IROS'21] and incorporated modalities beyond ground-plane motion [HRI'23]. I envision a future where general-purpose robotic platforms are commonplace and can seamlessly learn to work collaboratively with nearby humans in any social situation.

Currently, I am a PhD student in robotics at Yale University and a member of the Interactive Machines Group, advised by Marynel Vázquez. Previously, I did research in the Stanford Vision and Learning Lab at Stanford University under Silvio Savarese and worked on machine learning and data engineering at Sequoia. For fun, I enjoy designing hardware and embedded systems programming. I am currently on the academic job market.

News and Travel

Date | Event | Location
July 19, 2024 | Speaking at the Data Generation for Robotics Workshop, presenting my work at TU Delft, and co-organizing the 2024 Workshop on Unsolved Problems in Social Robot Navigation in conjunction with RSS'24. | TU Delft, Netherlands
March 12, 2024 | Presenting my work at CU Boulder and attending HRI'24. | Boulder, CO, USA
November 4, 2023 | Poster presentation at NERC'23. | New Haven, CT, USA
October 5, 2023 | Co-organizing the 2nd Workshop on Social Robot Navigation: Advances and Evaluation. | Detroit, MI, USA
March 13, 2023 | Attending HRI'23 (web chair). | Stockholm, Sweden
December 9, 2022 | Poster presentation of Bridging the Gap: Unifying the Training and Evaluation of Neural Network Binary Classifiers at NeurIPS'22. | New Orleans, LA, USA
October 24, 2022 | Oral presentation of SEAN 2.0 (RA-L) at IROS'22. | Kyoto, Japan
May 27, 2022 | Co-organizing the SEANavBench Workshop at ICRA'22. | Philadelphia, PA, USA
March 8, 2021 | Attending HRI Pioneers '21. | Virtual

Research

Principles and Guidelines for Evaluating Social Robot Navigation Algorithms
Anthony Francis, Claudia Pérez-D'Arpino, Chengshu Li, Fei Xia, Alexandre Alahi, Rachid Alami, Aniket Bera, Abhijat Biswas, Joydeep Biswas, Rohan Chandra, Hao-Tien Lewis Chiang, Michael Everett, Sehoon Ha, Justin Hart, Jonathan P. How, Haresh Karnan, Tsang-Wei Edward Lee, Luis J. Manso, Reuth Mirsky, Sören Pirk, Phani Teja Singamaneni, Peter Stone, Ada V. Taylor, Peter Trautman, Nathan Tsoi, Marynel Vázquez, Xuesu Xiao, Peng Xu, Naoki Yokoyama, Alexander Toshev, Roberto Martín-Martín
ACM Transactions on Human-Robot Interaction (THRI)
A major challenge to deploying robots widely is navigation in human-populated environments, commonly referred to as social robot navigation. While the field of social navigation has advanced tremendously in recent years, the fair evaluation of algorithms that tackle social navigation remains hard because it involves not just robotic agents moving in static environments but also dynamic human agents and their perceptions of the appropriateness of robot behavior. In contrast, clear, repeatable, and accessible benchmarks have accelerated progress in fields like computer vision, natural language processing and traditional robot navigation by enabling researchers to fairly compare algorithms, revealing limitations of existing solutions and illuminating promising new directions. We believe the same approach can benefit social navigation. In this paper, we pave the road towards common, widely accessible, and repeatable benchmarking criteria to evaluate social robot navigation. Our contributions include (a) a definition of a socially navigating robot as one that respects the principles of safety, comfort, legibility, politeness, social competency, agent understanding, proactivity, and responsiveness to context, (b) guidelines for the use of metrics, development of scenarios, benchmarks, datasets, and simulators to evaluate social navigation, and (c) a design of a social navigation metrics framework to make it easier to compare results from different simulators, robots and datasets.
Influence of Simulation and Interactivity on Human Perceptions of a Robot During Navigation Tasks
Nathan Tsoi, Rachel Sterneck, Xuan Zhao, and Marynel Vázquez
ACM Transactions on Human-Robot Interaction (THRI)
In Human-Robot Interaction, researchers typically utilize in-person studies to collect subjective perceptions of a robot. In addition, videos of interactions and interactive simulations (where participants control an avatar that interacts with a robot in a virtual world) have been used to quickly collect human feedback at scale. How would human perceptions of robots compare between these methodologies? To investigate this question, we conducted a 2x2 between-subjects study (N=160), which evaluated the effect of the interaction environment (Real vs. Simulated environment) and participants' interactivity during human-robot encounters (Interactive participation vs. Video observations) on perceptions about a robot (competence, discomfort, social presentation, and social information processing) for the task of navigating in concert with people. We also studied participants' workload across the experimental conditions. Our results revealed a significant difference in the perceptions of the robot between the real environment and the simulated environment. Furthermore, our results showed differences in human perceptions when people watched a video of an encounter versus taking part in the encounter. Finally, we found that simulated interactions and videos of the simulated encounter resulted in a higher workload than real-world encounters and videos thereof. Our results suggest that findings from video and simulation methodologies may not always translate to real-world human-robot interactions. In order to allow practitioners to leverage learnings from this study and future researchers to expand our knowledge in this area, we provide guidelines for weighing the tradeoffs between different methodologies.
How Do Robot Experts Measure the Success of Social Robot Navigation?
Nathan Tsoi, Jessica Romero, Marynel Vázquez
Companion of the ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2024
We interviewed 8 individuals from industry and academia to better understand how they valued different aspects of social robot navigation. Interviewees were asked to rank the importance of 10 measures commonly used to evaluate social navigation policies. Interviewees were then asked open-ended questions about social navigation, how they think about evaluating it, and the challenges they face. Our interviews with industry and academic experts in social navigation revealed that avoiding collisions was the only universally important measure. Beyond the safety consideration of avoiding collisions, roboticists have varying priorities regarding social navigation. Given the high priority interviewees placed on safety, we recommend that social navigation approaches should first aim to ensure safety. Once safety is ensured, we recommend that each social navigation algorithm be evaluated using the measures most relevant to the intended application domain.
SEAN-VR: An Immersive Virtual Reality Experience for Evaluating Social Robot Navigation
Qiping Zhang*, Nathan Tsoi*, Marynel Vázquez
Companion of the ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2023
We propose a demonstration of the Social Environment for Autonomous Navigation with Virtual Reality (VR) for advancing research in Human-Robot Interaction. In our demonstration, a user controls a virtual avatar in simulation and performs directed navigation tasks with a mobile robot in a warehouse environment. Our demonstration shows how researchers can leverage the immersive nature of VR to study robot navigation from a user-centered perspective in densely populated environments while avoiding physical safety concerns common with operating robots in the real world. This is important for studying interactions with robots driven by algorithms that are early in their development lifecycle.
Perceptions of the Helpfulness of Unexpected Agent Assistance
Kate Candon, Zoe Hsu, Yoony Kim, Jesse Chen, Nathan Tsoi, Marynel Vázquez
Proceedings of the International Conference on Human-Agent Interaction (HAI) 2022
Much prior work on creating social agents that assist users relies on preconceived assumptions of what it means to be helpful. For example, it is common to assume that a helpful agent just assists with achieving a user's objective. However, as assistive agents become more widespread, human-agent interactions may be more ad-hoc, providing opportunities for unexpected agent assistance. How would this affect human notions of an agent's helpfulness? To investigate this question, we conducted an exploratory study (N=186) where participants interacted with agents displaying unexpected, assistive behaviors in a Space Invaders game and we studied factors that may influence perceived helpfulness in these interactions. Our results challenge the idea that human perceptions of the helpfulness of unexpected agent assistance can be derived from a universal, objective definition of help. Also, humans will reciprocate unexpected assistance, but might not always consider that they are in fact helping an agent. Based on our findings, we recommend considering personalization and adaptation when designing future assistive behaviors for prosocial agents that may try to help users in unexpected situations.
Bridging the Gap: Unifying the Training and Evaluation of Neural Network Binary Classifiers
Nathan Tsoi, Kate Candon, Deyuan Li, Yofti Milkessa, Marynel Vázquez
Advances in Neural Information Processing Systems (NeurIPS) 2022
While neural network binary classifiers are often evaluated on metrics such as Accuracy and $F_1$-Score, they are commonly trained with a cross-entropy objective. How can this training-evaluation gap be addressed? While specific techniques have been adopted to optimize certain confusion matrix based metrics, it is challenging or impossible in some cases to generalize the techniques to other metrics. Adversarial learning approaches have also been proposed to optimize networks via confusion matrix based metrics, but they tend to be much slower than common training methods. In this work, we propose a unifying approach to training neural network binary classifiers that combines a differentiable approximation of the Heaviside function with a probabilistic view of the typical confusion matrix values using soft sets. Our theoretical analysis shows the benefit of using our method to optimize for a given evaluation metric, such as $F_1$-Score, with soft sets. Also, our extensive experiments show the effectiveness of our approach in several domains.
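The general idea can be sketched in a few lines. The snippet below is a minimal, illustrative approximation rather than the paper's exact formulation: a sigmoid around the decision threshold stands in for the non-differentiable Heaviside step, yielding soft confusion-matrix entries that define a differentiable F1-Score surrogate. The temperature `tau` and the epsilon constant are hypothetical choices for this sketch.

```python
import torch

def soft_confusion(probs, labels, tau=0.1):
    """Soft-set confusion-matrix entries for binary classification.

    A sigmoid around the 0.5 decision threshold approximates the
    Heaviside step that is used when computing metrics at evaluation time.
    """
    h = torch.sigmoid((probs - 0.5) / tau)  # soft membership in "predicted positive"
    tp = (h * labels).sum()
    fp = (h * (1 - labels)).sum()
    fn = ((1 - h) * labels).sum()
    return tp, fp, fn

def soft_f1_loss(probs, labels):
    """Differentiable surrogate for 1 - F1, usable as a training objective."""
    tp, fp, fn = soft_confusion(probs, labels)
    f1 = (2 * tp) / (2 * tp + fp + fn + 1e-8)
    return 1.0 - f1

# Toy usage: probs from a network's sigmoid output, labels in {0, 1}.
probs = torch.tensor([0.9, 0.2, 0.7, 0.4], requires_grad=True)
labels = torch.tensor([1.0, 0.0, 1.0, 1.0])
loss = soft_f1_loss(probs, labels)
loss.backward()  # gradients flow through the soft confusion matrix
```

Because every entry of the soft confusion matrix is differentiable, the same recipe extends to other confusion-matrix-based metrics by swapping in the corresponding formula.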
SEAN 2.0: Formalizing and Generating Social Situations for Robot Navigation
Nathan Tsoi, Alec Xiang, Peter Yu, Samuel S. Sohn, Greg Schwartz, Subashri Ramesh, Mohamed Hussein, Anjali W. Gupta, Mubbasir Kapadia, and Marynel Vázquez
IEEE Robotics and Automation Letters (RA-L) 2022
We present SEAN 2.0, an open-source system designed to advance social navigation via the training and benchmarking of navigation policies in varied social contexts. A key limitation of current social navigation research is that policies are often trained and evaluated considering only a few social contexts, which are fragmented across prior work. Inspired by work in psychology, we describe navigation context based on social situations, which encompass the robot task and environmental factors, and propose logic-based classifiers for five common examples. SEAN 2.0 allows a robot to experience these social situations via different methods for specifying and generating pedestrian motion, including a novel Behavior Graph method. Our experiments show that when data collected using the Behavior Graph method is used to learn a robot navigation policy, that policy outperforms others trained using alternative methods for pedestrian control. Also, social situations were found to be useful for understanding performance across social contexts. Other components of SEAN 2.0 include vision and depth sensors, several physical environments, different means of specifying robot tasks, and a range of evaluation metrics for social robot navigation. User feedback for SEAN 2.0 indicated that the system was "easier to navigate and more user friendly" than SEAN 1.0.
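As a purely illustrative sketch of what a logic-based social-situation classifier can look like (the situation labels, features, and thresholds below are hypothetical and not the ones used in SEAN 2.0), one might assign a label from simple geometric predicates over the relative robot and pedestrian state:

```python
import numpy as np

def classify_situation(robot_pos, robot_vel, ped_pos, ped_vel,
                       horizon=3.0, near_dist=2.0):
    """Toy rule-based classifier over relative robot/pedestrian state.

    Returns one of a few hypothetical situation labels based on simple
    geometric predicates; a real system would also use task context.
    """
    rel_pos = ped_pos - robot_pos
    dist = np.linalg.norm(rel_pos)
    if dist > near_dist + horizon * np.linalg.norm(robot_vel):
        return "empty"                 # no pedestrian close enough to matter
    heading_dot = np.dot(robot_vel, ped_vel) / (
        np.linalg.norm(robot_vel) * np.linalg.norm(ped_vel) + 1e-8)
    if heading_dot > 0.8:
        return "following"             # moving in roughly the same direction
    if heading_dot < -0.8:
        return "frontal_approach"      # moving toward each other
    return "crossing"                  # paths intersect at an angle

print(classify_situation(np.zeros(2), np.array([1.0, 0.0]),
                         np.array([2.0, 1.0]), np.array([0.0, -1.0])))
```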
An Approach to Deploy Interactive Robotic Simulators on the Web for HRI Experiments: Results in Social Robot Navigation
Nathan Tsoi, Mohamed Hussein, Olivia Fugikawa, J.D. Zhao, Marynel Vázquez
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021
Evaluation of social robot navigation inherently requires human input due to its qualitative nature. Motivated by the need to scale human evaluation, we propose a general method for deploying interactive, rich-client robotic simulations on the web. Prior approaches implement specific web-compatible simulators or provide tools to build a simulator for a specific study. Instead, our approach builds on standard Linux tools to share a graphical desktop with remote users. We leverage these tools to deploy simulators on the web that would typically be constrained to desktop computing environments. As an example implementation of our approach, we introduce the SEAN Experimental Platform (SEAN-EP). With SEAN-EP, remote users can virtually interact with a mobile robot in the Social Environment for Autonomous Navigation, without installing any software on their computer or needing specialized hardware. We validated that SEAN-EP could quickly scale the collection of human feedback and its usability through an online survey. In addition, we compared human feedback from participants that interacted with a robot using SEAN-EP with feedback obtained through a more traditional video survey. Our results suggest that human perceptions of robots may differ based on whether they interact with the robots in simulation or observe them in videos. Also, they suggest that people perceive the surveys with interactive simulations as less mentally demanding than video surveys.
Improving the Robustness of Social Robot Navigation Systems
Nathan Tsoi, Marynel Vázquez
Companion of the ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2021
Our aim is to advance the reliability of autonomous social navigation. We have researched how simulation may advance this goal via crowdsourcing. We recently proposed the Social Environment for Autonomous Navigation (SEAN) and deployed it at scale on the web to quickly collect data via the SEAN Experimental Platform (SEAN-EP). Using this platform, we studied participants' perceptions of a robot when seen in a video versus interacting with it in simulation. Our current research builds on this prior work to make autonomous social navigation more reliable by classifying and automatically detecting navigation errors.
Challenges Deploying Robots During a Pandemic: An Effort to Fight Social Isolation Among Children
Nathan Tsoi, Joe Connolly, Emmanuel Adéníran, Amanda Hansen, Kaitlynn Taylor Pineda, Timothy Adamson, Sydney Thompson, Rebecca Ramnauth, Marynel Vázquez, Brian Scassellati
ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2021
The practice of social distancing during the COVID-19 pandemic resulted in billions of people quarantined in their homes. In response, we designed and deployed VectorConnect, a robot teleoperation system intended to help combat the effects of social distancing in children during the pandemic. VectorConnect uses the off-the-shelf Vector robot to allow its users to engage in physical play while being geographically separated. We distributed the system to hundreds of users in a matter of weeks. This paper details the development and deployment of the system, our accomplishments, and the obstacles encountered throughout this process. Also, it provides recommendations to best facilitate similar deployments in the future. We hope that this case study about Human-Robot Interaction practice serves as inspiration to innovate in times of global crises.
SEAN: Social Environment for Autonomous Navigation
Nathan Tsoi, Mohamed Hussein, Jeacy Espinoza, Xavier Ruiz, Marynel Vázquez
Proceedings of the 8th International Conference on Human-Agent Interaction
Social navigation research is performed on a variety of robotic platforms, scenarios, and environments. Making comparisons between navigation algorithms is challenging because of the effort involved in building these systems and the diversity of platforms used by the community; nonetheless, evaluation is critical to understanding progress in the field. In a step towards reproducible evaluation of social navigation algorithms, we propose the Social Environment for Autonomous Navigation (SEAN). SEAN is a high visual fidelity, open source, and extensible social navigation simulation platform which includes a toolkit for evaluation of navigation algorithms. We demonstrate SEAN and its evaluation toolkit in two environments with dynamic pedestrians and using two different robots.
Improving Social Awareness Through DANTE: Deep Affinity Network for Clustering Conversational Interactants
Mason Swofford, John Peruzzi, Nathan Tsoi, Sydney Thompson, Roberto Martín-Martín, Silvio Savarese, Marynel Vázquez
Proceedings of the ACM on Human-Computer Interaction
We propose a data-driven approach to detect conversational groups by identifying spatial arrangements typical of these focused social encounters. Our approach uses a novel Deep Affinity Network to predict the likelihood that two individuals in a scene are part of the same conversational group, considering their social context. The predicted pair-wise affinities are then used in a graph clustering framework to identify both small (e.g., dyads) and large groups. The results from our evaluation on multiple, established benchmarks suggest that combining powerful deep learning methods with classical clustering techniques can improve the detection of conversational groups in comparison to prior approaches. Finally, we demonstrate the practicality of our approach in a human-robot interaction scenario. Our efforts show that our work advances group detection not only in theory, but also in practice.
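The overall two-stage shape of such a pipeline can be illustrated as follows. This is a simplified stand-in (thresholded affinities followed by connected components) rather than the paper's actual network or graph clustering method, and `affinity_model` is a hypothetical placeholder for a trained pairwise predictor:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def group_people(features, affinity_model, threshold=0.5):
    """Cluster people into groups from pairwise affinity scores.

    features: (N, D) array of per-person features (e.g., position, orientation).
    affinity_model: callable mapping two feature vectors to a score in [0, 1].
    """
    n = len(features)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            score = affinity_model(features[i], features[j])
            adj[i, j] = adj[j, i] = score >= threshold
    _, labels = connected_components(csr_matrix(adj), directed=False)
    return labels  # people sharing a label are in the same group

# Toy usage: people closer than 1.5 m are deemed "affine".
toy_model = lambda a, b: float(np.linalg.norm(a[:2] - b[:2]) < 1.5)
people = np.array([[0.0, 0.0], [1.0, 0.2], [5.0, 5.0]])
print(group_people(people, toy_model))  # e.g., [0 0 1]
```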
Prompting Prosocial Human Interventions in Response to Robot Mistreatment
Joe Connolly, Viola Mocz, Nicole Salomons, Joseph Valdez, Nathan Tsoi, Brian Scassellati, Marynel Vázquez
ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2020
Inspired by the benefits of human prosocial behavior, we explore whether prosocial behavior can be extended to a Human-Robot Interaction (HRI) context. More specifically, we study whether robots can induce prosocial behavior in humans through a 1x2 between-subjects user study ($N=30$) in which a confederate abused a robot. Through this study, we investigated whether the emotional reactions of a group of bystander robots could motivate a human to intervene in response to robot abuse. Our results show that participants were more likely to prosocially intervene when the bystander robots expressed sadness in response to the abuse as opposed to when they ignored these events, despite participants reporting similar perception of robot mistreatment and levels of empathy for the abused robot. Our findings demonstrate possible effects of group social influence through emotional cues by robots in human-robot interaction. They reveal a need for further research regarding human prosocial behavior within HRI.
Generalized Intersection over Union
Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, Silvio Savarese
Computer Vision and Pattern Recognition (CVPR) 2019
Intersection over Union (IoU) is the most popular evaluation metric used in object detection benchmarks. However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, it can be shown that $IoU$ can be directly used as a regression loss. However, $IoU$ has a plateau, making it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address the weaknesses of $IoU$ by introducing a generalized version as both a new loss and a new metric. By incorporating this generalized $IoU$ ($GIoU$) as a loss into state-of-the-art object detection frameworks, we show a consistent improvement on their performance using both the standard, $IoU$ based, and new, $GIoU$ based, performance measures on popular object detection benchmarks such as PASCAL VOC and MS COCO.
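For axis-aligned boxes $A$ and $B$, GIoU augments IoU with a penalty based on the smallest enclosing box $C$: $GIoU = IoU - |C \setminus (A \cup B)| / |C|$, so non-overlapping boxes still receive a useful gradient signal. A reference-style computation for boxes given as [x1, y1, x2, y2] (a minimal sketch, not the authors' released implementation) might look like:

```python
def giou(box_a, box_b):
    """Generalized IoU for two axis-aligned boxes [x1, y1, x2, y2]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    # Intersection and union
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (area_c - union) / area_c

print(giou([0, 0, 2, 2], [3, 3, 5, 5]))  # negative GIoU for disjoint boxes
```

Used as a loss, $1 - GIoU$ stays informative even when the predicted and ground-truth boxes do not overlap, which is where plain IoU plateaus.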

Awards

Nathan Hale Associates: Teresa and Joshy Joseph Scholar
The Nathan Hale Associates program, founded in 1994, recognizes the many generous donors whose leadership annual gifts make possible the dynamic, diverse, and creative environment that defines the Yale experience.
HRI 2021 Best Paper Award Candidate
For the work: Challenges Deploying Robots During a Pandemic: An Effort to Fight Social Isolation Among Children by N. Tsoi, J. Connolly, E. Adéníran, A. Hansen, K. T. Pineda, T. Adamson, S. Thompson, R. Ramnauth, M. Vázquez, B. Scassellati
HAI 2020 Best Poster Award - Runner Up
For the work: SEAN: Social Environment for Autonomous Navigation by N. Tsoi, M. Hussein, J. Espinoza, X. Ruiz, and M. Vázquez
Alan J. Perlis Graduate Fellowship Recipient
This fellowship was established at Yale in 2006 through generous gifts from various donors in honor of Professor Alan J. Perlis (1922-1990), a pioneer of programming language research, the first winner of the Association for Computing Machinery's (ACM) Turing Award, and the founding chair of Yale's Computer Science Department.

Projects

Robots For Good: Fighting Social Isolation with Robots
Robotic telepresence for elementary-age children during social distancing.
Multi-sourced 2D and 3D Sensor Fusion and Person Tracking Pipeline
Built for imitation learning research on creating motion policies for social navigation.
Darkboard
A TensorBoard-like visual interface for Darknet, available as part of g-darknet.

Service and Activities

2023-24 Yale Computer Science Climate and Diversity Committee Member
Web Chair, ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2023
SEANavBench @ ICRA 2022 Workshop Co-Organizer
HRI Pioneers 2022 Publicity/Web Chair