Virtual First Responder

The digital assistant that’s a literal life-saver.

Industry: Medical
My Role: UX Researcher
Timeline: November 2024 - May 2025
Client: Safety, Human Factors, and Resilience Engineering Lab
[Usability Testing] [Survey] [A/B Testing] [Content Analysis] [AI]

Results: Stabilized emergencies as fast as trained professionals

Groundwork


Context
In an emergency, offline technology could provide a form of medical assistance in rural areas where cellular service is absent.

Challenge
Humans speak to robots differently than they speak to other humans, yet many virtual assistants are designed to be addressed as though they were human. This mismatch slows collaboration, which can have deadly consequences in high-stakes situations.

Solution
A voice-based communication system designed around the speech preferences humans show when communicating with robots.

Process



[1] User Research
Pain points in human-robot collaboration:

Team Communication: Rapport building
Cognitive Load: Limiting information overload
Trust: Faith in robot teammates

Self-Efficacy: Encouragement to succeed


[2] Literature Review

Mental model match: Large Language Models
Traditional human-robot interactions are scripted to follow fixed patterns. By leveraging a large language model, a virtual assistant can react and adjust to the current conversation, and prompting can tailor the experience further. Voice-based communication lets the user work hands-free.
  

[3] High-Fidelity Prototype
The conversational agent was built as a ChatGPT wrapper integrated with a voice input API.
Specific prompting was used to compensate for human-robot communication pain points.

“For this session, act as a 911 call taker. I will act as the 911 caller. Your goal is to ask me questions to assess the emergency I report and provide me with authoritative, lifesaving instruction that will help me respond to the emergency until first responders arrive. When you respond, always use less than 30 words. Always ask a follow-up question. Always ask one question at a time. Provide follow-up instructions based on my answers to your questions. Encourage me to keep talking by asking questions needed to assess the situation and provide lifesaving instruction appropriate to the situation. Always be courteous, polite, and professional.”
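As an illustration, a wrapper like this would pin the quoted prompt as the system message and append each transcribed utterance to the running transcript before querying the chat model. The sketch below is an assumed structure, not the prototype's actual code; the voice pipeline and model configuration are not shown.

```python
# Illustrative sketch of the ChatGPT-wrapper request flow (assumed
# structure; not the prototype's actual implementation).
SYSTEM_PROMPT = (
    "For this session, act as a 911 call taker. I will act as the 911 "
    "caller. Your goal is to ask me questions to assess the emergency I "
    "report and provide me with authoritative, lifesaving instruction..."
    # (abridged; the full prompt is quoted above)
)

def build_messages(history, user_utterance):
    """Assemble the chat payload: the system prompt pins the 911-call-taker
    persona, followed by the running transcript and the latest transcribed
    voice input."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_utterance}]
    )

# Each turn: transcribe the user's speech, build the payload, send it to a
# chat-completion endpoint, then speak the model's reply back to the user.
```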



[4] User Testing
[4.1] Usability Testing
56 novice users were given two challenges to complete, both associated with high mortality rates:
Bleeding: Apply a tourniquet.
Choking: Perform the Heimlich maneuver.
[4.2] A/B Testing
Participants were given one of two partners to undertake these challenges with:
AI Teammate
Human Teammate

[4.3] Survey
After the tests, participants shared their perceptions of:
1. Cognitive Load
2. Trust
3. Self-Efficacy

[4.4] Content Analysis
To analyze team communication, the 1,568 lines of speech from the sessions were thematically coded into four themes:
Acknowledgements, commands, information requests, and updates.
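Once lines are coded, per-theme frequencies can be tallied directly for between-team comparison. The lines and codes in this sketch are hypothetical examples, not session data:

```python
from collections import Counter

# Hypothetical coded speech lines (the study coded 1,568 real lines).
coded_lines = [
    ("Okay, got it.", "acknowledgement"),
    ("Pull the strap tighter.", "command"),
    ("Where is the wound located?", "information request"),
    ("The bleeding has slowed down.", "update"),
    ("On it now.", "acknowledgement"),
]

# Tally how often each theme occurs across the transcript.
theme_counts = Counter(code for _line, code in coded_lines)
```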

[4.5] Results
  • AI teams worked just as fast as human teams.
  • AI teammates received fewer acknowledgements from humans than human teammates.
  • No difference in cognitive load between teams.
  • Participants trusted AI teammates slightly less than human teammates, but the gap was small.
  • No difference in self-efficacy between teams.
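The between-team comparisons above (completion time, cognitive load, trust, self-efficacy) call for a two-sample test; Welch's t-test is one common choice, used here purely as an illustration since the study's actual statistical procedure isn't specified. A minimal sketch with hypothetical completion times:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal
    variances (e.g., AI-teammate vs. human-teammate task times)."""
    return (mean(a) - mean(b)) / math.sqrt(
        variance(a) / len(a) + variance(b) / len(b)
    )

# Hypothetical completion times in seconds for each condition; a small |t|
# is consistent with finding no difference between the teams.
ai_team = [112.0, 98.0, 105.0, 120.0]
human_team = [110.0, 101.0, 99.0, 118.0]
t = welch_t(ai_team, human_team)
```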

Reflection


Future Iterations
  • Visual indications on device screen to accompany voice instruction.
  • Development of an offline version of the virtual agent.


Learnings
  • Strategically designed communication is key for user experience.
  • Voice interaction requires accounting for context and user capabilities.