Virtual First Responder
The digital assistant that’s a literal life-saver.

My Role: UX Researcher
Timeline: November 2024 - May 2025
Client: Safety, Human Factors, and Resilience Engineering Lab
[Usability Testing] [Survey] [A/B Testing] [Content Analysis] [AI]
Results: Stabilized emergencies as fast as trained professionals
Groundwork
Humans speak to robots differently than they speak to other humans, yet many virtual assistants are designed to be addressed as though they were human. This mismatch slows down collaboration, which can have deadly consequences in high-stakes situations.
Process
[1] User Research
Pain points in human-robot collaboration:
Team Communication: Rapport building
Cognitive Load: Limiting information overload
Trust: Faith in robot teammates
Self-Efficacy: Encouragement to succeed
[2] Literature Review
Mental model match: Large Language Models
Specific prompting was used to compensate for the human-robot communication pain points identified in user research.
“For this session, act as a 911 call taker. I will act as the 911 caller. Your goal is to ask me questions to assess the emergency I report and provide me with authoritative, lifesaving instruction that will help me respond to the emergency until first responders arrive. When you respond, always use less than 30 words. Always ask a follow-up question. Always ask one question at a time. Provide follow-up instructions based on my answers to your questions. Encourage me to keep talking by asking questions needed to assess the situation and provide lifesaving instruction appropriate to the situation. Always be courteous, polite, and professional.”
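The constraints embedded in the prompt above (responses under 30 words, one follow-up question at a time) can be checked programmatically. The sketch below is a hypothetical validator for illustration only; the function name and rules are assumptions drawn from the prompt text, not part of the study's actual tooling.

```python
def meets_prompt_constraints(reply: str) -> bool:
    """Check an assistant reply against the session prompt's rules:
    fewer than 30 words, and exactly one follow-up question."""
    words = reply.split()
    if len(words) >= 30:
        return False
    # The prompt asks for one question at a time.
    if reply.count("?") != 1:
        return False
    return True

# Hypothetical replies for illustration:
print(meets_prompt_constraints("Is the person conscious and breathing?"))  # True
print(meets_prompt_constraints("Apply firm pressure to the wound now."))   # False: no question
```

A check like this could flag off-script model replies during pilot sessions before they reach participants.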
[4] User Testing
[4.1] Usability Testing
56 users (novices) were given two challenges to complete, both associated with high mortality rates.
Bleeding: Apply a tourniquet.
Choking: Perform the Heimlich Maneuver.
Participants were given one of two partners to undertake these challenges with:
AI Teammate
Human Teammate
[4.2] Survey
After the tests, participants reported on:
1. Cognitive Load
2. Trust
3. Self-Efficacy
[4.3] Content Analysis
For team communication, the 1,568 lines of speech from the sessions were thematically coded into four themes: acknowledgements, commands, information requests, and updates.
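The coding step above amounts to labeling each utterance and tallying the labels. The sketch below illustrates that tally; the transcript lines and their assigned codes are hypothetical placeholders, and only the four theme names come from the study.

```python
from collections import Counter

THEMES = {"acknowledgement", "command", "information request", "update"}

# Hypothetical coded transcript: (utterance, assigned theme).
coded_lines = [
    ("Okay, got it.", "acknowledgement"),
    ("Pull the strap tighter.", "command"),
    ("Is the bleeding slowing down?", "information request"),
    ("The tourniquet is on now.", "update"),
    ("Thanks.", "acknowledgement"),
]

# Validate every code against the theme set, then count per theme.
assert all(theme in THEMES for _, theme in coded_lines)
theme_counts = Counter(theme for _, theme in coded_lines)
print(theme_counts["acknowledgement"])  # 2
```

Comparing such per-theme counts between AI and human sessions is what surfaces differences like the acknowledgement gap reported below.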
[5] Results
- AI teams worked just as fast as human teams.
- AI teammates received fewer acknowledgements from humans than human teammates.
- No difference in cognitive load between teams.
- Humans trusted AI teammates slightly less than human teammates, but not by much.
- No difference in self-efficacy between teams.
Reflection
- Visual indications on the device screen to accompany voice instruction.
- Development of an offline version of the virtual agent.
- Strategically designed communication is key for user experience.
- Voice interaction requires accounting for context and user capabilities.