- In 4 months, how do you design and deploy an interface to operate 5 or more autonomous robots?
- How do you lower the cognitive load for the sole mission operator?
- How do you facilitate teaming between the mission operator and the planning algorithm, and how do you establish trust in that algorithm?
- How do you inform the mission operator of the necessary robot and subsystem metrics and warnings?
- How do you visualize incoming lidar information to map the challenge arena as it’s being explored?
- How do you visualize and interact with search and rescue artifacts to score points?
- How do you intervene when a robot does something it shouldn't?
- How can you speed up artifact submission while lowering the error rate?
Outcome:
- Designed and deployed a mission interface for a single operator, which was used to operate 11 autonomous robots at once.
- Leveraged video game design techniques to lower cognitive load and increase situational awareness.
- Used a default top-down view that the operator could click and drag to move around the mapped search and rescue area.
- The operator could orient the 3D mapped area in any direction to see details from different angles, then click once to set it back to the default view (see the camera sketch after this list).
- Created an already-seen folder for artifacts. This, paired with custom hotkeys, allowed the operator to quickly review images from the robots without the risk of deleting artifacts (see the review-queue sketch after this list).
- A bucketing system presorted artifacts based on the machine vision confidence score: how likely the detection was to be a real artifact, and what type it was identified as (see the bucketing sketch after this list).
- The main interface view is split into three core sections:
- The left section of the interface shows each robot and its metrics; each robot card includes an E-Stop button (see the E-Stop sketch after this list).
- The center column shows what the planner is doing with each robot and alerts the operator when a warning or possible error needs review.
- The right section of the interface holds an interactive 3D map of the explored space and its lidar data (see the point cloud sketch after this list).
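
The writeup doesn't name the rendering stack, so the sketches below are illustrations, not the team's actual implementation. First, the click-to-reset camera: a minimal sketch assuming a Three.js scene with OrbitControls, where `resetButton` is a hypothetical UI element.

```typescript
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls';

// Hypothetical default pose: a top-down camera directly above the arena map.
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 50, 1);
camera.lookAt(0, 0, 0);

const renderer = new THREE.WebGLRenderer();
const controls = new OrbitControls(camera, renderer.domElement);
controls.saveState(); // remember the top-down default pose

// The operator orbits freely to inspect details from any angle;
// one click restores the saved top-down view.
const resetButton = document.getElementById('reset-view')!;
resetButton.addEventListener('click', () => controls.reset());
```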
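The hotkey-driven already-seen folder could be wired up as below; the queue structure, key bindings, and `submitArtifact` helper are all hypothetical.

```typescript
// Hypothetical review queue: hotkeys file images into an already-seen
// folder instead of deleting them, so no artifact is ever lost.
interface ArtifactImage { id: string; robot: string; url: string; }

const reviewQueue: ArtifactImage[] = [];
const alreadySeen: ArtifactImage[] = [];

function submitArtifact(image: ArtifactImage): void {
  // Placeholder for the real scoring submission.
  console.log(`submitting ${image.id} from ${image.robot}`);
}

window.addEventListener('keydown', (event) => {
  const current = reviewQueue[0];
  if (!current) return;
  if (event.key === 's') {
    // "Seen": file the image away rather than delete it.
    alreadySeen.push(reviewQueue.shift()!);
  } else if (event.key === 'Enter') {
    submitArtifact(current);
  }
});
```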
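A sketch of the presorting idea; the confidence thresholds and scoring fields are illustrative, not from the original writeup.

```typescript
// Bucket detections by the vision model's confidence so the operator
// reviews the most likely artifacts first.
interface Detection { id: string; label: string; score: number; }

type Bucket = 'high' | 'medium' | 'low';

function bucketOf(d: Detection): Bucket {
  // Illustrative thresholds.
  if (d.score >= 0.8) return 'high';
  if (d.score >= 0.5) return 'medium';
  return 'low';
}

function presort(detections: Detection[]): Map<Bucket, Detection[]> {
  const buckets = new Map<Bucket, Detection[]>([
    ['high', []], ['medium', []], ['low', []],
  ]);
  for (const d of detections) buckets.get(bucketOf(d))!.push(d);
  // Within each bucket, group by what the model identified the artifact as.
  for (const list of buckets.values()) {
    list.sort((a, b) => a.label.localeCompare(b.label) || b.score - a.score);
  }
  return buckets;
}
```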
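One plausible wiring for the per-robot E-Stop, assuming commands travel over a WebSocket to a ground-station bridge; the endpoint and message shape are assumptions, not the team's actual protocol.

```typescript
// Each robot card's E-Stop button sends a stop command for that robot.
const bridge = new WebSocket('ws://ground-station.local:9090');

function eStop(robotId: string): void {
  bridge.send(JSON.stringify({ type: 'e_stop', robot: robotId }));
}

document.querySelectorAll<HTMLButtonElement>('[data-robot]').forEach((btn) => {
  btn.addEventListener('click', () => eStop(btn.dataset.robot!));
});
```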
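Finally, a minimal sketch of appending incoming lidar scans to the 3D map, again assuming Three.js and that each scan arrives as a flat `[x, y, z, ...]` array.

```typescript
import * as THREE from 'three';

// Build a renderable point cloud from one lidar scan.
function pointCloudFromScan(points: Float32Array): THREE.Points {
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(points, 3));
  const material = new THREE.PointsMaterial({ size: 0.05, color: 0x44ccff });
  return new THREE.Points(geometry, material);
}

// Each incoming scan is added to the scene, so the map grows
// as the arena is explored.
function onScan(scene: THREE.Scene, scan: Float32Array): void {
  scene.add(pointCloudFromScan(scan));
}
```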