
MIT Lincoln Lab

AI Solutions for the U.S. Coast Guard Command Center

ABOUT
AI & The Department of Defense

As a design engineer working with MIT Lincoln Laboratory's AI Architecture & Algorithms Group, I played a crucial role in advancing human-AI teaming for mission-critical applications in the Department of Defense (DoD).

My work focused on the unique problems and opportunities within the U.S. Coast Guard's (USCG) Search and Rescue (SAR) mission and on explainable AI (XAI) solution prototypes collaboratively generated with USCG command center watchstanders. Due to the confidentiality of this project, some details and images have been omitted.

This project has been published in the AHFE International Journal: 

Link to Publication

ROLE

Design Engineer

Technology Policy

DELIVERABLE

Prototypes

Policy Agenda

Published Paper

TIMELINE

Summer 2024

1
AI in the Public Sector
Responding to President Biden’s 2023 Executive Order on the use of AI 

An estimated 80% of AI projects fail, often because teams misunderstand the problem to be solved, prioritize a technology solution over the real problems faced by end users, or lack the high-quality data needed to train effective models.

As the field of artificial intelligence advances, there is increased responsibility to ensure that deployed AI solutions are ethical, useful, and safe.

 

Guided by President Biden’s Executive Order governing the use and development of artificial intelligence (White House, 2023), the United States Department of Homeland Security (DHS) highlighted the critical need for explainability in all of its Science and Technology Directorate initiatives (Department of Homeland Security, 2024).

MIT Lincoln Laboratory user research team photo

2
Learn

Research Planning and Insights from Sector Boston

Before each field visit, the MIT LL team prepared research plans with objectives, questions, methods, and adaptable agendas to suit the dynamic nature of command center activities. These plans were updated after each visit based on synthesized findings, which influenced project decisions and shaped new research goals for subsequent visits.

During an initial visit to Sector Boston, the team learned about the U.S. Coast Guard's use of SAROPS, a Monte Carlo-based software tool for maritime search planning, by observing and mapping the SAR process from initial report to case closure.

Timeline overview of our research objectives, research questions, research methods, and outcomes for each field visit.
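For readers unfamiliar with Monte Carlo search planning, the sketch below illustrates the core idea in Python: scatter simulated particles around a last known position, drift them under an assumed current with random wind leeway, and bin the results into a probability-of-containment grid. This is a simplified illustration with made-up parameters, not SAROPS itself.

    # Illustrative Monte Carlo drift sketch (not SAROPS); all parameters are hypothetical.
    import random

    def simulate_drift(last_known_pos=(0.0, 0.0), n_particles=10_000,
                       hours=6, current=(0.5, 0.2), wind_leeway=0.1):
        """Return drifted (x, y) particle positions in nautical miles."""
        particles = []
        for _ in range(n_particles):
            x, y = last_known_pos
            for _ in range(hours):
                # deterministic drift from current plus random leeway from wind
                x += current[0] + random.gauss(0, wind_leeway)
                y += current[1] + random.gauss(0, wind_leeway)
            particles.append((x, y))
        return particles

    def containment_grid(particles, cell_size=1.0):
        """Bin particles into grid cells; each cell's share approximates the
        probability that the search object lies within it."""
        counts = {}
        for x, y in particles:
            cell = (int(x // cell_size), int(y // cell_size))
            counts[cell] = counts.get(cell, 0) + 1
        total = len(particles)
        return {cell: n / total for cell, n in counts.items()}

    if __name__ == "__main__":
        grid = containment_grid(simulate_drift())
        best = max(grid, key=grid.get)
        print(f"Highest-probability cell: {best} ({grid[best]:.1%} of particles)")

Search planners can then allocate assets to the highest-probability cells, which is the intuition behind tools like SAROPS.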

3
Design

Ideation and Simulation: Shaping the Future of SAR

During a "SAR of the Future" ideation session, MIT LL and USCG participants each brainstormed more than ten ideas for enhancing future SAR operations with AI or automation, which were then organized into thematic groups using affinity diagramming. Participants prioritized these ideas through forced ranking, voting on which topics they found most interesting and impactful for the USCG.

Additionally, in the absence of real SAR cases, the team observed and interviewed command center personnel, including the Command Duty Officer and Operations Unit, as they engaged in a simulated SAR scenario, gathering insights on command center roles and responsibilities.

Documentation of “SAR of the Future” ideation post-up, affinity diagramming, and voting, showing the top-voted category of data entry automation.

4
Refine

Harnessing Creativity for Human-Centered AI Prototyping in SAR

To foster creativity and build upon diverse ideas, our brainstorming sessions welcomed concepts beyond AI, leading to a shortlist of AI-dependent use cases believed to be feasible and impactful. Command center watchstanders then storyboarded three AI scenarios: creating narratives, analyzing calls to generate targeted questions, and populating vessel information, each depicted both when the AI functions well and when it fails. Based on these storyboards and insights into implementation, ethics, and explainability, MIT LL developed prototypes using Axure RP that incorporated best practices for human-centered AI design, focusing on user control, feedback mechanisms, and informed use of data.

Documentation of storyboarding use case “AI populates vessel information and property outcomes” – with a successful AI story on the left, and an unsuccessful AI implementation on the right.

5
Prototype

Prototypes That Inspire and Evolve

Overall, the prototypes based on successful AI implementation storyboards received very positive feedback. Watchstanders expressed high interest in using the prototyped features, if developed, and responded positively to several of the explainability features: “I really like how you can double check if the information generated is accurate. You can go directly to the source in the transcript. ... I like that the radio buttons say ‘suggested.’ It gives me confidence that I can verify the options.”
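To make the “go directly to the source” pattern concrete, here is a minimal Python sketch of the underlying idea: each AI-suggested field is paired with the transcript excerpt that supports it, so a watchstander can verify a value before accepting it. The field names, matching logic, and sample call are hypothetical; the actual prototypes were wireframes built in Axure RP, not working software.

    # Illustrative sketch of pairing AI-suggested fields with transcript excerpts.
    from dataclasses import dataclass

    @dataclass
    class SuggestedField:
        name: str            # e.g. "vessel length"
        value: str           # AI-suggested value, shown as "suggested" in the UI
        source_excerpt: str  # transcript span that supports the suggestion

    def attribute_fields(transcript: str, suggestions: dict) -> list:
        """Pair each suggested value with the first transcript sentence that
        mentions it, or flag it as unverified if no match is found."""
        sentences = [s.strip() for s in transcript.split(".") if s.strip()]
        fields = []
        for name, value in suggestions.items():
            excerpt = next((s for s in sentences if value.lower() in s.lower()),
                           "(no supporting excerpt found -- verify manually)")
            fields.append(SuggestedField(name, value, excerpt))
        return fields

    if __name__ == "__main__":
        call = "Mayday, this is a 35 foot fishing vessel. We have no oars and two people aboard."
        suggested = {"vessel length": "35 foot", "persons aboard": "two people"}
        for f in attribute_fields(call, suggested):
            print(f"{f.name}: {f.value!r}  <- \"{f.source_excerpt}\"")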

Suggestions for improvement included removing the call transcription during an incoming call to minimize distraction and bias, reformatting the question hierarchy, pulling more information from other systems, showing additional formats for latitude/longitude conversion, and incorporating i911, a platform that allows mariners to share their location with the USCG.

Further immersive scenario testing with real data would be needed to identify necessary and desirable functionality changes.

Digital prototype showing how a command center watchstander answering a SAR radio call can receive tailored follow-up questions and suggested answers.

Screenshot of a digital prototype showing how a call narrative (one of the many time-consuming inputs into a USCG documentation tool) can be generated from transcribed command center calls, highlighting in blue the key excerpts that influenced the generated text “35FT F/V NO OARS”.

6
Conclusion

Advancing Human-Centered Design for Explainable AI: Insights from USCG Command Center Operations

As successful adoption of new XAI tools necessitates designing “with” and not just “for,” this paper summarizes the application of human-centered, participatory design in partnership with USCG watchstanders.

Our work included collaborative generation of desirable use cases, elicitation of explainability requirements, and creation of wireframe prototypes to illustrate the utility of applying AI and XAI techniques to a common, time-consuming, high-consequence event: receiving and responding to a SAR call at a USCG command center. Iterations of prototypes yielded a proposed system that the participants judged to be of potentially high operational value.

Importantly, the prototypes contained several explainability features to help watchstanders determine whether or not AI models are working as expected. Future work would include creating functional software prototypes to test the effectiveness of the proposed explainability features.
