AI Studio: AI Adjudication Prototype
Project Overview
Part of the AI Studio’s Defense & Security vertical, this project focused on designing an AI-assisted review experience that improves transparency and confidence in how analysts interpret automated insights.
I led the UX direction from concept to delivery, defining the user flow, facilitating stakeholder sessions, and collaborating with AI engineers to turn complex reasoning into a clear and explainable interface.
Using Figma Make and prompt engineering, I created multiple layout directions, refined prompts based on feedback, and helped stakeholders align quickly on a final design. This workflow cut early concept time by more than half and influenced how the AI Studio approaches ideation across verticals.
A walkthrough of the AI-assisted review flow highlighting document intake, risk scoring visualization, and explainability features.
Problem & Context
Analysts were spending extensive time manually reviewing and cross-checking long reports with little visibility into how automated models scored or flagged risks.
The goal was to design a workflow that made AI reasoning visible, traceable, and easy to validate without adding complexity for the analyst.
Tools
Figma • Figma Make • Jira
Team
UX Designer, UX/CX Consultant, Full Stack Engineers, Product Manager
My Role
Led UX design within the AI Studio’s D&S vertical
Defined the end-to-end experience from document intake to result summary
Partnered with AI engineers to align model outputs with explainable design patterns
Created layouts, interactions, and data visualization logic
Co-led stakeholder workshops and testing sessions to validate direction
Design Goals
Transparency: Show how AI decisions are derived and supported
Clarity: Present risk information in a simple and verifiable format
Speed: Reduce review time while preserving control and accuracy
Process
Discovery
Reviewed existing analyst workflows and documentation methods
Identified areas where AI could support decision-making without replacing human oversight
Concept Exploration
Used Figma Make to generate two design directions: a structured dashboard and a conversational layout
Applied prompt engineering to refine structure, hierarchy, and tone
Gathered stakeholder feedback and aligned on a hybrid layout combining both directions
Reduced the concept phase by more than half compared to previous projects
Design Development
Integrated explainability through rationale snippets, field mapping, and source references
Created a color-coded scoring system across 13 evaluation categories for quick scanning
Designed a conversational side panel for real-time clarification and recalculation
Extended design system patterns to fit D&S use cases, adapting global AI Studio standards
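The color-coded scoring system described above can be sketched as a simple mapping from a normalized category score to a risk band and its display color. This is a hypothetical illustration only: the category names, thresholds, and hex values are assumptions, not the actual design-system tokens.

```typescript
// Illustrative sketch of color-coded scoring across evaluation categories.
// Thresholds, colors, and names are hypothetical, not the shipped system.

type RiskBand = "low" | "moderate" | "high";

interface CategoryScore {
  category: string; // one of the 13 evaluation categories
  score: number;    // normalized 0-100 model output
}

// Map a numeric score to a band so analysts can scan by color.
function toBand(score: number): RiskBand {
  if (score >= 70) return "high";
  if (score >= 40) return "moderate";
  return "low";
}

const BAND_COLORS: Record<RiskBand, string> = {
  low: "#2e7d32",      // green
  moderate: "#f9a825", // amber
  high: "#c62828",     // red
};

function colorFor(entry: CategoryScore): string {
  return BAND_COLORS[toBand(entry.score)];
}
```

Keeping the band logic in one function means every surface (dashboard tiles, side panel, summaries) stays visually consistent when thresholds are tuned.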
Testing & Iteration
Conducted internal reviews for clarity, speed, and comprehension
Refined hierarchy, contrast, and interaction states based on user feedback
Key Design Solutions
Explainability Panel
Displays the reasoning behind each AI insight, linking scores to data fields for quick context.
Interactive Scoring Visualization
Color-coded indicators communicate confidence levels and patterns clearly.
Conversational Feedback Loop
Analysts can clarify or adjust information through a side panel, triggering instant recalculation.
Traceability View
Each insight links back to its source, ensuring accountability and verification.
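The traceability and feedback-loop patterns above imply a data shape where every insight carries its evidence and the overall score can be recomputed the moment an analyst adjusts the set. A minimal sketch, with interface and field names that are assumptions rather than the real schema:

```typescript
// Hypothetical model for traceable insights and instant recalculation.

interface SourceRef {
  documentId: string;
  page: number;
  excerpt: string; // rationale snippet surfaced in the explainability panel
}

interface Insight {
  field: string;        // mapped document field
  score: number;        // 0-100 risk score for this insight
  sources: SourceRef[]; // each insight links back to its evidence
}

// When an analyst clarifies or discards an insight in the side panel,
// the overall score is recalculated from the remaining set.
function overallScore(insights: Insight[]): number {
  if (insights.length === 0) return 0;
  const total = insights.reduce((sum, i) => sum + i.score, 0);
  return Math.round(total / insights.length);
}
```

Because every `Insight` holds its `SourceRef`s, the traceability view and the scoring view read from the same structure, so verification never drifts from what the score was computed on.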
Outcome
Delivered an AI-powered workflow that demonstrated how design can make automated insights trustworthy and practical for analytical work.
The work received strong feedback for its clarity, transparency, and adaptability, and is now referenced across the AI Studio as a model for explainable AI experiences.
Next Steps
Continue refining prompt engineering for faster stakeholder alignment
Apply learnings from this project to improve collaboration speed across future AI Studio workflows
Reinforce design standards for explainability and traceability across verticals
