AI Builder: Improving the AI Model Creation Experience
During my time at Microsoft, I had the opportunity to work on its AI tools, focusing specifically on AI Builder, a prediction-model tool integrated into the Microsoft Power Platform.
Overview
Role: Lead UX Designer
Responsibilities:
UX Research
Product Strategy
Information Architecture
Interaction Design / Prototyping
Visual Design
Company: Microsoft
Project: AI Builder – A low-code AI platform within Microsoft Power Platform
Objectives
AI Builder allows businesses to integrate AI capabilities into Power Apps and Power Automate workflows without requiring deep coding knowledge. My goal was to enhance the user experience by addressing key pain points in the AI model creation process.
Key Objectives:
Reduce user drop-off: Streamline the steps required to create a prediction model, minimizing friction throughout the process.
Improve model consumption: Make it easier for users to consume and leverage AI models within Power Apps.
Enhance model scoring metrics: Improve the clarity and usefulness of scoring metrics displayed on the model details page.
Challenges & Pain Points
The AI Builder platform presented several challenges that were impacting the user experience:
Technical Language: Many users found the language used in the interface too technical, creating confusion and deterring progress.
Data Sample Selection: Choosing the right data sample for training the model was a major pain point. Users often struggled to understand which sample would be most effective, stalling their progress.
Lack of Guidance: The interface offered little in-product guidance for evaluating training data, leaving users to make selections with insufficient information, which led to uncertainty and errors.
Unclear Scoring Metrics: Users didn’t fully understand the meaning of the scoring metrics displayed on the model details page, leading to confusion about the model’s performance.
Design Kick-Off
At Microsoft, design kick-offs followed a collaborative approach called the “Trio” model, which brought together representatives from product management, design, content, and engineering. We began each session with a whiteboarding exercise to ensure alignment and open communication, capturing knowns and unknowns and surfacing the questions that would guide next steps. This process kept all stakeholders aligned on both the problems and the direction we wanted to take.
UX Flows & Discovery
Since AI Builder was an existing product, our initial task was to assess the current user flows, identify friction points, and determine where users typically dropped off or faced difficulties. We gathered insights through stakeholder interviews, user feedback, and data analysis, which helped us map out the existing flow and pinpoint the areas requiring intervention.
User Testing & Iteration
Once the initial prototypes were ready, our product owners took them out for user testing. The feedback confirmed improvements in usability and clarity, particularly around data sample selection and scoring metrics, and completion rates for creating a prediction model rose significantly, validating our design decisions.
Working within Microsoft’s fast-paced, engineering-driven culture, we quickly moved into finalizing the design. During this phase, I collaborated closely with engineering and content teams to ensure language was user-friendly and that any new engineering requirements were properly accounted for.
Key Design Solutions
1. Simplified Language & Clear Instructions
We reduced technical jargon and replaced it with clear, simple language, tailored to the knowledge level of the typical user. The goal was to make AI more accessible to business users with limited technical expertise.
2. Enhanced Data Sample Guidance
We introduced additional context around the data samples, such as data descriptions, sample size, and example use cases, to help users make more informed decisions when selecting training data.
3. Transparent Scoring Metrics
The scoring system was overhauled to provide clearer, more actionable insights. We provided users with explanations of what the scores meant and how they could use them to improve model accuracy.
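To give a sense of the kind of metrics being explained, here is a minimal sketch of how common prediction-model scores are computed. This is an illustrative example only; the specific metrics and formulas AI Builder surfaces are not detailed in this case study, and `score_predictions` is a hypothetical helper, not part of the product.

```python
def score_predictions(y_true, y_pred):
    """Return accuracy, precision, and recall for binary predictions.

    Illustrative only: a sketch of common classification metrics,
    not AI Builder's actual scoring implementation.
    """
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives
    correct = sum(1 for t, p in pairs if t == p)

    return {
        "accuracy": correct / len(pairs),
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }

# Example: eight labeled outcomes compared against model predictions
scores = score_predictions(
    y_true=[1, 0, 1, 1, 0, 0, 1, 0],
    y_pred=[1, 0, 1, 0, 0, 1, 1, 0],
)
```

Pairing each score with a plain-language explanation like the comments above (e.g., "precision: of the records the model flagged, how many were actually correct") was the kind of framing that helped users act on the numbers rather than just see them.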
Final Results & Impact
By addressing the pain points through targeted UX improvements, we achieved significant enhancements to the AI Builder experience:
Reduced drop-off rate between each step of the prediction model creation process.
Increased user satisfaction with the AI model consumption feature in Power Apps.
Improved understanding of model performance, as evidenced by more confident use of scoring metrics.
The collaboration across design, engineering, and content teams, along with feedback from real users, resulted in a streamlined and user-friendly experience for building AI models.
Conclusion
Through close collaboration with cross-functional teams, thorough user research, and iterative design, we were able to transform the AI Builder experience into something more intuitive and accessible. By breaking down technical barriers and guiding users through the model creation process, we helped empower businesses to better leverage AI within their workflows.