May 24, 2025

Why Designing AI Systems Feels So Hard (And What We Can Do About It)

"I get what AI does. But I just can’t figure out how to design for it."

That was my reaction after wrestling with a seemingly simple AI feature. All I wanted was to design a chatbot that gives helpful responses. But the more I tried to map out interactions and edge cases, the more the whole thing felt like trying to sketch a tornado. Every time I thought I understood what the system would do, it would surprise me. Sound familiar?

Then I stumbled on a research paper from CHI 2020 titled "Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design". It felt like someone had looked inside my brain and put my confusion into structured, articulate words. Here’s what I learned.


The Big Question: Why is designing for AI so hard?

At first glance, it doesn’t seem like AI should be a design nightmare. After all, UX designers have worked with complex systems for years. But the paper lays out why AI is a different beast.

The authors say there are two big reasons:


1. Capability Uncertainty: It's like designing for a shape-shifting tool

With most tech, you know what the system can and can’t do. A button opens a dialog. A form submits data. Easy peasy.

But with AI? Imagine trying to design a hammer, except you're not sure if it's going to be a hammer, a wrench, or a cheese grater tomorrow.

AI systems learn and evolve. What they can do today might not hold tomorrow. They can surprise you, in good ways and bad. And as a designer, it's tough to create thoughtful interactions when you don't know what the system will be capable of in the future.


2. Output Complexity: The AI doesn’t just change; it reacts

Some AI systems have simple outputs. A spam filter, for example, just says "spam" or "not spam." You can design around that.

But what about systems that generate open-ended responses? Think of Siri, Google Search, or Spotify recommendations. The outputs are like improv comedy — varied, reactive, and often unpredictable.

You can't sketch or wireframe every possible response. And if the AI makes a mistake, it’s not just annoying—it could break trust.


A Helpful Framework: The 4 Levels of AI Design Complexity

The researchers propose a model to categorize AI systems based on how hard they are to design for. Here it is:

Level 1: Simple and predictable

  • Example: A toxicity detector that flags profane comments.

  • Easy to design for because outputs are limited and known.

Level 2: Predictable, but a wider output range

  • Example: Route recommendation systems.

  • Still manageable, but trickier to anticipate all edge cases.

Level 3: Learning systems with simple outputs

  • Example: Adaptive menus that learn what you click most.

  • The system evolves, but the output isn't too wild.

Level 4: Learning systems with complex, open-ended outputs

  • Example: Siri, or face tagging in photo apps.

  • Super hard to design for. The system keeps changing, and its outputs are nuanced.

Most traditional design tools and processes work well for Levels 1 and 2. But when you hit Levels 3 and 4, you're no longer designing for a tool—you’re designing for a co-pilot that thinks and grows.


So What Do We Do About It?

The paper doesn’t leave us hanging. It offers several ways forward:

1. Acknowledge the AI is "alive" (kind of)

Stop treating AI like a static product. Think of it as a living, evolving system. That mindset shift alone helps us accept that prototypes won’t be perfect.

2. Design with "unknowns" in mind

When we design for AI, we should assume variability. Build in ways to recover from errors gracefully. Offer explanations. Give users control. Design the guardrails, not just the main road.
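One way to make "design the guardrails" concrete is to wrap every model call so the UI always has something safe to show, even when the model fails or returns garbage. The sketch below is a minimal illustration of that pattern, not any real API: `fake_model` is a made-up stand-in for an unpredictable inference call, and the validation thresholds are arbitrary.

```python
FALLBACK = "Sorry, I couldn't come up with a good answer. Try rephrasing?"

def fake_model(prompt: str) -> str:
    # Hypothetical stand-in for an unpredictable AI: answers short
    # prompts, raises on long ones to simulate a failure mode.
    if len(prompt) > 40:
        raise RuntimeError("model timeout")
    return f"You asked about: {prompt}"

def guarded_reply(prompt: str) -> dict:
    """Return a reply plus metadata the UI can use to explain itself."""
    try:
        text = fake_model(prompt)
    except Exception:
        # Recover gracefully instead of surfacing a raw error.
        return {"text": FALLBACK, "fallback": True}
    # Output validation: treat empty or runaway replies as errors too.
    if not text or len(text) > 500:
        return {"text": FALLBACK, "fallback": True}
    return {"text": text, "fallback": False}

print(guarded_reply("weather"))
print(guarded_reply("x" * 50))
```

The `fallback` flag is the point: it lets the interface tell the user what happened and offer a way out, rather than pretending the answer is fine.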

3. Embrace new tools and techniques

Tools like Wizard-of-Oz simulations, interactive machine learning, and even rule-based mockups can help us play with AI behavior before it’s fully built.
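A rule-based mockup can be as small as a keyword lookup table, which is enough to let a designer walk through conversation flows before any model exists. Here is a toy sketch in that spirit; the intents and canned replies are all invented for illustration.

```python
# Each rule maps a trigger keyword to a canned response a designer wrote.
RULES = [
    ("refund", "I can help with refunds. Can you share your order number?"),
    ("hours",  "We're open 9am to 5pm, Monday through Friday."),
    ("hello",  "Hi there! What can I help you with today?"),
]
DEFAULT = "I'm not sure I follow. Could you rephrase that?"

def mock_reply(message: str) -> str:
    """Return the first matching canned response, or a default."""
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return DEFAULT

print(mock_reply("Hello!"))
print(mock_reply("What are your hours?"))
print(mock_reply("Tell me a joke"))  # no rule matches, falls to DEFAULT
```

The default branch doubles as a cheap way to study the failure experience: every unmatched message is a preview of how your real AI will feel when it misunderstands.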

4. Collaborate closely with AI engineers

Designers can’t work in isolation. We need tight loops with data scientists and engineers to understand the limitations and possibilities of models in real time.

5. Treat fairness, ethics, and trust as core UX issues

Don’t bolt on fairness after launch. Bias, accessibility, and error impact should be considered from the first wireframe.


Final Thoughts

AI feels magical until it doesn’t. As designers, researchers, and builders, it’s on us to bridge the gap between AI’s technical wizardry and human experience.

This paper reminded me that it’s okay to feel overwhelmed by AI. The uncertainty and complexity aren’t signs that you’re bad at your job. They’re signs that the job has changed. And the only way forward is to evolve how we design.

Not with more control. But with more curiosity, humility, and collaboration.

Balance cost, quality, and deadlines with TestZeus' Agents.

Come, join us as we revolutionize software testing with the help of reliable AI.

© 2025. All Rights Reserved. Privacy Policy
