Something strange is happening in design hiring right now.

Hiring for traditional UX roles has slowed. You've seen the layoffs, the freezes, the "we're restructuring the design org" emails. But if you look at where new roles are actually opening up, there's a clear pattern. Almost all of them involve AI in some way.

OpenAI's design team has grown from a handful of people to dozens. Google DeepMind, Anthropic, Midjourney, Runway, and hundreds of startups are all hiring designers. Not to make AI look pretty, but to make it actually usable. And it's not just AI companies. Banks, airlines, healthcare providers, retailers. Everyone is bolting AI into their products, and most of them are learning the hard way that the UX is the hard part.

The engineering is impressive. The interfaces? Often confusing, unpredictable, and difficult to trust.

That gap between what AI can do and what users can actually make sense of is where designers come in. And right now, there aren't enough of us who know how to think about this stuff properly.

This post is a practical guide to changing that. Whether you're a junior designer trying to stand out, a senior IC looking to specialise, or a design lead trying to figure out what your team should be learning, there's something here for you.

Why AI UX is a different kind of design problem

Before we get into the how, it's worth understanding why designing for AI isn't just regular UX with a chatbot stuck on top.

Most of what we know about interface design assumes deterministic systems. You press a button, something predictable happens. The same input gives the same output every time. We've built decades of design patterns around that assumption.

AI breaks it. Outputs are probabilistic. The same prompt can give different results. The system might be confidently wrong. It might work brilliantly for one user and fail completely for another depending on how they phrase things.

That changes everything. Information architecture works differently when the content is generated on the fly. Error states are more important when the system regularly gets things wrong. Trust becomes a design material you have to actively shape, not something you can take for granted.
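To make the probabilistic point concrete, here's a minimal sketch of why the same prompt can yield different outputs. It's a toy next-token sampler, not any real model's API: the logits are invented, and the key idea is that temperature zero is deterministic (argmax) while higher temperatures sample from a distribution.

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Sample one token from toy model scores.

    temperature == 0 falls back to greedy argmax (deterministic);
    higher temperatures flatten the distribution, so repeated calls
    with the same input can return different tokens.
    """
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax with temperature scaling (subtract the peak for stability).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    tokens = list(weights)
    probs = [weights[t] / total for t in tokens]
    return rng.choices(tokens, weights=probs, k=1)[0]

# Invented scores for the prompt "The capital of France is".
logits = {"Paris": 5.0, "Lyon": 2.5, "beautiful": 2.0}

rng = random.Random(42)
greedy = [sample_next_token(logits, 0, rng) for _ in range(5)]
sampled = [sample_next_token(logits, 1.5, rng) for _ in range(20)]
print(greedy)        # always "Paris": argmax is deterministic
print(set(sampled))  # typically more than one distinct token
```

The design consequence: a user who runs the same request twice may see two different answers, and the interface has to make that feel intentional rather than broken.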

The designers who are thriving in AI roles aren't necessarily the most technically skilled. They're the ones who've rewired how they think about the relationship between a user and a system. They've developed instincts for questions like: how much should the AI explain itself? When should a human stay in the loop? How do you help someone calibrate their trust in a system that's right 85% of the time?

If you can learn to think this way, you'll be in demand. So let's talk about how to get there.

Step 1: Reshape what you're reading and watching

You don't need to enrol in a machine learning course tomorrow. You need to start soaking in how people are thinking about AI as a design problem.

The single best starting point is Google's People + AI Guidebook at pair.withgoogle.com/guidebook. It's free, it's thorough, and it was written by designers who've actually shipped AI products at scale. It covers mental models, setting expectations, handling errors gracefully, and designing for trust. I'd genuinely recommend bookmarking it and revisiting it every few months as your understanding deepens.

Apple's Human Interface Guidelines for Machine Learning is another great resource. It covers when to show AI confidence, when to let users correct the model, and how to handle mistakes without destroying the experience. The principles apply well beyond Apple's ecosystem.

Microsoft's HAX Toolkit is worth your time too, especially their design patterns for co-pilot style experiences. Given how many products are shipping that exact pattern right now, understanding the thinking behind it is immediately useful.

For ongoing reading, Maggie Appleton's essays at maggieappleton.com are some of the most original writing on language model interfaces out there. Jakob Nielsen has been writing a lot about how AI changes fundamental usability rules, and whether or not you agree with all of his takes, they'll make you think. Growth Design does brilliant visual teardowns of AI product experiences that are perfect for building pattern recognition.

On YouTube, NN Group puts out research-backed content on AI UX patterns regularly. Figma's Config talks are worth watching, especially the sessions where their own team discusses designing AI features within Figma.

And follow people doing interesting work in the space. Amelia Wattenberger (former GitHub) does incredible explorations of LLM interface design. Kai Wong wrote "Design in the Age of AI" and posts consistently about how designers should be adapting. Both are worth following on LinkedIn and X.

As you consume all of this, you'll start noticing a set of recurring themes. Transparency and explainability. User control and agency. Error handling and graceful failure. Trust calibration. Progressive disclosure of complexity. These aren't just academic concepts. They're the vocabulary of AI design, and getting comfortable with them is the foundation for everything else.

Step 2: Add some structure to your learning

Reading and watching builds intuition, but at some point you need structured learning to fill in the gaps. Think of it less like going back to school and more like spending a few focused weekends getting your head around things your engineering teammates talk about daily.

The goal isn't to become a data scientist. It's to build enough technical literacy that you can have productive conversations about what's actually possible, ask better questions in sprint planning, and understand the constraints you're designing within.

AI for Designers by IDEO U is specifically made for creative professionals and focuses on thinking about AI as a design material. AI for Everyone by Andrew Ng on Coursera is the most accessible introduction to what AI can and can't do, and it takes maybe a weekend to get through. Elements of AI by the University of Helsinki is free, interactive, and surprisingly well-paced.

If you want to go deeper, Google's Machine Learning Crash Course is more technical but will transform your ability to follow along when engineers explain model limitations. You don't need to absorb everything. Focus on understanding the types of problems ML solves, common failure modes, and the vocabulary. Google's Conversation Design guidelines are essential if you're interested in chatbot or voice interfaces, which is where a huge amount of AI UX work is happening right now.

For the genuinely ambitious, CS50's Introduction to AI with Python from Harvard is challenging but even the first few weeks will give you a level of understanding that sets you apart from most designers.

Here's the key though. Don't just passively watch lectures. For every concept you learn, sketch out a design implication. Learn about classification models? Sketch a UI that shows classification confidence to a user. Learn about training data bias? Write a design principle for how your team might audit AI outputs. Keep a journal where you pair technical concepts with design applications. This becomes raw material for your portfolio later.
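As an example of that concept-to-implication habit, here's one way to sketch "show classification confidence to a user" in code. The thresholds and copy are illustrative assumptions, not research-backed numbers; in a real product they'd come from testing how users actually calibrate trust.

```python
def confidence_copy(label: str, confidence: float) -> str:
    """Translate a raw model probability into user-facing wording.

    The bands (0.90, 0.60) and phrasing are hypothetical placeholders;
    real cut-offs should be tuned with user research.
    """
    if confidence >= 0.90:
        return f"Categorised as '{label}'."
    if confidence >= 0.60:
        return f"This looks like '{label}', but double-check."
    return f"Possibly '{label}': low confidence, please review."

print(confidence_copy("Invoice", 0.97))
print(confidence_copy("Invoice", 0.72))
print(confidence_copy("Invoice", 0.41))
```

Even a ten-line sketch like this forces design questions: should users ever see a raw percentage, and what should the interface ask of them at each band?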

And a quick note on certifications. They're fine for providing structure, but nobody's getting hired because of a certificate. You'll get hired because you can show that you've designed thoughtful AI experiences. Focus on understanding over credential-collecting.

Step 3: Build something (this is the part that actually matters)

This is where most designers stall. They read the articles, take the courses, nod along, and then never actually make anything.

I looked through dozens of job descriptions for AI design roles at companies like Google, Anthropic, OpenAI, Figma, and Adobe. The pattern is obvious. They want to see that you've done the work, not that you've watched the videos.

Typical requirements look something like: "Portfolio demonstrating shipped AI features with evidence of user-centred design process" or "Strong portfolio showing end-to-end design of AI-powered experiences, including evidence of research, prototyping, and iteration."

The good news is that the barrier to building AI products has never been lower. And as a designer, you have an advantage that a lot of engineers don't: you can make things feel considered and polished, not just functional.

There are three solid approaches here.

Build your own AI product. Tools like Cursor, Claude, Bolt, v0.dev, and Replit mean you don't need to be a developer to ship something real. The key is to pick a project where the design decisions are genuinely interesting. A meal planning assistant that learns preferences over time. A writing feedback tool for a specific niche. An accessibility checker that uses AI to audit websites. A design system documentation generator. The design challenges in these projects (how do you show the AI learning? how do you present critique without demoralising someone? how do you handle hundreds of suggestions without overwhelming the user?) are what make your portfolio stand out. Nobody wants to see another ChatGPT wrapper with a nice skin on it.

Redesign an existing AI product. This is the designer's classic move, and it's particularly effective right now because so many AI products have terrible UX. Pick something well-known, identify a real usability problem (even informal testing with a few friends counts), document where users struggle, propose a redesign grounded in AI UX principles, prototype it, and test it. ChatGPT's conversation management, Midjourney's prompting experience, Google's AI Overviews, Notion AI's integration patterns. All of these have obvious UX problems worth tackling.

Contribute to an open-source or startup project. Many AI startups have more engineering talent than design talent. Reaching out with something like "I love your product, I noticed this usability issue, I'd love to help redesign that flow" is a surprisingly effective message. You get a portfolio piece and a reference. They get better design.

Whatever you build, document everything. The process matters as much as the output. Screenshot your research. Record your user tests. Save your iterations. Write up your rationale. The best AI UX portfolios show the thinking, not just the screens. Something like "I chose to show the AI's confidence score because user research revealed that participants over-trusted the output when no indicator was present" is the kind of sentence that gets you hired.

Step 4: Make your portfolio tell the right story

If you're positioning yourself as someone who can design AI products, your portfolio needs to show that you understand what makes AI UX different from regular UX.

Design leads at AI companies consistently talk about looking for a few specific things.

Systems thinking. AI features don't exist in isolation. They want to see that you think about how AI integrates into the broader product, how it affects adjacent features, and how the experience changes over time as the model learns or the user's behaviour shifts.

Comfort with ambiguity. AI products involve more uncertainty than traditional products. Show projects where the requirements weren't clear, where you had to define the problem space yourself, where you explored multiple directions before converging.

Technical fluency. Not expertise, fluency. You don't need to explain backpropagation. But a sentence like "the model had a 15% error rate on edge cases, so we designed a human-review step for low-confidence outputs" shows that you understand the constraints you're designing within. Engineers want to work with designers who get this.
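That human-review step can be sketched in a few lines. This is a hypothetical illustration of the pattern, not anyone's production system: predictions above an assumed confidence threshold publish automatically, everything else lands in a review queue.

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.80  # hypothetical cut-off, tuned per product

@dataclass
class Prediction:
    item_id: str
    label: str
    confidence: float

@dataclass
class Router:
    auto_published: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def route(self, pred: Prediction) -> str:
        """Publish confident predictions; queue the rest for a human."""
        if pred.confidence >= REVIEW_THRESHOLD:
            self.auto_published.append(pred)
            return "auto"
        self.review_queue.append(pred)
        return "human_review"

router = Router()
for pred in [
    Prediction("a1", "approved", 0.95),
    Prediction("a2", "approved", 0.55),
    Prediction("a3", "rejected", 0.88),
]:
    router.route(pred)

print(len(router.auto_published), len(router.review_queue))  # prints "2 1"
```

Being able to describe this flow, and the UX of the review queue it creates, is exactly the kind of fluency engineers notice.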

AI-specific research methods. Standard usability testing is necessary but not sufficient. Show that you've thought about Wizard of Oz testing (simulating AI behaviour before the model exists), longitudinal studies, trust measurement, and mental model research.

Ethical consideration. Bias, privacy, transparency, autonomy. These aren't nice-to-haves in AI design. If your portfolio doesn't address any of them, it reads as naive.

For structure, each case study should cover the context (what was the product, who were the users, what was the AI capability and its limitations), the design challenge (what made this hard, specifically because AI was involved), your research (what you learned about users' mental models of AI), your design exploration (show range, especially around different levels of AI autonomy and transparency), your solution and rationale, and the impact.

Three to four strong case studies is plenty. At least two should involve AI. The others can showcase transferable skills like design systems, accessibility, or complex interaction design.

Step 5: Connect with the right people

AI design is still a small community, which means the right connections have an outsized impact.

ADPList is great for finding mentors at companies you're interested in. Config (Figma's conference) regularly features talks on AI design. AI UX meetups are popping up in cities and online. If there isn't one near you, starting one is its own form of credibility. On LinkedIn, share your learning, post about AI design principles you're exploring, write about what you're building. Hiring managers are scrolling and looking for exactly this.

The most effective networking comes from contributing, not asking. Share a teardown of an AI product's UX. Offer a usability audit to someone building something. Comment on people's work with specific, substantive observations. "I'd love to pick your brain about AI design" doesn't open doors. "I noticed your team redesigned the Copilot suggestion UI and I'm curious how you handled the tension between inline suggestions and focus disruption" does.

And don't limit yourself to other designers. The designers who thrive in AI environments are the ones who can talk to ML engineers, data scientists, and researchers. Join AI communities that aren't design-specific. Attend ML meetups. Lurk in research Discords. You don't need to understand everything. You need to understand enough to ask good questions. Engineers love working with designers who meet them partway.

Step 6: Get yourself in the room

When you're ready to start looking, know that AI design roles don't always advertise themselves clearly.

Check career pages directly at companies like Anthropic, OpenAI, Google DeepMind, Figma, Adobe, Runway, and Hugging Face. Use AI-specific job boards like ai-jobs.net and Y Combinator's Work at a Startup. Set up LinkedIn alerts for terms like "AI UX Designer," "AI Product Designer," and "ML Product Designer."

But also read between the lines. Many AI design roles are listed as generic "Product Designer" or "UX Designer" positions with AI mentioned somewhere in the description. If a company is shipping AI features, and almost everyone is now, their design roles probably involve AI work.

Position yourself clearly. Your headline shouldn't just say "UX Designer." Something like "Product Designer | AI & ML Experiences" or "UX Designer Specialising in Human-AI Interaction" immediately signals that you understand this space.

Reframe past experience through an AI lens where it's honest to do so. Did you design a recommendation feature? That's AI. Search relevance? That involves ML. Data-heavy dashboards? Adjacent to AI analytics. You probably have more relevant experience than you think.

And don't underestimate the speculative application. If there's a company you want to work for, don't wait for the perfect listing. Send a concise message to their Head of Design with a link to a relevant portfolio piece. "I noticed users struggle with [specific problem in your product]. Here's how I'd approach it." At smaller, fast-growing AI companies, this works more often than you'd expect.

Step 7: What to expect in the interview

AI design interviews have a specific flavour worth preparing for.

The design challenge will almost certainly come up, either as a take-home or a live whiteboard session. You'll get prompts like "design an AI-powered feature for our product" or "how would you handle onboarding for an AI writing assistant." The thing that separates strong candidates from everyone else isn't visual polish. It's showing that you've thought about what makes AI different. How does the user understand what the AI is doing? What happens when it's wrong? How does the user stay in control? How does trust develop over time? What are the edge cases? Candidates who treat the AI as a magic black box and only present the happy path get filtered out fast.

In the portfolio review, expect probing questions about why you made specific decisions around transparency or user control, how you collaborated with engineers on technical constraints, and how you measured success (this is genuinely hard for AI features, and acknowledging that complexity honestly is better than pretending you had it all figured out).

Many interviews also include a round with an ML engineer or PM. You don't need to whiteboard an algorithm, but you should be able to discuss the difference between supervised and unsupervised learning conceptually, why training data matters for the user experience, and common failure modes like bias, hallucination, and distribution shift, and what they mean for design.

Be opinionated but curious. This is a young field. There aren't established right answers for a lot of these questions, and teams want people who have a point of view but stay open to learning. Show that you've actually used the company's product and have specific observations about the experience. Ask thoughtful questions. "How does the design team influence model behaviour?" and "What does your process look like when the model's capabilities are still being developed?" signal that you understand the territory.

Where to start based on where you are

If you're a student or career changer, focus on building a portfolio from scratch. Go hard on Steps 1 through 3. Build two or three AI-focused projects, take the foundational courses, and start sharing your work publicly on LinkedIn. Apply for junior roles, internships, and startup gigs where you can get hands-on fast.

If you're early in your career (0-3 years), you've got the design foundations. Now you need AI depth. Volunteer for AI feature work at your current company, even if it's the messy stuff nobody else wants. If that's not an option, build side projects and contribute to open-source AI tools.

If you're mid-career (3-8 years), you have the design maturity. That's your advantage. A lot of people in AI roles are technically strong but design-weak. Go deeper on technical fluency, reposition your existing experience, and lean into the fact that you bring rigour and craft to a space that desperately needs it.

If you're a design leader, your job isn't to push pixels on AI features. It's to set the strategic direction. Understand capabilities, limitations, and implications well enough to guide your team, hire the right people, and build relationships with ML leadership. Write and speak about AI design publicly.

A few mistakes worth avoiding

Confusing "using AI tools" with "designing AI products." Using Midjourney for mood boards or ChatGPT for copy is not AI design experience. It's using tools. Designing the interfaces through which other people interact with AI is a completely different skill. Don't mix these up on your CV.

Prioritising visual polish over interaction design. AI products succeed or fail based on their interaction patterns, not their aesthetics. A beautiful chatbot that doesn't handle errors well is a bad product. Focus on flows, states, and feedback loops first.

Skipping the ethical questions. Bias, privacy, transparency, and user autonomy are core design challenges in AI, not afterthoughts. If your portfolio doesn't engage with any of them, it looks incomplete.

Getting the technical balance wrong. You're not an ML engineer and shouldn't pretend to be. But you also can't wave your hand and say "the engineers deal with that." The sweet spot is knowing enough to design well and ask the right questions.

Waiting until you feel ready. Nobody in this space feels fully prepared. It's moving too fast. Start building now, start applying now, and learn as you go.

Only targeting AI-native companies. OpenAI and Anthropic get all the attention, but some of the most interesting AI design challenges are at established companies integrating AI into existing products. Fintech, healthcare, education, travel, e-commerce. The design problems are often more nuanced because you're balancing AI capabilities with existing user expectations and workflows.

The opportunity is real

The path to becoming a designer who's genuinely good at AI UX isn't complicated. It's just a series of deliberate steps. Reshape what you consume. Build your understanding. Make things. Show your thinking. Talk to the right people.

The field is young enough that the patterns are still being written. The best practices are still being figured out. That's not a barrier to entry. That's what makes it exciting. There's room to not just join the conversation but to help shape how this whole thing works.

Start this week. Pick up the Google PAIR guidebook. Sign up for one of those foundational courses. Start sketching out a project idea. By this time next year, this won't be something you're thinking about getting into. It'll be what you do.