A sharp tool can still ruin the cut

When AI answers before you understand.

I never decided to go all in on AI, it just kept showing up.

At first, in small, practical ways. Generating content for design mocks or writing simple scripts to automate boring tasks. I even built a Figma plugin to rename all the icons in our icon library, partly to avoid the repetitive work, but also because I was curious whether I could make it work.

One thing led to another. I started playing around and finding excuses to explore. I built an iOS app to keep track of daily exercise, and played with V0 and Lovable to quickly brainstorm and generate rough design directions.

Then the play started to become serious when I took a stab at shipping real production PRs into the Shopify admin, and even more so after I joined the Sidekick team.

I wasn’t forced. I didn’t plan it. It was just inevitable.

When you get a sharp knife, you don’t keep cutting with the blunt one.
But somewhere along the way, I started noticing that this knife could slip.

From wandering to answers

Early on I changed how I approach research online. I stopped defaulting to Google search and started relying more on Perplexity and similar tools.

On paper, it’s an obvious upgrade. Direct answers, cleaner summaries, fewer tabs open, and much less noise. But something subtle disappeared in the process.

No more random blog posts from 2010, fewer half-relevant Reddit rabbit holes. Almost nothing that pulled me somewhere unexpected.

I stopped wandering, I started extracting.

Super efficient, but over time, that efficiency started to shape how I thought. I was moving faster towards answers, but along narrower paths. Ideas felt more predictable, and I was less surprised by where I ended up. The work converged quickly, sometimes too quickly.

Illustration of several people walking in the same direction with zombie-like posture, while one person in the middle shrugs and looks confused.

Design almost never benefits from straight lines. Some ideas only show up when you take the long way around, when you don’t quite know what you’re looking for yet. When there’s room for detours, dead ends, side-quests, and a bit of adventure.

That’s when I started noticing how much the tools were quietly changing where my attention went.

When speed feels like understanding

The popular narrative is that AI makes people ten times faster. That’s not false, but it’s missing something important.

Speed has always mattered in design, because it lets you explore more options, learn sooner, and iterate your way towards something better. That's super valuable when you know what you're aiming for.

What changed with AI isn't that reality; it's that I can now do things I wouldn't have attempted at all.

It’s not just making me explore more ideas faster and prototype thoughts sooner. It’s also giving me the ability to touch areas of the code I used to treat as off-limits, to ship production PRs. Crossing boundaries that previously felt like someone else’s job.

This shift is real, and it’s empowering.

But there’s a catch: when something becomes easy or possible, it can feel understood. The act of doing replaces the work of learning. I could change things confidently, quickly, and convincingly, and in the process mistake that fluency for comprehension.

This is where the delusion creeps in. AI makes you believe you’re quickly unlocking new capabilities before you actually earn the right to do so. You believe that because you can change something easily, you understand it.

Illustration of a person diving underwater with a flashlight, illuminating a large iceberg hidden beneath the surface.

That confusion between ease and understanding showed up for me very concretely when I started shipping production code. I tried to start small: a spacing fix. Something contained, just visual polish. It turned out to be one of the hardest things I could have picked, according to the engineer who reviewed my PR.

The spacing issue wasn’t local, it was the result of deeper inconsistencies across shared components. I only understood that after that very patient engineer walked me through the system and its dependencies.

Shipping production software inside a large organization still has gravity. There are stakeholders, reviews, guardrails, dependencies, and real consequences. No amount of prompting turns that into a free-for-all, and that’s a good thing.

Ultimately, after properly understanding the problem, we didn’t just fix the symptom; we fixed the root cause. But when I had first asked Claude (in Cursor) to fix the issue, it did. Quickly and bluntly. It ignored the system’s constraints, because it didn’t know they existed, and I didn’t know any better.

Confidence without context is a delusion. Being able to make something with AI isn’t the same as understanding what you’re making.

When the tool sets the direction

Most conversations about AI in design drift quickly toward how people are using it. One-shot prompts, workflows, tool stacks, prototype hacks. The tool becomes the story.

“How did you make this?” quietly replaces a more important question: “What were you trying to accomplish, and why?”

That shift matters, because when the how comes too early, it starts shaping the outcome. You optimize for what the tool is good at, and adjust the problem to fit the workflow.

Illustration of a hand lifting a person like a chess piece above a chessboard while the person holds another chess piece.

The demos look technically impressive and you get results quickly. It feels like magic, but strong design direction doesn’t come from execution tricks. It comes from intent.

When the how leads, quality follows the path of least resistance, not necessarily the one worth taking.

Earlier this year I was tasked with creating an illustration style with AI.
I explored several directions that were heavily shaped by AI’s strengths and limitations at the time. The results were interesting, and I’d go as far as saying a couple of directions were an improvement over what we had, but they didn’t stand on their own.

The direction couldn’t convincingly last over time, and even though we got really close to shipping it, a key stakeholder put the project on pause, and rightfully so.

That failure wasn’t because AI was bad at illustration; it was because I let the tool define directions before an idea had earned its shape. The fact that it was an AI-generated style also started to matter more than the strength of the direction itself.

The problem wasn’t usefulness, it was timing.

I feel this a lot when I try to prototype genuinely new ideas with AI.

I start refining prompts, then refining the prompt that refines the prompt. I sketch something in Figma, just to explain what I mean. Suddenly, I’m no longer exploring the problem space, I’m negotiating with a probabilistic system.

The focus shifts. I’m not designing the idea anymore, I’m designing for the tool.

What gets pushed aside

AI trains impatience. When it doesn’t get things right fast enough, I feel friction. And that friction shows up precisely where slowness actually matters.

A pencil and paper, or a blank Figma canvas, let me dump ideas out of my head without having to explain myself. I can contradict myself, I can be sloppy, I can discover the problem while working on it.

AI demands certainty much earlier. It struggles when the input is fuzzy, novel, or still forming, when constraints need to bend and evolve instead of being locked in too soon.

Those are exactly the moments where design needs space to reflect, not quick answers. Design starts with ambiguity, not clarity. With ideas that are still fragile and half-formed. With thoughts that don’t quite line up yet.

Designers need time to wonder.

Illustration of a person lying across the hands of a large clock, staring upward as time passes.

Time to poke ideas without knowing where they’ll land. Time to follow threads that might go somewhere. Time to sit with something unresolved long enough for it to surprise you.

That space used to exist more naturally, now it’s getting compressed.

When answers arrive too early, they create a sense of progress, but lack depth. You move forward, but not inwards. You lock in the first thing that makes sense, because it arrived quickly.

Reflection doesn’t vanish on its own, it gets pushed aside. If you don’t protect it, it doesn’t survive.

Judgement is the real multiplier

The same tools in the hands of different designers don’t produce similar results. They amplify what’s already there.
You don’t get taste, judgement, context, and design sensitivity for free.

AI doesn’t level the field, it widens the gap.
Because it doesn’t replace judgement, it leans on it.

A weak designer doesn’t suddenly produce great work with better tools. They just produce mediocre work faster. A strong designer uses the same tools very differently. They explore more directions, but they also discard aggressively. They recognize what’s obvious and push past it.

The tool augments you, not design in the abstract. So what you bring to the table as a designer actually matters even more now.

Illustration of a person blowing bubbles shaped like light bulbs, floating in the air.

I felt this clearly while preparing a workshop earlier this year. I started by dumping everything out of my head. Ideas, doubts, half-formed thoughts. Only after that did I use ChatGPT to explore formats, exercises, and ways to structure my sessions.

It was incredibly helpful, and a speed boost. But only because I already had a sense of direction. I still had to decide what to keep, what to ignore, and what actually felt like me, because ultimately I need to feel comfortable standing in front of people and own it.

I’ve noticed this pattern repeat. Once something exists, like a sketch or rough idea, AI can be a helpful partner to expand your options, challenge assumptions, and suggest variations.

It’s great at reacting to something concrete, but it doesn’t do it proactively.
AI is not curious; it’s optimized to continue your thoughts, not to come up with new ones. So if you use it as a starting point, that’s where you dilute your intent.

What gets lost when everything feels like magic

I’m happy to hand off repetitive busy work to AI. The kind of effort that burns time without building judgement. Removing that toil is a real win, especially because it clears space.

But some struggles are worth keeping.

AI is extremely good at multiplying output, but learning doesn’t compound just because you did more things. It compounds when you compare, discard, and understand why one direction is better than another.

Iterating through bad ideas. Refining a design until it finally feels right. Figuring out how to explain your thinking to other people. Being wrong and having to course-correct. Sitting with uncertainty longer than is comfortable. These moments may not look efficient, but they are formative. That’s where taste comes from.

Without these loops, experience doesn’t compound. It flattens.
It’s like trying to learn how to draw by tracing pictures. You’re going to get a decent result faster, but you’re not actually learning.

A lot of people talk about AI replacing junior designers, and paint a world where a single super-senior designer, with the right tools, can do everything. Strategy, exploration, execution, polish. Basically, they’re the maestro, and they play all the instruments. That’s a lot to carry alone.

What this misses is that junior designers aren’t a tool.
They don’t just execute. They question decisions and propose alternatives. They bring a unique perspective, sometimes naive, that makes the work better. They’re not invested in obeying, pleasing you, and just doing what they’re told.

Illustration of a person sitting on one end of a seesaw while a laptop falls onto the opposite end.

When AI is your only partner, the expectation flips. It’s all on you.

Even if you’re lucky to have colleagues around, you’ll notice that most feedback changes when people aren’t actively working on the same problem. You get reactions instead of debate. Validation instead of pushback.

Creativity gets quieter, more internal.

Human friction still matters. Not just for the sake of quality and innovation, but to keep you from fooling yourself.

Design matters more now

Design doesn’t disappear because systems can generate things. If nothing else, designers still have to shape these systems’ interactions and mental models, build trust, and account for failure modes.

Chat interfaces are the beginning, not the end. They’re a convenient starting point, but not a natural destination. We’re already seeing that evolution start to happen, with tools like Cursor mixing conversation with direct manipulation. That feels like an early signal, not a finished answer.

Working on Sidekick has made it especially clear to me how young and rudimentary these tools still are. A lot of their behaviour is still evolving, and the potential is immense.

Recently, I worked on our app-generation baseline prompt and design guidance, where I had to decide what the model should optimize for, and just as importantly, what it should avoid.

This kind of work doesn’t look like traditional feature design. It’s closer to shaping behaviour, explaining intent, providing examples, anticipating needs and misuse. Designing guardrails without locking everything down… sounds a lot like design systems, doesn’t it?
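
To make that a little more concrete: this kind of guidance often ends up as plain language the model receives alongside the task. Here’s a minimal, entirely hypothetical sketch in Python, with made-up rules and helper names (not the actual Sidekick prompt), of what “deciding what the model should optimize for and what it should avoid” can look like in practice:

```python
# Hypothetical example only: illustrative design-guidance rules composed
# into a baseline prompt. The rules and helper are invented for this sketch.

DESIGN_GUIDANCE = [
    "Prefer existing components over inventing custom UI.",
    "Optimize for clarity of the user's next step, not visual novelty.",
    "Avoid introducing new navigation patterns; extend the ones that exist.",
    "When requirements are ambiguous, ask one clarifying question instead of guessing.",
]

def build_system_prompt(task: str) -> str:
    """Compose the baseline prompt: the task plus the guardrails it must respect."""
    rules = "\n".join(f"- {rule}" for rule in DESIGN_GUIDANCE)
    return (
        "You generate app scaffolding.\n"
        f"Task: {task}\n"
        "Design guidance you must follow:\n"
        f"{rules}"
    )

if __name__ == "__main__":
    print(build_system_prompt("Create a simple order-tracking page"))
```

The interesting design work isn’t the code; it’s deciding which rules earn a place on that list, and which constraints are better left open.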

The difference now is that we have a real opportunity to make these tools feel more human. To meet people where they are, instead of forcing everyone through a single predefined flow or a pre-baked set of components.

Interfaces don’t have to be static anymore, they can adapt to context, intent, and experience. But that space needs to be thoroughly explored.

Illustration of a person climbing stairs while carrying an oversized pencil, which is being used to draw the steps.

It needs design. It needs designers to wander, to tinker, and more importantly, to have fun.

Where the cut happens

None of what I said above means AI doesn’t belong in early design.

AI has meaningfully changed how I work. It helps me explore directions faster, summarize large contexts, bridge into code, and offload repetitive work so I can spend energy where it actually matters.

What’s important is not what AI can do, it’s what I do with it.
Used too early, it collapses uncertainty before ideas have earned their shape. Used a bit later, it becomes a powerful partner, reacting to intent instead of defining it.

This isn’t a question of AI’s capability, it’s a question of YOUR capability and timing.

Illustration of a person cutting into a thought cloud with a knife.

This reminds me of the first Japanese chef’s knife I ever bought.
It looked beautiful. Perfect balance and incredibly sharp. It cut through almost everything with no effort… including me.

One of the first things I did with it was slice my hand. The knife went through my skin with no resistance. I still have the scar.

The knife wasn’t the problem, it was doing exactly what it was designed to do. I just wasn’t ready to handle it.

AI feels powerful, easy to use, and only getting sharper. If you’re not careful, it lets you move so fast that you skip the moments where attention and judgement matter most.

So the question isn’t whether AI is sharp enough for you to use.
The real question is whether you’re paying enough attention to where and when you’re cutting.

Thanks for reading!
If you feel like connecting, I’m on
LinkedIn, Instagram, and X.

