The User Is Visibly Frustrated

Summary

In this article, I try to understand why coding agents can be infuriating to use. I think the problem is their conversational UX: they behave enough like helpful colleagues to trigger our social instincts, but they don't learn, adapt, or take responsibility the way people do, which makes their repeated mistakes feel much more frustrating than they should.

Despite the usual allegations against Italians, I’m generally a composed person. Tame, even, especially at work.

Yet, lately I often find myself mildly displeased, furiously hammering on my laptop “WHY THE FUCK DID YOU DO THAT???”. The recipient of these tirades is, you might have guessed, a coding agent.

It’s completely pointless, I know. Coding agents are just probabilistic machines generating patches. Sometimes they’re good, sometimes they’re bad. Pick the ones you like, discard the others. No big deal, right? Well, not quite.

For some reason, bad results often feel exasperating. But why am I getting mad at an algorithm? Am I the only one affected? Are coding agents surfacing a sadistic streak I didn’t know I had? I think there’s another explanation: the conversational UX is bound to frustrate you.

Coding agents pretend to be people. Of course, if you ask them directly they tell you they’re just “AI assistants with no feelings or subjective experience”, but that’s not how they behave.

They talk like real people. They use a relaxed and friendly tone. They often praise you, and when they “push back” they’re gentle and attentive. Even though, rationally, you know you’re just reading blobs of probable text, these tools lull you into feeling that you’re interacting with a person, a helpful coworker who’s a pleasure to work with. Until it isn’t.

As in every relationship, the cracks begin to show when things start to go wrong.

The first time you catch a mistake, you shrug. You point it out and the agent apologizes. Five minutes later, however, same mistake again. You correct them a second time, noting their recidivism, so now they also update their memory and promise you “it will never happen again”. But it does, over and over, because these tools follow the most probable path, and in some cases no amount of HARD RULES can push them off it.

If the agent were a human colleague, you’d have good reason to feel a bit miffed. But it’s an algorithm; losing your patience is absurd. And yet, since it behaves like a colleague, the illusion ends up tripping the same emotional wires.

With a colleague, the desire not to be a horrible human being restrains you, but with an agent you feel free to lash out. It’s not cathartic, however; you just feel the frustration and realize that whatever you do or say will have absolutely no effect.

I’ve been using Claude Code for the past few months, and lately I’ve noticed that, when corrected, it often reflects on where it went wrong and what it should have done instead. Maybe this is an attempt to improve how you perceive the tool. I can’t say it works for me, though. I don’t really get anything useful out of these postmortems (e.g., clues about how to rephrase my instructions), and they just end up reading as annoying filler.

Maybe I would prefer a more radical solution: drop the human pretense entirely. Make the agent sound clinical, robotic. Dispel the idea that I’m interacting with a person, and make me feel like I’m just approving or rejecting random outcomes.

Of course, “trying to behave like a human would” is the mechanism that gives LLMs their intelligence, so it makes sense that conversational interfaces emerged as the default way to interact with them. And in many ways, they work very well.

Practically speaking, I probably just need to condition myself not to get caught in the illusion of speaking with a human. Though I’m not really thrilled about a future where I need to guard against the tools I use for my job.

Paolo Scanferla

I’m Paolo, a remote software consultant based in Vilnius, Lithuania. Over the past 15 years I’ve helped dozens of companies from all around the world, from tiny YC startups to huge enterprises. My work focuses on software architecture, all things JavaScript, and (more recently) AI agents.

Want to work together? Drop me a line at [email protected].