You can't AI optimize trust

TL;DR

There are certain things AI simply cannot optimize, especially where we want more than output. Experts have 'skin in the game': judgment and accountability that AI will never replicate, because we humans will never trust an unaccountable machine. We'll explore why, in certain areas, the human element is simply irreplaceable.

You can’t AI optimize a product whose value comes from human expertise. Regardless of AI advances, let’s think about the fields where human connection matters more than output. When faced with a problem you need solved, who would you turn to?

Before AI, your choices were limited to experts, communities, or your own skill and grit. You’d search the internet for websites and posts on how others solved similar problems, and after tons of research you’d be confident enough to make a decision.

Now you live in a rapidly evolving tech world, and you’re struggling to keep up. Automating processes with AI seems compelling, almost like the only strategic choice you have. But should you really just automate everything away?

Output isn’t everything

If your child were to fall ill, would you entrust their care to a hospital that has replaced its doctors with AI, or one renowned for its exceptional physicians and patient-centered care?

If you needed legal counsel for a high-stakes matter, would you seek out a firm boasting “lawyer AI agents,” or one with a proven track record of successful human-led cases and genuine client relationships?

If you had to choose a school, would you prefer one that fired all teachers and replaced them with AI, or would you pick the school with the best teachers in their subject?

I want to highlight a fundamental truth: not everything we value is reducible to processes we can automate and reap output from. There are domains where you must depend on expertise and authority built over decades. Would you trust an AI to make major decisions for you? Would you follow the judgment of an AI that cannot understand or feel the impact of its conclusions?

Accountability isn’t simple

When an engineer designs a critical system, or a doctor makes a diagnosis, what makes you trust them? Maybe it’s their reputation or professional standing. Maybe it’s that their livelihood is tied to the outcomes. What makes you reach out to a human when you want advice? Accountability. Accountability is why we want human experts making decisions. If something goes wrong, you want to be able to ask the person responsible to make things better. You want someone with “skin in the game” who can catch you before you fall.

However, in the pursuit of output, we have forgotten something very important: people who are good at simple tasks often aren’t trusted with high-stakes problems. You need to look beyond being good at what you do. You need to be the person people can trust when the pizza hits the ceiling.

A senior engineer mentoring a junior team member doesn’t just impart technical knowledge, but also builds a relationship, offers guidance, shares experiences, and builds confidence. Similarly, a business leader making strategic decisions weighs not only market data but also team morale, company culture, and stakeholder relationships.

Relationships can’t be automated

Beyond accountability, we humans thrive on relationships. Automating processes that involve these human-centric elements risks undermining collaboration, innovation, and human welfare. We should use AI to augment ourselves, not replace ourselves, and keep what’s human about us.

AI is an instrument, and you need to learn how to play it well. You wouldn’t buy a violin to play at your wedding; you’d hire a violinist. Ultimately, humans won’t trust an AI; they’ll trust other humans. AI can simulate the output of 10,000 hours of practice, but it cannot simulate the grit required to earn it. We don’t just value the answer; we value the person who stayed in the room long enough to find it.