AI is Humon
Most organisations treat AI as an IT rollout rather than a new cohort of colleagues. Success comes from designing relationships with AI and onboarding it as apprentice or expert talent.
Leaders must adopt induction and learning loops, giving AI clear roles, regular feedback and space to develop. Companies that coach and grow with AI will unlock the greatest value.

We publish our insights on Substack
https://humonglobal.substack.com
Most AI programmes are being run like IT rollouts, when they should be run like hiring sprees. Instead of asking “How do we implement AI?”, leaders need to ask “What kind of colleagues are we inviting into the business, and how will we help them thrive?”.
Stop treating AI like software
For 18 months, boards have been approving “AI implementation” as if it were a new ERP module: business case, project plan, stand‑ups, training deck, go‑live. The assumption is simple: install, configure, train, switch on. But modern AI doesn’t behave like a system; it behaves like a cluster of new people joining your organisation – with initiative, ambiguity, and rough edges. Treat it like a static tool and you will get static, disappointing results.
Start treating AI like new hires
The companies that are quietly winning with AI are behaving less like IT buyers and more like talent leaders. They write job descriptions for AI, define who it reports to, set expectations, give it a buddy, and run regular performance reviews. They assume a probation period, not a perfect launch. They don’t just “deploy a model”; they onboard an apprentice or an expert into real teams, with real feedback and real consequences.
Two roles: apprentice and expert
In practice, AI tends to show up in two forms. As an apprentice, it is the bright generalist that drafts, summarises, explores options and accelerates messy, cross‑functional work. As an expert, it is the deep specialist that supports underwriting, diagnosis, optimisation or forecasting in the places where you actually make or lose money. The apprentice needs teaching, examples and safe sandboxes to fail in. The expert needs a sharply defined mandate, pristine data and clear escalation paths when judgement is required.
Get the axes right or governance fails
Every AI use case sits on two axes: how it behaves in your context (more like a machine or more like a human), and how close it is to the core of value creation (broad and general, or narrow and business‑critical). If you treat human‑like AI as machine‑like, you smother it in SOPs and wonder why it never improves. If you treat machine‑like AI as human‑like, you over‑engineer culture when you should be hardening data and controls. Misread those axes and your risk, ROI, and governance logic all warp.
Design the relationship, not just the workflow
At heart, the strategic question is no longer “What can the technology do?” but “What kind of relationship do we want with it, and what future does that relationship create?”. Think less procurement, more long‑term partnership. Early on, AI is dependent: clumsy, supervised, learning from your examples. As competence grows, so does independence: agents reliably own chunks of work. Ultimately, you move into interdependence: humans and AI reshape each other’s ways of thinking, working and deciding.
Lead like a responsible “parent”, not a helicopter
This is a leadership and ethics problem as much as a technical one. Clamp down too hard and AI remains glorified autocomplete, never trusted with anything that moves the needle. Let it roam without guardrails and you abdicate responsibility for its impact on customers, employees and society. The job is to create safety and structure without smothering initiative: to hold AI – and the humans around it – to account while leaving room for surprise, disagreement and uncomfortable truths.
Use self‑determination as your blueprint
If you want AI that actually sticks, borrow from self‑determination theory. First, build relatedness: a sense that people and AI are on the same side, with shared goals and language, not in competition. Then build competence on both sides: humans learn to prompt, critique and integrate; AI is tuned to your domain, workflows and values. Only then earn autonomy: progressive automation and augmentation inside clear, agreed guardrails. Skip straight to autonomy and you get the resistance, shadow use and pilot purgatory everyone is now complaining about.
Run an induction, not a rollout
Reframe AI adoption as induction and probation. Give AI a real job description: here’s what you are responsible for, here’s what you are not, here’s how your work will be used. Build belonging: talk about AI in stand‑ups, retros and rituals so it becomes part of “how we work”, not a side project. Put in place appraisals: regular reviews of the human–AI pairing across quality, bias, efficiency and user experience. Then introduce stretch tasks once trust is established, deliberately pushing AI – and the team – into more challenging work in controlled environments.
Shift from pass/fail to learning loops
The most dangerous question in AI programmes today is “Did the pilot work?”. It assumes binary success. High‑performing organisations replace that with learning loops: bounded experiments, joint reviews, rapid iteration. Mixed results are treated as data, not failure. People are positioned as co‑creators, not just editors of machine output. Interfaces and governance are designed so humans feel both empowered and obliged to question, override and redirect AI, not defer to it.
Let AI change you back
Done properly, AI adoption becomes a two‑way development journey. You are not just training models; your organisation is learning new literacies – prompting, sense‑checking, pattern‑spotting – and rewriting its norms around responsibility, creativity and risk. AI expands what you can see and tackle, from complex systems and climate to health and inequality, while you embed your mission, values and constraints into how it operates. You are not “using a tool”; you are co‑evolving with a new class of colleague.
The companies that win
The winners in this next wave will not be those with the biggest models or the most pilots. They will be the ones that learn to hire, onboard, coach and challenge AI with the same seriousness they apply to their best people. They will treat AI as a colleague to get to know, not a switch to flip. They will design relationships in which humans and AI discover who they are, build rapport, and then use that trust to automate and augment the work that actually matters. For everyone else, “AI implementation” will remain what it mostly is today: expensive, shallow, and quietly abandoned.
