November 2025

Agentic AI for SMEs: 6 Lessons from McKinsey/QuantumBlack

McKinsey’s QuantumBlack team looked back on a year of real agentic AI deployments and outlined six practical lessons. If you’re an SME deciding whether to try “AI agents” (software that can plan, call tools, and complete multi-step tasks), this post will help you decide what to do, what to avoid, and how to steer without drama.

1) Don’t start with “the agent” - start with the workflow

Everyone is talking about AI and agentic AI these days, which makes it tempting to chase “the agent” rather than the solution. You end up in a flashy demo session but walk away with little business value.

It’s important to start with the workflow (people + process + tech) and only then decide where an agent would actually fit.

SME move: Whiteboard one high-friction process (onboarding, RFP triage, invoice QA). Mark where people lose time, then decide which steps need rules, analytics, gen-AI drafting, or a true agent.  

2) Agents aren’t always the answer - use the simplest tool that works

Match the “player” to the job:

  • Rules/RPA for repeatable, structured tasks.
  • Analytics/ML for classification, scoring, forecasting.
  • Gen-AI for drafting, summarizing, and synthesis from messy text.
  • Agents for multi-step, high-variance work where context changes a lot.  
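
To make “use the simplest tool that works” concrete, here is a minimal Python sketch of that escalation ladder. The task fields and the routing logic are illustrative assumptions, not a McKinsey framework or a real library.

    # Toy dispatcher: pick the simplest "player" that fits the job.
    # The task fields (structured, repeatable, ...) are assumptions.
    def route(task: dict) -> str:
        if task["structured"] and task["repeatable"]:
            return "rules/RPA"        # deterministic, cheap, auditable
        if task["needs_prediction"]:
            return "analytics/ML"     # classification, scoring, forecasting
        if task["text_heavy"] and not task["multi_step"]:
            return "gen-AI drafting"  # summarize, draft, synthesize
        return "agent"                # multi-step, high-variance work

    print(route({"structured": False, "repeatable": False,
                 "needs_prediction": False, "text_heavy": True,
                 "multi_step": True}))  # -> "agent"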

3) Stop “AI slop”: set evals and build trust

Treat agents like new hires: give them clear job descriptions, coach them, and evaluate them. Two easy evals to start with:

  • End-to-end task success rate (did it actually finish the job?)
  • Retrieval/grounding accuracy (did it use the right info?)

SME move: Score those two every week. Put the numbers in a simple dashboard, review with the team, and iterate.  
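
If you want to compute those two numbers from your run logs, a minimal sketch might look like this; the record format (completed, sources_used, expected_sources) is an assumption for illustration, not a standard schema.

    # Weekly eval sketch: one dict per agent run; field names are assumptions.
    def task_success_rate(runs: list[dict]) -> float:
        """Share of runs where the agent actually finished the job."""
        return sum(r["completed"] for r in runs) / len(runs)

    def grounding_accuracy(runs: list[dict]) -> float:
        """Share of runs that used all of the expected sources."""
        return sum(set(r["expected_sources"]) <= set(r["sources_used"])
                   for r in runs) / len(runs)

    runs = [
        {"completed": True,  "sources_used": ["faq", "policy"], "expected_sources": ["policy"]},
        {"completed": False, "sources_used": ["faq"],           "expected_sources": ["policy"]},
    ]
    print(f"task success: {task_success_rate(runs):.0%}")  # 50%
    print(f"grounding:    {grounding_accuracy(runs):.0%}")  # 50%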

4) Add observability so you can verify every step - not just outcomes

Once you have more than a couple of agents, debugging mystery failures gets hard if you only measure results. Log step-level details: inputs, tool calls, decisions, and hand-offs. When performance dips, you’ll spot whether the culprit is data quality, a flaky connector, or the prompt.

SME move: For each action, keep a short “why” note (or link to evidence) in the UI so reviewers can see what the agent relied on.  
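
One lightweight way to get that step-level record is to append one JSON line per agent action; the schema below (run_id, step, tool, inputs, why) is an assumption, not a prescribed format.

    # Step-level trace logging sketch: one JSON line per agent step.
    import json, time

    def log_step(run_id: str, step: str, tool: str, inputs: dict, why: str,
                 path: str = "agent_trace.jsonl") -> None:
        record = {"ts": time.time(), "run_id": run_id, "step": step,
                  "tool": tool, "inputs": inputs, "why": why}
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_step("run-42", "lookup_invoice", "erp_connector",
             {"invoice_id": "INV-1001"},
             why="Amount mismatch flagged by invoice QA rule")

When success rates dip, filtering this file by run_id shows whether the failure came from the data, a flaky connector, or the prompt.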

5) The best use case is the reuse case

Early efforts that build one bespoke agent per task end up with maintenance headaches. Standardize shared components - prompt blocks, retrieval utilities, eval harnesses, and logging. Teams that reuse these building blocks routinely cut ~30–50% of non-essential build work and scale faster.

SME move: Start a tiny “platform folder”: approved prompts, a retrieval module, two eval scripts, and a logging hook. Reuse them in every new pilot.  
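
One hypothetical layout for that folder (the names are illustrative, not prescribed by the source):

    agent_platform/
        prompts.py        # approved prompt blocks
        retrieval.py      # shared retrieval module
        evals.py          # the two eval scripts from lesson 3
        logging_hook.py   # the step logger from lesson 4

Each new pilot then imports from agent_platform instead of rebuilding these pieces, which is where the reuse savings come from.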

6) Humans stay essential - roles change, headcount doesn’t have to

Agents will take on more steps, but people still own judgment, edge cases, compliance, and sign-off. Design human-in-the-loop points clearly, add lightweight UI cues (highlights, “jump to evidence”), and capture edits as structured feedback so the agent keeps improving.

SME move: Write the human-review steps into the SOP. Make it obvious where a person approves, edits, or escalates - and store that feedback.
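
Storing that feedback can be as simple as one structured record per review; the fields and the three decision values below are assumptions, not a standard.

    # Human-review feedback sketch: one JSON line per reviewed output.
    import json, time

    def record_review(run_id: str, draft: str, final: str, decision: str,
                      note: str = "", path: str = "review_feedback.jsonl") -> None:
        assert decision in {"approved", "edited", "escalated"}
        record = {"ts": time.time(), "run_id": run_id, "decision": decision,
                  "draft": draft, "final": final, "note": note}
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    record_review("run-42", draft="Dear customer, ...", final="Dear Ms. Lee, ...",
                  decision="edited", note="Salutation was too generic")

Over time these records double as eval and improvement data for the next iteration of the agent.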
