On Friday 13th February, we brought together a room of over 45 North West business leaders for our AI Adoption Leader's Lunch - an honest, practical conversation about what's really happening with AI inside SMEs, what's working, and what's quietly creating risk.
We hosted the session at the iconic St James's Club, a long-established private members' club in the heart of Manchester, founded in 1825 and known for offering a professional, welcoming space for people to connect and talk business.
Our very own Stephen Hobson gave an introduction to AI and Apex, and we were delighted to be joined by AI and Copilot expert Matthew Higgins from Pax8, who gave our guests an insightful, interactive talk.
We had leaders and decision-makers from a genuinely mixed set of industries - including manufacturing, engineering, professional services, finance, investment, creative agencies, technology, property, construction, facilities management, and more.
And the biggest headline? That combination is exactly why this lunch mattered: whatever the sector, the questions leaders were asking about AI were strikingly similar.
One of the most powerful moments was realising just how quickly AI has moved from novelty to normal. In the session, we looked at how fast ChatGPT reached 100 million users - in months, not years.
In plain terms: even if leadership hasn't rolled out an AI strategy, your people are almost certainly already experimenting with AI.
A line that landed hard in the room was: "It's not an AI problem. It's a belief problem."
If AI still feels optional, it gets postponed - and "future lists" don't create action or mitigate risk.
What we saw in the room was that once leaders connect AI to capacity, risk, and speed of delivery, the conversation changes quickly from "should we?" to "how do we do this safely and well?".
The most consistent theme from attendee discussions was this:
Teams adopt AI before policies, controls, or approved tools catch up.
That's where risk creeps in - especially when staff use public tools for everyday work tasks involving company or client information.
The session also covered real-world examples of how quickly things can go wrong, including confirmed data leaks where sensitive information was exposed after employees used public AI tools for productivity gains. The example we looked at in the session was Samsung's 2023 incident, where employees pasted sensitive source code into ChatGPT - putting proprietary code onto third-party servers outside the company's control, and potentially within reach of future model training.
Most leaders (and employees) assume public AI works like a search engine: you ask a question, you get an answer, end of story.
But with many public tools, you're often sharing far more context than you realise - and you typically don't get business-grade governance over how that data is stored, retained, or reused.
Even when staff mean well, the risk is accidental leakage of confidential business information - client details, financials, source code, or internal plans.
Another key point we explored: human labour is reaching its limits.
The session referenced Microsoft's Work Trend Index findings - including that 80% of workers report not having enough time or energy to do their work, while 53% of SMB leaders agree productivity must increase.
That's the heart of the AI opportunity for SMBs: AI isn't about replacing people - it's about removing friction (drafting, summarising, searching, formatting, admin "glue work") so skilled people can spend more time on decisions, relationships and delivery.
One of the most useful parts of the lunch was clarifying the difference between public AI tools and secure, business-grade AI.
We discussed Microsoft's approach, including Copilot Chat with IT controls and Enterprise Data Protection - designed to support business use with organisational governance, rather than customer-style experimentation.
We also covered the wider "Copilot Control System" concept - spanning Copilot Chat (web/work), Copilot in the Microsoft 365 apps, Copilot Studio/agents, and adoption measurement.
For many attendees, this was the "lightbulb moment": You can get the productivity upside of AI without accepting unmanaged risk - but only if you standardise the right tools and set guardrails early.
Another practical takeaway: businesses get the best results when they pilot a small number of repeatable workflows, then expand what works.
The session shared an example where a Copilot pilot delivered meaningful time savings within a single month - and once the team could see the impact, the approach was scaled more widely. The figures shared with our leaders showed the business saving over 160 hours across 25 licenses in the first month, with Copilot rolled out to over 100 employees within six months.
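Taking those pilot figures at face value, the per-licence saving is simple arithmetic (a rough back-of-envelope check, not a figure from the session itself):

```python
# Back-of-envelope check on the pilot figures quoted above.
hours_saved = 160   # total hours saved in the first month
licences = 25       # Copilot licences in the pilot

hours_per_licence = hours_saved / licences
print(f"{hours_per_licence:.1f} hours saved per licence per month")  # 6.4
```

Roughly 6.4 hours per licence per month - most of a working day back, per person, before any wider rollout.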
We finished with something everyone could take away and use immediately: how to write better prompts.
The framework shared with the session was:
Context - why you need it, who it's for, and any background the AI should know
Goal - what you want the AI to produce (and what "good" looks like)
Source - what information it should use (and what it shouldn't)
Expectation - the format, length, structure, and style/tone/detail you need
Even small improvements here can massively improve output quality - and reduce the temptation for staff to keep experimenting across random tools.
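As a rough illustration of how the four parts fit together, they can be assembled into a single prompt. This is our own sketch (the function name, wording, and example content are illustrative, not from the session or any specific tool):

```python
# Hypothetical sketch: combining the Context / Goal / Source / Expectation
# framework into one prompt string. Names and example text are illustrative.

def build_prompt(context: str, goal: str, source: str, expectation: str) -> str:
    """Combine the four framework elements into a single prompt."""
    return (
        f"Context: {context}\n"
        f"Goal: {goal}\n"
        f"Source: {source}\n"
        f"Expectation: {expectation}"
    )

prompt = build_prompt(
    context="I'm preparing a monthly update for our board, who are not technical.",
    goal="Produce a clear summary of our AI pilot results.",
    source="Use only the figures in the attached report; do not add outside data.",
    expectation="Three short paragraphs, plain English, professional tone.",
)
print(prompt)
```

The point isn't the code - it's the habit: stating all four elements explicitly, every time, instead of firing off a one-line question.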
If this blog feels uncomfortably familiar (because AI is already happening in your business), here's a sensible starting point:
If you want something more tailored, these sector pages are a strong next click:
Can't find your industry? Get in touch to request AI content specific to your sector.