Everything We Learned from Our AI Adoption Leader's Lunch
February 18, 2026 · AI, Copilot, Apex Computing Services News · by Rachael McKenzie
On Friday 13th February, we brought together a room of over 45 North West business leaders for our AI Adoption Leader's Lunch - an honest, practical conversation about what's really happening with AI inside SMEs, what's working, and what's quietly creating risk.
We hosted the session at the iconic St James's Club, a long-established private members' club in the heart of Manchester, founded in 1825 and known for offering a professional, welcoming space for people to connect and talk business.
Our very own Stephen Hobson opened with an introduction to AI and Apex, and we were delighted to be joined by AI and Copilot expert Matthew Higgins from Pax8, who gave our guests an insightful, interactive talk.
A Quick Snapshot of the Room
We had leaders and decision-makers from a genuinely mixed set of industries - including manufacturing, engineering, professional services, finance, investment, creative agencies, technology, property, construction, facilities management, and more.
And the biggest headline?
- 90% of attendees raised their hand when asked if their business is already using AI in some form - yet a majority indicated that they (or their employees) have used unregulated or public AI tools, or that they have no policy or framework for safe AI use
- Most were surprised by how insecure free public AI tools (like ChatGPT) can be for business use - especially when staff paste in sensitive content without realising what could be retained, reused, or surfaced to others outside the business
That combination is exactly why this lunch mattered.

1. AI Isn't "Coming" - It's Already Mainstream
One of the most powerful moments was realising just how quickly AI has moved from novelty to normal. In the session, we looked at how fast ChatGPT reached 100 million users - in months, not years.
In plain terms: even if leadership hasn't rolled out an AI strategy, your people are almost certainly already experimenting with AI.
2. The Real Blocker Isn't Tech - It's Belief (and Urgency)
A line that landed hard in the room: "It's not an AI problem. It's a belief problem."
If AI still feels optional, it gets postponed - and "future lists" don't create action or mitigate risk.
What we saw in the room was that once leaders connect AI to capacity, risk, and speed of delivery, the conversation changes quickly from "should we?" to "how do we do this safely and well?".

3. Shadow AI is the Hidden Risk Most Businesses Underestimate
The most consistent theme from attendee discussions was this:
Teams adopt AI before policies, controls, or approved tools catch up.
That's where risk creeps in - especially when staff use public tools for:
- Summarising emails or meeting notes
- Drafting client communications
- Analysing spreadsheets, proposals, or tenders
- Troubleshooting technical issues
The session also covered real-world examples of how quickly things can go wrong, including confirmed data leaks where sensitive information was exposed after employees used public AI tools for productivity gains. The example we looked at was Samsung's 2023 incident, in which employees pasted sensitive source code into ChatGPT - putting that code outside the company's control, where it could be retained or potentially surface in responses to other users (including competitors).
Why the "free version" problem catches people out
Most leaders (and employees) assume public AI works like a search engine: you ask a question, you get an answer, end of story.
But with many public tools, you're often sharing far more context than you realise - and you typically don't get business-grade governance around:
- What can be entered
- What gets retained
- Who can access outputs
- Where data goes
- How usage is audited
Even when staff mean well, the risk is accidental leakage of:
- Client details
- Contract terms
- Pricing and margin information
- Internal strategy, plans, and commercially sensitive documents
- Credentials or system details
4. AI is a Capacity Solution - Not a Gimmick
Another key point we explored: human labour is reaching its limits.
The session referenced Microsoft's Work Trend Index findings - including that 80% of workers report not having enough time or energy to do their work, while 53% of SMB leaders agree productivity must increase.
That's the heart of the AI opportunity for SMBs: AI isn't about replacing people - it's about removing friction (drafting, summarising, searching, formatting, admin "glue work") so skilled people can spend more time on decisions, relationships and delivery.

5. Secure Business AI Tools Look Different to Public AI
One of the most useful parts of the lunch was clarifying the difference between public AI tools and secure, business-grade AI.
We discussed Microsoft's approach, including Copilot Chat with IT controls and Enterprise Data Protection - designed to support business use with organisational governance, rather than consumer-style experimentation.
We also covered the wider "Copilot Control System" concept - spanning Copilot Chat (web/work), Copilot in the Microsoft 365 apps, Copilot Studio/agents, and adoption measurement.
For many attendees, this was the "lightbulb moment": You can get the productivity upside of AI without accepting unmanaged risk - but only if you standardise the right tools and set guardrails early.
6. Adoption Works Best When You Start Small, Then Measure
Another practical takeaway: businesses get the best results when they pilot a small number of repeatable workflows, then expand what works.
The session shared an example where a Copilot pilot delivered meaningful time savings within a single month - and once the team could see the impact, the approach was scaled more widely. The example shared with our leaders showed the business saving over 160 hours across 25 licenses in the first month, with Copilot rolled out to over 100 employees within 6 months.
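To put those figures in context, here's a quick back-of-the-envelope calculation - a sketch in Python, where the 160 hours and 25 licenses come from the example above and the hourly rate is purely an illustrative assumption, not a figure from the session:

```python
# Back-of-the-envelope check on the pilot figures shared in the session.
# The 160 hours and 25 licenses come from the example above; the hourly
# rate below is an illustrative assumption, not a session figure.

hours_saved_first_month = 160
pilot_licenses = 25

hours_per_license = hours_saved_first_month / pilot_licenses
print(f"Hours saved per license per month: {hours_per_license:.1f}")  # 6.4

# Hypothetical blended rate, used only to illustrate the value of the time.
assumed_hourly_rate_gbp = 30
monthly_value_gbp = hours_saved_first_month * assumed_hourly_rate_gbp
print(f"Indicative value of time saved per month: £{monthly_value_gbp:,}")  # £4,800
```

Even at a modest assumed rate, roughly 6.4 hours saved per license per month adds up quickly - which is why measuring a small pilot first makes the scaling decision straightforward.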
7. Better Prompts - Better Outcomes
We finished with something everyone could take away and use immediately: how to write better prompts.
The framework shared in the session was:
- Context - why you need it, who it's for, and any background the AI should know
- Goal - what you want the AI to produce (and what "good" looks like)
- Source - what information it should use (and what it shouldn't)
- Expectation - the format, length, structure, and style/tone/detail you need
Even small improvements here can massively improve output quality - and reduce the temptation for staff to keep experimenting across random tools.
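To make that concrete, here's a minimal sketch in Python showing how the four elements combine into one structured prompt. The build_prompt helper and the example wording are hypothetical - not a tool shown at the lunch:

```python
# A minimal sketch of the Context-Goal-Source-Expectation framework from
# the session. build_prompt is a hypothetical helper, not a tool from the
# lunch - it simply assembles the four parts into one structured prompt.

def build_prompt(context: str, goal: str, source: str, expectation: str) -> str:
    """Combine the four framework elements into a single structured prompt."""
    return (
        f"Context: {context}\n"
        f"Goal: {goal}\n"
        f"Source: {source}\n"
        f"Expectation: {expectation}"
    )

prompt = build_prompt(
    context="I'm an operations manager preparing a monthly update for the board.",
    goal="Summarise key risks and wins so a non-technical reader can act on them.",
    source="Use only the meeting notes pasted below; do not add outside information.",
    expectation="A 150-word summary in plain English, plus three bullet-point actions.",
)
print(prompt)
```

A prompt structured like this - pasted into an approved, business-grade AI tool - will generally produce more consistent, on-brief output than an unstructured question.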

What to Do Next (Practical, Low-Drama Steps)
If this blog feels uncomfortably familiar (because AI is already happening in your business), here's a sensible starting point:
- Find your Shadow AI risk level by taking our Shadow AI Risk Checker here
- Get clear on "secure AI" vs "public AI" here
- Use a structured adoption approach with our adoption framework here
- Pick 2 use cases and pilot properly. You can browse practical use cases here, or book a Copilot demo with us here
- Grab resources you can actually implement from our AI resource library here
Industry-Specific Next Steps
If you want something more tailored, our sector pages are a strong next click.
Can't find your industry? Get in touch to request AI content specific to your industry here.