The Invisible Risk: Navigating Hong Kong’s AI Procurement Laws for SMEs
A deep dive into the unique AI regulatory challenges facing Hong Kong SMEs in 2026. This article explores the "Buy vs. Build" liability shift and SFC "Human-in-the-Loop" bottlenecks. Essential reading for HK founders and directors navigating the PDPO and third-party vendor risks.

In corporate governance, there is an unwritten rule: the bigger you are, the brighter the spotlight.
When a multi‑billion‑dollar conglomerate in Hong Kong moves into artificial intelligence, everyone can see it. They form steering committees. They hire architects who speak in model sizes and token limits. They build systems behind firewalls that cost more than a small business’s annual revenue.
Now picture how a typical SME adopts AI.
There is no steering committee. There is a corporate credit card. A founder tells the marketing director to “try that AI thing” everyone is talking about. A subscription is bought. The tool appears in a browser tab. And just like that, the company has an AI strategy.
By the first quarter of 2026, over half (55%) of Hong Kong’s SMEs had integrated AI tools into their daily workflows or planned to do so. We tend to assume that AI regulation in Hong Kong applies to SMEs exactly as it does to corporate giants, just on a smaller scale. But as risk advisors, we see a very different reality on the ground.

Differences in AI Execution between a Multinational Corporation and an SME
Large, listed firms that worry about protecting proprietary data tend to avoid sending their crown jewels into public tools. They stand up private environments, license enterprise‑grade models, or even fine‑tune open‑source systems on their own infrastructure. They surround those models with teams of engineers and risk officers who talk about “model drift” and “adversarial testing” and “control libraries.”
For them, a large part of AI compliance feels like a software engineering problem. Are the models behaving as they were designed? Are the logs good enough to reconstruct what happened? Can they demonstrate to regulators that their systems are monitored, tested, and controlled?
Most SMEs live in another universe. They do not have the budget to build or host their own models. They do not have a model‑risk team. They reach for what is on the shelf: chatbots, writing assistants, summarisation tools, productivity plugins. The code and the models live elsewhere, inside somebody else’s data centre, governed by somebody else’s engineering decisions.
For them, AI risk arrives through the side door: the “I agree” button.
If an AI tool hallucinates a claim about a product, it is the SME’s brand on the line. If an employee quietly uploads a spreadsheet of customers to “get insights,” it is the SME that has to answer to the privacy regulator. Yet the SME cannot inspect the algorithm that produced the output, or patch the security of the environment. The only real lever it has is the agreement it signs.
Compliance, for a large firm, is about engineering a trustworthy system. Compliance, for an SME, is about choosing and governing a trustworthy vendor.
Once you see that distinction, the rest of the landscape looks different.

The "Buy vs. Build" Liability Shift
For a small business, the decision to “build” an AI system from scratch is almost always theoretical. The real decision is between buying a ready‑made tool and doing nothing at all. Buying feels safer. It is faster. It outsources the headache of servers, patches, and model updates. But it also imports something you cannot see: the vendor’s habits.
When you buy an off-the-shelf AI tool, you import the vendor's data practices and legal DNA. Hong Kong’s Personal Data (Privacy) Ordinance (PDPO) does not have a small‑business exemption. The Privacy Commissioner does not ask how big your office is before deciding whether personal data has been mishandled. If your HR lead pastes candidate CVs into a free AI résumé screener, or your sales manager drops a list of customers into a third‑party chatbot, the law does not see an experiment. It sees a transfer of personal data to an external party, often in another jurisdiction, under terms no one in your company has read.
Listed companies manage this by building localised models where the data never leaves their servers. SMEs do not have that luxury. To survive in this regulatory environment, your primary defence is your vendor contract. That makes certain questions non‑negotiable:
Where, physically and legally, is our data stored and processed?
Who can access it, and under what circumstances?
Are our prompts, files, and outputs being used to train or “improve” models offered to other customers?
What happens to the data when we leave?
If those answers are vague, or buried in language about “service improvement,” the risk does not disappear. It moves. It settles on the SME’s balance sheet and reputation.
In that situation, you are not just a customer. You are the training data.
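For teams that want to operationalise those four questions, they can be captured as a simple due‑diligence record that flags vague or missing answers. This is a minimal sketch; the field names and the list of "vague" boilerplate phrases are invented for illustration, not drawn from the PDPO or any official guidance:

```python
from dataclasses import dataclass

# Illustrative heuristic: answers that are empty or pure boilerplate
# count as unanswered. Expand this set to suit your own contracts.
VAGUE_TERMS = {"", "service improvement", "as needed", "industry standard"}

@dataclass
class VendorAssessment:
    vendor: str
    data_location: str        # where, physically and legally, data is stored and processed
    access_policy: str        # who can access it, and under what circumstances
    trains_on_our_data: str   # are prompts/files/outputs used to train shared models?
    exit_data_policy: str     # what happens to the data when we leave?

    def open_questions(self) -> list[str]:
        """Return the due-diligence questions still lacking a concrete answer."""
        answers = {
            "Where is our data stored and processed?": self.data_location,
            "Who can access it, and under what circumstances?": self.access_policy,
            "Is our data used to train models offered to other customers?": self.trains_on_our_data,
            "What happens to our data when we leave?": self.exit_data_policy,
        }
        return [q for q, a in answers.items() if a.strip().lower() in VAGUE_TERMS]

# Example: a hypothetical vendor whose terms only mention "service improvement"
chatbot = VendorAssessment(
    vendor="ExampleChat",
    data_location="",                     # not stated anywhere in the contract
    access_policy="as needed",
    trains_on_our_data="service improvement",
    exit_data_policy="Deleted within 30 days of termination",
)
print(len(chatbot.open_questions()))  # 3 questions still unanswered
```

The point of the exercise is not the code but the discipline: a vendor that cannot give you a concrete answer to any of these four questions leaves that risk on your balance sheet, not theirs.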

The Human-in-the-Loop Bottleneck
The difference between large and small enterprises becomes even more stark when you look at financial regulations. The Securities and Futures Commission (SFC) has issued a circular to licensed corporations on the use of generative AI language models. It adopts a risk‑based approach and states that using such models to provide investment recommendations, investment advice or investment research to clients is generally regarded as a high‑risk use case, for which enhanced safeguards and governance measures are required.
One of the most critical safeguards is the "human-in-the-loop" requirement. Before an AI-generated investment recommendation is sent to a client, a qualified human must review it for factual accuracy.
For a global bank, putting humans in the loop is a logistics problem. They can assign teams, build workflows, and classify which outputs must be reviewed under which conditions. The cost is absorbed into a large compliance budget. For a small licensed advisory firm, the calculation is harder.
If every AI‑assisted piece of client advice requires the same human time as before, the promised efficiency vanishes. The tool may still help with internal drafts or summaries, but its role in high‑risk, client‑facing work becomes less obvious. At the same time, regulators expect these firms to exercise due diligence over whatever AI platforms they choose: knowing how they work, how they are monitored, and how risks are mitigated.
Big institutions can send questionnaires, run on‑site visits, and compare vendors against internal standards. A three‑partner advisory shop has less leverage. Yet on paper, it faces the same obligations. The result is a bottleneck. The very rule designed to prevent blind reliance on AI can, if not planned for, erase the small firm’s economic reason for using AI in the first place.
That does not mean SMEs should avoid AI. It means they have to be more selective. In many cases, the smart move is to reserve AI for lower‑risk internal tasks such as drafting, summarising, and translation, and keep final client recommendations firmly in the hands of human advisers, who may use AI as a tool but never as an autonomous decision‑maker.
The Takeaway
If you are leading an SME, you cannot copy and paste a multinational corporation's AI strategy. Your AI strategy is actually a Vendor Strategy. Your competitive edge is not your algorithm; it is your operational discipline. It will be the clarity of your contracts, the discipline of your procurement, and the honesty of your internal conversations about how people are actually using AI.
In practice, that means:
Treating AI tools like any other critical supplier: with due diligence, negotiated terms, and periodic review, not just a signup form.
Making sure someone in your organisation understands the data flows—what goes in, what comes out, where it travels, and how long it stays.
Asking, before every high‑risk use case, not “Is this cool?” but “If this goes wrong, who is accountable, and what does the contract say?”
The invisible rules of 2026 do not care whether your AI runs on the latest model or last year’s version. They care about something much more prosaic: when the machine makes a mistake, whose name appears on the letter from the regulator.
For small businesses, the safest place to be is not the one with the most sophisticated AI. It is the one where the founder, faced with a stack of vendor agreements, can say with confidence: “I know exactly what we just signed.”
