
AI Governance in Hong Kong: A Guide to Laws and Regulations for Listed Companies

Analysis of Hong Kong’s AI regulatory framework for HKEX-listed companies. Covers PCPD’s 2025 Generative AI Checklist, SFC high-risk circulars, HKMA GenAI Sandbox++, and Copyright Ordinance (Cap. 528) TDM exceptions. Essential compliance insights for boards navigating AI risk in the absence of a standalone AI Act.


When corporate boards ask us, "What are the AI laws for listed companies in Hong Kong?", they are usually expecting a single answer, something like the EU's "AI Act" they have heard about at conferences.

There is a short pause when we tell them it doesn't exist.

Hong Kong has not written a sweeping, standalone "Artificial Intelligence Act". Instead, it has taken the rules everyone already knew - privacy, securities regulation, banking supervision, copyright - and started to extend them into the world of AI. The guardrails are not on a new signpost. They are built into the road.

For listed companies trying to move quickly on generative AI, that can be unsettling. There is no one place to look. There is a web of ordinances, circulars, frameworks, and guidelines, some old, some very new. Understanding that web is fast becoming a board‑level responsibility.

Here is how to read the field.

1. Data Privacy: The PCPD Framework

If you want to see how Hong Kong thinks about AI, start with the Office of the Privacy Commissioner for Personal Data (PCPD), because every AI model is fundamentally hungry for data: resumes, transaction records, chat logs, support tickets, images, voice notes. The more, the better. The temptation for a business, especially in a competitive market, is to feed the machine and get smarter outputs. The PCPD's message is simple: the old law still applies.

In June 2024, the PCPD released a Model Personal Data Protection Framework for AI, inviting organisations to treat AI as a full‑fledged data‑processing activity that needs strategy, risk assessment, governance, and human oversight. In March 2025, it followed with a checklist aimed directly at employees using generative AI at work.

The wording is guidance, not statute. But for a listed company, “guidance” from the privacy regulator is not optional reading. In practice, the frameworks ask for three things:

  • Treat AI as an extension of your existing data‑protection regime, not as a loophole.

  • Design use cases so they minimise personal data, rely on clear purposes and lawful bases, and give individuals a fair picture of what is happening.

  • Keep humans in the loop for higher‑risk decisions, and be prepared to explain and correct outcomes.

Consider a simple example. A recruitment manager pastes a stack of CVs into a public AI chatbot and asks it to summarise the top five candidates. Or a marketing team uploads raw customer purchase histories into an experimental recommendation engine without updating its privacy notices.

To the business, these are clever shortcuts. To the PCPD, they look like classic breaches of purpose limitation and fairness: personal data reused in ways that were never explained, with no clear consent or transparency, and perhaps no way to honour access or correction requests.

The rule of thumb in this new world is not “Can the tool do it?” but “Would we be comfortable defending this use in a PDPO investigation?” Boards cannot answer that question if they do not know what tools are being used in the first place.
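Part of defending that use is basic hygiene before a prompt ever leaves the building: strip or mask personal identifiers. The sketch below is deliberately simplistic and the patterns are our own assumptions; real redaction needs far more than a few regular expressions.

```python
import re

# Deliberately crude patterns for illustration only; production redaction
# needs proper PII detection, not three regexes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b[A-Z]{1,2}\d{6}\(\d\)"), "[HKID]"),  # HKID-like pattern
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Mask obvious personal identifiers before text is sent to an external model."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

print(redact("Contact Ada at ada.chan@example.com or +852 9123 4567."))
# -> Contact Ada at [EMAIL] or [PHONE].
```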


2. Financial Regulators: Where the Edges Are Hardest

If you want to see where the hardest boundaries are being drawn, look at financial services. Regulators like the Securities and Futures Commission (SFC) and the Hong Kong Monetary Authority (HKMA) do not have the luxury of waiting for a grand AI law. They supervise institutions whose products can move markets and affect livelihoods. So they have started to write rules in real time, using the tools they already have.

  • SFC Circular on Generative AI (November 2024): This circular sets out the SFC's expectations for licensed corporations using generative AI language models, and it divides AI use into low‑risk and high‑risk categories. On one side are the administrative tasks—summarising internal documents, generating drafts for internal reports, writing computer code, translating routine communications. These uses still require governance and care, but they sit closer to traditional IT, and the regulatory burden is relatively light. On the other side are the models that touch the heart of regulated activity: if your firm uses AI to provide investment recommendations, investment advice, or research to clients, you are officially operating in "high-risk" territory.

    For high-risk applications, firms must implement a "human-in-the-loop" to review and validate AI-generated output before it influences a client's decision. You cannot simply blame the model if it hallucinates a stock recommendation. You must also test and monitor the model's robustness against different prompts: how does the model treat outliers and stress conditions? What controls catch errors before they reach the client? What's more, clients should be told, in a meaningful way, when they are interacting with or relying on AI, so they understand the nature and limitations of what they are seeing. (A minimal sketch of such a gate follows this list.)

  • HKMA Consumer Protection Circular (August 2024): The HKMA expects banks that use chatbots or AI-driven decision support to give customers channels to opt out or request human intervention.
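In system terms, the "human-in-the-loop" requirement is a gate: high-risk output is not released to a client until a named human has validated it, and the client is told that AI was involved. A minimal Python sketch of that idea, with all names (`RiskTier`, `AIOutput`, `release_to_client`) our own rather than anything the SFC prescribes:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"    # e.g. internal summaries, code drafting, routine translation
    HIGH = "high"  # e.g. investment recommendations, advice, research for clients

@dataclass
class AIOutput:
    use_case: str
    tier: RiskTier
    content: str
    reviewed_by: str | None = None  # a named human reviewer, mandatory for HIGH

def release_to_client(output: AIOutput) -> str:
    """Refuse to release high-risk AI output that no human has validated."""
    if output.tier is RiskTier.HIGH and output.reviewed_by is None:
        raise PermissionError(
            f"High-risk output for '{output.use_case}' requires human review"
        )
    # Disclose AI involvement so the client understands what they are relying on.
    return output.content + "\n[Generated with AI assistance.]"

# A reviewed research note passes the gate; an unreviewed one raises an error.
note = AIOutput("equity research note", RiskTier.HIGH, "Summary...", reviewed_by="analyst-07")
print(release_to_client(note))
```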

On March 5, 2026, the HKMA and other regulators launched an expanded "GenAI Sandbox++": a controlled, risk-managed environment where financial institutions can test their AI ideas using supercomputing resources before unleashing them on the open market. It is a classic regulatory compromise: encouraging aggressive innovation, but strictly within the confines of a padded room.


3. Technology Governance: The Quiet Standards

The government’s Digital Policy Office (DPO) introduced the Generative Artificial Intelligence Technical and Application Guideline in April 2025, offering an operational framework for technology developers and users to manage technical risks such as data leakage and model bias, and to log and monitor model behaviour.

For listed companies, especially those building or heavily integrating AI systems, these guidelines function as a blueprint. They describe, in technical terms, what a defensible AI stack looks like: segregation of environments, robust access controls, transparent logging, and mechanisms to override or roll back harmful behaviour.
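To make "transparent logging" concrete: the expectation is roughly that every model interaction leaves a trace someone can later inspect. A minimal sketch, assuming a simple append-only JSONL audit file (the file name, schema, and wrapper function are our invention, not the DPO's):

```python
import json
import time
import uuid

AUDIT_LOG = "ai_audit.jsonl"  # hypothetical append-only audit trail

def logged_model_call(model_fn, prompt: str, user: str) -> str:
    """Invoke a model and record who asked what, when, and what came back."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user,
        "prompt": prompt,
        "response": model_fn(prompt),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["response"]

# Wrap whatever model client you actually deploy; a stub stands in here.
reply = logged_model_call(lambda p: "stubbed reply", "Summarise Q3 credit risks", "analyst-07")
```

The point is not the dozen lines of code; it is that when an output is challenged, someone can reconstruct who asked what, when, and what came back.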

No director will read these documents line by line. But someone in the organisation should, and should be able to look the board in the eye and say: “Our system looks like this. Here is where it diverges. Here is why.”

Without that conversation, it is difficult to claim that AI risk is being managed with the same seriousness as cyber risk or financial controls.

4. Intellectual Property: The Copyright Quirk and TDM Exception

Then there is the question that unsettles lawyers more than engineers: who owns what the model creates?

Hong Kong's Copyright Ordinance (Cap. 528) contains a provision that feels almost tailor-made for the AI era. It recognises "computer-generated works" and says that, where there is no human author in the traditional sense, the “author” is the person who made the arrangements necessary for the creation of the work.

But in the real world of generative AI, who is that person? When a bank uses a vendor’s model to generate draft research notes, who made the arrangements? The vendor, who trained and deployed the system? The institution, which configured and integrated it? The analyst, who chose the data and wrote the prompts?

Courts have not yet charted this territory in the context of modern generative models. Until they do, boards should assume that ownership and enforceability of AI‑generated works may be contested, not guaranteed.

At the same time, Hong Kong is moving to clarify another piece of the puzzle: how data can be used to train AI in the first place. Proposals for a Text and Data Mining (TDM) exception would allow organisations to analyse copyrighted works for machine learning, provided the works are accessed lawfully and rights holders have not opted out.

For listed companies, that exception is both an opportunity and a warning. It lowers the barrier to innovation, but it also draws a sharper line around what counts as lawful training. If your vendors build models using scraped data with murky provenance, claiming the benefit of a TDM exception will not help if the underlying access was not legitimate, or if rights owners have explicitly reserved their rights.

The compliance implication is simple: audit trails matter. You need to know, with reasonable confidence, what datasets trained the systems you are putting into production and what rights attach to those datasets.
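In practice, that audit trail can start as a structured record you require from every vendor and keep under version control. Here is a sketch of what such a record might capture; the fields are our suggestion, not a statutory format:

```python
# Hypothetical per-dataset provenance record, required before production use.
dataset_manifest = {
    "dataset": "hk-retail-transactions-2024",
    "source": "licensed from data provider under written agreement",
    "lawful_access": True,                  # relevant to any future TDM exception
    "rights_reserved_check": "2025-06-01",  # date opt-outs/reservations were verified
    "contains_personal_data": True,         # triggers PDPO obligations
    "retention_until": "2027-06-01",
}
```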


The Governance Gap: Behaviour vs. Policy

Perhaps the most sobering statistic we share with boards has nothing to do with laws at all. A 2025 survey by the Hong Kong Productivity Council (HKPC) found that 88% of employees in Hong Kong have already used AI tools in their daily work. Yet fewer than 30% of organisations had a formal AI policy in place when key guidelines were issued.

That means, in effect, that AI governance in many listed companies is happening from the bottom up. Employees experiment first. Policy arrives later, if at all.

In a world of “invisible guardrails,” that is a problem. The law assumes that organisations know what they are doing with personal data, with client relationships, with copyrighted materials. It does not give a free pass because the tool had a friendly interface and a subscription model.

For boards, the response does not need to be complicated, but it does need to be deliberate.

  • Build an AI governance structure. Whether you call it a committee, a council, or a working group, someone must own the map: which AI systems are in use, what data they touch, what risks they carry. (A minimal sketch of such a register follows this list.)

  • Classify use cases by risk. A drafting assistant for internal memos is not the same as a model that influences credit decisions or trading strategies. High‑impact uses should have formal approval, testing, and human‑in‑the‑loop review.

  • Connect policies to reality. An AI policy no one reads is no better than no policy at all. Training, accessible guidelines for employees, and clear escalation channels are the difference between paper compliance and actual control.

  • Audit the data supply chain. Ask for documentation from vendors. Map where your data goes, how long it stays, and what models it trains. Treat this as part of your overall risk and disclosure strategy.
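As a purely illustrative sketch of that register and risk classification (the fields and the crude heuristic are our assumptions, not a regulatory taxonomy):

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    vendor: str
    personal_data: bool              # PDPO exposure
    client_facing: bool              # SFC/HKMA exposure
    training_data_documented: bool   # copyright / TDM exposure

    @property
    def risk_tier(self) -> str:
        # Crude default: anything client-facing or touching personal data is
        # treated as high risk until it has been formally assessed.
        return "high" if (self.client_facing or self.personal_data) else "low"

register = [
    AIUseCase("Internal memo drafting", "VendorX", False, False, True),
    AIUseCase("CV screening assistant", "VendorY", True, False, False),
]

for uc in register:
    print(f"{uc.name}: tier={uc.risk_tier}, provenance_documented={uc.training_data_documented}")
```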

The companies that come out ahead in this environment will not necessarily be the ones that deploy the flashiest models. They will be the ones that can explain, calmly and clearly, how those models fit within Hong Kong’s existing rules—and how, when something goes wrong, a human is still accountable.

In a city without an AI Act, that may turn out to be the most important governance skill of all: the ability to recognise that the law is already here, even when you cannot see its name on the cover of a single document, and to build AI systems that behave as if a regulator were already watching.