Thursday 13 November 2025
Innovation with Integrity: A UK Path to Responsible AI and Copyright
Devesh Raj, Chief Operating Officer, UK
Introduction: When AI Trains on Creativity Without Consent
The UK's creative industries are a global success story - driving employment, developing skills, and contributing significantly to GDP, all while delivering world-class entertainment to audiences at home and abroad. From film and television to music and journalism, these sectors enrich our cultural life and support millions of jobs. With the potential to generate an additional £10 billion annually by 2033, their economic importance is only growing. Yet this is also an industry undergoing rapid transformation, with artificial intelligence presenting both exciting opportunities and serious challenges to the future of creative work.
Earlier this year, fans of Studio Ghibli, the legendary Japanese animation studio, were stunned to see AI-generated clips circulating online that looked like they had been lifted straight from Spirited Away or My Neighbour Totoro. These were not lost treasures from Ghibli's vaults. They were new imitations, generated by artificial intelligence systems trained on the studio's copyrighted films - without its knowledge or consent. Similar stories are prevalent across the creative industries.
For large organisations, this kind of unlicensed use undermines hard-earned investment. For small creators, it threatens their very survival. For artists, misappropriation of their voice, image or likeness puts their livelihood in jeopardy. Left unchecked, the practice risks hollowing out the creative economy - one of the key growth-driving sectors identified by the Government - reducing opportunities for new talent, and eroding trust in the industries that produce films, music, sports, and news.
The Legal Landscape
The United Kingdom
In the UK, copyright law is clear that copyrighted material cannot be used for commercial purposes - including training AI models - without a licence. There is a narrow exception for non-commercial research, but big technology firms cannot rely on this when training large language models or generative AI systems.
For a time, the government considered going further by consulting on a preferred option to amend existing copyright law to adopt an opt-out model, where companies would be free to use copyrighted works unless rights-holders explicitly blocked them. But this approach immediately faced strong opposition from the creative industries and from Parliament, on the grounds that it didn't recognise the true value of creativity and shifted the burden of enforcement of rights unfairly onto creators. Ever since, the Government has been resetting its approach, seemingly committing instead to exploring a licence-first system that would require companies to seek explicit permission and ensure that creators are compensated. Work is already underway on a new AI Bill, expected in 2026, which will set out frameworks for licensing, transparency, and enforcement.
The European Union
The European Union has taken a different path. Under its 2019 copyright rules, researchers can freely use copyrighted material for non-commercial purposes. For commercial AI training, however, the system defaults to an opt-out model: content can be used unless the rights-holder explicitly reserves their rights. While this sounds like a balance, it creates enormous practical challenges, including how a rights-holder can communicate their opt-out in an effective and efficient manner when their works appear on platforms - many of which are AI companies themselves - that they do not control. The problem is particularly acute for small businesses, individual artists and independent journalists who lack the resources to constantly monitor and enforce their rights.
This year, the EU's AI Act came into force, adding new transparency obligations. Foundation model developers must now document and disclose the sources of their training data, with fines of up to €35 million or 7% of global turnover for non-compliance. Most of the big AI companies have signed a voluntary code of practice to go beyond these requirements. Notably, however, Meta declined to do so, citing legal uncertainty. While the effectiveness of these measures is still unproven, the EU has taken an important first step towards forcing transparency into AI development.
The United States
In the United States, the issue is governed not by specific AI legislation but by the long-standing judicial doctrine of fair use. This legal test weighs four statutory factors against the specific facts of each case, the most important being whether the use is transformative - that is, whether it adds new meaning or purpose - and whether it harms the market for the work.
Why AI's Impact on News Is Already Clear
Artificial intelligence is rapidly transforming how news is created, distributed and consumed. Unlike other sectors, the implications for journalism go beyond creative and economic rights - they touch on trust, democracy and the integrity of public discourse.
News organisations like Sky News invest in rigorous, impartial, high-quality journalism under regulatory frameworks designed for a pre-AI era. Journalists spend time checking facts, challenging power, and ensuring that the public can rely on what they read and watch. Yet today, audiences access news across multiple platforms, while AI systems - often unregulated - reshape how information is surfaced and interpreted. AI is moving faster than the rules built to protect journalism and its role in a healthy democracy.
The stakes are high. AI models can misattribute facts, hallucinate stories, and cite sources inaccurately.










