
AI in the Screen Sector Quarterly Digest - January to March 2026

Welcome to the CoSTAR Foresight Lab’s new screen sector digest, summarising events and developments in AI each quarter.

Posted: 23 April 2026
A large professional film or photography studio filled with equipment and crew members. The space is dominated by a huge, seamless bright blue backdrop that curves at the base, creating an infinity‑style stage. Several silhouetted figures stand or work around cameras, tripods, and large LED panel lights positioned throughout the room. The floor reflects the blue light, and overhead rigging and studio fixtures are visible across the dark ceiling.
AuthorPetra Lindnerova
ContributorsRishi Coupland, David Johnston, John Sandow, Brian Tarran
PeriodJanuary to March 2026

In this digest, we analyse developments thematically, linking them to the nine key recommendations defined in our June 2025 report, ‘AI in the Screen Sector: Perspectives and Paths Forward’. Through monitoring of news items, reports and publications, the digest highlights emerging industry trends and tracks whether the sector is moving towards or away from the report's recommendations that, as of June 2025, attempted to represent ideal outcomes for the sector.

Rights

In a report published in March, the UK government announced it no longer favoured allowing AI developers to train on copyrighted materials unless rights holders opt out. This was earlier put forward as the government’s preferred option in its Copyright and Artificial Intelligence Consultation, but responses to that consultation showed significant opposition to the idea, with support instead for requiring content licensing for AI training. Following what science secretary Liz Kendall promised to be a ‘reset moment’, the plan is now to ‘gather further evidence on how copyright laws are impacting the development and deployment of AI across the economy’. According to the report, no changes to copyright law will be made unless there is certainty that ‘they will meet our objectives for the economy and UK citizens’. The government said licensing markets to support AI training were ‘new and evolving’ and that it does not want to interfere at this time.

In a sign of how these licensing markets are developing, US technology company Cloudflare acquired UK-based AI data marketplace Human Native in January. Together, the companies say they will work on ‘creating a new economic model for the internet in the age of AI’: a publication/subscription model where content owners can make decisions on whether and to what extent their content is accessed by AI developers and be remunerated accordingly.

In the screen sector, there is currently ‘little licensing of third-party-owned screen content for AI training in the UK’, according to a report commissioned by the British Film Institute’s Rapid Evidence Assessment and Data Review (READR) programme. The report recommends focusing first on transparency around AI training, as well as on commercial and regulatory approaches to distributing compensation to creators.

Meanwhile, one major – and widely reported – AI licensing deal has collapsed. A $1bn agreement between Disney and OpenAI was supposed to allow users of Sora, OpenAI’s video generation app, to create new audiovisual content using more than two hundred Disney characters. The deal fell through following OpenAI’s surprise decision to scrap the app and refocus on practical AI tools that will ‘help solve real-world problems’.

Commenting on the news, Deep Fusion Films CEO Benjamin Field noted that the Sora app was never publicly available in the UK, which he ascribed to potential copyright issues, given OpenAI’s ‘guarded’ approach to sharing details of its training data. Charismatic.AI boss Guy Gadney said Sora had potential but had ‘started to lag behind’ its competitors, chiefly Google’s Veo3 model and the Seedance model, developed by ByteDance, the Chinese owner of TikTok.

The launch of the second version of Seedance in February sparked industry concern when it was used to generate videos featuring copyrighted characters and real people (such as a scene of the actors Brad Pitt and Tom Cruise fighting). Both Disney and Paramount sent cease-and-desist letters to ByteDance, and SAG-AFTRA accused the company of a ‘blatant’ disregard for law and ethics, saying: ‘Responsible AI development demands responsibility, and that is nonexistent here.’


Responsible AI

In February, associations representing creative professionals gathered evidence from more than 10,000 creators for a cross-sector report, ‘Brave New World? Justice For Creators in the Age of Gen AI’. Commissioned by the Society of Authors, the Association of Illustrators, the Independent Society of Musicians and the Association of Photographers, the report calls on government and the creative industries to adopt a proposed CLEAR Framework, focused on Consent, Licensing, Ethical use, Accountability and Remuneration, and intended to serve as ‘the minimum standard for a functioning, fair and ethical creative economy’. The report argues that creators are not anti-AI, but need to be respected and compensated for the use of their content.

Similar sentiments were expressed during the launch of a March report on AI Human Avatars, published by the University of Reading, the Synthetic Media Research Network and Replique. At a launch event at the House of Commons, voice actor Gayanne Potter recounted how a clone of her voice had been used by rail company ScotRail without her knowledge or permission – sold by Swedish company ReadSpeaker, with whom Potter had previously worked to lend her voice for accessibility and e-learning software. During the discussion, Potter said the incident and surrounding publicity had affected her income and livelihood, but added: ‘I’m not anti-AI, I’m about consent.’ She argued that performers should have recourse when contracts they have previously signed are affected by changes in technology.

According to the AI Human Avatars report, negotiating fair contracts that protect human likenesses is a necessary step, but UK law on ownership of AI avatars is ‘fragmented’ and the current framework ‘unfit for purpose’, with ‘UK businesses and services unable to access the growth opportunities of the technology without policy intervention’.

This aligns with the focus of talks between actors’ union Equity and producers’ trade body Pact, which resumed at the end of January after an Equity ballot of 7,000 members – encouraged by the likes of Harriet Walter and Hugh Bonneville – showed 99% would refuse digital scanning on set. The latest round of talks reportedly centred on compensation for actors whose likenesses are used to train AI models. The outcome remains to be seen.

Skills

The Tony Blair Institute predicts that by 2030, up to 3 million jobs could be displaced by AI. For the creative sector, data from the aforementioned ‘Brave New World?’ report seem to align with this prediction, detailing both job losses and income hits in different areas: 57% of authors say that ‘their career is no longer sustainable due to Gen AI’, and 58% of photographers had ‘lost assignments due to Gen AI by February 2025’.

Meanwhile, Anthropic’s March report, ‘Labor market impacts of AI: A new measure and early evidence’, tests a new approach to measuring occupational exposure by comparing the theoretical capability of large language models (LLMs) to perform certain tasks with real-world usage data. It finds ‘no systematic increase in unemployment for highly exposed workers since late 2022’, but reports that hiring of younger people has slowed in exposed occupations such as programming and sales. With growing automation of the day-to-day administrative tasks usually carried out by entry-level workers, experts are urging businesses to redesign early career roles to support AI skills development – not only to ensure individual career progression but also as an ‘investment in the long-term health of the economy’, as advised by the British Chambers of Commerce. In the US, tech giant IBM is rewriting junior roles to account for AI fluency, doubling down on tasks that require ‘human judgment, customer interaction, and oversight of AI systems’. This provides an example of how firms might develop a workforce that works with AI rather than being substituted by it, and comes amid reports of rising youth unemployment: the Office for National Statistics put the rate at 16% for December 2025 to February 2026.

Public transparency

UK sales firm The Mise En Scene Company (MSC) issued a ‘No AI’ certification for its slate of films and demanded a ‘global industry standard’ for transparent AI disclosure to audiences. Inspired by A24’s disclaimer at the end of the film ‘Heretic’, MSC called for a centralised system that clearly distinguishes AI-made from human-made works, preventing audiences from becoming ‘overwhelmed by a flood of synthetic culture’.

More companies across the creative sector are calling for transparent AI disclosure and participating themselves. The BBC recently counted ‘at least eight different initiatives’ to come up with a ‘no-AI’ label and likened their potential to the Fair Trade logo. In a first, the UK Society of Authors introduced a scheme for authors to register their works as ‘Human Authored’ and include the logo in their books, backed by writers including Malorie Blackman and Mary Beard. The Society highlighted that the absence of governance requiring tech companies to clearly label AI-generated outputs was causing confusion for readers. At times, even editors have failed to discern AI-generated output: the US release of horror novel ‘Shy Girl’ by Mia Ballard was scrapped by Hachette due to alleged AI use, with its UK version discontinued and pulled from shelves.

After marked pushback against AI-generated content in video games, online game store Steam began to require developers to disclose uses of AI. However, in January, Steam’s owner, Valve, relaxed its requirements: disclosures are now only required when AI has been used to create playable content. In a GamesIndustry.biz survey, almost half of games industry workers disagreed with this particular policy, preferring disclosure of all AI use (including uses aimed at ‘efficiency gains’ in developer tools). Controversy over the use of AI in games is such that one publisher – Running With Scissors, based in the US – went as far as to cancel an entire release after fans noticed AI-generated assets in a trailer.

In the music industry, scammers have been posting AI-generated songs to capitalise on clicks, at times even targeting specific artists via third-party platforms. One example is that of British indie musician Ormella, who discovered a new AI-generated song posted on her Spotify profile. In another case, a song-in-progress shared on social media by English singer-songwriter Benedict Cork was finished and released without his knowledge: a user ran the partial track through an AI music generator and put it out under a different name. In an effort to protect against these types of incidents, Spotify launched an ‘Artist Profile Protection’ feature through which musicians can review songs prior to their online release. The platform Bandcamp also published its generative AI policy, which bans works produced ‘wholly or in substantial part by AI’, and encouraged users to report any audio they suspect to be made ‘entirely or with heavy reliance’ on these tools.

Meanwhile, academics continue to gather data on how audiences respond to AI-generated and AI-assisted content. An experiment by researchers at Syracuse University and Florida International University presented study participants with a piece of music they were told was written either by composer Hans Zimmer or a first-year music student. Half the participants were then told that the music was composed ‘in collaboration with AI technology’, and for this group, the researchers observed that the music was evaluated more negatively, regardless of who the composer was said to be. ‘Right now, AI carries a reputational tax,’ said one of the study’s authors.

Sector adaptation

Major games companies may have been using AI in development ‘for decades’, but implementing it in player-facing aspects remains a contentious topic. According to GamesRadar, a few Valve developers have been looking into generative AI’s potential for game writing, specifically in ‘NPC or world reactions’, as these are responses that have historically had to be simulated. Writer Erik Wolpaw said he believed generative AI could indeed become useful in game writing, allowing characters to react to particular gameplay moves in real time.

UK-based independent game developer 10six Games earned a place in the Official Selection of the London Games Festival 2026 with its first game, YOU Vs. Zombies, due to launch this summer. The game leverages generative AI for character creation: player text inputs function as prompts for creating customised character models, skills, weapons, story settings and missions. Models have been fine-tuned on artwork created by the studio’s artist to ensure ‘outputs adhere to the game’s specific 2D comic art style’. YOU Vs. Zombies is one of a new collection of case studies being developed by the CoSTAR Foresight Lab to track sector adaptation through adoption of generative AI across film, television and video games.
