Policy Brief
From Crisis to Renewal
Addressing AI’s Impact on Our Information Ecosystem
Published by
Interface
April 09, 2026
Executive Summary
AI-driven search-summary features, with Google's AI Overviews (AIOs) as the most prominent example, are rapidly transforming how people access news, with profound implications for our information ecosystem. While AI holds potential to improve access to information, its current deployment risks undermining the sustainability, diversity, and reliability of news.
Because AI summaries present information directly within search results, they can significantly reduce traffic to news publishers. Evidence suggests users are less likely to click through to original sources when AIOs are present on Google, threatening the financial viability of journalism. These impacts are unlikely to be evenly distributed: they will play out differently across the news sector and across Europe, with smaller and ad-funded outlets particularly vulnerable, raising concerns about declining media plurality.
At the same time, AI-generated summaries often fail to provide accurate and diverse information. Research shows that AI tools disproportionately rely on a narrow set of sources, marginalising smaller publishers. In addition, persistent issues with accuracy – including misleading or incorrect outputs – risk distorting public understanding and weakening trust in information systems.
Current regulatory frameworks in both the EU and UK are struggling to address these challenges effectively. While multiple legal instruments are relevant and already in place – including competition law, platform regulation, copyright, and AI governance – none are fully equipped to respond quickly and comprehensively to the harms of monopolisation and the lack of remuneration for media outlets posed by AI summaries. We argue that competition policy offers the most effective and immediate route to intervention, with digital regulation serving as a secondary lever.
The UK’s approach, based on flexible, case-specific competition rules, enables faster action through tailored conduct requirements for dominant firms like Google. In contrast, the EU’s framework is broader but slower, relying more heavily on traditional, time-intensive antitrust investigations and ongoing debates around copyright and remuneration for publishers. Both systems offer complementary strengths, and policymakers should learn from each other’s approaches.
We therefore recommend a “competition-first” policy package to ensure fairer outcomes. Key measures include enforcing fair ranking and attribution practices, preventing unauthorised use of publisher content, and potentially requiring platforms to compensate news organisations. However, competition policy alone will not be sufficient. We therefore propose additional long-term interventions, including greater transparency through “nutrition labels” for AI-generated content and increased public or democratic funding for journalism. These measures aim to strengthen the resilience and independence of the news sector, ensuring that trustworthy information remains accessible in an information ecosystem shaped by AI.
Introduction
Across Europe, trusted information supply chains are severely strained. AI-enabled mis- and disinformation are polluting online spaces. 1 Digital platforms are amplifying divisive content at scale 2 and media pluralism is under threat. 3 Now, more than ever, we need to protect the providers of trustworthy information.
Yet, just as quality journalism matters most, the news sector is being rapidly transformed by AI. Google AI Overviews (AIOs) – AI-generated summaries displayed at the top of Google's search results in response to users’ queries – and other AI summaries are becoming the new front door to the news. Not only are the audiences of news organisations and media outlets collapsing; AI also does not always succeed in presenting trustworthy information.
These harms are not inevitable – and, with the right policies in place, AI has the potential to improve our news environment. AI can make news content more engaging, it can verify facts across multiple sources, and it can help the public access information that would otherwise remain buried.
For AI to unlock this potential, we need to act quickly to rebalance power and steer AI towards a positive impact for the news ecosystem. In practice, this requires regulation that holds AI companies accountable for the quality of the news content they generate, and that supports the long-term sustainability of news organisations. We argue that, both in the UK and EU, competition policy offers the best route forward, with digital regulation serving only as a secondary lever. With the UK’s competition authority ahead in its efforts to design conduct requirements for Google, there are many lessons that the EU might learn.
Google AI Overviews are the new front door to news
AI is becoming a new front door through which the public access the news. Google’s AIOs launched in summer 2024 and now reach over 2 billion users every month, 4 appearing under more than 60 per cent of US Google searches.
With the introduction of AIOs, Google’s role has fundamentally shifted. It is no longer simply a gateway through which the public can access content. Instead, it is summarising and presenting information directly to users and providing them with all the information they need within the platform itself. In some ways, Google is ceasing to act as a gateway at all, instead serving as a one-stop shop for the public’s information needs.
AI, platform power and the future of trusted information
As AI summaries increasingly mediate how the public access information, their effects are already being felt across the news ecosystem. And, while there are many potential benefits to be realised through AI, early evidence points to concerning consequences, including emerging risks to trustworthy information and media pluralism.
AIOs, media plurality and news sustainability
Across Europe, the growth of AI-enabled search is posing a significant challenge to the sustainability of quality journalism. Evidence suggests that AIOs and other AI summaries are sharply reducing referral traffic to news websites. Analysis from the Pew Research Center finds that users are almost half as likely to click on search links when an AI Overview is present. 5 Publishers themselves expect this trend to continue, predicting that traffic from search engines will fall dramatically (-43 per cent) in the next three years. 6
In response, news organisations are fighting back, challenging the use of their content by large technology platforms and arguing that current practices undermine both competition and news sustainability:
- A coalition of publishers has filed a formal antitrust complaint with the European Commission against Google, accusing its AI Overviews of diverting traffic and revenue from them. 7
- A German media alliance is suing Google for violating provisions in the EU’s Digital Services Act. 8
- The Italian Federation of Newspaper Publishers (FIEG) launched a complaint with the Italian Digital Services Coordinator (DSC) – AGCOM – dubbing AIOs a ‘traffic killer’. 9
- In a statement to the Competition and Markets Authority in the UK, a collective of news outlets argued that AIOs resulted in an 89 per cent decline in click-through rates (CTR) and slashed income traditionally generated through ad revenue. 10
These impacts from AIOs and other AI summaries are unlikely to be evenly distributed across the news sector, or across Europe. Advertising-funded organisations are particularly exposed, as they depend heavily on page views to generate revenue. In contrast, subscription-based business models and publicly funded news may be more resilient in the short term thanks to their lower reliance on traffic from search engines like Google.
Across Europe, this means the harms from AI will play out in different ways. Where there are high levels of public funding, a high degree of media plurality, and a greater public willingness to pay for news, the impacts may be slower to materialise. However, the corrosive effects on the media landscape are not the only issue for news consumers, who may, in the long run, face a shrinking diversity of sources. More immediately, readers are also affected by the unreliability of the information presented by AI.
AI and quality sourcing
AI summaries are increasingly shaping what information the public sees and trusts. There is growing evidence that AI summaries cannot be relied on to surface information that is both accurate and drawn from a diverse range of quality sources.
Concerns about accuracy and bias within AI-generated content are well established. And while Google's AI Overviews use retrieval-augmented generation (RAG) – a framework intended to improve reliability by drawing only on trusted external sources – this approach guarantees neither accurate sourcing nor a diverse range of sources, reinforcing concerns about declining media pluralism.
Research from IPPR highlights these limitations. In a small-scale one-shot experiment testing the performance of four AI tools (ChatGPT, Google AI Overviews, Google Gemini, Perplexity AI) in the UK across 100 randomly generated news queries, AI tools – including AIOs – consistently failed to surface a broad range of trusted journalistic sources. Instead, each of the tested AI tools concentrated heavily on just one news outlet, with the single most cited news source accounting for 34 per cent of all journalistic sources in AI answers. 11
Figure: A narrow range of journalistic sources dominates in AI answers – number of citations from the top 10 journalistic sources (out of 100 queries). Source: IPPR analysis of AI responses to 100 UK news queries, based on a one-shot experiment evaluating ChatGPT, Google AI Overviews, Google Gemini and Perplexity.
This pattern is consistent with wider research, which has found that AI tends to concentrate citations on just a few major news outlets 12 and that the top 10 news outlets receive 80% of all mentions in AIOs. 13 In practice, this can make smaller news organisations invisible, to the detriment of media plurality. It can also easily create new winners and losers in the news ecosystem, with AI companies in the driving seat and directing which opinions are surfaced.
AI and accuracy
Additional concerns continue to be raised about the accuracy of AI answers. In Germany, SE Ranking found that AI Overviews are 2-3 times more likely to cite YouTube than trusted medical sites, with several instances of AIOs providing misleading medical advice. 14 Beyond Google, numerous studies have found that AI frequently fails to deliver accurate information. The BBC finds that AI assistants misrepresent news content 45 per cent of the time, 15 while Which?, the largest consumer organisation in the UK, has uncovered AI tools giving risky financial advice. 16
Taken together, this evidence raises serious questions about the role AI summaries play in shaping public understanding. At scale, persistently narrow sourcing combined with inaccurate synthesis, even when reliable sources are used, risks distorting public debate and weakening access to trustworthy information.
The UK and EU are waking up to the impacts, but efforts are stalled by a complex regulatory landscape
In both the UK and the EU, policymakers are beginning to wake up to the impacts AI summaries, especially Google's AIOs, are having on our information environment. Debate is intensifying over how to address these risks, balancing the need for information integrity and media plurality with support for AI-enabled growth and innovation. In each case, existing legislation is proving imperfect for quickly addressing AIOs’ impacts.
Below, we summarise the legislative backdrop in the UK versus the EU, highlighting several laws which have given publishers hope of mitigating the impacts of Google's AIOs (Table 1). However, not all of these laws offer a practical route to addressing the impacts of AIOs. In our view, both in the UK and EU, competition and antitrust law offers the most practical mechanism through which to act.
Table 1: Overview of EU and UK regulation relevant for addressing AIOs
| EU law | How it applies to AIOs | UK law | How it applies to AIOs |
|---|---|---|---|
| **Competition and market power** | | | |
| Digital Markets Act (DMA) | The DMA is the first route for addressing the entrenched market power of Google, for example where it gives preference to its own products in search results (Art. 6.5). However, the DMA's obligations are predefined for designated gatekeepers (e.g., Alphabet, Microsoft) and are proving inflexible for quickly addressing the impacts of AIOs. | Digital Markets, Competition and Consumers Act (DMCCA) | This enables the UK to designate technology firms with strategic market status (SMS) and then impose conduct requirements on them. The UK Competition and Markets Authority (CMA) is currently considering conduct requirements for Google that explicitly cover AIOs. |
| Article 102 TFEU (traditional antitrust) | In the context of the DMA's inflexibility, traditional EU antitrust law has so far been more relevant to addressing the impacts of AIOs, with a live investigation into whether the use of publishers' data in AIOs without consent, compensation or opt-out violates EU competition law, with the potential to impose new conduct requirements. | | |
| **Content visibility, prominence and attribution** | | | |
| Audiovisual Media Services Directive (AVMSD) | Member States may ensure prominence for media services of general interest on audiovisual media (Art. 7a). The AVMSD review (part of the European Democracy Shield) could extend this to cover AI Overviews – but Google Search does not traditionally qualify as audiovisual media. | Media Act 2024 | Created a prominence regime for public service broadcasters on smart TVs and streaming devices. Does not currently apply to the prominence of news content on social media or within AI answers. |
| EU Copyright Directive | Under the EU's Copyright Directive, copyrighted material may be used for research purposes under a text and data mining (TDM) exception, and AI-generated summaries that reproduce protected content may fall under the TDM exceptions. The crux is that web publishers, as rights holders of copyrighted material, cannot opt out of being included in AI Overviews, only out of AI training. Publishers worry that Google is abusing its monopoly by forcing publishers' participation in training data as a condition of visibility. A 2025 parliamentary report found that the current regulatory landscape is ill-suited to the problems generative technology poses for copyright. | | |
| **Transparency and training data** | | | |
| EU AI Act | The AI Act does not address media pluralism per se, but requires "sufficiently detailed" summaries of the training data of general-purpose AI (GPAI) models, so that rights holders can more easily claim compensation for the use of their content. So far, Google says that opting out of AI training does not affect a site's appearance in search results, but some news outlets still fear it could impact visibility. | No equivalent | The UK has no specific AI regulation covering transparency from AI companies on their training data. |
| **Systemic risks and online safety** | | | |
| Digital Services Act (DSA) | The DSA requires Google to assess systemic risks to media freedom and pluralism before launching new features (Art. 34). A coalition of media organisations and NGOs has filed a complaint against AIOs under the DSA with Germany's Digital Services Coordinator, arguing they threaten media diversity and democratic discourse. | Online Safety Act | No equivalent coverage of AIOs, as Ofcom's (the UK regulator for communication services) duties focus on illegal and harmful content, with AI-generated content regulated in the same way as human content. |
The way forward: A competition-first package to direct AI’s impact on news
In both the UK and the EU, competition policy offers the most immediate hope of addressing the harms of AIOs at their source, and authorities in both jurisdictions are moving forward quickly. Digital regulation must be effectively implemented as a secondary lever. However, the UK and EU approaches differ significantly, and each regulatory regime could learn from the other.
The UK approach: faster, more flexible, but narrower in scope
The DMCCA was designed to be flexible, meaning the UK’s Competition and Markets Authority (CMA) can tailor conduct requirements to specific problems rather than relying on pre-determined obligations. This has given it substantial freedom in responding to the impacts of Google AI Overviews. Following Google’s designation as having strategic market status, the CMA has designed bespoke conduct requirements and consulted the publisher community on them in depth.
The CMA has also been able to act quickly following Google’s SMS designation in October 2025. It began consulting on specific conduct requirements in January 2026, offering a relatively quick route to implementing essential measures for Google. These include ensuring that Google does not scrape publisher content without permission, that it ranks content fairly within AIOs, and that it sufficiently attributes the publishers whose content is used. 17
However, the conduct requirements under consideration in the UK remain narrow. And many are disappointed that the CMA has delayed any efforts to require Google to negotiate with publishers under fair and reasonable terms (i.e. require them to pay for content).
There are also wider concerns about whether the CMA is being left to get on with its investigation independently. The UK government has previously sacked the Chair of the CMA 18 and steered the CMA towards acting in the interests of UK growth, 19 while the chair of a related inquiry into cloud services quit over concerns about the slow pace of change. 20
The EU approach: slower to start, but broader in ambition
In the EU, the DMA should have been the legislation to address AIOs’ impacts quickly. On 25 March 2026, the Commission’s investigation into Alphabet’s self-preferencing of its own services in Google Search reached its two-year mark. For that very reason, a coalition of civil society organisations, publishers and businesses urged the European Commission (EC) to recognise the seriousness of Google’s self-preferencing practices. 21 Given the regulation’s slow procedural nature, its ongoing proceedings and the demands of effective implementation, its pre-determined obligations could not yet accommodate the specific harms posed by AIOs. This is not to say that the DMA cannot play a key role here, as it is based on market analysis and provides for ex-post checks. But while the EU’s legislation in this field is more binding, this case starkly highlights the benefits of adaptability.
To respond to AIOs, the EC had to fall back on Article 102 TFEU – in other words, on traditional competition law rather than digital regulation. EU antitrust law prohibits agreements between market operators that would restrict competition, as well as the abuse of a dominant position. This offered the EC the necessary flexibility to cover AIOs. The EC opened a formal antitrust investigation into Google in December 2025, but there is not yet any clarity on how long this investigation will last – complex cases can take up to five years.
Another angle that could be used is asking for remuneration with an updated Copyright Directive. On 10 March 2026, MEPs within a non-binding report 22 called for new copyright rules when AI trains on protected works and have urged the EC to act. The report calls upon the Commission to support collective licensing agreements per sector to “ensure the fair remuneration of rights holders while enabling AI providers to access high-quality training data.” Another ask is to “safeguard the press and news media sector, whose services are repeatedly and fully exploited by AI systems”. One way to do so is by ensuring that “providers of GenAI models or systems that demonstrably divert traffic and revenue from press and news media outlets compensate such outlets.”
Focusing on both fair competition and remuneration, the EU investigation is broader and may allow the bloc to reach a clearer answer on the question of paying publishers for content. While the UK has kicked this can down the road, despite remuneration being the most obvious lever through which to ensure media plurality, the EU is actively considering requiring Google to pay publishers for content. 23 The premise here is that AI summaries are copies of original publisher content. If, however, summaries are not considered copies, another lever recently established at Member State level could prove useful: the Frankfurt Regional Court recently ruled that AI misinformation cannot simply be dismissed as a technical error. 24 If AI disseminates false information about companies, this can be considered anti-competitive obstruction. We believe this gives affected news outlets a powerful antitrust weapon against “hallucinations” that cause reputational damage, and therefore a stronger lever for remuneration.
What can the UK and EU learn from each other?
Both jurisdictions should watch the progress of the other closely. The UK’s approach under the DMCCA is likely to result in binding conduct requirements sooner than anything in the EU, meaning the EC can monitor these for success and tailor their own measures accordingly.
Meanwhile, the UK should learn from the EU’s insights as they conclude on fair compensation for publishers whose content is used within AI summaries.
While the EU arguably has more legislation that at first sight appears relevant for addressing the risks of AIOs, it may well be the UK’s more flexible competition regime that enables them to act first with practical, binding requirements for Google.
Both regulatory regimes must also now begin defining clear success metrics for these measures. The following will all be important, but specific targets will be key:
- Referral rates from AI summaries to news publishers increase.
- Diversity in outlets sourced by AIOs increases.
- Error rates within AIOs and other AI summaries decrease.
- Media plurality scores improve.
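To show how the second of these metrics could be made measurable, the sketch below computes a simple concentration measure over the outlets cited in AI answers. This is purely illustrative: the metric choice (top-k share and a Herfindahl-Hirschman index) and the data format are our assumptions, not drawn from either regulator's proposals.

```python
from collections import Counter

def citation_concentration(cited_outlets: list[str], top_k: int = 10) -> dict:
    """Summarise how concentrated AI-answer citations are across outlets.

    Returns the share of all citations captured by the top_k outlets, and
    the Herfindahl-Hirschman index (HHI), where values near 1.0 indicate
    citations dominated by a single outlet.
    """
    counts = Counter(cited_outlets)
    total = sum(counts.values())
    shares = [n / total for n in counts.values()]
    top_share = sum(sorted(shares, reverse=True)[:top_k])
    hhi = sum(s * s for s in shares)
    return {"top_k_share": top_share, "hhi": hhi}

# Hypothetical sample: 8 of 10 citations go to a single outlet.
sample = ["Outlet A"] * 8 + ["Outlet B", "Outlet C"]
result = citation_concentration(sample, top_k=1)
print(result)  # top_k_share = 0.8, hhi = 0.64 + 0.01 + 0.01 = 0.66
```

A falling top-k share and HHI over time would indicate that conduct requirements are broadening the range of outlets surfaced in AI answers.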
Additional routes to take
Competition policy can help address power imbalances between the news and technology sectors, giving news organisations a fair chance of being represented in AI outputs. This offers the most immediate hope of binding constraints for Google and tangible benefits for media plurality in Europe.
But it is not enough on its own. In the longer term, a wider set of responses will be needed. Platform regulation alone does not create online spaces where dialogue, understanding, and social cohesion are fostered. Two additional interventions stand out.
- ‘Nutrition’ labels for AI answers
AI systems should be required to explain, in clear and standardised ways, how their answers were generated, and should be designed in such a way as to ensure that sources are properly cited. Nutrition labels can show what kinds of sources underpin AI-generated news, giving users meaningful insight into the quality and provenance of information. The inclusion and labelling of public service media, for instance, guarantees users that the results delivered by the AI system are trustworthy. As AI systems become more personalised, nutrition labels can also be used to provide users with clear choices on the types of sources AI answers draw on.
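To make the idea concrete, a nutrition label for a single AI answer might be serialised as a small structured record published alongside the answer. The field names and source-type categories below are purely illustrative assumptions; no standard schema for such labels exists yet.

```python
import json

# A hypothetical nutrition label for one AI-generated answer.
# All field names and values are illustrative, not an existing standard.
label = {
    "answer_id": "example-001",
    "generated_at": "2026-04-09",
    "sources": [
        {"outlet": "Example Public Broadcaster", "type": "public_service_media"},
        {"outlet": "Example Regional Daily", "type": "commercial_press"},
        {"outlet": "example-forum.net", "type": "user_generated"},
    ],
}

def source_type_breakdown(label: dict) -> dict:
    """Share of underlying sources per type - the headline figure a
    nutrition label could display to users."""
    types = [s["type"] for s in label["sources"]]
    return {t: types.count(t) / len(types) for t in sorted(set(types))}

print(json.dumps(source_type_breakdown(label), indent=2))
```

A user-facing label could then render this breakdown as a simple chart, letting readers see at a glance whether an answer rests on public service media, commercial press, or unverified user-generated content.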
- Democratic funding for news
A small number of US technology firms are rapidly becoming the major gatekeepers of news information. One of the most effective ways to counter this concentration of power is to make news organisations financially independent of platforms.
Democratic funding models for news can do just this, offering a route to long-term sustainability for public interest journalism. In some countries, this is already a key part of the news funding model, for example, in Denmark, where public subsidies for private news are already a reality. 25 In other countries, this is a newer idea with instruments such as the European Democracy Shield 26 and the UK Local Media Action Plan, marking a step in the right direction. A digital tax on major online platforms and search engines, as implemented in several European Member States and currently being discussed in Germany, would not break the dependency chain with Big Tech but could serve as an alternative funding model to secure public-interest journalism and media pluralism. With stable resources and a more level regulatory playing field, public service broadcasters can genuinely compete with technology giants, rather than gradually becoming less relevant. 27
Acknowledgements
We would like to warmly thank Carsten JUNG, Interim Associate Director for Economic Policy and AI at IPPR, Dr. Renate DÖRR, Regulatory Affairs at ZDF, and Jessica GALISSAIRE, Senior Policy Researcher at interface, for their valuable input, which greatly contributed to informing and improving this paper. We would also like to express our appreciation to Luisa SEELING, Lead Writing, Editing & Publishing, Alina SIEBERT, Lead Design & Visual Communication, and Iana PERVAZOVA, Lead Media Relations & Outreach at interface, for their invaluable review, visual design and dissemination efforts.
1 Seger et al., "Epistemic Security for Crisis Resilience: An analysis of information threats, vulnerabilities, and priority interventions for the maintenance of effective crisis response capacity in democratic societies", Epistemic Security Network Demos, January 2026, accessed 10 March 2026, https://demos.co.uk/wp-content/uploads/2026/01/Epistemic-Security-for-Crisis-Resilience_Report_2025_Jan_optimised.pdf.
2 Krasodomski-Jones, Alex, "Everything in Moderation: Platforms, Communities and Users in a Healthy Online Environment", Demos, October 2020, accessed 10 March 2026, https://demos.co.uk/wp-content/uploads/2020/12/Everything-in-Moderation.pdf; Milli et al., "Engagement, User Satisfaction, and the Amplification of Divisive Content on Social Media", PNAS Nexus, Volume 4, Issue 3, March 2025, accessed 10 March 2026, https://doi.org/10.1093/pnasnexus/pgaf062.
3 Day, Jonathan, Franziska Otto & Eva Simon, "Liberties Media Freedom Report 2025", Civil Liberties Union for Europe, 2025, accessed 10 March 2026, https://www.liberties.eu/f/oj-aem; Chivers, Tom, "2025 Report: Who Owns The UK Media?", Media Reform Coalition, May 2025, accessed 10 March 2026, https://www.mediareform.org.uk/wp-content/uploads/2025/05/2025-Who-Owns-The-UK-Media-report.pdf.
4 Pichai, Sundar, "Q2 earnings call: CEO's remarks", Google, 23 July 2025, accessed 10 March 2026, https://blog.google/company-news/inside-google/message-ceo/alphabet-earnings-q2-2025/.
5 Chapekis, Athena & Anna Lieb, "Google users are less likely to click on links when an AI summary appears in the results", Pew Research Center, 22 July 2025, accessed 10 March 2026, https://www.pewresearch.org/short-reads/2025/07/22/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results.
6 Newman, Nic, "Journalism, Media and Technology Trends and Predictions 2026", Reuters Institute, 12 January 2026, accessed 10 March 2026, https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2026.
7 Negreiro, Mar, "Search engines in times of Artificial Intelligence", European Parliamentary Research Service, December 2025, accessed 10 March 2026, https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/779238/EPRS_ATA(2025)779238_EN.pdf.
8 Brooker, Alice, "German Media Groups File Complaint Against Google AI Overviews", Press Gazette, 19 September 2025, accessed 10 March 2026, https://pressgazette.co.uk/news/german-media-groups-file-complaint-against-google-ai-overviews/.
9 Wanted in Rome, "Italian Newspapers File Complaint Over Traffic Killer Google AI Overviews", Wanted in Rome, 16 October 2025, accessed 10 March 2026, https://www.wantedinrome.com/news/italian-newspapers-file-complaint-over-traffic-killer-google-ai-overviews.html.
10 Bearne, Suzanne, "Publishers fear AI summaries are hitting online traffic", BBC, 9 September 2025, accessed 10 March 2026, https://www.bbc.com/news/articles/c0mlvryx0exo.
11 Powell, Roa & Carsten Jung, "AI's got news for you: Can AI improve our information environment?", Institute for Public Policy Research, 30 January 2026, accessed 10 March 2026, https://www.ippr.org/articles/ais-got-news-for-you.
12 Yang, Kai-Cheng, "News Source Citing Patterns in AI Search Systems", Northeastern University, 7 July 2025, accessed 10 March 2026, https://arxiv.org/pdf/2507.05301.
13 Deda, Yulia, "AI Overviews research: Media presence and paywalled content analysis", SE Ranking, 2 June 2025, accessed 10 March 2026, https://seranking.com/blog/media-presence-in-aios-research/.
14 Deda, Yulia, "Health-related AI Overviews turn to YouTube 2–3x more than trusted medical sites", SE Ranking, 14 January 2026, accessed 10 March 2026, https://seranking.com/blog/health-ai-overviews-youtube-vs-medical-sites/.
15 EBU & BBC, "News Integrity in AI Assistants: An international PSM study", BBC, October 2025, accessed 10 March 2026, https://www.bbc.co.uk/mediacentre/documents/news-integrity-in-ai-assistants-report.pdf.
16 "ChatGPT and Gemini among AI tools giving risky consumer advice, Which? finds", Which?, 18 November 2025, accessed 10 March 2026, https://www.which.co.uk/policy-and-insight/article/chatgpt-and-gemini-among-ai-tools-giving-risky-consumer-advice-which-finds-aBnBP0l2CE0T.
17 Competition and Markets Authority, "Google's general search services: proposed conduct requirements", gov.uk, 28 January 2026, accessed 10 March 2026, https://www.gov.uk/government/consultations/googles-general-search-services-proposed-conduct-requirements.
18 Stanley, Martin, "The Government has Undermined the UK's Principal Competition Authority", Martin Stanley's Substack, 23 January 2025, accessed 10 March 2026, https://ukcivilservant.substack.com/p/the-government-has-undermined-the.
19 Department for Business and Trade, "Strategic steer to the Competition and Markets Authority", gov.uk, 15 May 2025, accessed 10 March 2026, https://www.gov.uk/government/publications/strategic-steer-to-the-competition-and-markets-authority/strategic-steer-to-the-competition-and-markets-authority.
20 Bristow, Tom, "Cloud over competition: The chair of the CMA's cloud inquiry quits", The Morning Intelligence, 3 March 2026, accessed 10 March 2026, https://www.themorningintelligence.uk/cma-cloud-inquiry-chair-kip-meek-quits/.
21 Rebalance Now et al., "Open Letter to the European Commission on Google's Non-Compliance with the DMA", 15 March 2026, accessed 27 March 2026, https://rebalance-now.de/wp-content/uploads/2026/03/260315-Open-Letter-DMA-Two-Years-of-Non-Compliance-Investigation.pdf.
22 European Parliament, "Protecting copyrighted work and the EU's creative sector in the age of AI", press release, https://www.europarl.europa.eu/news/en/press-room/20260306IPR37511/protecting-copyrighted-work-and-the-eu-s-creative-sector-in-the-age-of-ai.
23 European Commission, "Commission Opens Investigation into Possible Anticompetitive Conduct by Google in the Use of Online Content for AI Purposes", European Commission, 9 December 2025, accessed 10 March 2026, https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2964.
24 Krempl, Stefan, "Lawsuit Against Google AI: False Search Information Can Justify Injunction", Heise, 30 January 2026, accessed 10 March 2026, https://www.heise.de/en/news/Lawsuit-against-Google-AI-False-search-information-can-justify-injunction-11160423.html.
25 Nielsen, Prof. Rasmus Kleis, "From taxes to news: How Denmark is rethinking public funding for private publishers", Reuters Institute, 12 November 2025, accessed 27 March 2026, https://reutersinstitute.politics.ox.ac.uk/news/taxes-news-how-denmark-rethinking-public-funding-private-publishers.
26 The European Democracy Shield includes reinforced support for independent and local journalism to accompany the Commission's plans to foster the sustainability of EU media. It proposes actions to protect its economic viability, as well as to address the risks posed by AI. The Commission now proposes a budget of 3.2 bn euros for the Media+ programme to provide funding opportunities for independent and free media in the next Multiannual Financial Framework (MFF).
27 Powell, Roa & Dr. Sofia Ropek-Hewsom, "Levelling the playing field: The BBC, Big Tech, and the case for a bold charter", Institute for Public Policy Research, 10 March 2026, accessed 27 March 2026, https://www.ippr.org/articles/levelling-the-playing-field-bbc-big-tech.
Authors
Lena-Maria Böswald
Senior Policy Researcher Digital Public Sphere
Roa Powell
Senior Research Fellow at IPPR
Tyreese Calnan
Trainee Researcher at IPPR