Policy Brief
Building a centralised national AI authority
Poland's AI Act implementation model in a European context
Executive Summary
The EU Artificial Intelligence Act (AI Act) leaves Member States with significant discretion in designing their national enforcement structures. This paper examines Poland’s approach, one of the most structurally distinctive among the EU-27: the creation of a new, centralised supervisory authority, the Commission for the Development and Safety of Artificial Intelligence (KRiBSI), designated as the sole market surveillance authority. As of early 2026, only nine of the 27 Member States have officially designated their national competent authorities, with a further ten in the pending designation phase. Among the 19 countries that have designated or proposed their governance models, the prevailing approach has been to distribute oversight responsibilities across multiple existing sectoral regulators, ranging from two bodies to as many as 14. Poland is one of only two countries, alongside Lithuania, to designate a single entity as its sole market surveillance authority, and the only one creating an entirely new institution for this purpose.
This policy brief traces how this institutional design evolved through the inter-ministerial legislative process. The Ministry of Digital Affairs originally envisioned a fully independent agency, but fiscal objections from the Ministry of Finance led to a significant scaling back of the proposal’s ambitions. The resulting compromise nests the authority’s operational support within the Ministry of Digital Affairs, raising questions about whether the statutory safeguards intended to preserve independence will prove sufficient in practice. Drawing on the financial supervision literature, the paper evaluates the trade-offs inherent in centralisation, weighing the advantages of consolidated AI-relevant expertise against the risk of insufficient sectoral knowledge across the diverse domains in which high-risk AI systems are deployed.
Beyond its core supervisory structure, the Polish draft legislation introduces instruments that go beyond the AI Act’s requirements. Binding individual opinions offer businesses upfront legal certainty on how the regulation applies to their specific products, while a Social Council for AI formalises multi-stakeholder expert input into the authority’s work. Both mechanisms address challenges shared across the EU and merit attention as potential models for other Member States. The analysis is based on the most recent draft legislation available as of late February 2026, before parliamentary consideration.
Introduction
The EU Artificial Intelligence Act (AI Act), in force since August 2024, requires each Member State to designate national competent authorities responsible for overseeing and enforcing the new rules. The regulation establishes a multi-level governance framework distributing enforcement responsibilities between a centralised EU AI Office and national competent authorities, but leaves Member States with considerable discretion over the institutional form those national authorities should take. This paper examines one of the most structurally distinctive approaches currently emerging: Poland’s decision to establish a new, centralised supervisory authority, the Commission for the Development and Safety of Artificial Intelligence (KRiBSI), as its sole market surveillance authority under the AI Act.
Poland's implementation model is notable on several counts. Of the 19 Member States that have designated or proposed their national competent authorities as of early 2026, nearly all have opted to distribute market surveillance responsibilities across multiple existing sectoral regulators. Poland stands apart as one of only two, alongside Lithuania, to designate a single entity as its sole market surveillance authority, and the only one creating an entirely new institution for this purpose. Beyond this structural choice, the draft legislation introduces instruments that go beyond the AI Act’s requirements, including binding individual opinions designed to provide upfront legal certainty to businesses, and a Social Council for AI intended to bridge the public sector’s expertise gap through formalised multi-stakeholder input. To situate these choices, the paper surveys the emerging European landscape of national implementation and draws on the financial supervision literature to illuminate the trade-offs inherent in the centralisation-versus-dispersion debate.
While Poland's centralised approach is the paper's primary focus, Poland is not the only Member State to invest in new institutional capacity for AI governance. Spain, for instance, established a dedicated AI supervisory agency (AESIA) as early as 2023, though it ultimately opted for a dispersed enforcement model. Ireland is similarly creating a new National AI Office to coordinate across its 13 designated bodies. These cases provide useful comparative reference points that are explored in Section 5.
The paper’s central analytical contribution lies in tracing how this institutional design evolved through the inter-ministerial legislative process. The Ministry of Digital Affairs originally envisioned a fully independent agency, but sustained fiscal objections from the Ministry of Finance led to a significant scaling back of the proposal’s ambitions, both in terms of institutional autonomy and resources. The resulting compromise retains KRiBSI as the designated authority but nests its operational support within the Ministry of Digital Affairs. The paper evaluates the consequences of this compromise for institutional independence, enforcement capacity, and positioning within the AI Act’s multi-level governance architecture. An important caveat applies: the analysis rests on the most recent draft legislation available as of late February 2026, and parliamentary amendments could still alter the design described here. The paper does not attempt a comprehensive comparative analysis of all Member States or a legal compliance assessment; both represent avenues for further research.
The paper begins by outlining the EU-level institutional framework under the AI Act and the Digital Omnibus on AI’s implications for implementation timelines, before surveying the European landscape of national implementation models. It then turns to the Polish case, tracing the legislative negotiation, describing the resulting institutional framework, and evaluating its design along four dimensions: centralisation strategy, independence safeguards, resource adequacy, and EU-level coordination. The last section offers concluding observations.
Overview of the institutional framework under the EU AI Act
To understand the practical challenges of implementation, it is first necessary to look at the formal institutional architecture established under the AI Act. 1 The regulation creates a multi-level governance model that distributes responsibilities between the EU institutions and Member States, with three key actors at its core, as illustrated in Figure 1.
Figure 1: The EU AI Act enforcement architecture
First, the EU AI Office, 2 part of the European Commission (DG CONNECT), is the central pillar of the enforcement structure. Crucially, it holds exclusive competence to supervise and enforce the rules concerning General-Purpose AI (GPAI) models, particularly those with systemic risk (Art. 88 of the AI Act). This gives the AI Office direct authority over the foundational layer of the AI ecosystem. Furthermore, its mandate includes developing Union expertise and capabilities in AI (Art. 64), as well as supporting the European Artificial Intelligence Board in coordinating the work of member state authorities (Art. 65 & 66 of the AI Act).
Second, the national competent authorities are the main on-the-ground enforcers of the AI Act. The regulation requires each Member State to designate two types of national competent authorities: market surveillance and notifying authorities (Art. 70 of the AI Act). The market surveillance authorities oversee AI systems directly, particularly those classified as high-risk. Their duties include conducting market surveillance, investigating breaches, handling complaints, and applying penalties (Art. 74, 85 & 99 of the AI Act). The notifying authorities are responsible for assessing, designating, and notifying third-party conformity assessment bodies (notified bodies) that audit high-risk AI systems (Art. 28 & 43 of the AI Act). For most businesses and citizens, these national authorities will be the primary point of contact and enforcement.
Third, the European AI Board 3 is the bridge between these two layers. Composed of one representative from each of the 27 Member States (Art. 65 of the AI Act), its primary role is advisory and coordinative (Art. 66 of the AI Act). The Board is responsible for advising the European Commission and Member States on the consistent application of the AI Act, facilitating the exchange of best practices, and issuing opinions on standards (Art. 66 of the AI Act). The Board is supported by sub-groups, which act as platforms for cooperation and information exchange among the national authorities (Art. 65 of the AI Act). For instance, the Board's agenda from March 2025 included a roundtable on national enforcement, a review of forthcoming AI Act deliverables (such as codes of practice and guidelines), and reports from the sub-groups. 4
This formal division of labour establishes a complex system. The AI Office holds centralised power over GPAI models, while national authorities are responsible for the specific AI applications deployed within their jurisdictions and local markets. The AI Board is the forum that ensures these two levels work together effectively. However, the real-world effectiveness of this framework will depend on the quality of the informal cooperation and coordination between these bodies. Additionally, ambiguities remain in key areas of interaction. For example, when a high-risk system from one provider is built on a GPAI model from another, supervision is the responsibility of the national market surveillance authority, not the AI Office. While the regulation mandates this authority to collaborate with the AI Office, there is currently no clear guidance on how joint investigations or requests for information will be handled in practice (Art. 74 & Recital 161 of the AI Act).
Digital Omnibus on AI and national implementations
While the AI Act has been in force since 1 August 2024, establishing the general institutional architecture, the regulatory landscape is currently evolving due to the proposed ‘Digital Omnibus on AI’. 5 Given that this legislative proposal is currently under negotiation, it is imperative to note at the outset that the amendments discussed below, particularly those concerning implementation timelines, are provisional and remain subject to revision pending final adoption.
Proposed by the European Commission on 19 November 2025, this set of amendments is part of a broader simplification package aiming to reduce the regulatory burden on businesses and improve Europe’s competitiveness. 6 The call for digital simplification was also a key priority of the Polish Presidency in the first half of 2025. 7 This Omnibus initiative seeks to reopen the AI Act to address specific implementation bottlenecks that became clear after consultations conducted by the European Commission, which highlighted, for instance, the delays in designating national competent authorities. 8
The proposal’s most significant impact on national enforcement lies in the targeted amendment to the timeline for high-risk AI systems, whose rules were originally due to apply from 2 August 2026. In the Omnibus, the Commission proposed to delay the application of these rules until harmonised standards are available (but not later than 2 December 2027 for Annex III and 2 August 2028 for Annex I), effectively replacing the fixed deadline with a floating timeline dependent on the availability of compliance tools. 9
However, the European Parliament’s draft report has pushed back against this open-ended proposal, advocating instead for fixed, extended deadlines of 2 December 2027 (Annex III) and 2 August 2028 (Annex I). 10 This Omnibus caveat must be taken into account when discussing Member State implementation models; the Polish case illustrates how complicated and time-consuming this process is, which may explain why some Member States might prefer the certainty of fixed deadlines, 11 as in the Parliament’s draft proposal.
The European landscape: models of national implementation
The following overview sketches the emerging landscape of national implementation against which Poland's approach can be situated, rather than offering a detailed comparative analysis of the models chosen by individual Member States. The process of designating national competent authorities has proven complex and time-consuming for the Member States. An analysis of the regulatory landscape across the EU-27, based on tracking data from the International Association of Privacy Professionals (IAPP) as of early 2026, reveals a highly fragmented and staggered implementation process. 12 Currently, the progress of institutional setup varies significantly across the 27 Member States. Only nine countries 13 have officially designated their national competent authorities. A further ten states 14 are in the pending designation phase, having introduced draft legislation or active proposals. The remaining eight countries 15 are still listed as awaiting designation, with no formal public action taken or bodies officially announced. This means that two-thirds of Member States have yet to finalise their governance models, making Poland's early and distinctive institutional choice directly relevant to the design decisions that a majority of countries still face.
Beyond the pace of implementation, a clear trend has emerged among the Member States that have chosen their governance models. Of the 19 countries that have designated or proposed their national competent authorities, 17 have opted for a "dispersed" or sectoral approach, distributing market surveillance responsibilities among multiple existing regulators. Only two (Lithuania and Poland) have designated a single entity as their sole market surveillance authority. Lithuania consolidated this role within its existing communications regulator (RRT), while Poland is the only Member State creating an entirely new institution for this purpose. The degree of dispersion among the remaining 17 varies dramatically: Hungary and Germany have designated just two market surveillance authorities each, while France plans to involve 14 separate bodies, Ireland and Latvia 13 each, and Sweden 11. In the dispersed model, the responsibilities of the market surveillance authority are divided among multiple existing sectoral regulators, such as data protection authorities, financial supervisors, telecommunications offices, and health or transport agencies.
Within this landscape, Spain presents an interesting and proactive case. While it ultimately follows a dispersed model involving seven separate bodies (including the national data protection agency and the central bank), Spain was an early mover in establishing a dedicated, specialised entity in 2023. Operating as an autonomous agency attached to the Ministry of Digital Transformation, the Spanish Artificial Intelligence Supervisory Agency (AESIA) was created to oversee the AI Act's execution and represent the state at the European level. 16 Beyond this strictly regulatory role, AESIA also leads and coordinates the oversight of AI development to ensure it is applied ethically, safely, and for the benefit of society. 17
In contrast to this trend of sectoral dispersion, strict centralisation remains rare. As noted above, only Lithuania and Poland have designated a single entity as their market surveillance authority, though these two cases differ significantly, with Lithuania consolidating powers in an existing body and Poland creating a new institution. However, the choice between dispersed enforcement and new institutional creation is not a strict binary. Ireland, which designated 13 existing bodies as market surveillance authorities in September 2025, simultaneously announced the establishment of a new National AI Office by August 2026. While the Irish AI Office will not itself hold market surveillance powers, it will serve as the central coordinating authority, the single point of contact for EU-level engagement, a provider of centralised technical expertise to the other national competent authorities, and the host of a national regulatory sandbox. 18 This combination, a dispersed enforcement model with a purpose-built coordinating institution, represents a governance approach distinct from both the purely dispersed and the fully centralised models. Poland's decision to create a new body as the sole market surveillance authority remains a structural outlier, but it is not the only Member State investing in new institutional capacity specifically for AI governance. A systematic comparison of how these different governance architectures perform in practice will become feasible as more Member States progress beyond designation toward operational enforcement, and represents a natural extension of the present analysis.
Figure 2: The European Landscape of AI Act Implementation: Market Surveillance Authorities
Case study: Poland's tug-of-war over AI governance
Poland’s implementation trajectory serves as a critical case study of the practical challenges Member States face in implementing the AI Act, demonstrating how the legislative negotiation process can fundamentally alter the institutional design of national competent authorities. Initially, the Ministry of Digital Affairs advanced an ambitious vision for a market surveillance authority, proposing the establishment of a new, fully independent centralised supervisory body, the Commission for the Development and Safety of AI (KRiBSI), supported by a dedicated Bureau. 19
To ensure the regulator's capacity and independence, this model proposed a robust funding structure drawn directly from the central state budget. 20 The proposal from September 2025 outlined a resource-intensive model designed to secure high-level capacity. It projected a total ten-year financial commitment of 448.15 million PLN (approx. €105 million), with a steady annual allocation of 43.6 million PLN (approx. €10.2 million) from 2027 onward. Structurally, this draft distinguished between the Commission, acting as the collegial decision-making body, and the Bureau, its dedicated operational arm, tasked with daily enforcement and projected to employ 100 staff members by 2027. 21
The Ministry justified this resource-heavy model as the only viable way to secure, first, the authority’s independence (Art. 70 of the AI Act stipulates that Member State authorities ‘shall exercise their powers independently, impartially and without bias’) and, second, the high-level expertise required to oversee complex AI systems. In the Regulatory Impact Assessment, the Ministry explicitly argued that the scarcity of AI specialists in Poland would risk non-compliance without additional staff and that a decentralised model would lead to harmful competition between government bodies competing for talent within a narrow pool of experts. Consequently, the Ministry concluded that establishing a single, specialised authority is financially more efficient than obliging existing sectoral regulators to build their own independent AI competence. 22
However, this proposal faced significant opposition during the governmental review phase. The Ministry of Finance heavily criticised the agency model, arguing that the AI Act does not mandate the creation of a new central budget unit and that such a structure was not justified. Instead of a new agency, the Ministry of Finance initially recommended assigning the tasks to an organisational unit within the Ministry of Digital Affairs, citing the Digital Poland Projects Centre as a cost-reducing example. 23 Subsequently, the Finance Ministry advanced a specific alternative proposal to establish the Commission within the structure of the telecoms regulator, the Office of Electronic Communications (UKE), as a cost-efficient solution. 24 25
To resolve this deadlock between the Digital Ministry’s institutional ambition and the Finance Ministry’s fiscal discipline, the Ministry of Digital Affairs submitted two competing versions of the draft law to the Standing Committee. 26 The first option maintained the original ambition by proposing the Bureau of the Commission as an independent entity with separate legal personality. 27 Conversely, the second version offered a compromise model where the Commission would be serviced directly by the Ministry of Digital Affairs, thereby removing the separate Bureau to satisfy fiscal demands. 28
Ultimately, the second option was selected. The draft from late February 2026, officially approved by the Standing Committee of the Council of Ministers on 12 February, 29 reflects this pragmatic compromise: while the Commission for the Development and Safety of AI (KRiBSI) remains the designated market surveillance authority, 30 its technical and operational support is integrated directly into the Ministry of Digital Affairs rather than functioning as a separate agency. 31 This solution significantly reduces the implementation budget, lowering the 10-year projected cost to approximately 278 million PLN (approx. €65 million). 32
To balance this fiscal consolidation with the EU AI Act's independence requirements, the drafters of the compromise proposal attempted to introduce robust statutory 'firewalls' to ensure the Commission's independence. While the legislation explicitly guarantees that the Chairperson, Commission members, and Ministry employees acting on their behalf remain independent in their surveillance functions, 33 the late-February updates significantly diluted this structural separation. Following comments from further inter-ministerial consultations, the KRiBSI Chairperson was stripped of the previously proposed 34 exclusive human resources authority over the dedicated Ministry unit. Instead, the revised draft dictates that the Ministry's Director General retains these HR powers under the Civil Service Act, merely acting in 'cooperation' with the KRiBSI Chairperson. 35
How is Poland implementing the EU AI Act?
Poland's approach to implementing the AI Act offers a case study of the strategic choices made by Member States. Rather than simply transposing the minimum requirements, Poland is developing a national law that establishes a new, centralised authority, 36 though one whose operational apparatus, following the legislative compromise described in Section 6, is nested within the Ministry of Digital Affairs rather than functioning as a standalone agency. This structural choice raises independence questions that parallel those that surrounded the EU AI Office itself, which operates as an organisational unit within DG CONNECT rather than as a separate independent body. 37 The draft law also introduces several novel legal and advisory tools not mentioned in the regulation, such as individual opinions 38 and the Social Council for AI. 39
According to the draft bill from February 2026 40 that has yet to reach Parliament (following the standard legislative timeline, it is expected to be adopted by the Council of Ministers in the first quarter of 2026 41 ), the new Commission for the Development and Safety of Artificial Intelligence (Komisja Rozwoju i Bezpieczeństwa Sztucznej Inteligencji - KRiBSI) will act as the primary market surveillance authority. 42 The Minister for Digital Affairs will be the notifying authority, responsible for designating the conformity assessment bodies. 43 The institutional structure is outlined in Figure 3. The soon-to-be-established framework will operate within a broader national context of AI governance; for example, the Scientific and Academic Computer Network – National Research Institute (NASK) has already established the Centre for Research on the Safety of Artificial Intelligence (Ośrodek Badań nad Bezpieczeństwem Sztucznej Inteligencji), which focuses on responsible AI development and deployment. 44
Figure 3: The Polish Institutional Framework for the AI Act Implementation
Establishing a new centralised authority: The Commission (KRiBSI)
The key feature of the Polish model is the establishment of a new, centralised supervisory body, the KRiBSI Commission, designated as the market surveillance authority and supported by a dedicated unit nested within the Ministry of Digital Affairs. 45 The explanatory memorandum justifies this choice in two ways. First, it is presented as a more cost-effective solution than attempting to build specialised AI expertise across multiple existing sectoral regulators, such as those for data protection or competition. Second, this centralised approach is designed to avoid harmful competition between different governmental bodies for a small pool of qualified AI experts. 46 This rationale, consolidating scarce resources in one place while keeping other regulators represented in the collegial Commission (KRiBSI), is a defining element of the Polish approach.
Figure 4: The Designated Market Surveillance Authority in Poland (KRiBSI)
To ensure its capacity, the KRiBSI Commission is supported by a consolidated funding model. Unlike earlier proposals for a fully independent, standalone agency, the draft law states that the new authority's operations will be funded directly from the Ministry of Digital Affairs' budget. This is supported by a total ten-year financial commitment of approximately 278 million PLN (approx. €65 million), averaging roughly 28 million PLN (approx. €6.5 million) annually, to fund the KRiBSI Commission and its supporting unit. 47 While the KRiBSI Commission is the collegial decision-making body, 48 the formal market surveillance authority, and the advisory hub for drafting national AI legislation, it is also explicitly tasked with addressing AI safety threats and driving market innovation. Meanwhile, the dedicated Ministry unit will handle daily enforcement 49 and is projected by the Regulatory Impact Assessment to scale up to 70 employees by 2027. 50
Figure 5: The Dedicated Operational Unit of the KRiBSI Commission
Providing upfront legal certainty via individual opinions
Beyond its core supervisory functions, the Polish draft bill endows KRiBSI with legal tools that go beyond the AI Act. As per the explanatory memorandum, the provisions for "individual opinions" and general "explanations" are explicitly designed to provide upfront legal certainty and predictability for businesses. 51 This mechanism allows a company to formally request a binding ruling from KRiBSI on how the AI Act applies to its specific product or service. It can be argued that this mechanism will complement the regulatory sandboxes 52 established under the AI Act: a business could first seek a binding opinion to clarify whether its system is high-risk, then use the sandbox to test modifications and ensure compliance under the regulation.
Bridging the expertise gap with the Social Council for AI
The second key novelty is the establishment of the Social Council for AI, an advisory-consultative body to the KRiBSI Commission. 53 The draft law's justification states that its purpose is twofold: to provide expert support in AI and to increase the transparency and democratic legitimacy of the KRiBSI Commission's actions by including a wide group of stakeholders, allowing diverse social and economic perspectives to be considered.
The Council will comprise 9 to 15 members selected by the KRiBSI Commission from candidates nominated by a range of stakeholders, including the Ombudsman, business chambers, trade unions, academic institutions, and NGOs. Members must have expertise in areas like AI, IT, cybersecurity, data protection, new technologies law, and human rights, effectively bridging the public sector's expertise gap. This expert outreach is further expanded by allowing Council members to bring in additional external experts as needed, creating a flexible network of expertise for the KRiBSI Commission to draw upon. Members will serve a two-year term, a length intended to allow for regular "refreshing" of the Council to adapt to rapidly changing technology.
Figure 6: The Social Council for AI
Figure 6a: How members of the Social Council for AI are selected
Evaluating the Polish model: implications & risks
The preceding sections have described the institutional architecture that Poland’s draft legislation proposes. This section evaluates the choices embedded in that design. The analysis proceeds along four dimensions: the strategic logic of centralisation in the European context, the consequences of the legislative compromise on institutional independence, the practical implications of the revised resource allocation, and the model’s potential effects on EU-level coordination under the multi-level governance framework of the AI Act.
Centralisation as a European outlier
As discussed in Section 5, of the 19 Member States that have designated or proposed their national competent authorities, 17 have opted for dispersed governance models, distributing market surveillance responsibilities across multiple existing sectoral regulators. 54 Against this backdrop, Poland’s decision to create a single, brand-new centralised authority is a clear structural outlier. Among the other Member States, only Spain, with AESIA, has similarly created an entirely new institution holding market surveillance powers for AI Act purposes, and only Poland and Lithuania have designated a single entity as the market surveillance authority. It should also be noted that institutional innovation is not confined to the centralised end of the spectrum: Ireland, despite operating one of the most dispersed enforcement models with 13 market surveillance authorities, is simultaneously establishing a new National AI Office to serve as the central coordinating body and single point of contact, a recognition that even dispersed models may require dedicated institutional capacity for effective AI governance.
Poland's specific choice, not merely to invest in new institutional capacity, as Ireland and Spain are also doing, but to vest the entirety of market surveillance powers in a single new body, reflects a deliberate strategic calculation by the Polish legislator. The Regulatory Impact Assessment advanced two interconnected justifications. First, the Ministry of Digital Affairs argued that the scarcity of AI specialists in Poland made it financially inefficient to require each sectoral regulator to independently develop AI oversight capacity. Second, the Ministry contended that a dispersed model would generate harmful competition between government bodies for a limited pool of qualified experts. 55 Centralisation, in this framing, is not merely an administrative preference but a resource-management strategy tailored to the constraints of a Member State that cannot readily compete for technical talent on the same terms as the private sector. The trade-offs embedded in this choice are not unique to AI governance; the debate over centralised versus dispersed regulatory structures has been extensively examined in the context of financial sector supervision, which offers a useful analytical parallel for evaluating the Polish model. Abrams and Taylor, for instance, argued that consolidation enables scarce specialist staff to be deployed more efficiently, since the public sector invariably struggles to retain professionals with highly marketable skills. 56
With this parallel in mind, the trade-offs inherent in Poland's centralisation choice can be mapped more precisely. The centralised model offers several structural advantages: it provides a single, identifiable point of contact for both regulated entities and EU institutions, and it enables a consistent interpretation of the AI Act across all sectors within the national jurisdiction. As the financial supervision literature suggests, a unified structure can also reduce the duplication of regulatory effort by simplifying the process of seeking decisions, while allowing the single authority to respond more rapidly and flexibly to emerging issues, thereby reducing the risk of regulatory gaps developing. Moreover, a unified management hierarchy can direct its various divisions to share information and cooperate closely, facilitating coordination and closing supervisory gaps more readily than a system of separate agencies operating under distinct mandates. 57
However, centralisation also carries identifiable risks. A single authority must develop sufficient understanding of each sector in which high-risk AI systems are deployed, from healthcare and finance to law enforcement and education. Dispersed models inherently draw on the pre-existing domain expertise of sectoral regulators, an advantage that a new, horizontal authority must build from the ground up. Returning to the financial supervision parallel, when Norway attempted to fully integrate its supervision by assigning the same examiners to both banking and insurance, the loss of sector-specific expertise was found to outweigh the consistency gains. 58 More broadly, critics of unified regulatory bodies have argued that a single agency spanning diverse domains may struggle to balance competing objectives and may prove harder, not easier, to hold accountable than specialist bodies with clearly defined mandates. 59
The Polish legislator has sought to mitigate this sectoral knowledge gap through the collegiate design of the KRiBSI Commission. Its membership includes representatives of key existing regulators, UOKiK (competition and consumer protection), KNF (financial supervision), KRRiT (broadcasting), and UKE (electronic communications), thereby embedding sectoral knowledge directly within the decision-making body. Whether this embedded representation can adequately substitute for the deep regulatory familiarity that dedicated sectoral authorities possess will depend on how effectively the Commission's collegial deliberation functions in practice.
The independence compromise
Article 70 of the AI Act requires that national competent authorities 'shall exercise their powers independently, impartially and without bias'. The legislative compromise described in Section 6 raises questions about the extent to which Poland's final design satisfies this requirement in substance rather than merely in form.
The most consequential change during the drafting process was the elimination of the Bureau as an independent entity. The September 2025 proposal envisioned a standalone operational arm with its own legal personality, separate from any existing ministry structure. The February 2026 draft instead nests the Commission's operational support directly within the Ministry of Digital Affairs. This is not merely an administrative rearrangement; it means that the body conducting day-to-day enforcement of the AI Act is structurally part of the same ministry whose policy portfolio the Commission is tasked with independently overseeing. The February 2026 draft does retain explicit statutory provisions guaranteeing the independence of the Chairperson, Commission members, and Ministry employees acting on their behalf. However, these formal safeguards sit uneasily alongside the progressive dilution of structural separation during the drafting process.
Beyond the loss of the Bureau's independent status, a further change between the January and February 2026 drafts transferred human resources authority over the dedicated Ministry unit from the KRiBSI Chairperson to the Ministry's Director General, reducing the Chairperson's role to one of 'cooperation'. The cumulative effect is that the Commission retains formal decision-making authority, but the operational apparatus on which it depends (its staff, its budget, and its institutional infrastructure) is administered by the Ministry. Whether the statutory 'firewalls' in the legislation prove sufficient to maintain effective independence will ultimately be an empirical question, but the direction of travel during the drafting process warrants close monitoring during the implementation phase.
Resource constraints and enforcement capacity
The legislative compromise also entailed a significant reduction in the resources allocated to AI Act enforcement. The September 2025 proposal projected a ten-year financial commitment of 448.15 million PLN (approximately €105 million) and envisaged scaling the Bureau to 100 staff members by 2027. The approved February 2026 draft reduced the ten-year budget to approximately 278 million PLN (approximately €65 million), a reduction of roughly 38 per cent, and set the target staffing level for the dedicated Ministry unit at 70 substantive expert positions by 2027.
These figures should be assessed in light of the scope of the enforcement mandate. The KRiBSI Commission will serve as the sole market surveillance authority for all AI systems covered by the regulation across the Polish market, excluding GPAI models. The Regulatory Impact Assessment itself identified the scarcity of professionals with the relevant interdisciplinary expertise, spanning AI, law, and cybersecurity, as a key implementation risk. Yet the compromise simultaneously reduced the resources available to address this risk.
It should be noted, however, that the approved budget of approximately €6.2 million per year is not negligible in comparative terms. Precise cross-country budget comparisons remain difficult, as most Member States are relying on existing sectoral regulators to absorb AI oversight duties. Staffing levels offer a more tractable point of comparison. For example, Spain's AESIA, the other newly created dedicated AI supervisory agency in the EU, employed approximately 30 staff as of mid-2025. 60 At the EU level, the AI Office itself, which holds supervisory authority over GPAI models for the entire single market, currently employs more than 125 staff across all of its units, although this figure encompasses the Office's full range of policy, research, and international functions, not solely AI Act enforcement. 61 Poland's projected 70 employees for the dedicated Ministry unit by 2027 would position KRiBSI's operational capacity above that of AESIA and at a scale broadly comparable, relative to its national mandate, to the central EU enforcer. The relevant benchmark is therefore not only the gap between Poland's original ambition and its final allocation, but also how that allocation compares with the resources effectively available to market surveillance authorities elsewhere.
Implications for multi-level governance
At the EU level, the AI Act requires each Member State to designate a single representative to the European AI Board, the coordination body that bridges national authorities and the EU AI Office. A centralised national authority simplifies this interface. Where a dispersed model may require internal coordination among multiple national bodies before a coherent position can be represented at the Board level, Poland’s KRiBSI Commission can, in principle, speak with a single authoritative voice. This is a tangible advantage for the effectiveness of the AI Board as a coordination forum.
The coordination challenge identified in Section 3, regarding cases where a high-risk system built on a GPAI model requires joint oversight between a national authority and the EU AI Office, is similarly affected by the choice of national governance model. A centralised authority with consolidated technical capacity may be better positioned to act as a credible counterpart to the AI Office in such cross-tier investigations than a fragmented set of sectoral regulators, each with limited AI-specific expertise. However, this advantage is contingent on the centralised authority actually possessing the technical resources to engage substantively with GPAI-related issues, a capacity that, as discussed in Section 8.3, may now be constrained by the revised budget and staffing levels.
Mitigating design features
The risks identified above should be assessed alongside the novel institutional tools that the Polish draft introduces. In particular, the Social Council for AI creates a formal mechanism through which external expertise can supplement the Commission's internal capacity. By drawing on candidates nominated by academic institutions, civil society organisations, business chambers, and other stakeholders, the Council provides a structured means of accessing specialist knowledge that the supporting staff may lack. This model is consistent with recommendations from civil society groups such as the European Centre for Not-for-Profit Law, which has advocated for the inclusion of third-party experts in advising national competent authorities. 62 The two-year term for Council members is designed to allow regular renewal in response to technological developments. However, the Council's opinions are not legally binding on the Commission, and its members serve in an unpaid, voluntary capacity, both features that may limit its influence and the depth of engagement it can sustain.
Whether this advisory mechanism will prove sufficient to offset the capacity constraints imposed by the fiscal compromise is an open question that will depend on the practical implementation of the law, in particular, on the quality of appointments to the Commission and the Social Council, and on the political willingness to increase the authority's resources if initial allocations prove inadequate. Nonetheless, the very existence of a formalised advisory structure with multi-stakeholder representation provides a foundation that can be strengthened as the enforcement regime matures.
Conclusion
Poland’s draft legislation for implementing the AI Act represents one of the most distinctive institutional choices among the EU-27. By creating a single, new centralised authority rather than distributing oversight across existing sectoral regulators, the Polish model departs from the clear trend among the 19 Member States that have designated or proposed their governance models, 17 of which have opted for dispersed enforcement across multiple existing regulators. The rationale behind this choice is coherent: consolidating scarce technical expertise in one body avoids duplication of effort and the harmful competition for specialists that a dispersed model would entail. The collegiate design of the KRiBSI Commission, which embeds representatives of key sectoral regulators within the decision-making body, offers a mechanism for retaining domain-specific knowledge without fragmenting enforcement. At the same time, centralisation carries risks that the financial supervision literature has documented in analogous contexts, above all the challenge of developing sufficient sectoral understanding across the full range of domains in which high-risk AI systems are deployed.
The intergovernmental legislative process that shaped the final design reveals a tension that is unlikely to be unique to Poland. The Ministry of Digital Affairs’ original vision of a fully independent agency was substantially curtailed by fiscal objections, producing a compromise in which the authority’s operational apparatus is nested within the Ministry itself. The statutory safeguards intended to preserve independence are in place, but their progressive dilution during the drafting process, most notably the transfer of human resources authority from the KRiBSI Chairperson to the Ministry’s Director General, raises questions about whether formal guarantees will translate into effective autonomy in practice. This tension between institutional ambition and fiscal constraint is one that other Member States building their own enforcement structures are likely to encounter.
The novel instruments introduced by the draft legislation, binding individual opinions and the Social Council for AI, deserve attention beyond the Polish context. The individual opinions mechanism offers a model for providing upfront legal certainty that could complement the regulatory sandboxes envisaged by the AI Act. The Social Council, while advisory and unpaid, still institutionally formalises multi-stakeholder input for the market surveillance authority. Whether these tools prove effective will depend on implementation, but they represent concrete responses to challenges shared across the EU: legal uncertainty for businesses and expertise gaps in the public sector.
As the AI Act’s enforcement deadlines approach, the choices documented in this paper will be tested in practice. The effectiveness of centralised versus dispersed governance models, the adequacy of the resources allocated, and the real independence of national authorities will become empirical questions rather than design debates. Poland’s experience, as one of the few Member States building an entirely new institution for this purpose, will offer an early and instructive data point for that broader assessment.
Bibliography
Abrams, Richard K., and Michael W. Taylor. 'Issues in the Unification of Financial Sector Supervision'. IMF Working Paper WP/00/213. Washington, DC: International Monetary Fund, December 2000.
Council of the European Union. 'Outcomes of the discussions on simplification activities in the digital field'. Note 9383/25. Brussels, June 2, 2025. Accessed February 11, 2026. https://data.consilium.europa.eu/doc/document/ST-9383-2025-INIT/en/pdf.
Council of the European Union. 'Simplification'. Policies. Accessed February 12, 2026. https://www.consilium.europa.eu/en/policies/simplification/.
Department of Enterprise, Tourism and Employment. 'Ministers Burke and Smyth announce landmark progress in AI Act implementation'. gov.ie. September 16, 2025. Accessed March 6, 2026. https://www.gov.ie/en/department-of-enterprise-tourism-and-employment/press-releases/ireland-leads-the-way-in-eu-ai-regulation/.
European Commission. 'AI Board convenes its third meeting to advance EU AI policy'. Shaping Europe's digital future, March 24, 2025. Accessed August 20, 2025. https://digital-strategy.ec.europa.eu/en/news/ai-board-convenes-its-third-meeting-advance-eu-ai-policy.
European Commission. 'European AI Office'. Shaping Europe's digital future. Last updated January 14, 2026. Accessed March 5, 2026. https://digital-strategy.ec.europa.eu/en/policies/ai-office.
European Commission. 'Proposal for a Regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI)'. COM(2025) 836 final. Brussels, November 19, 2025. Accessed February 11, 2026. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52025PC0836.
European Commission. 'The European Artificial Intelligence Board'. Policies. Last modified August 1, 2025. Accessed August 17, 2025. https://digital-strategy.ec.europa.eu/en/policies/ai-board.
European Parliament. 'Draft Report on the proposal for a regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI)'. 2025/0359(COD). Committee on the Internal Market and Consumer Protection & Committee on Civil Liberties, Justice and Home Affairs. Accessed February 11, 2026. https://www.europarl.europa.eu/doceo/document/CJ40-PR-782530_EN.pdf.
European Parliament and Council of the European Union. 'Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)'. Official Journal of the European Union L, 2024/1689. July 12, 2024.
Government Legislation Centre. 'Proces legislacyjny w Polsce' [The Legislative Process in Poland]. 2020. Accessed February 20, 2026. https://rcl.gov.pl/legislacja/proces-legislacyjny-w-polsce/.
Government of Spain. 'Real Decreto 729/2023, de 22 de agosto, por el que se aprueba el Estatuto de la Agencia Española de Supervisión de Inteligencia Artificial' [Royal Decree 729/2023, of August 22, approving the Statute of the Spanish Agency for the Supervision of Artificial Intelligence]. Boletín Oficial del Estado, September 2, 2023. Accessed March 2, 2026. https://www.boe.es/boe/dias/2023/09/02/pdfs/BOE-A-2023-18911.pdf.
International Association of Privacy Professionals (IAPP). 'EU AI Act Regulatory Directory'. Last updated January 14, 2026. Accessed March 2, 2026. https://iapp.org/resources/article/eu-ai-act-regulatory-directory.
Iwańska, Karolina, Vanja Skoric, Francesca Fanucci, Berna Keskindemir, and Sushruta Kokkula. 'Towards an AI Act that serves people and society: Strategic actions for civil society and funders on the enforcement of the EU AI Act'. Report, European Center for Not-for-Profit Law, August 2024. Accessed August 10, 2025. https://europeanaifund.org/wp-content/uploads/2024/09/240827_FINAL_AI_ACT_Enforcement.pdf.
Kancelaria Prezesa Rady Ministrów. 'Projekt ustawy o systemach sztucznej inteligencji'. Serwis Rzeczypospolitej Polskiej, June 26, 2025. Accessed August 20, 2025. https://www.gov.pl/web/premier/projekt-ustawy-o-systemach-sztucznej-inteligencji.
Bertuzzi, Luca. 'EU countries edge toward fixed AI Act legal deadlines'. MLex, January 26, 2026. Accessed February 12, 2026. https://www.mlex.com/mlex/artificial-intelligence/articles/2433952/eu-countries-edge-toward-fixed-ai-act-legal-deadlines.
Madiega, Tambiama, and Anne Louise Van De Pol. 'Artificial intelligence act and regulatory sandboxes'. Briefing. European Parliamentary Research Service, June 2022. Accessed January 7, 2026. https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733544/EPRS_BRI(2022)733544_EN.pdf.
Minister of Digital Affairs. 'Pismo przekazujące dwie alternatywne wersje projektu ustawy o systemach sztucznej inteligencji (UC71)' [Letter submitting two alternative versions of the draft Act on Artificial Intelligence Systems (UC71)]. Letter to the Secretary of the Standing Committee of the Council of Ministers. Ref. DP.MC.WLA.0211.35.2024. November 18, 2025. Accessed February 16, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Minister of Digital Affairs. 'Pismo przekazujące nowy tekst projektu ustawy o systemach sztucznej inteligencji (UC71)' [Letter submitting the new text of the draft Act on Artificial Intelligence Systems (UC71)]. Letter to the Secretary of the Standing Committee of the Council of Ministers. Ref. DP.MC.WLA.0211.35.2024. January 28, 2026. Accessed February 19, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Minister of Digital Affairs. 'Pismo przekazujące projekt ustawy o systemach sztucznej inteligencji (UC71) do komisji prawniczej' [Letter submitting the draft Act on Artificial Intelligence Systems (UC71) to the legal commission]. Letter to the President of the Government Legislation Centre. February 23, 2026. Accessed February 26, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Minister of Finance and Economy. 'Propozycje zmian do projektu ustawy o systemach sztucznej inteligencji (UC71)' [Proposals for amendments to the draft Act on Artificial Intelligence Systems (UC71)]. Letter to the Secretary of State at the Ministry of Digital Affairs. Ref. PR2.021.403.2024. November 4, 2025. Accessed February 16, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Minister of Finance and Economy. 'Uwagi do projektu ustawy o systemach sztucznej inteligencji (UC71)' [Comments on the draft Act on Artificial Intelligence Systems (UC71)]. Letter to the Secretary of the Standing Committee of the Council of Ministers. Ref. PR2.021.403.2024. September 19, 2025. Accessed February 16, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Ministry of Digital Affairs. 'Ocena Skutków Regulacji' [Regulatory Impact Assessment]. January 28, 2026. Accessed February 19, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Ministry of Digital Affairs. 'Ocena Skutków Regulacji (I wersja - POP)' [Regulatory Impact Assessment (Version 1 - POP)]. November 18, 2025. Accessed February 16, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Ministry of Digital Affairs. 'Ocena Skutków Regulacji (wersja z 23 lutego)' [Regulatory Impact Assessment (February 23 Version)]. February 23, 2026. Accessed February 26, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Ministry of Digital Affairs. 'Ocena Skutków Regulacji (wersja z 2 września - SKRM)' [Regulatory Impact Assessment (September 2 Version - SKRM)]. September 2, 2025. Accessed February 16, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Ministry of Digital Affairs. 'Projekt ustawy o systemach sztucznej inteligencji' [Draft Act on Artificial Intelligence Systems]. Draft legislation, January 28, 2026. Accessed February 16, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Ministry of Digital Affairs. 'Projekt ustawy o systemach sztucznej inteligencji (II wersja - MC)' [Draft Act on Artificial Intelligence Systems (Version 2 - MC)]. Draft legislation, November 18, 2025. Accessed February 16, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Ministry of Digital Affairs. 'Projekt ustawy o systemach sztucznej inteligencji (I wersja - POP)' [Draft Act on Artificial Intelligence Systems (Version 1 - POP)]. Draft legislation, November 18, 2025. Accessed February 16, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Ministry of Digital Affairs. 'Projekt ustawy o systemach sztucznej inteligencji (wersja z 23 lutego)' [Draft Act on Artificial Intelligence Systems (February 23 Version)]. Draft legislation, February 23, 2026. Accessed February 26, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
Ministry of Digital Affairs. 'Projekt ustawy o systemach sztucznej inteligencji (wersja z 2 września - SKRM)' [Draft Act on Artificial Intelligence Systems (September 2 Version - SKRM)]. Draft legislation. September 2, 2025. Accessed February 16, 2026. https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
NASK. 'Powstanie Ośrodka Badań nad Bezpieczeństwem Sztucznej Inteligencji w NASK'. Aktualności, May 10, 2024. Accessed February 26, 2026. https://www.nask.pl/aktualnosci/powstanie-osrodka-badan-nad-bezpieczenstwem-sztucznej-inteligencji-w-nask.
Niestadt, Maria. 'Digital Omnibus on AI'. EU Legislation in Progress. European Parliamentary Research Service, February 2026. Accessed February 11, 2026. https://www.europarl.europa.eu/RegData/etudes/BRIE/2026/782651/EPRS_BRI(2026)782651_EN.pdf.
Novelli, Claudio, Philipp Hacker, Jessica Morley, Jarle Trondal, and Luciano Floridi. 'A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities'. European Journal of Risk Regulation 16, no. 2 (2025): 566–590. https://doi.org/10.1017/err.2024.57.
Office of Electronic Communications. 'O nas' [About Us]. Accessed February 18, 2026. https://www.uke.gov.pl/o-nas/.
Piotrowski, Ryszard. 'Jak powstaje ustawa?'. In Noty o Senacie. Warsaw: Kancelaria Senatu, April 2024. Accessed August 17, 2025. https://www.senat.gov.pl/gfx/senat/userfiles/_public/senatrp/noty2024/04/04_04_24.pdf.
Spanish Agency for the Supervision of Artificial Intelligence (AESIA). 'AESIA consolidates its role in Europe in promoting ethical, sustainable and reliable AI'. August 1, 2025. Accessed March 5, 2026. https://aesia.digital.gob.es/en/present20250801-aesia-balance-2025.
Spanish Agency for the Supervision of Artificial Intelligence (AESIA). 'Ensuring ethical and responsible AI'. 2025. Accessed March 2, 2026. https://aesia.digital.gob.es/en/es.
Taylor, Michael, and Alex Fleming. 'Integrated Financial Supervision: Lessons of Northern European Experience'. World Bank Policy Research Working Paper No. 2223. Washington, DC: World Bank, November 1999.
1 European Parliament and Council of the European Union, 'Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)', Official Journal of the European Union L, 2024/1689, July 12, 2024.
2 European Commission, 'European AI Office', Shaping Europe's digital future, last updated January 14, 2026, accessed March 5, 2026, https://digital-strategy.ec.europa.eu/en/policies/ai-office.
3 European Commission, 'The European Artificial Intelligence Board', Policies, last modified August 1, 2025, accessed August 17, 2025, https://digital-strategy.ec.europa.eu/en/policies/ai-board.
4 European Commission, 'AI Board convenes its third meeting to advance EU AI policy', Shaping Europe's digital future, March 24, 2025, accessed August 20, 2025, https://digital-strategy.ec.europa.eu/en/news/ai-board-convenes-its-third-meeting-advance-eu-ai-policy.
5 European Commission, 'Proposal for a Regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI)', COM(2025) 836 final (Brussels, November 19, 2025), accessed February 11, 2026, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52025PC0836.
6 Maria Niestadt, 'Digital Omnibus on AI', EU Legislation in Progress (European Parliamentary Research Service, February 2026), 1-2, accessed February 11, 2026, https://www.europarl.europa.eu/RegData/etudes/BRIE/2026/782651/EPRS_BRI(2026)782651_EN.pdf.
7 Council of the European Union, 'Outcomes of the discussions on simplification activities in the digital field', Note 9383/25 (Brussels, June 2, 2025), accessed February 11, 2026, https://data.consilium.europa.eu/doc/document/ST-9383-2025-INIT/en/pdf.
8 Niestadt, 'Digital Omnibus on AI', 2-3.
9 Niestadt, 'Digital Omnibus on AI', 2-3.
10 European Parliament, 'Draft Report on the proposal for a regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI)', 2025/0359(COD), Committee on the Internal Market and Consumer Protection & Committee on Civil Liberties, Justice and Home Affairs, accessed February 11, 2026, https://www.europarl.europa.eu/doceo/document/CJ40-PR-782530_EN.pdf.
11 Luca Bertuzzi, 'EU countries edge toward fixed AI Act legal deadlines', MLex, January 26, 2026, accessed February 12, 2026, https://www.mlex.com/mlex/artificial-intelligence/articles/2433952/eu-countries-edge-toward-fixed-ai-act-legal-deadlines.
12 International Association of Privacy Professionals (IAPP), 'EU AI Act Regulatory Directory', last updated January 14, 2026, accessed March 2, 2026, https://iapp.org/resources/article/eu-ai-act-regulatory-directory.
13 Denmark, Finland, Hungary, Ireland, Italy, Latvia, Lithuania, Malta, and Slovenia.
14 Cyprus, Czechia, Estonia, France, Germany, Luxembourg, Poland, Slovakia, Spain, and Sweden.
15 Austria, Belgium, Bulgaria, Croatia, Greece, the Netherlands, Portugal, and Romania.
16 Government of Spain, 'Real Decreto 729/2023, de 22 de agosto, por el que se aprueba el Estatuto de la Agencia Española de Supervisión de Inteligencia Artificial' [Royal Decree 729/2023, of August 22, approving the Statute of the Spanish Agency for the Supervision of Artificial Intelligence], Boletín Oficial del Estado, September 2, 2023, accessed March 2, 2026, https://www.boe.es/boe/dias/2023/09/02/pdfs/BOE-A-2023-18911.pdf.
17 Spanish Agency for the Supervision of Artificial Intelligence (AESIA), 'Ensuring ethical and responsible AI', 2025, accessed March 2, 2026, https://aesia.digital.gob.es/en/es.
18 Department of Enterprise, Tourism and Employment, 'Ministers Burke and Smyth announce landmark progress in AI Act implementation', gov.ie, September 16, 2025, accessed March 6, 2026, https://www.gov.ie/en/department-of-enterprise-tourism-and-employment/press-releases/ireland-leads-the-way-in-eu-ai-regulation/.
19 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji (wersja z 2 września - SKRM)' [Draft Act on Artificial Intelligence Systems (September 2 Version - SKRM)], draft legislation, September 2, 2025, arts. 5, 30, 107, accessed February 16, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
20 Ministry of Digital Affairs, 'Ocena Skutków Regulacji (wersja z 2 września - SKRM)' [Regulatory Impact Assessment (September 2 Version - SKRM)], September 2, 2025, 3-4, accessed February 16, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
21 Ministry of Digital Affairs, 'Ocena Skutków Regulacji (wersja z 2 września - SKRM)', 14-16.
22 Ministry of Digital Affairs, 'Ocena Skutków Regulacji (wersja z 2 września - SKRM)', 2-4, 16.
23 Minister of Finance and Economy, 'Uwagi do projektu ustawy o systemach sztucznej inteligencji (UC71)' [Comments on the draft Act on Artificial Intelligence Systems (UC71)], letter to the Secretary of the Standing Committee of the Council of Ministers, Ref. PR2.021.403.2024, September 19, 2025, 1-2, accessed February 16, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
24 Minister of Finance and Economy, 'Propozycje zmian do projektu ustawy o systemach sztucznej inteligencji (UC71)' [Proposals for amendments to the draft Act on Artificial Intelligence Systems (UC71)], letter to the Secretary of State at the Ministry of Digital Affairs, Ref. PR2.021.403.2024, November 4, 2025, accessed February 16, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
25 Office of Electronic Communications, 'O nas' [About Us], accessed February 18, 2026, https://www.uke.gov.pl/o-nas/.
26 Minister of Digital Affairs, 'Pismo przekazujące dwie alternatywne wersje projektu ustawy o systemach sztucznej inteligencji (UC71)' [Letter submitting two alternative versions of the draft Act on Artificial Intelligence Systems (UC71)], letter to the Secretary of the Standing Committee of the Council of Ministers, Ref. DP.MC.WLA.0211.35.2024, November 18, 2025, accessed February 16, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
27 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji (I wersja - POP)' [Draft Act on Artificial Intelligence Systems (Version 1 - POP)], draft legislation, November 18, 2025, art. 30, accessed February 16, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
28 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji (II wersja - MC)' [Draft Act on Artificial Intelligence Systems (Version 2 - MC)], draft legislation, November 18, 2025, art. 30, accessed February 16, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
29 Minister of Digital Affairs, 'Pismo przekazujące projekt ustawy o systemach sztucznej inteligencji (UC71) do komisji prawniczej' [Letter submitting the draft Act on Artificial Intelligence Systems (UC71) to the legal commission], letter to the President of the Government Legislation Centre, February 23, 2026, accessed February 26, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
30 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' [Draft Act on Artificial Intelligence Systems], draft legislation, February 23, 2026, art. 5, accessed February 26, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
31 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji', (February 23, 2026), art. 30.
32 Ministry of Digital Affairs, 'Ocena Skutków Regulacji' [Regulatory Impact Assessment], February 23, 2026, 15, accessed February 26, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
33 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji', (February 23, 2026), arts. 6, 30.
34 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' [Draft Act on Artificial Intelligence Systems], draft legislation, January 28, 2026, art. 30, accessed February 16, 2026, https://legislacja.rcl.gov.pl/projekt/12390551/katalog/13087932#13087932.
35 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), art. 30.
36 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), art. 5.
37 Claudio Novelli et al., 'A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities', European Journal of Risk Regulation 16, no. 2 (2025): 575, 586, https://doi.org/10.1017/err.2024.57.
38 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), arts. 11, 12.
39 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), art. 27.
40 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026).
41 Government Legislation Centre, 'Proces legislacyjny w Polsce' [The Legislative Process in Poland], 2020, accessed February 20, 2026, https://rcl.gov.pl/legislacja/proces-legislacyjny-w-polsce/.
42 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), art. 5.
43 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), art. 70.
44 NASK, 'Powstanie Ośrodka Badań nad Bezpieczeństwem Sztucznej Inteligencji w NASK' [Establishment of the Centre for Research on the Security of Artificial Intelligence at NASK], Aktualności, May 10, 2024, accessed February 26, 2026, https://www.nask.pl/aktualnosci/powstanie-osrodka-badan-nad-bezpieczenstwem-sztucznej-inteligencji-w-nask.
45 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), arts. 5, 30.
46 Ministry of Digital Affairs, 'Ocena Skutków Regulacji' (February 23, 2026), 15.
47 Ministry of Digital Affairs, 'Ocena Skutków Regulacji' (February 23, 2026), 15.
48 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), art. 6.
49 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), art. 30.
50 Ministry of Digital Affairs, 'Ocena Skutków Regulacji' (February 23, 2026), 18.
51 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), arts. 11-13.
52 Tambiama Madiega and Anne Louise Van De Pol, 'Artificial intelligence act and regulatory sandboxes', Briefing, European Parliamentary Research Service, June 2022, 2, accessed January 7, 2026, https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733544/EPRS_BRI(2022)733544_EN.pdf. As the authors note, ‘regulatory sandboxes generally refer to regulatory tools allowing businesses to test and experiment with new and innovative products, services or businesses under supervision of a regulator for a limited period of time’.
53 Ministry of Digital Affairs, 'Projekt ustawy o systemach sztucznej inteligencji' (February 23, 2026), arts. 27, 28.
54 IAPP, 'EU AI Act Regulatory Directory'.
55 Ministry of Digital Affairs, 'Ocena Skutków Regulacji' (February 23, 2026), 15.
56 Richard K. Abrams and Michael W. Taylor, 'Issues in the Unification of Financial Sector Supervision', IMF Working Paper WP/00/213 (Washington, DC: International Monetary Fund, December 2000), 14–15.
57 Abrams and Taylor, 'Issues in the Unification of Financial Sector Supervision', 11.
58 Michael Taylor and Alex Fleming, 'Integrated Financial Supervision: Lessons of Northern European Experience', World Bank Policy Research Working Paper No. 2223 (Washington, DC: World Bank, November 1999), 16.
59 Abrams and Taylor, 'Issues in the Unification of Financial Sector Supervision', 17.
60 Spanish Agency for the Supervision of Artificial Intelligence (AESIA), 'AESIA consolidates its role in Europe in promoting ethical, sustainable and reliable AI', August 1, 2025, accessed March 5, 2026, https://aesia.digital.gob.es/en/present20250801-aesia-balance-2025.
61 European Commission, 'European AI Office'.
62 Karolina Iwańska et al., 'Towards an AI Act that serves people and society: Strategic actions for civil society and funders on the enforcement of the EU AI Act' (report, European Center for Not-for-Profit Law, August 2024), 8, 23–24, accessed August 10, 2025, https://europeanaifund.org/wp-content/uploads/2024/09/240827_FINAL_AI_ACT_Enforcement.pdf.