The Role of EU data regulations in building trust in digital services

By: Sanna Wong-Toropainen

Over the past five years, the EU has adopted new rules to make digital services more accessible, trustworthy, and transparent. In particular, the EU’s “Big 5,” comprising the AI Act, the Data Act, the Digital Services Act (DSA), the Digital Markets Act (DMA), and the Data Governance Act (DGA), have significantly altered the way data is governed in the EU. In my recently published book on EU data regulations, I show how these five acts are designed to increase the availability of data in the EU while also safeguarding fundamental rights, such as the right to privacy.

Increased data availability

Let’s look at one concrete example of how the DGA and the Data Act increase data availability. Among its objectives, the DGA sets a framework for how companies, researchers, and other actors can access and re-use certain categories of data held by public authorities. For individuals, this could mean that information originally collected for education, health, or social services might later be re-used for research, policy-making, or service development. In most cases, such data must be anonymised, but in exceptional circumstances, re-use in a secure processing environment may permit access to non-anonymised or pseudonymised data, subject to GDPR safeguards.

By contrast, under the Data Act, governments can obtain access to data held by private companies in cases of “exceptional need,” such as during a public emergency. Public authorities could, for instance, request phone records or financial records from private companies, making data about individuals available to public authorities. However, the Data Act sets strict conditions on when and how such data may be used, including strong data protection measures. As these two examples show, the EU’s Big 5 increase both private-sector access to publicly held data and public-sector access to privately held data. In both scenarios, strong data protection measures serve as a counterbalance to the market-driven objective of creating a borderless internal data market in the EU.

Data for AI development

The rationale behind increased data availability is also to support the development of AI-driven products and services, which bring their own specific risks to individuals’ fundamental rights and safety. As part of the EU’s Big 5, the Artificial Intelligence Act (AI Act) imposes strict obligations on so-called “high-risk” AI systems, including those used in migration, welfare, and employment.

The AI Act aims to build trust in the use of AI through various risk assessments. For instance, public authorities are required to carry out a Fundamental Rights Impact Assessment (FRIA) under Article 27 of the AI Act. In the FRIA, the public authority must describe how and where the AI system will be used, who may be affected, what risks to fundamental rights might arise, what human oversight will be in place, and what remedies and complaint mechanisms will be provided if harms materialise.

Regulation and deregulation of AI

While the AI Act brings new obligations for both the private sector and public authorities, there is also a growing trend to ease rules on the use of AI in the provision of public services.

The Finnish government is currently preparing a proposal (VM044:00/2025) to remove legal barriers that hinder automation of public digital services. Section 6 a of the Finnish Digital Services Act (306/2019) requires that people using digital public services must always have the option to contact a human official. While this safeguard is intended to ensure accountability and trust in administrative guidance, it restricts the deployment of fully automated or AI-driven systems. Current reform efforts aim to remove this barrier so that autonomous technologies can be more widely applied in public services. 

The national reform, however, does not remove the safeguards required by the AI Act. Even if Finland lifts the national obligation, high-risk AI systems must still undergo a FRIA where the Act requires it. In practice, the Finnish proposal may enable wider use of automation, but the AI Act should ensure that such use remains subject to safeguards.

Conclusions

In conclusion, as companies and public authorities move to implement the EU’s Big 5 in practice, it remains to be seen whether the policy goals of increased data availability and the protection of fundamental rights can be achieved simultaneously, and whether the safeguards put in place are strong enough to instill trust in fair digital services.

These insights are drawn from the book ‘Euroopan unionin datasääntely – käsikirja viiteen asetukseen’ (European Union Data Regulation – A Handbook to the Five Regulations, Edita), written by Trust-M researcher Sanna Wong-Toropainen.
