Regulatory Sandboxes as an Ethical Tool for Trustworthy AI

Authors

  • Corrado Claverini, Università degli Studi del Salento

DOI:

https://doi.org/10.6093/2284-0184/11593

Keywords:

AI Act; Ethics by Design; Ethics of Artificial Intelligence; HTA; Regulatory Sandboxes.

Abstract

The AI Act, which entered into force on 1 August 2024, is a significant step towards regulating artificial intelligence in the European Union. It classifies applications of this technology into risk categories, from “minimal” to “unacceptable”. Far from limiting innovation, the AI Act provides tools to promote innovation responsibly, such as regulatory sandboxes: controlled environments in which companies can test AI systems before placing them on the market, without facing the normal consequences of non-compliance with regulatory constraints. The idea is to assess at an early stage the possible risks associated with the development of a technology that uses AI and to embrace a responsible approach to innovation, in line with the vision that inspired the path leading to the adoption of the AI Act. Accordingly, as set out in Article 113 of the regulation, the competent authorities of the Member States are to establish at least one AI regulatory sandbox at national level by 2 August 2026. Regulatory sandboxes, which originated in the fintech sector, have been widely used in the healthcare sector, in particular in Health Technology Assessment (HTA) processes. This paper aims to show the benefits, especially ethical ones, of adopting a sandbox approach for the development of trustworthy AI, according to a perspective known as ethics by design.



Published

2025-01-18

How to Cite

Claverini, C. (2025). Regulatory Sandboxes as an Ethical Tool for Trustworthy AI. RESEARCH TRENDS IN HUMANITIES Education & Philosophy, 12(1), 39–44. https://doi.org/10.6093/2284-0184/11593
