The European Commission issued a regulatory proposal on April 21 that is the cornerstone of future European artificial intelligence (AI) regulation. We must commend the Commission's work in setting up expert groups and consulting with stakeholders.
The text must now be discussed in the European Parliament, and we hope for a broad debate around these proposals.
Indeed, Europeans must seize this opportunity to ensure that artificial intelligence serves society as a whole, rather than being a mere instrument for influencing consumers or subjugating citizen-consumers.
The Commission’s text is based on a risk-analysis approach, which leads it to distinguish three groups of AI systems: systems or uses that are prohibited because they are considered incompatible with the values of the European Union (EU), such as the manipulation of vulnerable people or real-time biometric identification in public places for police purposes (subject to strictly supervised exceptions); high-risk systems, which are subject to a number of obligations (risk analysis and management, transparency, accuracy guarantees, absence of bias, security, etc.); and finally, systems presenting no major risk, which are subject to transparency obligations only in certain circumstances (interactions with people, etc.).
Another distinctive feature of the draft regulation is that it is built around two types of actors: the providers of artificial intelligence systems and the users who deploy them. Most of the obligations attached to high-risk systems therefore fall on providers, who must fulfill them before their products are placed on the market. We can only welcome this obligation of prior risk assessment and mitigation.