Czech Presidency puts forward narrower classification of high-risk systems


A new partial compromise on the AI Act, seen by EURACTIV on Friday (16 September), further elaborates on the concept of the 'extra layer' that would qualify an AI as high-risk only if it has a significant impact on decision-making.

The AI Act is a landmark proposal to regulate Artificial Intelligence in the EU following a risk-based approach. The high-risk category is therefore a key part of the regulation, as it covers the applications with the strongest impact on human safety and fundamental rights.

On Friday, the Czech Presidency of the EU Council circulated the new compromise, which attempts to address the outstanding issues related to the categorisation of high-risk systems and the related obligations for AI providers.

The text focuses on the first 30 articles of the proposal and also covers the definition of AI, the scope of the regulation, and the prohibited AI applications. The document will be the basis for a technical discussion at the Telecom Working Party meeting on 29 September.

High-risk systems' classification

In July, the Czech presidency proposed adding an extra layer to determine whether an AI system entails high risks, namely the condition that the high-risk system would have to play a major role in shaping the final decision.

The central idea is to create more legal certainty and prevent AI applications that are "purely accessory" to decision-making from falling under the scope. The presidency wants the European Commission to define the concept of purely accessory via an implementing act within one year of the regulation's entry into force.

The principle that a system taking decisions without human review would be considered high-risk has been removed because "not all AI systems that are automated are necessarily high-risk, and because such a provision could be prone to circumvention by putting a human in the middle".

In addition, the text states that when the EU executive updates the list of high-risk applications, it will have to consider the potential benefit the AI can have for individuals or society at large, instead of just the potential for harm.

The presidency did not change the high-risk categories listed under Annex III, but it introduced significant rewording. In addition, the text now explicitly states that the conditions for the Commission to take applications out of the high-risk list are cumulative.

High-risk systems' requirements

In the risk management section, the presidency changed the wording to exclude that the risks related to high-risk systems could be identified through testing, as this practice should only be used to verify or validate mitigating measures.

The changes also give the competent national authority more leeway to assess which technical documentation is necessary for SMEs providing high-risk systems.

Regarding human oversight, the draft regulation requires at least two people to oversee high-risk systems. However, the Czechs are proposing an exception to the so-called 'four-eyes principle', namely for AI applications in the area of border control where EU or national law allows it.

As regards financial institutions, the compromise states that the quality management system they would have to put in place for high-risk use cases can be integrated with the one already in place to comply with existing sectoral legislation, to avoid duplications.

Similarly, the financial authorities would have market surveillance powers under the AI regulation, including the carrying out of ex-post surveillance activities that can be integrated into the existing supervisory mechanism of the EU's financial services legislation.


The Czech presidency kept most of its previous changes to the definition of Artificial Intelligence but deleted the reference to the fact that AI must follow 'human-defined' objectives, as it was deemed "not essential".

The text now specifies that an AI system's lifecycle would end if it is withdrawn by a market surveillance authority or if it undergoes a substantial modification, in which case it would have to be considered as a new system.

The compromise also introduced a distinction between the user and the one controlling the system, who might not necessarily be the same person affected by the AI.

To the definition of machine learning, the Czechs added that it is a system capable of learning but also of inferring data.

Moreover, the previously added concept of the autonomy of an AI system has been described as "the degree to which such a system functions without external influence."


Prague introduced a more direct exclusion of research and development activities related to AI, "including also in relation to the exception for national security, defence and military purposes," the explanatory part reads.

The crucial part of the text on general-purpose AI was left for the next compromise.

Prohibited practices

The part on prohibited practices, a sensitive issue for the European Parliament, is not proving controversial among member states, which did not request major modifications.

At the same time, the text's preamble further defines the concept of AI-enabled manipulative techniques as stimuli that are "beyond human perception or other subliminal techniques that subvert or impair a person's autonomy […] for example in cases of machine-brain interfaces or virtual reality."


[Edited by Zoran Radosavljevic]
