TÜV AUSTRIA expressly welcomes the EU AI Act as a sensible regulatory instrument. At the same time, care must be taken that so-called low-profile certifications, issued solely on the basis of “paper audits”, are not presented as suitable for high-risk AI applications – a meaningful assessment must also cover the function and performance of the system itself.
Our experience from certification projects already completed also shows that the often-feared over-regulation does not materialize in our audits: even audits of complex models or security-relevant applications can be carried out at a cost and within a timeframe that remain acceptable for the companies involved.
The next step is to create standardized technical regulations for the assurance of AI, which elevate the state of the art and established best practices, such as our TRUSTED AI test catalog, into uniform standards. Some regulatory requirements in the AI Act and its published guidelines, such as fixed test data sets or predefined training data sets, are simply wrong in our view, while other topics, such as handling updates and continuous learning, are not addressed at all. Here, improvements and additions can be expected through standardization, and we at TÜV AUSTRIA will continue to work intensively in the working groups and standards committees to introduce technically sound and practical requirements.