TÜV AUSTRIA presents the Trusted AI Framework, a pioneering approach to the evaluation and certification of AI systems used in safety-critical applications. In cooperation with Johannes Kepler University Linz (JKU), the Software Competence Center Hagenberg (SCCH) and the joint venture TRUSTIFAI, TÜV AUSTRIA has developed an end-to-end audit catalog that translates technical, legal and ethical requirements into verifiable criteria.
TÜV AUSTRIA’s TRUSTED AI Framework is based on three central pillars: secure software development, functional trustworthiness requirements, and guidelines on ethics and data protection. It offers a structured methodology for evaluating the functional trustworthiness of machine learning systems, including risk-based minimum requirements and statistical tests on independent data samples.
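How such a statistical test on an independent data sample might look can be sketched briefly. The following Python snippet is a minimal illustration and not an excerpt from the audit catalog: it checks whether a classifier's accuracy, measured on an independent test sample, exceeds a risk-based minimum requirement with high statistical confidence. The 95% minimum accuracy and the 99% confidence level are assumed values chosen for the example.

```python
# Minimal sketch: one-sided statistical acceptance test for accuracy on an
# independent test sample. Threshold and confidence level are illustrative
# assumptions, not values taken from the TÜV AUSTRIA audit catalog.
from scipy.stats import beta


def accuracy_meets_requirement(n_correct: int, n_total: int,
                               min_accuracy: float = 0.95,
                               confidence: float = 0.99) -> bool:
    """Clopper-Pearson lower confidence bound on accuracy.

    The requirement counts as met only if the lower bound itself exceeds
    the risk-based minimum, not merely the point estimate.
    """
    alpha = 1.0 - confidence
    if n_correct == 0:
        lower_bound = 0.0
    else:
        lower_bound = beta.ppf(alpha, n_correct, n_total - n_correct + 1)
    return bool(lower_bound >= min_accuracy)


# Example: 980 of 1,000 independent test cases classified correctly.
# The point estimate is 0.98; the 99% lower bound is roughly 0.97.
print(accuracy_meets_requirement(980, 1000))  # True
```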
“With the TRUSTED AI Framework, we are creating the basis for transparent and reproducible certification of AI systems for the first time – a decisive step for their safe use in critical areas such as medicine, industry or mobility,” explains Andreas Gruber, Managing Director of the TÜV AUSTRIA joint venture TRUSTIFAI.
Target group: Those responsible for implementing the requirements of the European AI Regulation
TÜV AUSTRIA addresses the white paper specifically to those responsible for system development, corporate governance, quality and risk management. “These people are responsible for implementing the regulatory requirements and therefore play a key role in the successful adoption of AI,” Gruber continues.
The white paper documents not only the methodology but also practical experience from applying the test catalog. It highlights typical sources of error such as data leakage, inadequate domain definitions and insufficient testing of AI applications, and offers specific recommendations for action.
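One such recommendation, avoiding data leakage between training and test data, can be illustrated with a short sketch. The example below is not taken from the test catalog; it simply shows a group-aware split with scikit-learn so that records from the same source (here an assumed patient or device ID) never appear on both sides, keeping the test sample genuinely independent.

```python
# Minimal sketch: group-aware train/test split to avoid data leakage.
# The patient/device grouping and the synthetic data are assumptions
# made for illustration only.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # synthetic feature matrix
y = rng.integers(0, 2, size=1000)         # synthetic binary labels
groups = rng.integers(0, 100, size=1000)  # e.g. patient or device ID per record

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=groups))

# No group may appear in both partitions; otherwise accuracy measured on the
# test set would be optimistically biased by leaked information.
assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```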
“Our experience from many customer projects shows that companies need concrete requirements and implementation aids for the development and operation of AI systems. Developed in our research cooperation with JKU and SCCH, the white paper offers comprehensive insight into which processes, procedures, and technical and organizational measures should be implemented as state of the art,” says TÜV AUSTRIA CEO Stefan Haas.
With the TRUSTED AI Framework, TÜV AUSTRIA provides providers, users and regulatory authorities with a practical roadmap towards legally compliant, functionally trustworthy and certifiable AI systems in line with European standards.
“The regulation of AI must go beyond mere principles – it must be verifiable in practice. Our framework combines technical best practices with the requirements of the EU AI Regulation and makes trust measurable,” emphasizes Andreas Gruber.
