A cornerstone for trust in AI: publication of the world's first compliance criteria catalog for Artificial Intelligence

5 February 2021

On 2 February 2021, the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik – BSI) published the first-ever catalog of specific criteria for trustworthy and secure Artificial Intelligence (AI). The criteria can be applied in a variety of ways. As a basis for audits pursuant to ISAE 3000 (Revised), they provide transparency for users of AI services. Similarly, they provide a sound basis to shape the AI lifecycle as well as for quality assurance in AI development processes.

In 2019 alone, companies in Germany generated almost €60 billion of revenues from AI products and AI services. Globally, it is anticipated that revenues from corporate AI applications will increase more than sixfold by 2025 in comparison to 2020.

Despite the huge investments being made in AI, 70 percent of AI projects do not achieve their desired impact on businesses right away. Is this due to a general expectation gap compared to the real technological possibilities? Or is there simply a lack of best practices for development and integration? Are companies not yet mature enough in their processes for the development and operation of AI? All of these are plausible reasons, and we frequently observe each of them in practice. What is important is that AI comes with enormous potential, but also with new risks and challenges that must be addressed effectively.

What is the AI Cloud Service Compliance Criteria Catalogue (AIC4)?

The AI Cloud Service Compliance Criteria Catalogue (AIC4) is the world's first catalog of specific criteria published by a public authority that sets out operationalizable requirements for AI lifecycle management. The criteria catalog is a first answer to calls from the market, the German Federal Government and the European Commission for transparent criteria for robust AI. Germany, and any domestic or international service provider that uses the criteria as a guide, thereby gains a leading position in the race to achieve trustworthy, secure and transparent AI services. For vendors, it creates a framework by which user needs for transparency about quality and security can be addressed through audits (analogous to SOC 1, SOC 2, BSI C5, etc.).

Core application fields of the AIC4

Organizations that use AI services can have their compliance with AI quality and security standards confirmed with an audit report pursuant to ISAE 3000, prepared by independent auditors. Beyond a conventional audit, the AIC4 has further areas of application, e.g. mobilizing the AI transformation through digital trust, or conceptualizing an AI-specific governance system to ensure adherence to both internal and external compliance requirements. In addition, the AIC4 can be used to draw up best practices for AI development and operation processes covering protection, reliability and monitoring. In this way, organizations can ensure that only trustworthy algorithms are implemented in productive environments, so that compliance requirements do not impede internal innovation processes, but rather stimulate them.

The seven criteria domains of the AIC4

Security & Robustness

Security & Robustness is concerned with whether AI systems can be manipulated by way of malicious influence. There is a particular focus on the performance of suitable tests to detect malicious input as well as the implementation of measures to counteract targeted attacks. This is intended to ensure that the AI service is robust and to safeguard the confidentiality and integrity of data along the training pipeline of the algorithms concerned.
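One common building block for such malicious-input tests is an out-of-distribution check that flags inputs lying far from the statistics of the training data. The following minimal sketch illustrates the idea with a per-feature z-score threshold; all names and the threshold value are our own illustrative assumptions, not requirements of the AIC4:

```python
import math

def fit_stats(samples):
    """Compute per-feature mean and standard deviation of the training data."""
    n = len(samples)
    dims = len(samples[0])
    means = [sum(s[i] for s in samples) / n for i in range(dims)]
    stds = [
        math.sqrt(sum((s[i] - means[i]) ** 2 for s in samples) / n) or 1.0
        for i in range(dims)
    ]
    return means, stds

def is_suspicious(x, means, stds, z_threshold=4.0):
    """Flag an input if any feature lies more than z_threshold standard
    deviations from the training mean -- a crude proxy for manipulated
    or out-of-distribution input that warrants closer inspection."""
    return any(
        abs(x[i] - means[i]) / stds[i] > z_threshold for i in range(len(x))
    )
```

In practice such statistical checks are only one layer; dedicated adversarial testing of the trained model would complement them.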

Performance & Functionality

Performance & Functionality is concerned with ensuring that the AI service fulfills its prescribed performance targets in accordance with its characteristics and purpose of use. Before being used in productive operations, the system must be sufficiently trained, evaluated and tested to ensure adherence with these requirements. In order to make the AI service quantifiable in this regard, performance metrics should be applied, e.g. regarding the accuracy of the algorithm, and sensitivity analyses should be conducted.
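As a minimal illustration of such a performance metric, the sketch below computes classification accuracy and applies it as a release gate before productive use. The function names and the target value are illustrative assumptions, not figures prescribed by the AIC4:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(y_true) == len(y_pred), "label and prediction lists must align"
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def meets_target(y_true, y_pred, target=0.9):
    """Simple release gate: approve deployment only when the measured
    accuracy reaches the prescribed performance target."""
    return accuracy(y_true, y_pred) >= target
```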

Reliability

Reliability relates to the establishment of processes for ensuring the continuous operation of AI services being used in productive environments as well as for investigating any potential errors or failures. To this end, appropriate procedures for resource management, logging, error processing and back-ups have to be implemented.

Data Quality

Data Quality sets out requirements for a framework of guidelines on appropriate data processing and for ensuring appropriate data quality. This is intended to ensure that the data of an AI service (training and test data) meet the applicable requirements regarding data quality.
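In practice, such data quality requirements are often enforced with automated validation checks on training and test records. The following sketch flags missing fields, type mismatches and out-of-range values; the schema format and all names are illustrative assumptions rather than part of the AIC4:

```python
def check_record(record, schema):
    """Return a list of data-quality violations for one record.
    schema maps each field name to (expected_type, (min, max) or None)."""
    errors = []
    for field, (ftype, bounds) in schema.items():
        if field not in record or record[field] is None:
            errors.append(f"{field}: missing")
            continue
        value = record[field]
        if not isinstance(value, ftype):
            errors.append(f"{field}: expected {ftype.__name__}")
            continue
        if bounds is not None:
            lo, hi = bounds
            if not (lo <= value <= hi):
                errors.append(f"{field}: {value} outside [{lo}, {hi}]")
    return errors
```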

Data Management

Data Management entails the description of criteria for the structured recording and acceptance of data for the purpose of training the AI service. This is to be done by defining data-related framework conditions that are applied during both the development and operation of the AI service and which provide protection against unauthorized access.

Explainability

Explainability is concerned with the implementation of measures that assist the users of an AI service to follow and understand the decisions made by the AI. Especially in cases where the processes of an AI service cannot be fully retraced, it must be made clear to the user – depending on how critical the area of application is – which components of the service cannot be explained in full.
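A common model-agnostic way to make individual decisions easier to follow is an ablation-style explanation: each input feature is replaced by a neutral baseline value and the resulting change in the model's score is recorded. The sketch below is a minimal illustration of this general idea; the AIC4 does not prescribe any particular explanation method:

```python
def feature_influence(model, x, baseline):
    """Local, model-agnostic explanation: replace each feature with its
    baseline value and record how much the model's score changes.
    A larger absolute change indicates a more influential feature."""
    base_score = model(x)
    influences = {}
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]  # ablate feature i
        influences[i] = base_score - model(perturbed)
    return influences
```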

Bias

The Bias domain is concerned with appropriately investigating potential biases within AI services and potential discriminatory output from them. In this context, mathematical procedures are to be used to evaluate such biases in order to reconcile fair output with appropriate algorithm performance.
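One widely used mathematical procedure of this kind is the statistical parity difference, which compares positive-outcome rates between two groups; values near zero indicate parity under this single fairness notion. The minimal sketch below is our own illustration, not a method prescribed by the AIC4:

```python
def statistical_parity_difference(y_pred, groups, positive=1):
    """Difference in positive-outcome rates between two groups.
    Assumes exactly two group labels; a value near 0 indicates parity
    under the statistical-parity fairness notion (other notions exist
    and may conflict with it)."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(p == positive for p in outcomes) / len(outcomes)
    first, second = sorted(rates)  # deterministic ordering of the two groups
    return rates[first] - rates[second]
```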

BSI C5 report as a prerequisite for secure cloud computing

The AIC4 criteria catalog has been developed for all AI services that are hosted in a cloud environment regardless of their deployment scenario (public, private, community or hybrid). An attestation according to the AIC4 criteria catalog requires a BSI C5 report in order to cover the secure operation of the underlying cloud infrastructure in addition to the AI-specific aspects of AIC4 itself. The Cloud Computing Compliance Criteria Catalogue (C5) sets minimum requirements for secure cloud computing and is targeted towards cloud service providers as well as their auditors and customers. C5 was first published by the BSI in 2016 and has since become an established benchmark for secure cloud environments. In the last year, it has been comprehensively revised in consultation with users, auditors, regulators and cloud providers in order to reflect current technological developments.

An attestation pursuant to the criteria of the AIC4 is issued as part of an independent assurance engagement, with the results set out in a detailed audit report pursuant to the International Standard on Assurance Engagements (ISAE) 3000 (Revised), the standard governing assurance engagements other than audits or reviews of historical financial information. In such a report, independent auditors issue a statement on whether the AI service examined fulfills the AIC4 criteria by way of appropriate control activities. The audit report is prepared for the AI service provider, who can then share it with the users of the service in order to provide them with transparency regarding the AI development, operation and monitoring concept.

Conclusion

The AIC4 provides the first standardized framework with specific criteria for ensuring security, transparency and robustness of AI services. Its application in the auditing of AI will increase trust between AI service providers and their users. At the same time, the needs of users for reliable AI are addressed. Thus, the AIC4 lays an important cornerstone for the German AI market and creates a uniform starting point for using AI in a trustworthy manner.

Event note

16 February 2021: Webinar on the AIC4 for Cloud Service Providers

In this webinar, our experts Hendrik Reese and Kai Kümmel will be offering an overview of the structure and content of AIC4. This will give you the opportunity to learn how cloud providers offering AI services can benefit from AIC4 by arranging independent AI assurance audits and by setting up a dedicated AI governance framework.


22 February 2021: Webinar on the AIC4 for industrial companies (German)

In this webinar, you will learn how to use the catalogue to audit AI services or to support the design of dedicated AI governance. We will also discuss how to design secure AI services and embed the right cornerstones for quality in your organisation's AI transformation.


Contact us

Hendrik Reese

Director, Artificial Intelligence, PwC Germany

Tel: +49 89 5790-6093

Kai Kümmel

Manager, PwC Germany

Tel: +49 89 5790-7153
