AI-based applications are gaining ground, especially in industrial image processing and quality control. Such applications can now reliably automate monotonous and time-consuming manual processes and offer a significant competitive advantage, particularly for small and medium-sized enterprises (SMEs). This is especially true when test parts or potential defects vary, which can push conventional rule-based systems to their limits. However, a lack of transparency and understanding of how these applications work leads to uncertainty when deploying AI.
The applications mainly rely on machine learning (ML) processes. The artificial neural networks used here learn independently from large amounts of data. However, even AI experts are rarely able to explain exactly how a result – and, in the worst case, an incorrect result – is produced on the basis of this learning process. This leads businesses to err on the side of caution when adopting the technology. In addition, legal problems could arise if businesses have to comply with additional regulations for the use of AI technologies under the EU AI Act in the future.
Simplifying and supporting AI audits
Suitable standards and development methods are needed to provide greater certainty, especially for businesses with little experience in using AI. They would make it easier to verify the suitability, or qualification, of ML-based AI applications, even without the specialist knowledge that has been necessary up to now.
This is precisely the goal pursued by the “AIQualify” research project with the help of an emerging software framework. A software-based assistance system supports users in defining and formulating test and evaluation criteria. These criteria are bundled centrally in what is known as an assurance case and subsequently used to approve the AI application. The basis for this is an audit platform that provides specific audit modules for each development phase of the ML components of the AI application. The platform’s modular design ensures that audit modules can easily be integrated or extended.
In addition to stand-alone qualification, the framework can also be integrated iteratively as an element in the development process of an AI system.
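To make the idea of a modular audit platform more concrete, the following minimal sketch shows how phase-specific audit modules and a central assurance case could fit together. All class names, the criterion shown, and the artifact format are illustrative assumptions for this article; they are not taken from the AIQualify implementation, whose interfaces are not published here.

```python
# Hypothetical sketch of a modular audit platform; classes, names and the
# example criterion are assumptions for illustration, not the AIQualify code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Criterion:
    """A single test or evaluation criterion formulated by the user."""
    name: str
    check: Callable[[dict], bool]  # receives phase-specific artifacts
    rationale: str = ""


@dataclass
class AuditModule:
    """Audit module for one development phase (e.g. data selection, training)."""
    phase: str
    criteria: List[Criterion] = field(default_factory=list)

    def run(self, artifacts: dict) -> Dict[str, bool]:
        # Evaluate every criterion of this phase against the supplied artifacts.
        return {c.name: c.check(artifacts) for c in self.criteria}


@dataclass
class AssuranceCase:
    """Central bundle of criteria and results used to approve the AI application."""
    results: Dict[str, Dict[str, bool]] = field(default_factory=dict)

    def add(self, module: AuditModule, artifacts: dict) -> None:
        self.results[module.phase] = module.run(artifacts)

    def approved(self) -> bool:
        # The application is approved only if every criterion of every phase passes.
        return all(ok for phase in self.results.values() for ok in phase.values())


# Example: a data-selection module with one illustrative criterion.
data_module = AuditModule(
    phase="data selection",
    criteria=[Criterion(
        name="defect classes covered",
        check=lambda a: set(a["required_classes"]) <= set(a["dataset_classes"]),
        rationale="Every defect type to be inspected must appear in the data.",
    )],
)

case = AssuranceCase()
case.add(data_module, {"required_classes": ["crack", "scratch"],
                       "dataset_classes": ["crack", "scratch", "dent"]})
print(case.approved())  # True
```

Because each phase is a self-contained module in this sketch, further audit modules – for pre-processing, model selection or deployment, for instance – could be plugged in without touching the rest of the platform, which is the point of the modular design described above.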
Taking the entire development process into consideration
Prof. Marco Huber, who heads up the project, underlines the innovative nature of the resultant approach: “Rather than considering only the final application, we go back much further – in fact, right to the very beginning. Each development phase for an AI application requires decisions that can influence the result. For this reason, we also include aspects such as data selection, pre-processing, quality criteria, and model selection.”
The software framework therefore enables three types of qualification:
- by the company itself,
- by a customer, supplier or partner, and finally
- by independent institutions.
This produces three distinct target groups: first, service providers for ML-based quality control and management; second, manufacturing companies; and third, service providers for conformity testing and auditing. In particular, small and medium-sized enterprises (SMEs) will be able to qualify AI systems obtained from third parties. In this way, the aim is for such businesses to be able to evaluate the performance of an AI system even without AI specialists of their own.
Evaluating the framework by way of typical applications
Two use cases serve to test the software framework in practice. The first relates to the research context of the project partners, where AI is used for camera-based detection of defective perforated discs. What makes this use case special is that synthetic images of defects can be created and used in addition to actual camera images. This allows different degrees of detail of the test task to be taken into account when assessing the suitability of the ML components.
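As an illustration of how such a mixed evaluation could look, the brief sketch below scores a defect detector separately on real camera images and on synthetic image sets of increasing difficulty. The directory layout, the labeling convention and the stand-in predictor are hypothetical assumptions, not details of the project’s actual setup.

```python
# Hypothetical evaluation sketch: compare detector accuracy on real camera
# images and on synthetic defect images grouped by difficulty. The folder
# layout, labels and the dummy predictor are assumptions for illustration.
from pathlib import Path
from typing import Callable, Iterable, Tuple


def accuracy(samples: Iterable[Tuple[Path, bool]],
             predict: Callable[[Path], bool]) -> float:
    """Fraction of images whose predicted defect flag matches the label."""
    samples = list(samples)
    hits = sum(predict(img) == label for img, label in samples)
    return hits / len(samples) if samples else float("nan")


def load_labeled(folder: Path) -> Iterable[Tuple[Path, bool]]:
    # Assumed convention: defective images in <folder>/defect, good ones in <folder>/ok.
    yield from ((p, True) for p in (folder / "defect").glob("*.png"))
    yield from ((p, False) for p in (folder / "ok").glob("*.png"))


def evaluate(predict: Callable[[Path], bool], data_root: Path) -> dict:
    # Real camera images plus synthetic sets of increasing difficulty.
    subsets = ["real", "synthetic_easy", "synthetic_hard"]
    return {name: accuracy(load_labeled(data_root / name), predict) for name in subsets}


if __name__ == "__main__":
    # Stand-in for the ML component: flag an image as defective by its filename.
    dummy_predict = lambda path: "defect" in path.stem
    # With no images present, each subset simply reports NaN.
    print(evaluate(dummy_predict, Path("perforated_discs")))
```

Reporting the metric per subset, rather than as a single aggregate, is what makes it possible to judge the ML component at different degrees of detail of the test task.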
The second use case comes directly from industrial practice. In addition to supporting the project as a whole, a project committee comprising manufacturing companies, among other organizations, will contribute this use case to the project.
Project overview: AIQualify
Full name: AIQualify – Framework for the qualification of AI systems in industrial quality inspection
Duration: 01 May 2023 to 30 April 2025
Partners: Fraunhofer Institute for Manufacturing Engineering and Automation IPA; Institute of Industrial Manufacturing and Management IFF at the University of Stuttgart
Associated members on the project committee: 36ZERO Vision, Audi, Babtec Informationssysteme, Bosch, EVT, Festool, Maddox AI, preML, scitis.io, sentin and Wickon Hightech.
In addition, the “Allianz Industrie 4.0”, Bitkom and the German University of Administrative Sciences Speyer are also supporting the project.
Funding information: The IGF project 22929 BG of the Federation of Quality Research and Science (FQS) was funded by the Federal Ministry for Economic Affairs and Climate Action through the German Federation of Industrial Research Associations (AiF) under the Industrial Collective Research (IGF) programme, based on a decision by the German Bundestag.
Expert Contact:
Prof. Dr.-Ing. Marco Huber | Phone +49 711 970-1960 | marco.huber@iff.uni-stuttgart.de