Ethical and Safety Issues Related to AI and Machine Learning
In the first part of this series, our colleagues already covered a selection of exciting presentations from m3. In this second part, we turn to the ethical and safety-related issues of AI as well as transfer learning and domain adaptation of AI models.
Reliable AI: Securing artificial neural networks
As artificial intelligence gains ever more public attention, the ethical and safety-related questions raised by AI systems in real-world use cases must also be considered and answered. Prof. Dr.-Ing. Marco Huber of Fraunhofer IPA demonstrated this in his presentation.
Well-known failures in an ethical context include an AI that Amazon built to help screen job applicants. Because it was trained on employee profiles from the predominantly male IT department, the AI subsequently rejected female applicants as unsuitable. Another experiment was a Twitter bot that Microsoft developed to learn from its users. Trained through interaction with those users, the AI turned racist within just a few hours, and the system was shut down immediately. Among the failures relevant to safety, several accidents involving Tesla's Autopilot, some of them serious, became widely known.
Whom do people trust: an AI or a human decision?
To safeguard an AI against errors, ethical missteps, security breaches, and human mistrust, the following action items exist:
- Security: protecting an AI against intentional manipulation through attacks
- Reliability: achieving trustworthy artificial intelligence through model validation
- Transparency: opening up black-box AI and making its decisions explainable
It lies in the nature of a neural network that its decisions are opaque, which makes it difficult for users to understand why one decision was made rather than another. So how can humans comprehend what the AI is doing, for example in order to verify its results? After all, making the AI's decisions explainable is essential for humans to trust it.
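The talk did not prescribe a specific tool, but one common way to peek into a black-box model is permutation feature importance: shuffle one feature at a time and measure how much the model's accuracy drops. The following sketch is our own illustration using scikit-learn on synthetic data, not code from the presentation:

```python
# Illustrative sketch: explaining a black-box model via permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data (5 features, binary target)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record the resulting accuracy drop
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Features whose shuffling barely changes the accuracy contribute little to the decision, which gives users at least a coarse view into the black box.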
There are various approaches to solving these problems, for example via optimization, although this sometimes leads to mathematical problems that are practically unsolvable. Another possibility is Bayesian inference: here, prior knowledge is used to assess whether a solution is plausible. All three questions, i.e. those concerning security, reliability, and transparency, should be considered and clarified during the development of an AI.
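As a minimal illustration of the Bayesian idea of combining prior knowledge with new evidence, consider an alarm that reports defects; the concrete numbers below are an invented example of our own, not figures from the talk:

```python
# Bayes' rule: update a prior belief with new evidence.
p_defect = 0.01               # prior: 1% of parts are defective
p_alarm_given_defect = 0.95   # sensitivity of the alarm
p_alarm_given_ok = 0.05       # false-alarm rate

# Total probability of an alarm going off
p_alarm = p_alarm_given_defect * p_defect + p_alarm_given_ok * (1 - p_defect)

# Posterior: probability of a real defect given that the alarm fired
p_defect_given_alarm = p_alarm_given_defect * p_defect / p_alarm
print(round(p_defect_given_alarm, 3))  # → 0.161
```

Despite the seemingly reliable alarm, the low prior means most alarms are false positives, which is exactly the kind of plausibility check prior knowledge enables.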
From the playground to implementation: Deployment of ML in battery production
In their presentation "Digitalization of Battery Cell Production", Dr. Antje Fitzner (Fraunhofer Research Institution for Battery Cell Production FFB, Münster) and Alexander D. Kies, M.Sc. (Production Quality, Fraunhofer Institute for Production Technology IPT, Aachen) presented two possible use cases for the deployment of AI in battery production.
The FFB, for example, is developing manufacturing options for future battery cell production in facilities ranging from small plants up to "gigafactories". In this context, the possible integration of AI into the manufacturing and monitoring processes was also identified as an interesting area of development. The presentation covered everything from the basic question of where the use of AI makes sense to the implementation of a concrete use case.
In the end, the following two applications proved promising:
- Detection of anomalies in layer thickness during battery film manufacturing
- Prediction of maintenance intervals for an extruder screw in the continuous mixing of battery cell components
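How the first use case might look in its simplest form can be sketched with a z-score rule that flags measurements far from the running mean; the thickness values below are invented for illustration, not data from the FFB:

```python
# Illustrative sketch: flagging anomalous layer-thickness measurements.
import numpy as np

# Hypothetical thickness measurements in micrometres
thickness_um = np.array([80.1, 79.8, 80.3, 80.0, 92.5, 79.9, 80.2])

# Standardize each measurement against the batch mean and spread
mean, std = thickness_um.mean(), thickness_um.std()
z_scores = np.abs(thickness_um - mean) / std

# Flag measurements more than 2 standard deviations from the mean
anomalies = np.where(z_scores > 2.0)[0]
print(anomalies)  # → [4], the 92.5 µm outlier
```

A production system would of course use more robust methods, but the principle of comparing each measurement against the expected distribution stays the same.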
Both use cases were implemented using CRISP-DM (Cross-Industry Standard Process for Data Mining), a widely used standard process model that helps carry out AI projects in a structured way. We should consider this or a similar approach for our use cases as well.
Because of the special way a research facility works, and the constantly changing battery manufacturing processes that result from it, the models at the institute were initially not very meaningful. In the later application area, however, in production lines, the models are expected to deliver consistent, value-adding results.
We are here for you!
In summary, the m³ conference offers an exciting insight into the world of artificial intelligence every year. Its smaller scale compared with other events especially encourages exchange with the other participants and the speakers. The workshop on the day before the conference provides deeper insights into individual topics; this year we dealt extensively with MLOps, one of the main topics currently occupying the machine learning community.
While the past two on-site conferences in 2018 and 2019 focused mainly on machine learning algorithms and new breakthroughs in the field of neural networks, this time the focus was on how machine learning standards can be established. Putting models and data into production is also a big topic: after all, a model naturally loses accuracy over time, while "normal" software keeps performing largely unaffected by the passage of time.
leogistics is also working intensively on artificial intelligence, machine learning, and their application scenarios in the world of logistics and supply chain management. Have we sparked your interest in artificial intelligence solutions?
Contact us at email@example.com!