minds mastering machines 2022: AI trends for the World of Logistics (part 2/2)

Axel Bohnet

Ethical and Safety Issues Related to AI and Machine Learning

In part one, our colleagues already presented a first selection of exciting talks from m3. In this second part, we take a separate look at ethical and safety-related questions around AI as well as at transfer learning and domain adaptation of AI models.

Reliable AI: Securing artificial neural networks

As artificial intelligence gains importance in the public eye, ethical and safety-related questions must be considered and answered before AI systems spread into real-world use cases. This was demonstrated by Prof. Dr.-Ing. Marco Huber from Fraunhofer IPA in his presentation.

In the recent past, well-known failures in an ethical context included an AI at Amazon that was supposed to help preselect job applicants. Because the AI was trained on predominantly male employee profiles from the IT department, it subsequently rejected female applicants as unsuitable. Another example was Tay, a Twitter bot developed by Microsoft to learn from its users. Trained through interaction with users, the bot began posting racist content within hours and was immediately shut down. Among the safety-related failures that became public are several accidents, some of them serious, involving Tesla's Autopilot.

Whom do people trust: AI or human decision-makers?

A survey conducted by Bosch as part of its AI future forecast revealed notable differences across AI application areas. While trust in AI is high for topics relating to industry and transport, it dwindles to a minimum when it comes to health issues or personnel decisions.
Figure: Varying levels of trust in AI vs. human decisions

To safeguard an AI against errors, ethical missteps, security breaches, and human mistrust, three aspects need to be addressed:

  1. Security
  2. Reliability
  3. Transparency

Security: Intentional manipulation of an AI through an attack

When developing an AI, it is necessary to ensure and to test that the model behaves as intended. One way to test this is to deliberately try to falsify its results. An adversarial attack is the use of adversarial examples to manipulate the AI: an adversarial example is a specially manipulated input signal that intentionally misleads an artificial neural network into misclassifications. The manipulation can even be crafted so that a human observer does not notice it or does not recognize it as intentional. Even a single altered pixel in an input image can produce a completely different classification result.
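The talk did not go into code, but the classic Fast Gradient Sign Method (FGSM) by Goodfellow et al. shows how little is needed to craft such an adversarial example. A minimal PyTorch sketch (the function name and the epsilon value are illustrative choices of ours, not from the talk):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Fast Gradient Sign Method: perturb `image` just enough to push
    `model` toward a misclassification. `image` is a (1, C, H, W) tensor
    in [0, 1], `label` the true class index as a (1,) long tensor.
    The epsilon value here is an illustrative choice."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon;
    # a change this small is typically invisible to a human observer.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation of this size is usually imperceptible to the human eye, yet it can flip the model's prediction entirely.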

Reliability: From model validation to reliable artificial intelligence

When developing an AI model, it must always be clear what it was developed for, and the limits of the AI should be clearly defined: does the model know what it doesn't know? In tests, the same stop sign was fed to an AI for evaluation over and over again, but under different color contrasts and exposure scenarios, and not all of them were correctly recognized (a simple version of such a test is sketched below). Theoretically, an infinite amount of such manipulated data can be generated for an attack scenario, making exhaustive verification impossible. Because even small changes in the input can lead to large changes in the output, it is imperative to verify both the nature and the quality of the training data when developing an AI model.
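The stop-sign experiment described above can be approximated with a simple sweep over exposure levels. A minimal sketch using torchvision (model, image, and class index are placeholders; the brightness factors are our own illustrative choices):

```python
import torchvision.transforms.functional as TF

def exposure_robustness(model, image, expected_class,
                        brightness_factors=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Classify the same image under several exposure levels and report
    the brightness factors at which the model no longer predicts the
    expected class (e.g. the stop-sign class). `image` is a (C, H, W)
    tensor; names and factors are illustrative, not from the talk."""
    failures = []
    for factor in brightness_factors:
        variant = TF.adjust_brightness(image, factor)
        prediction = model(variant.unsqueeze(0)).argmax(dim=1).item()
        if prediction != expected_class:
            failures.append(factor)
    return failures  # an empty list means robust within the tested range
```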

Transparency: Black-box AI and the explainability of AI decisions

It is in the nature of a neural network that its decisions are opaque, which makes it difficult for users to understand why one decision was made and not another. So how can humans understand what the AI is doing, for example in order to check its results? After all, making the AI's decisions explainable is essential for humans to trust it.

There are various approaches to solving these problems, for example optimization-based verification, which, however, can lead to mathematical problems that are practically unsolvable. Another possibility is Bayesian inference: here, previously known information is used to check whether a solution is plausible. All three questions, i.e. regarding security, reliability, and transparency, should be considered and clarified during the development of an AI.
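Neither optimization-based verification nor Bayesian inference was shown in code, but a gradient-based saliency map is one simple, widely used way to make a black-box decision more transparent. A minimal PyTorch sketch (model and class index are placeholders):

```python
import torch

def saliency_map(model, image, target_class):
    """Gradient-based saliency: estimate how strongly each input pixel
    influences the score of `target_class`, a simple way to peek into
    the black box. `image` is a (C, H, W) tensor."""
    image = image.clone().detach().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    # Aggregate gradient magnitude over the color channels; bright
    # regions are the ones that contributed most to the decision.
    return image.grad.abs().max(dim=0).values
```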

From the playground to implementation: Deployment of ML in battery production

In the presentation "Digitalization of Battery Cell Production", Dr. Antje Fitzner (Fraunhofer Research Institution for Battery Cell Production FFB, Münster) and Alexander D. Kies, M.Sc. (Production Quality, Fraunhofer Institute for Production Technology IPT, Aachen) presented two possible use cases for the deployment of AI in battery production.

The FFB, for example, is developing manufacturing options for future battery cell production in facilities ranging from small plants up to "gigafactories". In this context, the possible integration of AI into the manufacturing and monitoring processes was also identified as an interesting area of development. The presentation traced the path from the basic question of where the use of AI makes sense to the implementation of a concrete use case.

Figure: Possible fields of application for ML in production

Finally, the following two applications seemed promising:

  • Detection of anomalies in layer thickness during battery film manufacturing (a minimal sketch follows after this list)
  • Prediction of maintenance intervals for an extruder screw during the continuous mixing of battery cell components
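The talk did not reveal implementation details, but the first use case can be illustrated with a simple rolling z-score over a stream of thickness measurements. A minimal sketch (all names, the window size, and the threshold are hypothetical choices of ours):

```python
import numpy as np

def thickness_anomalies(thickness_um, window=50, z_threshold=3.0):
    """Flag layer-thickness measurements that deviate strongly from the
    recent process mean via a rolling z-score. Window size and threshold
    are illustrative values, not taken from the talk."""
    values = np.asarray(thickness_um, dtype=float)
    anomalies = []
    for i in range(window, len(values)):
        reference = values[i - window:i]
        z = (values[i] - reference.mean()) / (reference.std() + 1e-9)
        if abs(z) > z_threshold:
            anomalies.append(i)  # index of the suspicious measurement
    return anomalies
```

In practice, more robust methods such as isolation forests or autoencoder-based detectors are common choices for this kind of process monitoring.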

Both use cases were implemented along the Cross-Industry Standard Process for Data Mining (CRISP-DM), an established standard that helps implement AI projects in a structured way across six phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment. We should consider this or a similar approach for our own use cases as well.

Because of the particular way a research facility works, with the battery manufacturing process changing constantly, the models initially produced little meaningful output at the institute. In the intended application area, however, i.e. in stable production lines, the models are expected to deliver consistent, value-adding results.

We are here for You!

In summary, the m³ conference offers an exciting insight into the world of artificial intelligence every year. Its smaller scale compared to other events in particular encourages exchange with other participants and with the speakers. The workshop on the day before the conference offers deeper insights into individual topics; there we took an extensive look at MLOps, currently one of the central topics in the machine learning community.

While machine learning algorithms and new breakthroughs in the field of neural networks dominated the agenda at the past two on-site conferences in 2018 and 2019, this time the focus was on how machine learning standards can be established. Putting models and data into production is also a big topic. After all, a deployed model naturally loses accuracy over time, while "normal" software keeps performing its service largely unaffected by the passage of time.
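One concrete reason for this decay is data drift: the live input distribution slowly moves away from what the model was trained on. A common MLOps check (our illustration, not something presented at the conference) compares the two distributions per feature, for example with a two-sample Kolmogorov-Smirnov test:

```python
from scipy.stats import ks_2samp

def feature_drift(train_values, live_values, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: has the live distribution of
    an input feature drifted away from the training distribution? Drift
    in the inputs is a common reason a deployed model loses accuracy."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha  # True: drift detected, consider retraining
```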

leogistics is also working intensively on artificial intelligence and machine learning and their application scenarios in the world of logistics and supply chain management. Have we sparked your interest in artificial intelligence solutions?
Contact us at blog@leogistics.com!
