In this paper, we consider the question of data aggregation using the practical example of emissions data for economic activities in the sustainability assessment of regional bank clients. Given the current scarcity of company-specific emission data, an approximation must rely on available public data. These data are reported in different standards across different sources. To determine a mapping between the different standards, an adaptation of the Covariance Matrix Self-Adaptation Evolution Strategy is proposed. The obtained results show that high-quality mappings are found. Moreover, our approach is transferable to other data compatibility problems, such as merging emissions data for other countries or bridging the gap between entirely different data sets.
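The paper itself includes no code; as an illustration only, the following minimal sketch shows the underlying idea, assuming the mapping is encoded as a weight matrix that redistributes emissions from one standard's categories onto another's, and using a simplified (mu/mu, lambda) evolution strategy with self-adapted step size as a stand-in for the full CMSA-ES. All data, dimensions, and hyperparameters are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical reference data: emissions reported in standard A (5 sectors)
    # and the same emissions reported in standard B (3 sectors).
    X_a = rng.uniform(0, 100, size=(20, 5))      # 20 reference reports in standard A
    W_true = rng.dirichlet(np.ones(3), size=5)   # unknown "true" mapping A -> B
    X_b = X_a @ W_true                           # corresponding reports in standard B

    def fitness(w_flat):
        """Squared reconstruction error of a candidate mapping matrix."""
        W = w_flat.reshape(5, 3)
        return np.sum((X_a @ W - X_b) ** 2)

    # Simplified (mu/mu, lambda) evolution strategy with self-adapted step size
    # (a stand-in for the CMSA-ES adaptation described in the paper).
    dim, lam, mu = 15, 40, 10
    parent = rng.uniform(0, 1, dim)
    sigma = 0.3
    tau = 1.0 / np.sqrt(2 * dim)

    for generation in range(300):
        sigmas = sigma * np.exp(tau * rng.standard_normal(lam))    # mutate step sizes
        offspring = parent + sigmas[:, None] * rng.standard_normal((lam, dim))
        offspring = np.clip(offspring, 0.0, 1.0)                   # keep weights in [0, 1]
        order = np.argsort([fitness(o) for o in offspring])[:mu]   # select best mu
        parent = offspring[order].mean(axis=0)                     # recombine
        sigma = sigmas[order].mean()                               # self-adaptation

    print("residual error:", fitness(parent))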
Machine learning approaches are used in both research and practice to predict desired output data from known input data. This master's thesis investigates the application of machine learning in battery data analysis to determine the ageing status of lithium-ion batteries. The goal of this work is to predict ageing curves (state of health, SoH) for lithium-ion batteries over the number of discharge cycles (time axis). This is done on the basis of previously recorded data for three types of lithium-ion batteries, captured at temperatures of 15 °C, 25 °C and 35 °C and at C-rates of 0.5C, 1C and 2C. In the course of this, the applied machine learning methods were analysed and their results compared. The scope of this work sets it apart from other machine learning approaches in battery data analysis, since the same methods were applied to a broader spectrum of data with different temperatures and cathode materials, which is relevant for analysing differences in behaviour in practice. After acquiring and preparing the data, models were trained with four selected supervised regression methods (linear regression, ridge regression, random forest regression and KNN regression) and the predictions were carried out. From the results, a generally applicable design basis for further investigations and practical application can be derived, in which the predictions of SoH curves for lithium-ion batteries with linear regression and ridge regression show the highest accuracy.
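The thesis does not publish its code; the following is a minimal sketch of the two best-performing methods it names, using scikit-learn's LinearRegression and Ridge on synthetic capacity-fade data that merely stands in for the measured SoH curves. Temperature and C-rate could be added as further feature columns.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(1)

    # Synthetic stand-in for measured data: SoH (%) over discharge cycles,
    # roughly linear capacity fade plus noise for one cell type and condition.
    cycles = np.arange(1, 501).reshape(-1, 1)
    soh = 100 - 0.03 * cycles.ravel() + rng.normal(0, 0.3, 500)

    # Train on the first 300 cycles, predict the remainder of the curve.
    X_train, y_train = cycles[:300], soh[:300]
    X_test, y_test = cycles[300:], soh[300:]

    for name, model in [("linear", LinearRegression()), ("ridge", Ridge(alpha=1.0))]:
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        print(f"{name}: MAE = {mean_absolute_error(y_test, pred):.3f} % SoH")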
Medical packaging is frequently closed in industry by means of thermal sealing. To enable continuous quality inspection, this thesis investigates whether defective parts can be detected with an infrared camera based on the heat signature that forms. The research question is split into two parts. The first part analyses what has to be considered to enable an ideal evaluation. The second part investigates which heat signature forms on defective parts. For the first part, the sealing process and the subsequent cooling are modelled, and a later experiment analyses how the camera should best be positioned to obtain the best input signal. For the second part, defects are provoked in several series of experiments and the differences in the heat signatures are then evaluated. The modelling and the experiments show that a seal can best be evaluated 1-2 s after the end of sealing. Further investigation shows that large defects are detected well, whereas smaller ones can no longer be detected reliably.
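The thesis describes no implementation; purely as an illustration, the sketch below shows one simple way such an evaluation could look, comparing the frame captured about 1-2 s after sealing against a known-good reference signature in the seal region. The arrays, region, and threshold are all invented.

    import numpy as np

    def seal_is_defective(frame, reference, roi, max_deviation=3.0):
        """Flag a seal as defective if its heat signature deviates too strongly
        from a known-good reference anywhere in the seal seam.

        frame, reference: 2D temperature arrays (deg C), captured ~1-2 s after sealing
        roi: (row_slice, col_slice) covering the seal seam
        max_deviation: tolerated peak temperature deviation (invented value)
        """
        return np.max(np.abs(frame[roi] - reference[roi])) > max_deviation

    # Invented example: a cold spot in the seam appears as a local deviation.
    reference = np.full((120, 160), 80.0)
    frame = reference.copy()
    frame[55:65, 40:60] -= 15.0                      # simulated sealing defect
    roi = (slice(50, 70), slice(0, 160))
    print(seal_is_defective(frame, reference, roi))  # True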
The usage of data gathered for Industry 4.0 and smart factory scenarios continues to be a problem for companies of all sizes, often because they aim to start with complicated and time-intensive machine learning scenarios. This work evaluates Process Capability Analysis (PCA) as a pragmatic, easy, and quick way of leveraging machine data gathered from the production process. The area of application considered is injection molding. After describing the required domain knowledge, the paper presents an approach for a continuous analysis of all parts produced. Applying PCA yields multiple key performance indicators that allow for fast and comprehensible process monitoring. The corresponding visualizations provide the quality department with a tool to efficiently choose where and when quality checks need to be performed. The presented case study indicates the benefit of analyzing the whole process data instead of considering only selected production samples. The use of machine data enables additional insights into process stability and the associated product quality.
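The paper does not reproduce the index formulas; for reference, the standard process capability indices are Cp = (USL - LSL) / (6 sigma) and Cpk = min(USL - mu, mu - LSL) / (3 sigma). The sketch below computes them on invented injection-pressure readings; the specification limits are made up for illustration.

    import numpy as np

    def process_capability(values, lsl, usl):
        """Standard process capability indices for a measured process variable.

        Cp  = (USL - LSL) / (6 * sigma)              -- potential capability
        Cpk = min(USL - mu, mu - LSL) / (3 * sigma)  -- actual capability, accounts for centring
        """
        mu, sigma = values.mean(), values.std(ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        return cp, cpk

    # Invented example: injection pressure readings for every produced part.
    rng = np.random.default_rng(2)
    pressure = rng.normal(850.0, 4.0, size=1000)   # bar
    cp, cpk = process_capability(pressure, lsl=835.0, usl=865.0)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")       # values near/above 1.33 are often deemed capable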
This paper presents a comparison between production and simulation data, carried out as part of a larger initiative on the use of shop-floor data at a project partner in the automotive industry. In this project, the data generated during the mould-filling simulation were compared with the data from the final tool acceptance in order to analyse how closely they match. The better the simulation, the faster the entire tool development process can be completed; as a core process, it offers massive savings potential and thus a competitive advantage.
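The paper does not specify its comparison metrics; one simple way to quantify the agreement between simulated and measured values is sketched below, with paired per-sensor values invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(3)

    # Invented paired values, e.g. cavity pressure per sensor position,
    # once from the filling simulation and once from the final tool acceptance.
    simulated = rng.uniform(200, 400, size=30)
    measured = simulated + rng.normal(0, 12, size=30)   # stand-in for simulation error

    rmse = np.sqrt(np.mean((simulated - measured) ** 2))
    corr = np.corrcoef(simulated, measured)[0, 1]
    print(f"RMSE = {rmse:.1f}, Pearson r = {corr:.3f}")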
Recent developments in the area of Natural Language Processing (NLP) increasingly allow such techniques to be extended to previously untapped areas of application. This paper deals with the application of state-of-the-art NLP techniques to the domain of Product Safety Risk Assessment (PSRA). PSRA is concerned with quantifying the risks a user is exposed to during product use. The use case arises from an important due diligence process towards the customers of OMICRON electronics GmbH.
The paper proposes an approach to evaluate the consistency of human-made risk assessments produced by potentially changing expert panels. Along the stages of this NLP-based approach, multiple insights into the PSRA process allow for an improved understanding of the related risk distribution within the company's product portfolio. The findings aim at making the current process more transparent as well as at automating repetitive tasks. The results of this paper can be regarded as a first step towards supporting domain experts in the risk assessment process.
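The abstract does not disclose the exact NLP pipeline; as a simple illustration of the consistency idea, the sketch below compares risk-assessment texts via TF-IDF cosine similarity, a basic stand-in for modern embedding models. The example texts are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented assessments of the same hazard by two different expert panels.
    assessments = [
        "Electric shock possible if housing is opened during operation.",
        "Risk of electric shock when the enclosure is opened while powered.",
        "Display brightness may be insufficient in direct sunlight.",
    ]

    tfidf = TfidfVectorizer().fit_transform(assessments)
    similarity = cosine_similarity(tfidf)

    # High similarity between the first two texts suggests consistent assessments
    # of the same risk; low similarity flags candidates for expert review.
    print(similarity.round(2))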
This paper analyses an electrical test tower of OMICRON electronics GmbH and evaluates whether a Predictive Maintenance (PdM) strategy can be implemented for the test towers. OMICRON electronics GmbH performs unit tests for its devices on test towers. These tests consist of a multitude of subtests, each of which returns a measurement value. The results are tracked and stored in a database. The goal is to analyze the data of the test towers' subtests and to evaluate the possibility of implementing a predictive maintenance system in order to predict the remaining useful life (RUL) and quantify the degradation of the test tower.
Assuming that the relays of the test tower are the main degradation source, a reliability model is built; this constitutes the model-driven approach. The data-driven modelling of the test tower consists of multiple steps. First, the data are cleaned and reduced by removing redundancies and selecting the best subtests, where a subtest is rated as good if its trendability and monotonicity metric values are above a specific threshold. In a second step, the trend behaviours of the subtests are analyzed and ranked, which shows that none of the subtests contains usable trend behaviour, making an implementation of a PdM system impossible.
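The thesis does not state its exact metric definitions; the sketch below uses one common pair of definitions from the PdM literature (monotonicity from the signs of the signal's increments, trendability as absolute correlation with time), which may differ from the variants used in the work. The signals are synthetic.

    import numpy as np

    def monotonicity(signal):
        """Fraction-based monotonicity: 1.0 for strictly monotonic signals, ~0 for noise.
        One common PdM definition; the thesis may use a different variant."""
        d = np.diff(signal)
        return abs((d > 0).sum() - (d < 0).sum()) / (len(signal) - 1)

    def trendability(signal):
        """Absolute Pearson correlation between the signal and time."""
        t = np.arange(len(signal))
        return abs(np.corrcoef(signal, t)[0, 1])

    rng = np.random.default_rng(4)
    drifting = np.linspace(0, 1, 200) + rng.normal(0, 0.05, 200)   # usable trend
    noisy = rng.normal(0, 1, 200)                                  # no usable trend

    for name, s in [("drifting subtest", drifting), ("noisy subtest", noisy)]:
        print(f"{name}: monotonicity={monotonicity(s):.2f}, trendability={trendability(s):.2f}")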
Using this ranking, the data-driven model is compared with the reliability model, which shows that the assumption that the relays are the main error source is inaccurate.
A possible anomaly detection model for PdM is also evaluated, showing that anomaly detection is not feasible for the test towers either. The implementability of PdM for test towers and other OMICRON devices is discussed, followed by proposals for future PdM implementations as well as additional analyses that can be performed for the test towers.
With cloud computing and multi-core CPUs, parallel computing resources are becoming increasingly affordable and commonly available. Parallel programming should be equally accessible to everyone. Unfortunately, existing frameworks and systems are powerful but often very complex to use for anyone who lacks knowledge of the underlying concepts. This paper introduces a software framework and execution environment whose objective is to provide a system that is easy to use for everyone who could benefit from parallel computing. Some real-world examples are presented, with an explanation of all the steps necessary for computing in a parallel and distributed manner.
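The framework itself is not shown in the abstract; as a point of reference for the kind of simplicity it aims at, the sketch below distributes independent tasks over CPU cores with Python's standard concurrent.futures module. The workload function is invented.

    from concurrent.futures import ProcessPoolExecutor
    import math

    def heavy_task(x):
        """Stand-in for an expensive, independent unit of work."""
        return sum(math.sqrt(i * x) for i in range(100_000))

    if __name__ == "__main__":
        inputs = range(16)
        # Distribute the independent tasks over all available CPU cores.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(heavy_task, inputs))
        print(len(results), "tasks completed in parallel")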
This master's thesis investigates a Computational Intelligence-based method for solving partial differential equations (PDEs). The proposed strategy formulates the residual of a PDE as a fitness function. The solution is approximated by a finite sum of Gauss kernels, and an appropriate optimisation technique, in this case JADE, searches for the best-fitting parameters of these kernels. This field is fairly young; a comprehensive literature review reveals several past papers that investigate similar techniques.
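To make the formulation concrete, the sketch below solves a 1D Poisson problem with a Gauss-kernel ansatz, using the squared PDE residual at collocation points plus a boundary penalty as fitness. SciPy's differential_evolution is used as a stand-in for JADE (an adaptive differential evolution variant); kernel count, bounds, and penalty weight are invented.

    import numpy as np
    from scipy.optimize import differential_evolution

    # Test problem: -u''(x) = pi^2 * sin(pi*x) on (0, 1), u(0) = u(1) = 0,
    # with exact solution u(x) = sin(pi*x).
    f = lambda x: np.pi ** 2 * np.sin(np.pi * x)
    x_col = np.linspace(0, 1, 25)          # collocation points
    n_kernels = 3

    def u(params, x):
        """Ansatz: finite sum of Gauss kernels with weight w, centre c, width s."""
        total = np.zeros_like(x)
        for w, c, s in params.reshape(n_kernels, 3):
            total += w * np.exp(-((x - c) ** 2) / (2 * s ** 2))
        return total

    def u_xx(params, x):
        """Analytic second derivative of the Gauss-kernel sum."""
        total = np.zeros_like(x)
        for w, c, s in params.reshape(n_kernels, 3):
            g = np.exp(-((x - c) ** 2) / (2 * s ** 2))
            total += w * g * (((x - c) ** 2) / s ** 4 - 1 / s ** 2)
        return total

    def fitness(params):
        """PDE residual at the collocation points plus a boundary penalty."""
        residual = -u_xx(params, x_col) - f(x_col)
        boundary = u(params, np.array([0.0, 1.0]))
        return np.sum(residual ** 2) + 100 * np.sum(boundary ** 2)

    bounds = [(-2, 2), (0, 1), (0.05, 1)] * n_kernels   # (w, c, s) per kernel
    result = differential_evolution(fitness, bounds, seed=5, maxiter=300, tol=1e-8)
    error = np.max(np.abs(u(result.x, x_col) - np.sin(np.pi * x_col)))
    print(f"fitness = {result.fun:.3e}, max error vs exact = {error:.3e}")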
To evaluate the performance of the solver, a comprehensive testbed consisting of 11 different Poisson equations is defined. The solving time, memory consumption, and approximation quality are compared to the state-of-the-art open-source finite element solver NGSolve. The first experiment tests a serial JADE; the results are not as good as comparable work in the literature. Furthermore, a strange behaviour is observed in which the fitness and the quality do not match. The second experiment implements a parallel JADE, which makes use of parallel hardware and significantly speeds up the solving time. The third experiment implements a parallel JADE with adaptive kernels: it starts with one kernel and introduces more kernels during the solving process. A significant improvement is observed on one PDE that is purposely built to be solvable; on all other testbed PDEs the quality difference is inconclusive. The last experiment investigates the discrepancy between the fitness and the quality. To this end, a new kernel is defined that inherits all features of the Gauss kernel and extends it with a sine function. As a result, the observed inconsistency between fitness and quality is mitigated.
The thesis closes with a proposal for further investigations. The concepts presented here should be revisited using better-performing optimisation algorithms from the literature, such as CMA-ES. Beyond that, an adaptive scheme for the collocation points could be tested. Finally, the fitness function should be examined further.
Many test drives are carried out in the automotive environment, and many signals are recorded during these drives. The task of the test engineers is to find certain patterns (e.g. an emergency stop) in these long time series. Finding these interesting patterns is currently done with rule-based processing, a procedure that is very time-consuming and requires a test engineer with expertise. This thesis examines whether the emerging field of machine learning can be used to support the engineers in this task. Active learning, a subarea of machine learning, is used to train a classifier during the labeling process; it thereby proposes windows similar to those already labeled, saving the annotator time spent searching or formulating rules for the problem. A data generator is developed to replace the missing labeled data for tests. The custom performance measure "proportion of seen samples" is developed to make the success measurable. A modular software architecture is designed, with which several combinations of time series classification algorithms and query strategies are compared on artificial data. The results are verified on real, openly available datasets. The best-performing but computationally intensive solution is an adapted RandOm Convolutional KErnel Transform (ROCKET). The custom query strategy "certainty sampling" shows the best results for highly imbalanced datasets.
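The thesis' implementation is not reproduced here; the sketch below illustrates only the active learning loop with a certainty-sampling-style query strategy (querying the unlabeled window the classifier is most confident is an event), using simple statistical window features as a stand-in for the ROCKET transform. The data, feature set, and query budget are synthetic.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)

    # Synthetic windows: 1000 time-series windows of length 50; ~2% contain an
    # "event" (a sharp level shift), the rest are noise -- highly imbalanced.
    n, length = 1000, 50
    X_raw = rng.normal(0, 1, (n, length))
    y = np.zeros(n, dtype=int)
    events = rng.choice(n, size=20, replace=False)
    X_raw[events, 25:] += 4.0
    y[events] = 1

    # Simple window features as a stand-in for the ROCKET transform.
    X = np.column_stack([X_raw.mean(1), X_raw.std(1), X_raw.max(1), X_raw.min(1)])

    # Seed labels: two known events plus a few random windows.
    labeled = list(rng.choice(events, 2, replace=False)) + list(rng.choice(n, 10, replace=False))
    unlabeled = [i for i in range(n) if i not in labeled]

    for step in range(15):
        clf = LogisticRegression().fit(X[labeled], y[labeled])
        # Certainty sampling: query the window most confidently predicted as an event.
        probs = clf.predict_proba(X[unlabeled])[:, 1]
        query = unlabeled[int(np.argmax(probs))]
        labeled.append(query)        # the annotator labels the proposed window
        unlabeled.remove(query)

    found = len({int(i) for i in labeled if y[i] == 1})
    print(f"events found after 15 queries: {found} / {y.sum()}")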