600 Technology
Refine
Year of publication
Document Type
- Master's Thesis (10)
- Conference Proceeding (8)
- Article (7)
- Preprint (3)
- Doctoral Thesis (1)
Institute
- Forschungszentrum Mikrotechnik (10)
- Forschungszentrum Energie (3)
- Forschungszentrum Human Centred Technologies (3)
- Forschungszentrum Business Informatics (2)
- Technik | Engineering & Technology (2)
- Forschungszentrum Digital Factory Vorarlberg (1)
- Josef Ressel Zentrum für Intelligente Thermische Energiesysteme (1)
- Wirtschaft (1)
Keywords
- Y-branch splitter (2)
- 3D splitter (1)
- 6-DOF (1)
- AAL (1)
- CO2 emissions (1)
- CO2 gas hydrates (1)
- CO2 concentrations (1)
- Carbon capture method (1)
- ChatGPT (1)
- Digital Transformation (1)
The utilization of lasers in dentistry has expanded greatly in recent years. For instance, fs-lasers are effective for both drilling and caries prevention, while cw-lasers are useful for adhesive hardening. A cutting-edge application of lasers in dentistry is the debonding of veneers. While there are pre-existing tools for this purpose, there is still potential for improvement. Initial efforts to investigate laser-assisted debonding mechanisms with measurements of the optical and mechanical properties of teeth and prosthetic ceramics are presented. Preliminary tests conducted with a commercially available laser system used for debonding showed differences between the output power set at the system's console and that measured at specified distances from the handpiece. Furthermore, the optical properties of the samples (human teeth and ceramics) were characterised. The optical properties of the ceramics should closely resemble those of teeth in terms of look and feel, but they also influence the laser-assisted debonding technique and thus must be taken into account. In addition, first attempts were made to investigate the mechanical properties of the samples by means of pump-probe elastography under a microscope. By analyzing the sample surface up to 20 ns after an fs-laser pulse impact, pressure and shock waves could be detected, which can be utilized to determine the elastic constants of specific materials. Together, such investigations are needed to form the basis for a purely optical approach to debonding veneers utilizing acoustic waves.
The heat district is regarded as a decisive spatial unit for enabling efficiency gains. The use of regional heat sources plays an important role in meeting humanity's growing energy demand. Besides the promising use of geothermal energy via a ground-source heat pump, lake-water heat pumps are increasingly deployed in shore regions; they use the thermal energy of the lake to cover heating and cooling demand. Compared with an open loop, a closed loop is less efficient but more stable and reliable, since the heat exchanger submerged in the lake mitigates icing and corrosion problems. For this reason, heat exchangers immersed in the lake are increasingly being considered. However, this potential is barely exploited in the German-speaking region. In this study, a simulation-based comparison between a ground-source heat pump system with borehole heat exchangers and a lake-water heat pump system with a shell-and-tube heat exchanger immersed in the lake is carried out for a reference building of a heat district. For the shell-and-tube heat exchanger, the effect of a varying transfer area on efficiency was analysed and optimal properties were defined. The effect of a varied nominal buffer-storage volume on efficiency, heating cycles, and the supply temperature of the underfloor heating and domestic hot water was examined. The consequences of temperature changes of the ground and the lake water for the efficiency of the overall system were also considered. The results show that system efficiency can be improved through the transfer area and the ratio between the number and length of the tube bundles. Further results show that the lake-water heat pump reacts more sensitively to temperature changes of the heat source than the ground-source heat pump.
Based on the findings from the analyses of the shell-and-tube heat exchanger and the nominal buffer-storage volume, a concluding annual comparison with the existing geothermal reference system was carried out. The results show that the seasonal performance factor of the ground-source heat pump for domestic hot water preparation is higher than that of the lake-water heat pump. For heating operation, a slightly higher seasonal performance factor was also found for the ground-source heat pump. In return, the lake-water heat pump is more robust in terms of efficiency and more reliable in covering the heating and domestic hot water demand.
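The seasonal performance factor used in the comparison above is the ratio of annual heat delivered to annual electricity consumed. A minimal sketch with made-up annual totals (assumed values, not the study's results):

```python
def seasonal_performance_factor(heat_delivered_kwh, electricity_used_kwh):
    """Seasonal performance factor: annual heat output divided by annual electricity input."""
    return heat_delivered_kwh / electricity_used_kwh

# Illustrative annual totals for two systems (assumed values, not the study's data)
ground_source = seasonal_performance_factor(45_000, 10_000)  # SPF 4.5
lake_water = seasonal_performance_factor(45_000, 10_700)     # SPF ~4.21
print(ground_source > lake_water)
```

A higher factor at equal heat demand directly translates into lower electricity costs, which is why the comparison is made on an annual basis rather than at a single operating point.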
Open tracing tools
(2023)
Background: To cope with the rapidly growing complexity of contemporary software architecture, tracing has become an increasingly critical practice and has been widely adopted by software engineers. By adopting tracing tools, practitioners are able to monitor, debug, and optimize distributed software architectures easily. However, with an excessive number of valid candidates, researchers and practitioners have a hard time finding and selecting suitable tracing tools by systematically considering their features and advantages. Objective: To this end, this paper aims to provide an overview of popular open tracing tools via comparison. Methods: Herein, we first identified 30 tools in an objective, systematic, and reproducible manner following the Systematic Multivocal Literature Review protocol. Then, we characterized each tool looking at 1) measured features, 2) popularity both in peer-reviewed literature and online media, and 3) benefits and issues. We used topic modeling and sentiment analysis to extract and summarize the benefits and issues. In particular, we adopted ChatGPT to support the topic interpretation. Results: As a result, this paper presents a systematic comparison among the selected tracing tools in terms of their features, popularity, benefits, and issues. Conclusion: The result mainly shows that each tracing tool provides a unique combination of features with different pros and cons. The contribution of this paper is to give practitioners a better understanding of the tracing tools, facilitating their adoption.
Coupling is one of the most frequently mentioned metrics in software systems. However, measuring logical coupling between microservices requires runtime information or the availability of service log files to analyze the calls between services. This work presents our emerging results, in which we propose a metric to statically calculate logical coupling between microservices based on commits to versioning systems. We performed an initial validation of the proposed metric on a dataset containing 145 open-source microservices projects. The results illustrate how logical coupling affects every system and increases over time. However, we did not find a correlation between the number of commits or the number of developers and the introduction of logical coupling. In future work, we will investigate why, how, and when logical coupling is introduced in a system.
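One plausible way to compute such a commit-based coupling is to count, for every pair of microservices, how often both are touched by the same commit. The counting scheme and the `commits` data below are illustrative assumptions, not necessarily the authors' exact metric:

```python
from collections import Counter
from itertools import combinations

def logical_coupling(commits):
    """Count how often pairs of microservices change together in one commit.

    `commits` is a list of sets, each holding the services touched by a commit.
    Returns a Counter mapping sorted service pairs to their co-change count.
    """
    coupling = Counter()
    for services in commits:
        for pair in combinations(sorted(services), 2):
            coupling[pair] += 1
    return coupling

# Illustrative commit history: each set lists the services changed together
commits = [
    {"orders", "billing"},
    {"orders", "billing", "users"},
    {"users"},
]
print(logical_coupling(commits)[("billing", "orders")])  # co-changed in 2 commits
```

Normalizing each pair count by the total number of commits touching either service would yield a value comparable across projects of different sizes.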
The role of entrepreneurs and intrapreneurs in the current zeitgeist is to drive innovation and re-shape rigid, established processes for businesses as well as for consumers. They use new viewpoints to pioneer new (business) models which focus on ‘smartness’ rather than the purely monetary and short-sighted models of yesteryear. Fostering and supporting the culture of this current zeitgeist is a major challenge for entre- and intrapreneurial support infrastructures, namely startup centres and innovation hubs of universities and other public institutions as well as innovation centres of private companies. Here, support may range from access to funding over provision of resources such as offices or computing hardware to coaching in the development of business ideas and strategic roadmaps for product and service deployment. In this paper, we focus on describing the status quo of the aforementioned support infrastructures in Vorarlberg and the Lake Constance region, then extend the scope to existing (international) approaches for aiding founders and innovators in the development of smart services. An analysis of success stories of the Vorarlberg startup centre ‘startupstube’ and other initiatives, including their comparison to international counterparts, builds the basis for a methodological framework for (service science) coaching in entre- and intrapreneurial support infrastructures. The paper concludes with the description of a framework for choosing the right methods and tools to create service value in entre-/intrapreneurship based upon tested, proven know-how, and for defining support-infrastructure needs based upon pre-defined stakeholder and target groups as well as the (industry) sectors of the innovators.
Demand-side management approaches that exploit the temporal flexibility of electric vehicles have attracted much attention in recent years due to their increasing market penetration. These demand-side management measures help alleviate the burden on the power system, especially in distribution grids, where bottlenecks are more prevalent. Electric vehicles are an attractive asset for distribution system operators, with the potential to provide grid services if properly managed. In this thesis, first, a systematic investigation is conducted for two typically employed demand-side management methods reported in the literature: a voltage-droop-control-based approach and a market-driven approach. Then, a control scheme of decentralized autonomous demand-side management for electric vehicle charging scheduling is proposed, which relies on a unidirectionally communicated grid-induced signal. In all the topics considered, the implications for distribution grid operation are evaluated using a set of time-series load flow simulations performed for representative Austrian distribution grids. Droop control mechanisms, which require no communication, are discussed for electric vehicle charging control. The method provides an economically viable solution at all penetrations if electric vehicles charge at low nominal power rates. However, given current market trends in residential charging equipment, especially in the European context where most equipment is designed for 11 kW charging, the technical feasibility of the method in the long run is debatable. As electricity demand strongly correlates with energy prices, a linear optimization algorithm is proposed to minimize charging costs, which uses next-day market prices as the grid-induced incentive function under the assumption of perfect user predictions. Constraints on the state of charge guarantee that the energy required for driving is delivered without failure.
An average energy cost saving of 30% is realized at all penetrations. Nevertheless, the avalanche effect due to simultaneous charging during low-price periods introduces new power peaks exceeding those of uncontrolled charging. This obstructs the grid-friendly integration of electric vehicles.
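The price-driven scheduling idea can be sketched for the simplest case: a single vehicle with only a total-energy requirement and a per-hour power cap. Under these assumptions the linear program is solved exactly by charging at full power in the cheapest hours first. Prices and parameters below are illustrative, not the thesis's data:

```python
def cheapest_charging_schedule(prices, energy_needed, max_power):
    """Minimize charging cost by filling the cheapest one-hour slots first.

    For a single EV with only a total-energy constraint and a per-hour power
    cap, this greedy choice solves the underlying linear program exactly.
    prices: price per kWh for each hour; energy_needed: kWh; max_power: kW.
    """
    schedule = [0.0] * len(prices)
    remaining = energy_needed
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        if remaining <= 0:
            break
        power = min(max_power, remaining)
        schedule[hour] = power
        remaining -= power
    return schedule

# Illustrative day-ahead prices for six hours (EUR/kWh, assumed values)
prices = [0.30, 0.10, 0.25, 0.08, 0.12, 0.40]
schedule = cheapest_charging_schedule(prices, energy_needed=22.0, max_power=11.0)
print(schedule)  # charging is concentrated in the two cheapest hours
```

The sketch also makes the avalanche effect visible: every vehicle receiving the same price signal charges in the same cheap hours, which is exactly what creates the new simultaneous power peaks described above.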
In 2021, a prominent Austrian dairy producer suffered an IT attack and was completely paralysed. Without clearly defined mitigation measures in place, major disruptions were caused along the whole supply chain, including logistics service providers, governmental food safety bodies, as well as retailers (i.e., supermarkets and convenience stores). In this paper, we ask how digitisation and digital transformation impact IT security, especially when considering the complex company ecosystems of food production and food supply chains in Austria. The problem statement stems from a gap in knowledge of key differences in approaches towards IT security, resilience, risk management, and especially business interfaces between food suppliers, supermarkets, distributors, logistics and other service providers. To answer the related research questions, the authors first conduct literature research, highlight common guidelines and standardisation, and look at state-based recommendations for critical infrastructure. In a second step, the paper describes in detail a quantitative and qualitative survey of Austrian food companies (producers and retailers). A description of recommended measures for the industry, further steps, and an outlook conclude the paper.
Power plant operators increasingly rely on predictive models to diagnose and monitor their systems. Data-driven prediction models are generally simple and can have high precision, making them superior to physics-based or knowledge-based models, especially for complex systems like thermal power plants. However, the accuracy of data-driven predictions depends on (1) the quality of the dataset, (2) a suitable selection of sensor signals, and (3) an appropriate selection of the training period. In some instances, redundancies and irrelevant sensors may even reduce the prediction quality.
We investigate ideal configurations for predicting the live steam production of a solid-fuel-burning thermal power plant in the pulp and paper industry for different modes of operation. To this end, we benchmark four machine learning algorithms on two feature sets and two training sets to predict steam production. Our results indicate that with the best possible configuration, a coefficient of determination of R^2 = 0.95 and a mean absolute error of MAE = 1.2 t/h, at an average steam production of 35.1 t/h, is reached. On average, using a dynamic dataset for training lowers the MAE by 32% compared to a static dataset. A feature set based on expert knowledge lowers the MAE by an additional 32% compared to a simple feature set representing the fuel inputs. We conclude that, based on the static training set and the basic feature set, machine learning algorithms can identify long-term changes. When using a dynamic dataset, the performance parameters of thermal power plants are predicted with high accuracy, allowing short-term problems to be detected.
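The reported error measures follow their standard definitions; a minimal sketch computing MAE and R^2 on made-up steam-production values (not the paper's data):

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Illustrative steam-production values in t/h (assumed, not the paper's data)
y_true = [34.0, 35.5, 36.0, 35.0]
y_pred = [33.5, 35.0, 36.5, 35.5]
print(mae(y_true, y_pred), r2(y_true, y_pred))
```

MAE stays in the physical unit of the target (t/h here), which is why the paper can relate it directly to the average steam production, while R^2 is dimensionless and compares the model against simply predicting the mean.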
Strain-induced dynamic control over the population of quantum emitters in two-dimensional materials
(2023)
The discovery of quantum emitters in two-dimensional materials has triggered a surge of research to assess their suitability for quantum photonics. While their microscopic origin is still the subject of intense studies, ordered arrays of quantum emitters are routinely fabricated using static strain gradients, which are used to drive excitons toward localized regions of the 2D crystals where quantum-light emission takes place. However, the possibility of using strain in a dynamic fashion to control the appearance of individual quantum emitters has not been explored so far. In this work, we tackle this challenge by introducing a novel hybrid semiconductor-piezoelectric device in which WSe2 monolayers are integrated onto piezoelectric pillars delivering both static and dynamic strains. Static strains are first used to induce the formation of quantum emitters, whose emission shows photon anti-bunching. Their excitonic population and emission energy are then reversibly controlled via the application of a voltage to the piezoelectric pillar. Numerical simulations combined with drift-diffusion equations show that these effects are due to a strain-induced modification of the confining-potential landscape, which in turn leads to a net redistribution of excitons among the different quantum emitters. Our work provides relevant insights into the role of strain in the formation of quantum emitters in 2D materials and suggests a method to switch them on and off on demand.
Signatures of the optical Stark effect on entangled photon pairs from resonantly pumped quantum dots
(2023)
Two-photon resonant excitation of the biexciton-exciton cascade in a quantum dot generates highly polarization-entangled photon pairs in a near-deterministic way. However, the ultimate level of achievable entanglement is still debated. Here, we observe the impact of the laser-induced ac-Stark effect on the quantum dot emission spectra and on entanglement. For increasing pulse-duration-to-lifetime ratios and pump powers, decreasing values of concurrence are recorded. Nonetheless, additional contributions are still required to fully account for the observed below-unity concurrence.
Due to ongoing climate change, reducing emissions of carbon dioxide (CO2) is becoming ever more important. The use and further development of CO2 gas hydrates as a carbon capture (CC) method could make a major contribution. During gas hydrate formation, gas is enclosed in the cavities of a hydrate lattice, which forms under high pressures and low temperatures. In this work, process engineering methods were tested to improve the dynamics of gas hydrate formation, mainly by improving mass and heat transfer. A key cause of the long formation times is the very small contact area, which can be increased by using a stirrer, a permanent gas flow, and dry water. Dry water consists of many tiny water droplets enclosed by hydrophobized silicon dioxide (SiO2). This shortens the induction and growth phases of gas hydrate formation.
In this work, a reactor with a volume of 30 ml and a pitched-blade stirrer with a diameter of 17 mm were used. The stirrer was operated at a maximum speed of 15,000 rpm for at most 30 s roughly every 10 min until the growth phase of gas hydrate formation began. The gas flow had a volume flow rate of about 4 slm. The dry water used was produced in a household blender at 25,000 rpm from silica and water and has a water content of 95%. Different combinations were tested. The shortest induction time, 6.9 min on average, was achieved by using a stirrer in combination with a gas flow and water, reaching a gas-to-water volume ratio of 8.8 v/v after 60 min. The largest amount of gas hydrate was achieved by using dry water in combination with a gas flow, leading to a gas-to-water volume ratio of 14.9 v/v after 60 min; the induction time in this case was 120 min.
Investigation of the solvability of the inverse kinematics of a 6-DOF robot with a neural network
(2022)
Computing the inverse kinematics is complex and must be solved individually for each robot type. Since a manipulator cannot be used in practice without inverse kinematics, which determines the joint variables required for a target pose, this problem is fundamental in robotics. This thesis investigates solving the inverse kinematics of a robot with six degrees of freedom using a neural network. Particular care must be taken to account for all ambiguities of the inverse kinematics during training; otherwise the determinism between inputs and outputs is violated, which prevents a network from being trained for the problem. It was shown that the Adam optimization algorithm achieves results as good as the Scaled Conjugate Gradient. The hyperbolic tangent, the activation function typically used in TensorFlow, shows the best performance among the activation functions implemented in TensorFlow that were examined. In MATLAB, by contrast, the log-sigmoid activation function performs best among the implemented activation functions. In addition, restricting the joint variables to the actual joint limits during training reduces both the network error and the amount of data required for the network to generalize well. Finally, it turns out that the trained networks are not suitable for practical use because the achieved network error is too large. Since all ambiguities were excluded through geometric analysis and a sufficiently large dataset was used, the results of the approaches presented here can only be improved with more complex networks and hence more data. Other approaches that provide additional information for computing the joint angles could also achieve better results.
Furthermore, it could be worthwhile to investigate approaches that exploit the joint limits.
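The ambiguity problem described above can be illustrated with a planar two-link arm, a deliberate simplification of the 6-DOF case: the same end-effector position is reached by two joint configurations (elbow up and elbow down), so training data must be restricted to one branch to keep the input-output mapping deterministic. A minimal sketch with illustrative link lengths and target:

```python
import math

def forward(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y, l1=1.0, l2=1.0, elbow_up=True):
    """Closed-form inverse kinematics; fixing one elbow branch removes the ambiguity."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(c2) if elbow_up else -math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Two different joint configurations reach the same target position, so
# unrestricted training data would map one input to two distinct outputs.
target = forward(0.4, 0.9)
up = inverse(*target, elbow_up=True)
down = inverse(*target, elbow_up=False)
print(up, down)
```

Generating training data only from the `elbow_up=True` branch keeps the position-to-angles mapping a function, which mirrors the thesis's requirement of excluding all ambiguities before training.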
The energy performance certificate is an important instrument for decarbonizing the building sector. Given this central importance, quality assurance of energy performance certificates also plays a major role. While the agreement of energy certificate calculations with real consumption values is well researched, there is little information on the quality of issued certificates. This thesis deals with identifying sources of error in the entry of energy-relevant data. It aims to detect erroneous energy-related inputs in actually issued energy performance certificates, to determine their error rate, and to assess the effects on the energy indicators HWB, PEB, CO2, and OI3.
For four energy performance certificates, the energy-relevant data on the building, its building services systems, climate data, usage profile, and legally relevant inputs were checked for correct entry. This revealed sources of error in the input of building and building-services data. A closer check of individual inputs on a larger number of certificates showed considerable error potential in the input of PV and solar thermal systems as well as in the calculation of the eco-index (OI3).
A quantum-light source that delivers photons with a high brightness and a high degree of entanglement is fundamental for the development of efficient entanglement-based quantum-key distribution systems. Among all possible candidates, epitaxial quantum dots are currently emerging as one of the brightest sources of highly entangled photons. However, the optimization of both brightness and entanglement currently requires different technologies that are difficult to combine in a scalable manner. In this work, we overcome this challenge by developing a novel device consisting of a quantum dot embedded in a circular Bragg resonator, in turn, integrated onto a micromachined piezoelectric actuator. The resonator engineers the light-matter interaction to empower extraction efficiencies up to 0.69(4). Simultaneously, the actuator manipulates strain fields that tune the quantum dot for the generation of entangled photons with fidelities up to 0.96(1). This hybrid technology has the potential to overcome the limitations of the key rates that plague current approaches to entanglement-based quantum key distribution and entanglement-based quantum networks.
Experimental multi-state quantum discrimination in the frequency domain with quantum dot light
(2022)
The quest for the realization of effective quantum state discrimination strategies is of great interest for quantum information technology, as well as for fundamental studies. It is therefore crucial to develop new and more efficient methods to implement discrimination protocols for quantum states. Among these, single-photon implementations are particularly advisable because of their inherent security advantage in quantum communication scenarios. In this work, we present the experimental realization of a protocol employing a time-multiplexing strategy to optimally discriminate among eight non-orthogonal states, encoded in the four-dimensional Hilbert space spanning both the polarization degree of freedom and the photon energy. The experiment, built on a custom-designed bulk-optics analyser setup and single photons generated by a nearly deterministic solid-state source, represents a benchmarking example of minimum-error discrimination with actual quantum states, requiring only linear optics and two photodetectors. Our work paves the way for more complex applications and delivers a novel approach towards high-dimensional quantum encoding and decoding operations.
The production of liquid-gas dispersions places high demands on process technology, requiring knowledge of the bubble formation mechanisms as well as the phase parameters of the media combinations used. To obtain the bubble sizes introduced into a flow without knowing the phase parameters, different process parameters are investigated, and their quality and applicability are evaluated. The results make it possible to simplify lengthy design processes for dispersion processes in manufacturing plants and to ensure the quality of the manufactured products by reducing waste.
A trend from centralized to decentralized production is emerging in the manufacturing domain, leading to new and innovative approaches for long-established production methods. A technology supporting this trend is Cloud Manufacturing, which adapts technologies and concepts known from cloud computing to the manufacturing domain. A core aspect of Cloud Manufacturing is representing knowledge about manufacturing, e.g., machine capabilities, in a suitable form. This knowledge representation should be flexible and adaptable so that it fits across various manufacturing domains but, at the same time, should also be specific and exhaustive. We identify three core capabilities that such a platform has to support, i.e., the product, the process, and the production. We propose representing this knowledge in semantically specified knowledge graphs, essentially creating three ontologies, interconnected through features, each representing a facet of manufacturing. Finally, we present an exemplary implementation of a Cloud Manufacturing platform using this representation and discuss its advantages.
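A minimal sketch of how such interconnected product, process, and production knowledge could be stored and queried as triples; the entity and predicate names below are purely illustrative, not the platform's actual ontology:

```python
# Illustrative triple store linking the product, process, and production facets
triples = {
    ("Bracket", "requiresProcess", "Milling"),        # product -> process
    ("Milling", "providedBy", "Machine_A"),           # process -> production
    ("Machine_A", "hasCapability", "5-axis milling"), # machine capability
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# Which machine can manufacture the product? Follow the process link.
process = query("Bracket", "requiresProcess")[0][2]
print(query(process, "providedBy"))
```

The point of the interconnection is visible in the last query: answering a production question about a product requires traversing predicates that bridge the three ontologies.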