In this paper, a 256-channel, 10-GHz arrayed waveguide grating (AWG) demultiplexer for ultra-dense wavelength division multiplexing was designed using an in-house developed tool called AWG-Parameters. The AWG demultiplexer was designed for a central wavelength of 1550 nm, and the structure was simulated in the PHASAR tool from Optiwave. Two different AWG designs were developed, and the influence of the design parameters on the AWG performance was studied.
This paper presents concepts of optical splitting using three-dimensional (3D) optical splitters based on the multimode interference (MMI) principle. It focuses on the design, fabrication, and characterization of a 3D MMI splitter with formed output waveguides, based on IP-Dip polymer, for direct application on an optical fiber. The MMI optical splitter was simulated and fabricated using a direct laser writing process. The output characteristics were measured with a highly resolved near-field scanning optical microscope (NSOM) and compared with those of a 3D MMI splitter without output waveguides.
In this paper, we propose and simulate a new type of three-dimensional (3D) optical splitter based on multimode interference (MMI) for a wavelength of 1550 nm. The splitter was designed on a square base with dimensions of 20 × 20 µm², using the IP-Dip polymer as a standard material for 3D laser lithography. We present the optical field distribution in the proposed MMI splitter and the possibility of integrating it on an optical fiber. The design is aimed at fabrication by 3D laser lithography in forthcoming experiments.
In this paper, we document optical splitters based on the Y-branch and on the MMI splitting principle. The 1×4 Y-branch splitter was prepared in 3D geometry fully from polymer, approaching single-mode transmission at 1550 nm. We also prepared a new concept of a 1×4 MMI optical splitter. The optical properties and the character of the output optical field of both splitters were measured by a near-field scanning optical microscope. The splitting properties and optical outputs of both splitters are very promising and increase the attractiveness of the presented 3D technology and polymers.
We present a new concept of a 3D polymer-based 1 × 4 beam splitter for wavelength splitting around 1550 nm. The beam splitter consists of IP-Dip polymer as a core and polydimethylsiloxane (PDMS) Sylgard 184 as a cladding. The splitter was designed and simulated with two different photonics tools, and the results show a high splitting ratio for single-mode and multi-mode operation with low losses. Based on the simulations, a 3D beam splitter was designed and realized using a direct laser writing (DLW) process, with adaptation for coupling to a standard single-mode fiber. With respect to the technological limits, a multi-mode splitter with a core of (4 × 4) μm² was designed and fabricated together with a supporting, mechanically stable construction. The splitting properties were investigated by intensity monitoring of the splitter outputs using optical microscopy and near-field scanning optical microscopy. In the development phase, the optical performance of the fabricated beam splitter was examined by splitting short visible wavelengths using a red light-emitting diode. Finally, the splitting of 1550 nm laser light was studied in detail by near-field measurements and compared with the simulated results. Nearly single-mode operation was observed, and the shape of the propagating mode and the mode field diameter were well resolved.
Power plant operators increasingly rely on predictive models to diagnose and monitor their systems. Data-driven prediction models are generally simple and can have high precision, making them superior to physics-based or knowledge-based models, especially for complex systems like thermal power plants. However, the accuracy of data-driven predictions depends on (1) the quality of the dataset, (2) a suitable selection of sensor signals, and (3) an appropriate selection of the training period. In some instances, redundancies and irrelevant sensors may even reduce the prediction quality.
We investigate ideal configurations for predicting the live steam production of a solid fuel-burning thermal power plant in the pulp and paper industry for different modes of operation. To this end, we benchmark four machine learning algorithms on two feature sets and two training sets to predict steam production. Our results indicate that with the best possible configuration, a coefficient of determination of R² = 0.95 and a mean absolute error of MAE = 1.2 t/h are reached, at an average steam production of 35.1 t/h. On average, using a dynamic dataset for training lowers the MAE by 32% compared to a static dataset. A feature set based on expert knowledge lowers the MAE by an additional 32% compared to a simple feature set representing the fuel inputs. We conclude that, based on the static training set and the basic feature set, machine learning algorithms can identify long-term changes. When using a dynamic dataset, the performance parameters of thermal power plants are predicted with high accuracy, which allows short-term problems to be detected.
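The two reported scores can be reproduced from predictions alone. The following is a minimal sketch of both metrics, fitted on synthetic, made-up data (a single hypothetical fuel-input feature), not on the plant dataset from the study:

```python
import numpy as np

def r2_score(y_true, y_pred):
    # coefficient of determination: 1 minus residual over total variance
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(y_true, y_pred):
    # mean absolute error, here in t/h of steam production
    return np.mean(np.abs(y_true - y_pred))

# synthetic placeholder data: steam production linear in one fuel input
rng = np.random.default_rng(0)
fuel = rng.uniform(10, 50, 200)
steam = 0.7 * fuel + 5 + rng.normal(0, 1.0, 200)

# an ordinary least-squares fit stands in for the benchmarked learners
A = np.column_stack([fuel, np.ones_like(fuel)])
coef, *_ = np.linalg.lstsq(A, steam, rcond=None)
pred = A @ coef
```

On this toy data the fit reaches an R² close to 1, mirroring only the form, not the values, of the reported results.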
This thesis aims to support the product development process. To this end, an approach for the automated solution-space exploration of formally predefined design automation tasks, holding the product knowledge of engineers, is developed, implemented as a prototype, and evaluated. For this reason, a classification of product development tasks related to the representation of the mathematical model is evaluated based on the parameters defined in this thesis. In a second step, the mathematical model is to be solved; a Solver able to handle the given problem class is identified.
Due to the context of this work, the Systems Modeling Language (SysML) is chosen for the product knowledge formalisation. In the next step, the given SysML model has to be translated into an object-oriented model. This translation is implemented by extracting information from a ".xml" file using the XML Metadata Interchange (XMI) standard. The information contained in the file is structured using the Unified Modelling Language (UML) profile for SysML. Afterwards, a mathematical model in the MiniZinc language is generated. MiniZinc is a mathematical modelling language interpretable by many different Solvers. The generated mathematical model is classified according to the Variable Type and the Linearity of its Constraints and Objective. The output is stored in a ".txt" file.
To evaluate the functionality of the prototype, the time consumption of the different procedures performed is measured. This data shows that models containing Continuous Variables need a longer time to be classified and optimised. Another observation is that the transformation into an object-oriented model and the translation of this model into a mathematical representation depend on the number of SysML model elements. Using MiniZinc resulted in the restriction that models which use non-linear functions and Boolean Expressions cannot be solved, because the implementation of non-linear Solvers in MiniZinc is still in the development phase. An investigation of the optimality of the results provided by the Solvers was left for further work.
The Digital Factory Vorarlberg is the youngest research center of Vorarlberg University of Applied Sciences. In the lab of the research center, a research and learning factory has been established for educating students and employees of industrial partners. Showcases and best-practice scenarios for various topics of digitalization in the manufacturing industry are demonstrated. In addition, novel methods and technologies for digital production, cloud-based manufacturing, data analytics, IT and OT security, or digital twins are being developed. The factory comprises only a minimal core of logistics and fabrication processes to guarantee manageability within an academic setup. As a product, fidget spinners are fabricated. A webshop allows customers to individually design their products and directly place orders in the factory. A centralized SCADA system is the core data hub of the factory. Various data analytics tools and methods and a novel database for IoT applications are connected to the SCADA system. As an alternative to on-premise manufacturing, orders can be pushed into a cloud-based manufacturing platform, which has been developed at the Digital Factory. A broker system allows fabrication in distributed facilities and offers various optimization services. Concepts such as outsourcing product configuration to customers or new types of engineering services in cloud-based manufacturing can be explored and demonstrated. In this paper, we present the basic concept of the Digital Factory Vorarlberg, as well as some of the newly developed topics.
Flexibility estimation is the first step necessary to incorporate building energy systems into demand side management programs. We extend a known method for temporal flexibility estimation from the literature to a real-world residential heat pump system, based solely on historical cloud data. The proposed method relies on robust simplifications and estimates employing process knowledge, energy balances, and manufacturer's information. The resulting forced and delayed temporal flexibility, covering both domestic hot water and space heating demands as constraints, allows a flexibility range for the heat pump system to be derived. The resulting temporal flexibility lay between 24 minutes and 6 hours for forced and delayed flexibility, respectively. This range provides new insights into the system's behaviour and is the basis for estimating power and energy flexibility, the first step necessary to incorporate building energy systems into demand side management programs.
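Forced and delayed temporal flexibility of this kind follow from a simple energy balance on the thermal storage. The sketch below illustrates that balance; the storage bounds, powers, and demand figures are made-up placeholders, not values from the studied system:

```python
def temporal_flexibility(e_now, e_min, e_max, p_charge, p_demand):
    """Forced/delayed flexibility (hours) of a thermal storage, per energy balance."""
    # forced: how long the heat pump can be forced on before the storage is
    # full, charging at p_charge while p_demand is drawn simultaneously
    forced = (e_max - e_now) / max(p_charge - p_demand, 1e-9)
    # delayed: how long heating can be postponed before demand alone drains
    # the storage down to its comfort limit e_min
    delayed = (e_now - e_min) / max(p_demand, 1e-9)
    return forced, delayed

# placeholder figures: 8 kWh storage at 6 kWh, 6 kW thermal output, 2 kW demand
forced_h, delayed_h = temporal_flexibility(e_now=6, e_min=2, e_max=8,
                                           p_charge=6, p_demand=2)
```

With these placeholder numbers the system could be forced on for half an hour or delayed for two hours, which is the kind of flexibility range the abstract reports.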
A covariance matrix self-adaptation evolution strategy for optimization under linear constraints
(2018)
Purpose – The purpose of this study is to explore the exogenous and endogenous drivers of the high-growth of Unicorn start-ups along their life cycle, with a particular focus on Unicorns in the fintech industry.
Design/methodology/approach – The study employs an explorative longitudinal analysis with a matched pair of two Unicorn start-ups with similar antecedent features to understand the drivers holistically over the longer term.
Findings – High-growth patterns over the longer term result from a combined industry- and company-life-cycle perspective. Drivers and growth patterns vary significantly according to the time of entry into the industry and its development status. The findings are systematised within a set of propositions to be tested in future research.
Research limitations/implications – The limitations lie in the empirical evidence, as the analysis is limited to one matched pair. The revealed drivers of Unicorns' long-term growth might encourage future research to investigate these drivers further on a larger scale.
Practical implications – The study offers practical recommendations for start-ups with high-growth ambitions and advice to policy makers regarding the development of tailor-made support programs.
Originality/value – The study significantly extends extant work on growth and high-growth by examining endogenous and exogenous triggers over time and by linking the Unicorn-life cycle to the industry life cycle, an approach which has, to the best of the authors’ knowledge, not yet been applied.
A modified matrix adaptation evolution strategy with restarts for constrained real-world problems
(2020)
In combination with successful constraint handling techniques, a Matrix Adaptation Evolution Strategy (MA-ES) variant (the εMAg-ES) turned out to be a competitive algorithm on the constrained optimization problems proposed for the CEC 2018 competition on constrained single objective real-parameter optimization. A subsequent analysis points to additional potential in terms of robustness and solution quality. The consideration of a restart scheme and adjustments in the constraint handling techniques put this into effect and simplify the configuration. The resulting BP-εMAg-ES algorithm is applied to the constrained problems proposed for the IEEE CEC 2020 competition on Real-World Single-Objective Constrained optimization. The novel MA-ES variant realizes improvements over the original εMAg-ES in terms of feasibility and effectiveness on many of the real-world benchmarks. The BP-εMAg-ES realizes a feasibility rate of 100% on 44 out of 57 real-world problems and improves the best-known solution in 5 cases.
A multi-recombinative active matrix adaptation evolution strategy for constrained optimization
(2019)
A novel calorimetric technique for the analysis of gas-releasing endothermic dissociation reactions
(2020)
Synthetic polymers, such as polyamide (PA), inherently possess a moderate number of surface functionalities compared to natural polymers, which negatively impacts the uniformity of metallic coatings obtained through wet-chemical methods like electroless plating. The paper presents the use of a siloxane interlayer, formed from the condensation of the hydrolyzed 3-triethoxysilylpropyl succinic anhydride (TESPSA) precursor, as a strategy to modify the surface properties of polyamide 6.6 (PA66) fabrics and improve the uniformity of the copper surface coating. The application of the siloxane intermediate coating demonstrates a significant improvement in electrical conductivity, up to 20 times higher than for fabrics without the interlayer. The morphology of the coatings was investigated using scanning electron microscopy (SEM) and laser confocal scanning microscopy (LSM). In addition, dye adsorption, flexural rigidity, air permeability, and contact angle measurements were conducted to monitor the change in the PA66 properties after the siloxane functionalization.
A quantum-light source that delivers photons with a high brightness and a high degree of entanglement is fundamental for the development of efficient entanglement-based quantum-key distribution systems. Among all possible candidates, epitaxial quantum dots are currently emerging as one of the brightest sources of highly entangled photons. However, the optimization of both brightness and entanglement currently requires different technologies that are difficult to combine in a scalable manner. In this work, we overcome this challenge by developing a novel device consisting of a quantum dot embedded in a circular Bragg resonator, in turn, integrated onto a micromachined piezoelectric actuator. The resonator engineers the light-matter interaction to empower extraction efficiencies up to 0.69(4). Simultaneously, the actuator manipulates strain fields that tune the quantum dot for the generation of entangled photons with fidelities up to 0.96(1). This hybrid technology has the potential to overcome the limitations of the key rates that plague current approaches to entanglement-based quantum key distribution and entanglement-based quantum networks.
In engineering design, optimization methods are frequently used to improve the initial design of a product. However, the selection of an appropriate method is challenging since many methods exist, especially in the case of simulation-based optimization. This paper proposes a systematic procedure to support this selection process. Building upon quality function deployment, end-user and design use case requirements can be systematically taken into account via a decision matrix. The design and construction of the decision matrix are explained in detail. The proposed procedure is validated on two engineering optimization problems arising within the design of box-type boom cranes. For each problem, the problem statement and the respectively applied optimization methods are explained in detail. The results obtained by optimization validate the use of optimization approaches within the design process. The application of the decision matrix shows the successful incorporation of customer requirements into the algorithm selection.
A systemic-constructivist approach to the facilitation and debriefing of simulations and games
(2010)
Purpose: The purpose of this qualitative phenomenological study is to explore the lived experiences of self-initiated expatriates (SIEs) prior to and during acculturation to life in a smaller periphery region such as Vorarlberg, Austria. By providing insights into their lived experience, this research aims to fill the gaps of missing information on the motivators, success factors for adjustment, issues, stressors, and more that SIEs experience when adjusting: specifically, which items promote adjustment and which hinder it.
Findings: The study develops a better understanding of how and which motivational factors lead to expatriation, and finds that opportunities arise by chance. During acculturation, language factors (dialect) and cultural differences act as stressors, while social support, organizational support, and learning the language act as promoters of acculturation.
Further research could include other ethnicities, SIEs moving from developed to developing countries, and adjustment in regions with a dialect versus without one.
Keywords: self-initiated expatriates, expatriation, acculturation, adjustment, promoting acculturation, hindering acculturation.
Issues with professional conduct and discrimination against Lesbian, Gay, Bisexual, Transgender (LGBT+) people in health and social care continue to exist in most EU countries and worldwide.
The project IENE9, titled “Developing a culturally competent and compassionate LGBT+ curriculum in health and social care education”, aims to enable teachers/trainers of theory and practice to enhance their skills regarding LGBT+ issues and to develop teaching tools that support the inclusion of LGBT+ issues within health and social care curricula. The new culturally competent and compassionate LGBT+ curriculum will be delivered through a Massive Open Online Course (MOOC) aimed at health and social care workers, professionals, and learners across Europe and worldwide.
We have identified educational policies and guidelines at institutions teaching health and social care, which were taken into account in developing the learning/teaching resources. The MOOC will be an innovative training model based on the Papadopoulos (2014) model of “Culturally Competent Compassion”. The module provides a logical and easy-to-follow structure based on its four constructs: 'Culturally Aware and Compassionate Learning', 'Culturally Knowledgeable and Compassionate Learning', 'Culturally Sensitive and Compassionate Learning', and 'Culturally Competent and Compassionate Learning'.
Specific training may result in better knowledge and skills of the health and social care workforce, which helps to reduce inequalities and communication with LGBT+ people, as well as diminishing the feelings of stigma or discrimination experienced.
Active demand side management with domestic hot water heaters using binary integer programming
(2013)
A rapid change to remote work at the beginning of the Covid-19 pandemic allowed many organizations to roll out new collaboration platforms and rapidly digitalize their workflows and processes in order to continue operation. This sudden shift to remote work revealed to employees the potential benefits of working remotely, in the form of additional flexibility, and also showed the challenges and barriers organizations could face by introducing such a strategy. This thesis aims to uncover the key considerations that organizations in the industrial sector in Vorarlberg need to take into account when establishing a remote work strategy. According to the results of the research, the Covid-19 pandemic was a paradigm shift for the interviewed decision makers in how they thought about remote work and how they transformed their respective organizations to continue to operate. After the initial phase of Covid-19 restrictions, organizations started to experiment with remote work strategies of their own, based on their past experiences. For now, most of the interviewed organizations already use different remote work concepts and evaluate which one best suits their needs. The main considerations for introducing a remote work strategy are to be an attractive employer and to stay ahead in the search for new talent. Furthermore, by introducing a remote work strategy, organizations need to change their rules of collaboration, adapt their core values to fit a remote workplace, and introduce collaboration platforms designed to support a remote workforce.
Creating a schedule to perform certain actions in a real-world environment typically involves multiple types of uncertainty. To create a plan that is robust towards uncertainties, it must stay flexible while attempting to be reliable and as close to optimal as possible. A plan is reliable if an adjustment to accommodate a new requirement causes only a few disruptions. The system needs to be able to adapt the schedule if unforeseen circumstances make planned actions impossible, or if an unlikely event would enable the system to follow a better path. To handle uncertainties, the methods used need to be dynamic and adaptive. The planning algorithms must be able to re-schedule planned actions and need to adapt the previously created plan to accommodate new requirements without causing critical disruptions to other required actions.
Adaptive indirect field-oriented control of an induction machine in the armature control range
(2012)
Scrum has been a prominent project management framework for managing software development projects. The scrum team embodies values such as commitment, focus, respect, courage, and openness to develop trust, which serves as the foundation of the scrum framework. However, in recent years, scrum teams are shifting towards a work-from-home environment which is relatively new to most of them and known to present various challenges. Looking at the benefits of adhering to scrum values, this study aims to investigate the challenges scrum teams experience in adhering to scrum values while operating virtually, as well as to explore practical strategies to overcome the identified challenges, particularly during the storming stage of team development. This research employed a qualitative methodology using semi-structured interviews with scrum team members who have experience working in a virtual environment. Through qualitative content analysis of semi-structured interviews, this research identifies significant challenges within five main categories: communication, collaboration, interpersonal dynamics, the virtual work environment, and personal workspace issues. However, beyond the challenges, the study reveals practical strategies as well for successful team dynamics and higher efficiency. The strategies derived from team members' experiences are categorized into six categories: enhanced meeting management, leveraging in-person engagements, optimizing tools & technology, effective communication strategies, team-building, and nurturing a positive work culture.
Traditional power grids are mainly based on centralized power generation and subsequent distribution. The increasing penetration of distributed renewable energy sources and the growing number of electrical loads is creating difficulties in balancing supply and demand and threatens the secure and efficient operation of power grids. At the same time, households hold an increasing amount of flexibility, which can be exploited by demand-side management to decrease customer cost and support grid operation. Compared to the collection of individual flexibilities, aggregation reduces optimization complexity, protects households’ privacy, and lowers the communication effort. In mathematical terms, each flexibility is modeled by a set of power profiles, and the aggregated flexibility is modeled by the Minkowski sum of individual flexibilities. As the exact Minkowski sum calculation is generally computationally prohibitive, various approximations can be found in the literature. The main contribution of this paper is a comparative evaluation of several approximation algorithms in terms of novel quality criteria, computational complexity, and communication effort using realistic data. Furthermore, we investigate the dependence of selected comparison criteria on the time horizon length and on the number of households. Our results indicate that none of the algorithms perform satisfactorily in all categories. Hence, we provide guidelines on the application-dependent algorithm choice. Moreover, we demonstrate a major drawback of some inner approximations, namely that they may lead to situations in which not using the flexibility is impossible, which may be suboptimal in certain situations.
Alleviating the curse of dimensionality in Minkowski sum approximations of storage flexibility
(2023)
Many real-world applications require the joint optimization of a large number of flexible devices over some time horizon. The flexibility of multiple batteries, thermostatically controlled loads, or electric vehicles, e.g., can be used to support grid operations and to reduce operation costs. Using piecewise constant power values, the flexibility of each device over d time periods can be described as a polytopic subset in power space. The aggregated flexibility is given by the Minkowski sum of these polytopes. As the computation of Minkowski sums is in general demanding, several approximations have been proposed in the literature. Yet, their application potential is often objective-dependent and limited by the curse of dimensionality. In this paper, we show that up to 2d vertices of each polytope can be computed efficiently and that the convex hull of their sums provides a computationally efficient inner approximation of the Minkowski sum. Via an extensive simulation study, we illustrate that our approach outperforms ten state-of-the-art inner approximations in terms of computational complexity and accuracy for different objectives. Moreover, we propose an efficient disaggregation method applicable to any vertex-based approximation. The proposed methods provide an efficient means to aggregate and to disaggregate typical battery storages in quarter-hourly periods over an entire day with reasonable accuracy for aggregated cost and for peak power optimization.
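The vertex-based construction can be sketched in a few lines. This is an illustrative reading of the approach (axis-aligned support directions over generic vertex lists), not the authors' implementation:

```python
import numpy as np

def support_vertex(vertices, direction):
    # vertex of one polytope maximizing the dot product with `direction`
    return vertices[np.argmax(vertices @ direction)]

def inner_approx_points(polytopes, d):
    # The 2d directions ±e_i pick out up to 2d vertices per polytope.
    # Summing support vertices taken in the SAME direction yields a point
    # that lies in the exact Minkowski sum, so the convex hull of the 2d
    # summed points is an inner approximation of the aggregated flexibility.
    directions = np.vstack([np.eye(d), -np.eye(d)])
    return np.array([
        sum(support_vertex(p, direction) for p in polytopes)
        for direction in directions
    ])

# toy example: two identical unit boxes in d = 2 power periods
square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
points = inner_approx_points([square, square], d=2)
```

Every returned point is a feasible aggregated power profile; taking their convex hull (e.g. with `scipy.spatial.ConvexHull`) gives the polytope used for aggregate optimization, and disaggregation maps each hull point back to the per-device support vertices that produced it.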
With cloud computing and multi-core CPUs, parallel computing resources are becoming more and more affordable and commonly available. Parallel programming should be just as easily accessible for everyone. Unfortunately, existing frameworks and systems are powerful but often very complex to use for anyone who lacks knowledge of the underlying concepts. This paper introduces a software framework and execution environment whose objective is to provide a system that is easily usable for everyone who could benefit from parallel computing. Some real-world examples are presented with an explanation of all the steps that are necessary for computing in a parallel and distributed manner.
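As an illustration of the kind of low-barrier API such a system targets, the sketch below uses only Python's standard library, not the framework from the paper: the caller maps work over a pool without touching threads or synchronization directly.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(task_id):
    # stand-in for an expensive, independent unit of work
    return task_id * task_id

# the pool handles scheduling and result collection; the calling code
# needs no knowledge of the underlying concurrency primitives
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, range(8)))
```

The same `map`-style interface scales conceptually from threads on one machine to distributed workers, which is the accessibility argument the abstract makes.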
An electrochemical study with three redox substances on a carbon based nanogap electrode array
(2020)
Bubble column humidifiers (BCHs) are frequently used for the humidification of air in various water treatment applications. A potential but not yet profoundly investigated application of such devices is the treatment of oily wastewater. To evaluate this application, the accumulation of an oil-water emulsion using a BCH is experimentally analyzed. The amount of evaporating water vapor can be evaluated by measuring the humidity ratio of the outlet air. However, humidity measurements are difficult in close-to-saturated conditions, as the formation of liquid droplets on the sensor impacts the measurement accuracy. We use a heating section after the humidifier so that no liquid droplets form on the sensor, which enables a more accurate humidity measurement. Two batch measurement runs are conducted with (1) tap water and (2) an oil-water emulsion as the respective liquid phase. The humidity measurement in high-humidity conditions is highly accurate, with an error margin below 3%, and can be used to predict the oil concentration of the remaining liquid during operation. The measured humidity ratio corresponds to the removed amount of water vapor both for tap water and for the accumulation of an oil-water emulsion. Our measurements show that the residual water content in the oil-water emulsion is below 4%.
Vast amounts of oily wastewater are byproducts of the petrochemical and shipping industries and, to this day, are frequently discharged into water bodies either without treatment or after insufficient treatment. To alleviate the resulting pollution, water treatment processes are in great demand. Bubble column humidifiers (BCHs), as part of humidification–dehumidification systems, are predestined for such a task, since they are insensitive to different feed liquids, simple in design, and have low maintenance requirements. While humidification in a bubble column has been investigated extensively for desalination, a systematic investigation of oily wastewater treatment is missing in the literature. We filled this gap by analyzing the treatment of an oil–water emulsion experimentally to derive recommendations for the future design and operation of BCHs. Our humidity measurements indicate that the air stream is always saturated after humidification for a liquid height of only 10 cm. A residual water mass fraction of 3.5 wt% is measured after a batch run of six hours. Furthermore, continuous measurements show that an increase in oil mass fraction leads to a decrease in system productivity, especially for high oil mass fractions. This decrease is caused by the heterogeneity of the liquid temperature profile. A lower liquid height mitigates this heterogeneity, thereby decreasing the heat demand and improving the overall efficiency. The oil content of the produced condensate is below 15 ppm, allowing discharge into various water bodies. The results of our systematic investigation prove suitability and indicate a strong future potential for the use of BCHs in oily wastewater treatment.
An implementation approach of the gap navigation tree using the TurtleBot 3 Burger and ROS Kinetic
(2020)
The creation of a spatial model of the environment is an important task that allows the planning of routes through the environment. Depending on the number of sensor inputs, different ways of creating a spatial environment model are possible. This thesis introduces an implementation approach for the Gap Navigation Tree, which is aimed at robots that have a limited number of sensors. The Gap Navigation Tree is a tree structure based on depth discontinuities, constructed from the data of a laser scanner. Using the simulated TurtleBot 3 Burger and ROS Kinetic, a framework is created that implements the theory of the Gap Navigation Tree. The framework is structured in a way that allows using different robots with different sensor types, by separating the detection of depth discontinuities from the building and updating of the Gap Navigation Tree.
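The first stage of the pipeline described above, detecting depth discontinuities in a laser scan, can be sketched as follows; the threshold and the scan data are illustrative placeholders, not values from the thesis:

```python
import numpy as np

def detect_gaps(ranges, threshold=0.5):
    # indices where consecutive range readings jump by more than `threshold`
    # metres; each jump is a candidate depth discontinuity ("gap") that the
    # separate tree-building stage would turn into a Gap Navigation Tree node
    jumps = np.abs(np.diff(np.asarray(ranges, dtype=float)))
    return np.flatnonzero(jumps > threshold)

# toy scan: a wall at 1 m with an opening at 3 m spanning beams 3 and 4
gaps = detect_gaps([1.0, 1.0, 1.0, 3.0, 3.0, 1.0])
```

Keeping this detector separate from the tree construction is exactly what lets the framework swap in robots with different sensor types.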
Skiing is one of the most popular winter sports in the world, especially in the Alps. As skiers enjoy their time on the slopes, the most annoying thing that can happen is a long waiting time at a lift. Unfortunately, due to climate change, this happens more regularly, because smaller skiing areas at lower altitudes have to close and the number of good skiing days decreases as well. This leads to an increase in the number of skiers in the remaining skiing areas, which inevitably leads to longer waiting times and dissatisfied skiers. To prevent this from happening, the operators of the skiing areas have to manage skier flow and distribution, and what better way to analyse the current situation and possible changes than by simulating the whole area. A simulation has the advantage of being flexible with regard to time as well as configuration: simulating a skiing day in real time to look in detail at the behaviour of a single skier and how they move through the area; focusing on the whole area to find out when and where queues form throughout the day, by speeding up time and simulating the day in only seconds; or even simulating a scenario where part of the area is closed and skiers cannot take specific lifts due to a technical error, or specific slopes due to too little snow. By simulating and analysing all these scenarios, the experts of the skiing area not only gain valuable statistical information about the area but can also simulate changes to the system, such as crowd flow control or an increase or decrease in the capacity of a lift. The simulation built in the context of this work for the skiing area of Mellau shows all those applications, but can also be used as a basis for further improvements of the skiing area or be expanded to other areas such as Damüls. The simulation was implemented in the AnyLogic simulation environment, in which the statistical evaluation was also performed.
This master’s thesis provides an overview of a more efficient, future-oriented living concept in Dornbirn, Austria. The use of a combined heat and power unit (CHP), in combination with thermal storage, as a heating system is specifically investigated. In order to make this heating system more attractive for the consumer, the sale of the electricity generated by the CHP is considered. The more efficient use of energy for heating increases the attractiveness through a minimisation of the living space. This master’s thesis aims to draw attention to the issue and to achieve a rethinking in the planning of future living space. For the research and elaboration of this thesis, statistics and trustworthy literature were used, and physical modelling was applied. This master’s thesis can be assigned to the fields of energy technology, mechatronics, architecture and civil engineering. It is intended as a contribution for students, researchers, and other interested persons in these sectors.
Analysis of the (μ/μI,λ)-CSA-ES with repair by projection applied to a conically constrained problem
(2019)
In contrast to fossil energy sources, the supply by renewable energy sources like wind and photovoltaics cannot be controlled. Therefore, flexibilities on the demand side of the electric power grid, like electro-chemical energy storage systems, are used increasingly to match electric supply and demand at all times. To control those flexibilities, we consider two algorithms that both lead to linear programming problems. These are solved autonomously on the demand side, i.e., by household computers. In the classic approach, an energy price signal is sent by the electric utility to the households, which, in turn, optimize the cost of consumption within their constraints. Instead of an energy price signal, we claim that an appropriate power signal, tracked in the L1-norm as closely as possible by the household, has favorable characteristics. We argue that an interior point of the household’s feasibility region is never an optimal price-based point but can result in an L1-norm optimal point. Thus, price signals cannot parametrize the complete feasibility region, which may not lead to an optimal allocation of consumption. We compare the price and power tracking algorithms over a year on the basis of one-day optimizations regarding different information settings and using a large data set of daily household load profiles. The computational task constitutes an embarrassingly parallel problem. To this end, the performance of the two parallel computation frameworks DEF [1] and Ray [2] is investigated. The Ray framework is used to run the Python applications locally on several cores. With the DEF framework we execute our Python routines in parallel in a cloud. All in all, the results provide an understanding of when one computation framework and autonomous algorithm will outperform the other.
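The L1-norm tracking problem can be cast as a linear program with the standard auxiliary-variable reformulation: minimizing ||x - p||_1 becomes minimizing the sum of t subject to x - p <= t and p - x <= t. The sketch below builds the constraint matrices for a toy household whose only constraints (a power bound and a fixed daily energy demand) are illustrative assumptions:

```python
# Sketch: L1-norm tracking of a power signal p as a linear program over
# the stacked variable z = [x, t] (consumption x and auxiliary bounds t),
# in the (c, A_ub, b_ub, A_eq, b_eq) convention used by LP solvers.

def l1_tracking_lp(p, p_max, energy):
    """Build LP data so that minimizing c @ z yields the consumption x
    closest to p in L1-norm within the household's constraints."""
    n = len(p)
    c = [0.0] * n + [1.0] * n            # minimize sum of t
    A_ub, b_ub = [], []
    for i in range(n):                   # x_i - t_i <= p_i
        row = [0.0] * (2 * n)
        row[i], row[n + i] = 1.0, -1.0
        A_ub.append(row)
        b_ub.append(p[i])
    for i in range(n):                   # -x_i - t_i <= -p_i
        row = [0.0] * (2 * n)
        row[i], row[n + i] = -1.0, -1.0
        A_ub.append(row)
        b_ub.append(-p[i])
    for i in range(n):                   # x_i <= p_max (power bound)
        row = [0.0] * (2 * n)
        row[i] = 1.0
        A_ub.append(row)
        b_ub.append(p_max)
    A_eq = [[1.0] * n + [0.0] * n]       # total energy must match demand
    b_eq = [energy]
    return c, A_ub, b_ub, A_eq, b_eq

c, A_ub, b_ub, A_eq, b_eq = l1_tracking_lp([1.0, 2.0, 0.5], p_max=3.0, energy=3.5)
```

The matrices can then be handed to any LP solver; an L1-optimal point may lie in the interior of the feasible region, which is exactly the situation the abstract argues a price signal cannot produce.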
Activation of heat pump flexibilities is a viable solution to support balancing the grid via Demand Side Management measures and to fulfill the need for flexibility options. Aggregators, as the interface between prosumers, distribution system operators and balance responsible parties, face the challenge, due to data privacy and technical restrictions, of transforming prosumer information into aggregated available flexibility to enable trading thereof. However, the literature lacks a generic, applicable and widely accepted flexibility estimation method for heat pumps which incorporates reduced sensor and system information as well as system- and demand-dependent behaviour. In this paper, we adapt and extend a method from the literature by incorporating domain knowledge to overcome reduced sensor and system information. We apply data from five real-world heat pump systems, distinguish operation modes, estimate the power and energy flexibility of each single heat pump system, prove the transferability of the method, and aggregate the available flexibilities to showcase a small heat pump pool as a proof of concept.
The demand for managing data across multiple domains for product creation is steadily increasing. Model-Driven Systems Engineering (MDSE) is a solution for this problem. With MDSE, domain-specific data is formalized inside a model with a custom language, for example, the Unified Modelling Language (UML). These models can be created with custom editors, and specialized domains can be integrated with extensions to UML, e.g., the Systems Modeling Language (SysML). The most dominant editor in the open-source sector is Eclipse Papyrus SysML 1.6 (Papyrus), an editor to create SysML diagrams for MDSE.
In the pursuit of creating a model and diagrams, the editor does not support the user appropriately or even hinders them. Therefore, paradigms from the diagram modelling and Human Computer Interaction (HCI) domains, as well as perceptual and design theory, are applied to create an editor prototype from scratch. The changes fall into the categories of hierarchy, aid in the diagram composition, and navigation. The prototype is compared with Papyrus in a user test to determine if the changes have the effect of improving usability.
The study involved 10 participants with different knowledge levels of UML, ranging from beginners to experts. Each participant was tested on a navigation and modelling task in both the newly created editor, named Modelling Studio, and Papyrus. The study was evaluated through a questionnaire and analysis of the diagrams produced by the tasks.
The findings are that Modelling Studio’s changes to the hierarchical elements improved their rating. Furthermore, aid for diagram composition could be reinforced by changes to the alignment helper tool and adjustments to the default arrow behaviour of a diagram. Lastly, model navigation adjustments improve a link’s visibility and the rating of a specialized link (best practice). The introduction of breadcrumbs had limited success in improving navigation usability. The prototype deployed a broad spectrum of changes that already showed improvements, which can, however, be refined and tested more thoroughly.
Application of various tools to design, simulate and evaluate optical demultiplexers based on AWG
(2015)
Zeros can cause many issues in data analysis, and dealing with them requires specialized procedures. We differentiate between rounded zeros, structural zeros and missing values. Rounded zeros occur when the true value of a variable is hidden because of a detection limit in whatever mechanism was used to acquire the data. Structural zeros are values which are truly zero, often coming about due to a hidden mechanism separate from the one which generates values greater than 0. Missing values are values that are completely missing for unknown or known reasons. This thesis outlines various methods for dealing with different kinds of zeros in different contexts. Many of these methods are very specific in their ideal use case. They are separated based on which kind of zero they are intended for and whether they are better suited for compositional or for standard data.
For rounded zeros we impute the zeros with an estimated value below the detection limit. The author describes multiplicative replacement, a simple procedure that imputes values at a fixed fraction of the detection limit. As a more advanced technique, the author describes Kaplan Meier smoothing spline replacement, which interpolates a spline on a Kaplan Meier curve and uses the spline below the detection limit to impute values in a more natural distribution. Rounded zeros cannot be imputed with the same techniques that would be used for regular missing values, since there is more information available on the true value of a rounded zero than there would be for a regular missing value.
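A minimal sketch of the multiplicative replacement idea for a composition closed to 1: each rounded zero is set to a fixed fraction of its detection limit and the non-zero parts are rescaled so the whole still sums to 1. The 0.65 fraction is a common illustrative choice, not a value prescribed by the thesis:

```python
# Multiplicative replacement for rounded zeros in compositional data.
# Zeros are imputed below the detection limit; non-zero parts are scaled
# multiplicatively so the composition still sums to 1 afterwards.

def multiplicative_replacement(x, detection_limit, fraction=0.65):
    delta = fraction * detection_limit           # imputed value per zero
    imputed_total = sum(delta for v in x if v == 0)
    # scale the non-zero parts down by the total mass given to the zeros
    return [delta if v == 0 else v * (1 - imputed_total) for v in x]

comp = [0.5, 0.3, 0.2, 0.0]                      # composition summing to 1
result = multiplicative_replacement(comp, detection_limit=0.01)
```

The rescaling step is what distinguishes this from naively pasting in a small constant, which would break the unit-sum constraint of compositional data.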
Structural zeros cannot be imputed, since they are true zeros. Imputing them would falsify their values and produce a value where there should be none. Because of this, we apply modelling techniques that can work around structural zeros and incorporate them. For standard data, the zero-inflated Poisson model is presented. This model utilizes a mixture of a logistic and a Poisson distribution to accurately model data with a large amount of structural zeros. While the Poisson distribution is only applicable to count data, the zero inflation concept can be applied to different kinds of distributions. For compositional data, the zero-adjusted Dirichlet model is introduced. This model mixes Dirichlet distributions for every pattern of zeros found within the data. Non-algorithmic techniques to reduce the amount of structural zeros present are also shown. These techniques are amalgamation, which combines columns with structural zeros into broader descriptors, and classification, which converts columns into categorical values based on whether a structural zero is present.
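The zero-inflated Poisson mixture described above can be written down directly: with probability pi an observation is a structural zero, otherwise it is drawn from a Poisson distribution (which can also produce zeros). The parameter values in this sketch are illustrative:

```python
import math

# Probability mass function of a zero-inflated Poisson (ZIP) model.
# pi  : probability of a structural zero (the "inflation" component)
# lam : rate of the Poisson component, which generates the counts

def zip_pmf(k, pi, lam):
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    # at k == 0, structural zeros and Poisson zeros both contribute
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

p0 = zip_pmf(0, pi=0.3, lam=2.0)  # mass at zero exceeds plain Poisson's
```

Fitting pi and lam to data (e.g. by maximum likelihood) recovers both the share of structural zeros and the count process, which is exactly why the model suits data where the two kinds of zeros are mixed.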
Missing values are values that are completely missing for various known or unknown reasons. Different imputation techniques are introduced. For standard data, MissForest imputation is introduced, which utilizes RandomForest regression to impute mixed-type missing values. Another imputation technique shown utilizes both a genetic algorithm and a neural network, with the genetic algorithm minimizing the reconstruction error of an autoencoder neural network to impute values. In the case of compositional data, knn imputation is presented, which utilizes the knn concept, also found in knn clustering, to impute values based on the closest samples for which a value is available.
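A minimal, from-scratch sketch of the knn imputation idea, with missing entries marked as None and distances computed on the observed columns only; the data and the choice of k are illustrative assumptions:

```python
# k-nearest-neighbour imputation: a missing entry is filled with the mean
# of that column over the k complete rows closest to the incomplete row,
# where closeness is measured only on the columns the row has observed.

def knn_impute(rows, k=2):
    complete = [r for r in rows if None not in r]
    filled = []
    for r in rows:
        if None not in r:
            filled.append(list(r))
            continue
        obs = [j for j, v in enumerate(r) if v is not None]
        nearest = sorted(complete,
                         key=lambda c: sum((c[j] - r[j]) ** 2 for j in obs))[:k]
        filled.append([sum(c[j] for c in nearest) / len(nearest)
                       if v is None else v for j, v in enumerate(r)])
    return filled

data = [[1.0, 2.0], [1.1, 2.1], [5.0, 6.0], [1.05, None]]
out = knn_impute(data, k=2)  # the None is filled from the two nearby rows
```

For compositional data, the thesis's variant would additionally work in a log-ratio representation; this sketch shows only the neighbour-averaging core of the method.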
All of these methods are explained and demonstrated to give readers a guide to finding the suitable methods to use in different scenarios.
The thesis also provides a general guide on dealing with zeros in data, with decision flowcharts and more detailed descriptions for both compositional and standard data being presented. General tips on getting better results when zeros are involved are also given and explained. This general guide was then applied to a dataset to show it in action.
Arrayed Waveguide Gratings
(2016)
Arrayed Waveguide Grating (AWG) is a passive optical component which has found use in a wide range of photonic applications, including telecommunications and medicine. Silica-on-Silicon (SoS) based AWGs use a low refractive-index contrast between the core (waveguide) and the cladding, which leads to some significant advantages such as low propagation losses and low coupling losses between the AWG waveguides and the fibres. Therefore, they are an attractive DWDM solution offering higher channel-count technology and good performance characteristics compared to other methods. However, the very low refractive-index contrast means the bending radius of the waveguides needs to be very large (on the order of several millimeters) and may not fall below a particular critical value if bending losses are to be suppressed. As a result, silica-based waveguide devices usually have a very large size, which limits the integration density of SiO2-based photonic integrated devices. High-index contrast AWGs (such as silicon, silicon nitride or polymer-based waveguide devices) feature a much smaller waveguide size compared to low-index contrast AWGs. Such compact devices can easily be implemented on a chip and have already found use in emerging applications such as optical sensors, devices for DNA diagnostics and optical spectrometers for infrared spectroscopy. In this work, we present the design, simulation, technological verification and applications of both low-index contrast and high-index contrast AWGs. For telecommunication applications, an AWG-MUX/DeMUX with up to 128 channels will be presented. For medical applications, an AWG spectrometer with up to 512 channels will be presented. This work was carried out in the framework of the projects ADOPT No. SK-AT-20-0012, NOVASiN No. SK-AT-20-0017 and AUTOPIC No. APVV-17-0662 from the Slovak Research and Development Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic; projects No. SK 07/2021 and SK 08/2021 from the Austrian Agency for International Cooperation in Education and Research (OeAD-GmbH); and project PASTEL, No. 2020-10-15-001, funded by SAIA.
Nowadays, the area of customer management strives for omni-channel, state-of-the-art CRM concepts that include Artificial Intelligence and the Customer Experience approach. As a result, modern CRM solutions are essential tools for supporting customer processes in Marketing, Sales and Service. AI-driven CRM accelerates sales cycles, improves lead generation and qualification, and enables highly personalized marketing. The focus of this thesis is to present the basics of Customer Relationship Management, to show the latest Gartner insights on CRM and CX, and to demonstrate an AI Business Framework, which introduces the AI use cases that serve as the basis for the expert interviews conducted in an international B2B company. AI will transform CX through a better understanding of customer behavior. The following research questions are answered in this thesis: In which use cases can AI improve Sales and CRM? How can Customer Experience be improved with AI-driven CRM?
Assessing antecedents of entrepreneurial activities of academics at South African universities
(2016)
Companies develop and implement strategies with the aim of addressing the needs of their customers. Acquisition is one market expansion strategy that companies can use to gain new market access and technologies and/or to grow. In recent years, Chinese companies have been active in acquiring companies all over the globe to develop their strategic position. This caused a certain counter-reaction in Europe, as well as in the Swiss media, against cross-border acquisitions of Swiss companies.
Swiss companies, and particularly the Swiss-MEM (Machinery, Electrical and Mechanical) industry, are highly export-oriented, and their value proposition builds on attributes like knowledge, technology, and differentiating products. Among them are many “hidden champions” and niche players who successfully dominate their market segments.
As observed with Chinese companies, Indian companies have also started to become more active outside their domestic markets, increasing their foreign direct investments into Europe, Asia and North America over the last decades. The lasting and good relationship between India and Switzerland might trigger the wish of Indian companies to acquire Swiss, and particularly Swiss-MEM, companies.
This Master’s Thesis assesses how often Indian acquisitions of publicly and privately owned Swiss-MEM companies happen, how attempts at acquisition are perceived by the stakeholders, and what measures Swiss and Swiss-MEM companies can take to protect themselves from being acquired. To approach the research topic, several sub-questions are analysed with the aid of primary and secondary research to assess the situation.
The research topic is of particular interest to the author, since he spent over 20 years working in the Swiss-MEM industry, involved in international affairs and, in recent years, specifically with India. The observation of Chinese acquisition activities and insight into the size and potential of India were the drivers for researching whether India might follow China’s example.
In conclusion, Indian companies are not explicitly targeting Swiss and Swiss-MEM companies, but there are reasons to believe that it would make sense for Indian companies to look into acquiring them. The perception of such acquisitions varies, and there are arguments both for and against them. Companies must take strategic and organisational measures to prevent themselves from becoming the target of an acquisition. However, it is commonly held that the state should not interfere in the market, and a discussion at the political level, planning how to deal with cross-border acquisitions, is needed.
Further areas for research based on this Master’s Thesis could be a review of what the targeting of Swiss and Swiss-MEM companies by Indian companies would look like, as well as the topic of succession planning in the Swiss secondary sector in conjunction with Indian acquisition targeting. A third area of research might be the political aspects involved in the research questions.
The boom of information technology development created a high demand for a skilled labour force in IT occupations. IT professionals install, test, build, repair or maintain hardware and software and can do the job from any location in the world.
Demand for this workforce significantly outstrips the global supply. In a situation of staff shortage, employers have to compete on local and global labour markets. The ability of a firm to attract and retain the best talent becomes a source of sustainable competitive advantage.
The aim of the study is to understand what influences IT professionals’ perception of employer attractiveness the most. This study intends to extend the existing knowledge about employees’ needs and the “psychological contract” concept.
The research was conducted with the participation of four IT and four HR English-speaking experts who live and work in Austria. In the study, the grounded theory approach and descriptive qualitative methods were applied.
The research findings explain which factors influence the decision of IT professionals to join, stay with or leave an employer. The results are discussed in relation to the talent attraction and retention practices of Austrian employers.
Photonic integrated circuits are required for the next generations of coherent terabit optical communications. Software tools for the automated adjustment and coupling of optical fiber arrays to photonic integrated circuits have been developed. The results obtained are needed in the final production phase of the photonic integrated circuit packaging process.
The usage of data gathered for Industry 4.0 and smart factory scenarios continues to be a problem for companies of all sizes. This is often the case because they aim to start with complicated and time-intensive Machine Learning scenarios. This work evaluates the Process Capability Analysis (PCA) as a pragmatic, easy and quick way of leveraging the gathered machine data from the production process. The area of application considered is injection molding. After describing all the required domain knowledge, the paper presents an approach for a continuous analysis of all parts produced. Applying PCA results in multiple key performance indicators that allow for fast and comprehensible process monitoring. The corresponding visualizations provide the quality department with a tool to efficiently choose where and when quality checks need to be performed. The presented case study indicates the benefit of analyzing whole process data instead of considering only selected production samples. The use of machine data enables additional insights to be drawn about process stability and the associated product quality.
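The key performance indicators behind a process capability analysis can be computed directly from the specification limits and the sample statistics: Cp relates the tolerance band to the process spread, and Cpk additionally penalises off-centre processes. The limits and measurements below are illustrative assumptions, not data from the case study:

```python
import statistics

# Process capability indices for a measured quality characteristic.
# Cp  = (USL - LSL) / (6 * sigma)          : potential capability
# Cpk = min(USL - mu, mu - LSL) / (3 * sigma) : actual capability,
#       reduced when the process mean drifts away from the target.

def capability(samples, lsl, usl):
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)     # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

weights = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0]  # e.g. part weight in grams
cp, cpk = capability(weights, lsl=9.0, usl=11.0)
```

Monitoring such indices continuously over all produced parts, as the paper proposes, turns raw machine data into a fast, comprehensible signal of process stability.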
With digitalisation, and the increased connectivity between manufacturing systems emerging in this context, manufacturing is shifting towards decentralised, distributed concepts. Still, in manufacturing scenarios, manual input or augmentation of data is required at system boundaries. Especially in distributed manufacturing environments, like Cloud Manufacturing (CMfg) systems, constant changes to the available manufacturing resources and products pose challenges for establishing connections between them. We propose a feature-oriented representation of concepts, especially from the manufacturing domain, which serves as the basis for (semi-)automatically linking, e.g., manufacturing resources and products. This linking methodology, as well as knowledge inferred using it, is then used to support distributed manufacturing, especially in CMfg environments, and to enhance product development. The concepts and methodologies are to be evaluated in a real-world learning factory.
Load shifting of resistive domestic hot water heaters has been done in Europe since the 1930s, primarily to ease the power supply during peak times. However, the pursued and already commenced energy transition in Europe changes the requirements for the underlying logic. In this more general context, demand side management is considered a viable approach to utilize the flexibility of thermal and electrochemical storage systems for buffering energy generated from renewables. In this work, an autonomous approach for demand side management of energy storage systems is developed, which is based on unidirectional communication of an incentive. This concept is then applied to the specific problem of resistive domestic hot water heaters.
The basic algorithms for optimized operation are developed and evaluated in simulation studies. The optimization problem considered maps the search for the optimal heating schedule while ensuring the defined temperature limits: firstly, a maximum, defined by the hysteresis set point temperature; secondly, during hot water draw-offs, the outlet temperature should not fall below a set minimum. To establish this, the time series of hot water usage has to be predicted.
Depending on the complexity of the hot water heater model used, the formulation of the problem ranges from a linear to a non-linear optimization with discontinuous constraints. The simulation studies presented comprise a formulation as a binary linear optimization problem, as well as a solution based on a heuristic direct method for the non-linear version. In contrast to the first, linear approach, the latter takes stratification inside the tank into account. One-year simulations based on realistic hot water draw profiles are used to investigate the potential with respect to load shifting and energy efficiency improvements. In addition to assuming perfect prediction of user behavior, this work also considers the k-nearest neighbors algorithm to predict the time series. Compared to the usual night-tariff switched operation, assuming perfect prediction shows 30 % savings on the electricity market when stratification is taken into account. The user prediction proposed leads to 16 % cost savings, while 6 % of the electric energy is conserved.
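The idea behind the schedule optimization can be illustrated with a deliberately tiny version: a one-node tank model and exhaustive search over binary on/off schedules for the cheapest one that respects the temperature limits. All parameter values are illustrative assumptions, and a real implementation would hand the binary linear program to a MILP solver instead of enumerating:

```python
from itertools import product

# Toy sketch of price-based heating schedule optimization: a one-node
# tank whose temperature rises by heat_gain when the heater is on and
# falls by loss each step (standby losses), with hard limits t_min/t_max.

def cheapest_schedule(prices, t0, t_min, t_max, heat_gain=4.0, loss=1.0):
    best, best_cost = None, float("inf")
    for schedule in product([0, 1], repeat=len(prices)):
        t, feasible = t0, True
        for on in schedule:
            t += heat_gain * on - loss   # heating raises T, losses lower it
            if not t_min <= t <= t_max:
                feasible = False
                break
        cost = sum(p * on for p, on in zip(prices, schedule))
        if feasible and cost < best_cost:
            best, best_cost = schedule, cost
    return best, best_cost

# Heating is deferred to the cheapest hour that still keeps T above t_min.
schedule, cost = cheapest_schedule([3, 1, 2, 5], t0=51.0, t_min=50.0, t_max=60.0)
```

The real problem additionally folds the predicted draw-off profile into the temperature dynamics, which is where the user-behavior prediction enters the constraints.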
Based on the linear approach, a prototype is developed and used in a field test. A microcomputer processes the sensor information for local data acquisition, receives electricity spot market prices up to 34 hours in advance, solves the optimization problem for this time horizon, and switches the power supply of the resistive heating element accordingly. Beside the temperature of the environment and the inlet and outlet temperatures, the temperature inside the tank is measured at five points; the water volume flow rate and the electric power are also recorded. Two test runs of 18 days each compare the night-tariff switched operation to the price-based optimization in a real-world environment. Results show a significant increase of 6 % in thermal efficiency during operation based on the algorithm developed, which can be attributed to the optimization accounting for the expected usage.
To facilitate the technical and economic feasibility of retrofittable implementations of the proposed method for autonomous demand side management, the number of sensors used must be kept to a minimum. A sufficiently accurate state estimation of the storage has to be achieved to facilitate a useful model predictive control. Therefore, the last part of this work focuses on the automated system identification and state estimation of resistive domestic hot water heaters. To that end, real hot water usage profiles and schedules gathered in a field test are used in a lab setup to collect data on the temperature distribution inside the tank under realistic operating conditions. Four different thermal models common in the literature are considered for state estimation and system identification. Based on the data collected in the lab, they are evaluated with respect to robustness, computational cost, and estimation accuracy. Based on the observations made in the experiments, an extension of the one-node model by a single additional parameter is proposed. With this adaptation, a linear temperature distribution in the lower part of the tank can be modeled during heating. The resulting model exhibits improved robustness and lower computational cost compared to the original model. At the same time, the average temperature in the storage tank is estimated nearly as accurately (6 % mean absolute percentage error) as with the roughly 50 times more computationally expensive multi-layer model (4 % mean absolute percentage error).
Demand-side management approaches that exploit the temporal flexibility of electric vehicles have attracted much attention in recent years due to their increasing market penetration. These demand-side management measures contribute to alleviating the burden on the power system, especially in distribution grids, where bottlenecks are more prevalent. Electric vehicles can be seen as an attractive asset for distribution system operators, with the potential to provide grid services if properly managed. In this thesis, first, a systematic investigation is conducted of two typically employed demand-side management methods reported in the literature: a voltage droop control-based approach and a market-driven approach. Then, a control scheme of decentralized autonomous demand-side management for electric vehicle charging scheduling is proposed, which relies on a unidirectionally communicated grid-induced signal. For all the topics considered, the implications for distribution grid operation are evaluated using a set of time series load flow simulations performed for representative Austrian distribution grids. Droop control mechanisms are discussed for electric vehicle charging control, which requires no communication. The method provides an economically viable solution at all penetrations if electric vehicles charge at low nominal power rates. However, with the current market trends in residential charging equipment, especially in the European context, where most charging equipment is designed for 11 kW charging, the technical feasibility of the method in the long run is debatable. As electricity demand strongly correlates with energy prices, a linear optimization algorithm is proposed to minimize charging costs, which uses next-day market prices as the grid-induced incentive function under the assumption of perfect user predictions. The constraints on the state of charge guarantee that the energy required for driving is delivered without failure.
An average energy cost saving of 30% is realized at all penetrations. Nevertheless, the avalanche effect due to simultaneous charging during low price periods introduces new power peaks exceeding those of uncontrolled charging. This obstructs the grid-friendly integration of electric vehicles.
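The voltage droop control idea investigated here can be sketched as a piecewise-linear characteristic: the charging power is reduced linearly as the local grid voltage sags below a threshold, and cut entirely below a minimum. The voltage thresholds and nominal power below are illustrative assumptions:

```python
# Sketch of a voltage droop characteristic for EV charging. Because the
# controller only reads the local voltage, no communication is needed.

def droop_power(v_pu, p_nom=11.0, v_full=0.95, v_min=0.90):
    """Charging power in kW as a function of per-unit grid voltage."""
    if v_pu >= v_full:
        return p_nom                      # healthy voltage: full power
    if v_pu <= v_min:
        return 0.0                        # severe undervoltage: stop charging
    # linear ramp between the two thresholds
    return p_nom * (v_pu - v_min) / (v_full - v_min)

powers = [droop_power(v) for v in (1.00, 0.93, 0.89)]
```

Since many vehicles on one feeder see similar voltages, this characteristic throttles them collectively as the grid becomes stressed, which is precisely the property that avoids communication but also underlies the method's economic limitations at 11 kW charging.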