Traditional power grids are mainly based on centralized power generation and subsequent distribution. The increasing penetration of distributed renewable energy sources and the growing number of electrical loads are creating difficulties in balancing supply and demand and threaten the secure and efficient operation of power grids. At the same time, households hold an increasing amount of flexibility, which can be exploited by demand-side management to decrease customer cost and support grid operation. Compared to the collection of individual flexibilities, aggregation reduces optimization complexity, protects households’ privacy, and lowers the communication effort. In mathematical terms, each flexibility is modeled by a set of power profiles, and the aggregated flexibility is modeled by the Minkowski sum of the individual flexibilities. As the exact Minkowski sum calculation is generally computationally prohibitive, various approximations can be found in the literature. The main contribution of this paper is a comparative evaluation of several approximation algorithms in terms of novel quality criteria, computational complexity, and communication effort using realistic data. Furthermore, we investigate the dependence of selected comparison criteria on the time horizon length and on the number of households. Our results indicate that none of the algorithms performs satisfactorily in all categories. Hence, we provide guidelines on the application-dependent choice of algorithm. Moreover, we demonstrate a major drawback of some inner approximations, namely that they may lead to situations in which not using the flexibility is impossible, which can be suboptimal.
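In standard notation (introduced here for reference, not quoted from the paper): with each individual flexibility modeled as a set \mathcal{F}_i \subseteq \mathbb{R}^d of feasible power profiles over d time periods, the aggregated flexibility is the Minkowski sum

\mathcal{F}_{\mathrm{agg}} = \mathcal{F}_1 \oplus \cdots \oplus \mathcal{F}_N = \left\{ \sum_{i=1}^{N} x_i \;\middle|\; x_i \in \mathcal{F}_i \right\}.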
Alleviating the curse of dimensionality in Minkowski sum approximations of storage flexibility
(2023)
Many real-world applications require the joint optimization of a large number of flexible devices over some time horizon. The flexibility of multiple batteries, thermostatically controlled loads, or electric vehicles, e.g., can be used to support grid operations and to reduce operation costs. Using piecewise constant power values, the flexibility of each device over d time periods can be described as a polytopic subset in power space. The aggregated flexibility is given by the Minkowski sum of these polytopes. As the computation of Minkowski sums is in general demanding, several approximations have been proposed in the literature. Yet, their application potential is often objective-dependent and limited by the curse of dimensionality. In this paper, we show that up to 2d vertices of each polytope can be computed efficiently and that the convex hull of their sums provides a computationally efficient inner approximation of the Minkowski sum. Via an extensive simulation study, we illustrate that our approach outperforms ten state-of-the-art inner approximations in terms of computational complexity and accuracy for different objectives. Moreover, we propose an efficient disaggregation method applicable to any vertex-based approximation. The proposed methods provide an efficient means to aggregate and to disaggregate typical battery storages in quarter-hourly periods over an entire day with reasonable accuracy for aggregated cost and for peak power optimization.
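A minimal sketch of the vertex-based construction described above, assuming each device's flexibility polytope is given in halfspace form (A, b) with A x <= b; the choice of the 2d coordinate directions +/-e_j and the scipy-based LP calls are illustrative assumptions, not the paper's exact procedure:

import numpy as np
from scipy.optimize import linprog
from scipy.spatial import ConvexHull

def candidate_vertices(A, b):
    """Return 2*d vertices of {x : A x <= b} by maximizing +/- each coordinate."""
    d = A.shape[1]
    verts = []
    for j in range(d):
        for sign in (1.0, -1.0):
            c = np.zeros(d)
            c[j] = -sign  # linprog minimizes, so negate to maximize sign * x_j
            res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * d)
            verts.append(res.x)  # assumes a bounded, nonempty polytope
    return np.array(verts)

def inner_approximation(polytopes):
    """Sum matching candidate vertices across devices and take their convex hull.

    Every summed point is a sum of feasible points, hence lies in the Minkowski
    sum; the hull of such points is therefore an inner approximation."""
    summed = sum(candidate_vertices(A, b) for A, b in polytopes)
    return ConvexHull(summed)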
With cloud computing and multi-core CPUs, parallel computing resources are becoming increasingly affordable and commonly available. Parallel programming should be just as easily accessible to everyone. Unfortunately, existing frameworks and systems are powerful but often very complex to use for anyone who lacks knowledge of the underlying concepts. This paper introduces a software framework and execution environment whose objective is to provide a system that is easy to use for everyone who could benefit from parallel computing. Some real-world examples are presented, with an explanation of all the steps necessary for computing in a parallel and distributed manner.
An electrochemical study with three redox substances on a carbon based nanogap electrode array
(2020)
Bubble column humidifiers (BCHs) are frequently used for the humidification of air in various water treatment applications. A potential but not yet profoundly investigated application of such devices is the treatment of oily wastewater. To evaluate this application, the accumulation of an oil-water emulsion using a BCH is experimentally analyzed. The amount of evaporating water vapor can be evaluated by measuring the humidity ratio of the outlet air. However, humidity measurements are difficult in close-to-saturated conditions, as the formation of liquid droplets on the sensor impacts the measurement accuracy. We use a heating section after the humidifier so that no liquid droplets form on the sensor. This enables a more accurate humidity measurement. Two batch measurement runs are conducted with (1) tap water and (2) an oil-water emulsion as the respective liquid phase. The humidity measurement in high-humidity conditions is highly accurate, with an error margin below 3 %, and can be used to predict the oil concentration of the remaining liquid during operation. The measured humidity ratio corresponds to the removed amount of water vapor both for tap water and for the accumulation of an oil-water emulsion. Our measurements show that the residual water content in the oil-water emulsion is below 4 %.
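For context, the humidity ratio referred to above is conventionally obtained from a relative humidity measurement via the textbook moist-air relation (not a formula quoted from the paper):

\omega = 0.622 \, \frac{p_v}{p - p_v}, \qquad p_v = \varphi \, p_{\mathrm{sat}}(T),

where \omega is the humidity ratio (kg of water vapor per kg of dry air), p the total pressure, \varphi the measured relative humidity, and p_{\mathrm{sat}}(T) the saturation pressure at the sensor temperature T. Heating the air before the sensor lowers \varphi at constant \omega, which is why droplet formation is avoided.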
Vast amounts of oily wastewater are byproducts of the petrochemical and shipping industries and, to this day, are frequently discharged into water bodies either without or after insufficient treatment. To alleviate the resulting pollution, water treatment processes are in great demand. Bubble column humidifiers (BCHs) as part of humidification–dehumidification systems are predestined for such a task, since they are insensitive to different feed liquids, simple in design, and have low maintenance requirements. While humidification in a bubble column has been investigated extensively for desalination, a systematic investigation of oily wastewater treatment is missing in the literature. We filled this gap by analyzing the treatment of an oil–water emulsion experimentally to derive recommendations for the future design and operation of BCHs. Our humidity measurements indicate that the air stream is always saturated after humidification for a liquid height of only 10 cm. A residual water mass fraction of 3.5 wt% is measured after a batch run of six hours. Furthermore, continuous measurements show that an increase in oil mass fraction leads to a decrease in system productivity, especially at high oil mass fractions. This decrease is caused by the heterogeneity of the liquid temperature profile. A lower liquid height mitigates this heterogeneity, thereby decreasing the heat demand and improving the overall efficiency. The oil content of the produced condensate is below 15 ppm, allowing discharge into various water bodies. The results of our systematic investigation prove the suitability of BCHs for oily wastewater treatment and indicate strong future potential for their use.
An implementation approach of the gap navigation tree using the TurtleBot 3 Burger and ROS Kinetic
(2020)
The creation of a spatial model of the environment is an important task to allow the planning of routes through the environment. Depending on the number of sensor inputs, different ways of creating a spatial environment model are possible. This thesis introduces an implementation approach of the Gap Navigation Tree, which is aimed at robots that have a limited number of sensors. The Gap Navigation Tree is a tree structure based on depth discontinuities constructed from the data of a laser scanner. Using the simulated TurtleBot 3 Burger and ROS Kinetic, a framework is created that implements the theory of the Gap Navigation Tree. The framework is structured in a way that allows using different robots with different sensor types by separating the detection of depth discontinuities from the building and updating of the Gap Navigation Tree.
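A minimal sketch of the depth-discontinuity detection step that the framework separates out, assuming the laser scan arrives as an array of range readings; the threshold value is an illustrative assumption, not the thesis's actual parameter:

import numpy as np

def detect_gaps(ranges, threshold=1.0):
    """Return indices where consecutive laser range readings jump by more
    than `threshold` meters, i.e., candidate depth discontinuities (gaps)."""
    diffs = np.abs(np.diff(ranges))
    return np.flatnonzero(diffs > threshold)

# Example: a scan with one wall edge yields a single gap between beams 2 and 3.
scan = np.array([2.0, 2.1, 2.0, 5.5, 5.4])
print(detect_gaps(scan))  # -> [2]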
Skiing is one of the most popular winter sports in the world, especially in the Alps. As skiers enjoy their time on the slopes, the most annoying thing that can happen is a long waiting time at a lift. Unfortunately, because of climate change, this happens more regularly: smaller skiing areas at lower altitudes have to close, and the number of good skiing days decreases as well. This leads to an increase in the number of skiers in the remaining skiing areas, which inevitably leads to longer waiting times and dissatisfied skiers. To prevent this from happening, the operators of the skiing areas have to manage the flow and distribution of skiers, and what better way to analyse the current situation and possible changes than by simulating the whole area. A simulation has the advantage of being flexible with regard to time as well as configuration. Be it simulating a skiing day in real time to look in detail at the behaviour of a single skier and how it moves through the area, or focusing on the whole area to find out when and where queues form throughout the day by speeding up time and simulating the day in only seconds, everything is possible. Even a scenario in which part of the area is closed, so that skiers cannot take specific lifts due to a technical fault or certain slopes because of too little snow, can be simulated. By simulating and analysing all these scenarios, the experts of the skiing area not only gain valuable statistical information about the area but can also simulate changes to the system, such as crowd flow control or an increase or decrease in the capacity of a lift. The simulation built in the context of this work for the skiing area of Mellau demonstrates all these applications, but can also be used as a basis for further improvements of the skiing area or be expanded to other areas such as Damüls. The simulation was implemented in the AnyLogic simulation environment, and the statistical evaluation was also performed in this program.
This master’s thesis provides an overview of a more efficient, future-oriented living concept in Dornbirn, Austria. The use of a combined heat and power unit (CHP) in combination with a thermal storage as a heating system is specifically investigated. In order to make this heating system more attractive for the consumer, the sale of the electricity generated by the CHP is considered. The attractiveness is further increased by the more efficient use of heating energy through a minimisation of the living space. This master’s thesis aims to draw attention to the issue and to achieve a rethinking in the planning of future living space. For the research and elaboration of this thesis, statistics and trustworthy literature were used, and physical modelling was applied. The thesis can be assigned to the fields of energy technology, mechatronics, architecture and civil engineering. It is intended for students, researchers, and other interested persons in these sectors.
Analysis of the (μ/μ_I, λ)-CSA-ES with repair by projection applied to a conically constrained problem
(2019)
In contrast to fossil energy sources, the supply by renewable energy sources like wind and photovoltaics cannot be controlled. Therefore, flexibilities on the demand side of the electric power grid, like electro-chemical energy storage systems, are used increasingly to match electric supply and demand at all times. To control those flexibilities, we consider two algorithms that both lead to linear programming problems. These are solved autonomously on the demand side, i.e., by household computers. In the classic approach, an energy price signal is sent by the electric utility to the households, which, in turn, optimize the cost of consumption within their constraints. Instead of an energy price signal, we claim that an appropriate power signal that is tracked in the L1-norm as closely as possible by the household has favorable characteristics. We argue that an interior point of the household’s feasibility region is never an optimal price-based point but can result in an L1-norm optimal point. Thus, price signals cannot parametrize the complete feasibility region, which may prevent an optimal allocation of consumption. We compare the price and power tracking algorithms over a year on the basis of one-day optimizations regarding different information settings and using a large data set of daily household load profiles. The computational task constitutes an embarrassingly parallel problem. To this end, the performance of the two parallel computation frameworks DEF [1] and Ray [2] is investigated. The Ray framework is used to run the Python applications locally on several cores. With the DEF framework we execute our Python routines in parallel in a cloud. All in all, the results provide an understanding of when which computation framework and autonomous algorithm will outperform the other.
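In compact form, the power-signal tracking problem described above can be written with the standard linear-programming reformulation of the L1-norm; the symbols (household feasibility set \mathcal{F}, power signal s) are introduced here for illustration:

\min_{x \in \mathcal{F}} \; \lVert x - s \rVert_1
\quad \Longleftrightarrow \quad
\min_{x \in \mathcal{F},\, t \in \mathbb{R}^d} \; \sum_{k=1}^{d} t_k
\quad \text{s.t.} \quad -t_k \le x_k - s_k \le t_k .

For a polyhedral feasibility set \mathcal{F}, this is a linear program, consistent with the abstract's statement that both algorithms lead to linear programming problems.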
Activation of heat pump flexibilities is a viable solution to support balancing the grid via demand side management measures and to fulfill the need for flexibility options. Aggregators, as the interface between prosumers, distribution system operators and balance responsible parties, face the challenge of transforming prosumer information into aggregated available flexibility under data privacy and technical restrictions, so as to enable trading thereof. However, the literature lacks a generic, applicable and widely accepted flexibility estimation method for heat pumps which incorporates reduced sensor and system information as well as system- and demand-dependent behaviour. In this paper, we adapt and extend a method from the literature by incorporating domain knowledge to overcome reduced sensor and system information. We apply data from five real-world heat pump systems, distinguish operation modes, estimate the power and energy flexibility of each single heat pump system, prove the transferability of the method, and aggregate the available flexibilities to showcase a small heat pump (HP) pool as a proof of concept.
The demand for managing data across multiple domains for product creation is steadily increasing. Model-Driven Systems Engineering (MDSE) is a solution for this problem. With MDSE, domain-specific data is formalized inside a model with a custom language, for example, the Unified Modelling Language (UML). These models can be created with custom editors, and specialized domains can be integrated with extensions to UML, e.g., the Systems Modeling Language (SysML). The most dominant editor in the open-source sector is Eclipse Papyrus SysML 1.6 (Papyrus), an editor to create SysML diagrams for MDSE.
In the pursuit of creating a model and diagrams, the editor does not support the user appropriately or even hinders them. Therefore, paradigms from the diagram modelling and Human Computer Interaction (HCI) domains, as well as perceptual and design theory, are applied to create an editor prototype from scratch. The changes fall into the categories of hierarchy, aid in the diagram composition, and navigation. The prototype is compared with Papyrus in a user test to determine if the changes have the effect of improving usability.
The study involved 10 participants with different knowledge levels of UML, ranging from beginners to experts. Each participant was tested on a navigation and modelling task in both the newly created editor, named Modelling Studio, and Papyrus. The study was evaluated through a questionnaire and analysis of the diagrams produced by the tasks.
The findings are that Modelling Studio’s changes to the hierarchical elements improved their rating. Furthermore, aid for diagram composition could be reinforced by changes to the alignment helper tool and adjustments to the default arrow behaviour of a diagram. Lastly, the model navigation adjustments improve a link’s visibility and the rating of a specialized link (a best practice). The introduction of breadcrumbs had limited success in improving navigation usability. The prototype deployed a broad spectrum of changes that already showed improvements, which can, however, be refined and tested more thoroughly.
Application of various tools to design, simulate and evaluate optical demultiplexers based on AWG
(2015)
Zeros can cause many issues in data analysis, and dealing with them requires specialized procedures. We differentiate between rounded zeros, structural zeros and missing values. Rounded zeros occur when the true value of a variable is hidden because of a detection limit in whatever mechanism was used to acquire the data. Structural zeros are values which are truly zero, often coming about due to a hidden mechanism separate from the one which generates values greater than 0. Missing values are values that are completely missing for unknown or known reasons. This thesis outlines various methods for dealing with different kinds of zeros in different contexts. Many of these methods are very specific in their ideal use case. They are separated based on which kind of zero they are intended for and on whether they are better suited to compositional or to standard data.
For rounded zeros we impute the zeros with an estimated value below the detection limit. The author describes multiplicative replacement, a simple procedure that imputes values at a fixed fraction of the detection limit. As a more advanced technique, the author describes Kaplan-Meier smoothing spline replacement, which interpolates a spline on a Kaplan-Meier curve and uses the spline below the detection limit to impute values with a more natural distribution. Rounded zeros cannot be imputed with the same techniques that would be used for regular missing values, since more information is available on the true value of a rounded zero than there would be for a regular missing value.
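A minimal sketch of multiplicative replacement as described above; imputing at 0.65 times the detection limit is a commonly used convention and, like the renormalization shown, an assumption of this illustration rather than a detail taken from the thesis:

import numpy as np

def multiplicative_replacement(x, detection_limit, fraction=0.65):
    """Impute rounded zeros at fraction * detection_limit and rescale the
    nonzero parts so the composition keeps its original total."""
    x = np.asarray(x, dtype=float)
    delta = fraction * detection_limit
    zeros = x == 0
    total = x.sum()
    out = x.copy()
    out[zeros] = delta
    # Shrink nonzero parts multiplicatively to compensate for the imputed mass.
    out[~zeros] *= (total - delta * zeros.sum()) / total
    return out

print(multiplicative_replacement([0.0, 0.2, 0.8], detection_limit=0.01))
# -> [0.0065, 0.19870, 0.79480]; the parts still sum to 1.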
Structural zeros cannot be imputed, since they are true zeros. Imputing them would falsify their values and produce a value where there should be none. Because of this, we apply modelling techniques that can work around structural zeros and incorporate them. For standard data, the zero-inflated Poisson model is presented. This model utilizes a mixture of a logistic and a Poisson distribution to accurately model data with a large amount of structural zeros. While the Poisson distribution is only applicable to count data, the zero-inflation concept can be applied to different kinds of distributions. For compositional data, the zero-adjusted Dirichlet model is introduced. This model mixes Dirichlet distributions for every pattern of zeros found within the data. Non-algorithmic techniques to reduce the number of structural zeros present are also shown: amalgamation, which combines columns with structural zeros into broader descriptors, and classification, which converts columns into categorical values based on whether a structural zero is present.
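For reference, the zero-inflated Poisson model mentioned above has the standard probability mass function below, where \pi is the probability of a structural zero and \lambda the Poisson rate:

P(X = 0) = \pi + (1 - \pi)\, e^{-\lambda},
\qquad
P(X = k) = (1 - \pi)\, \frac{\lambda^{k} e^{-\lambda}}{k!}, \quad k \ge 1.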
Missing values are values that are completely missing for various known or unknown reasons. Different imputation techniques are introduced. For standard data, MissForest imputation is introduced, which utilizes a random forest regression to impute mixed-type missing values. Another imputation technique shown combines a genetic algorithm with a neural network, imputing values by having the genetic algorithm minimize the reconstruction error of an autoencoder. In the case of compositional data, kNN imputation is presented, which utilizes the k-nearest-neighbours concept to impute values based on the closest samples for which a value is available.
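A minimal sketch of kNN imputation using scikit-learn's KNNImputer; applying it directly is a simplification, since compositional data are often log-ratio transformed before imputation (an assumption of this illustration, not a step quoted from the thesis):

import numpy as np
from sklearn.impute import KNNImputer

# NaN marks missing values; each row is one sample.
X = np.array([[1.0, 2.0, np.nan],
              [1.1, 1.9, 3.0],
              [0.9, 2.1, 2.8]])

imputer = KNNImputer(n_neighbors=2)  # average the 2 closest complete samples
print(imputer.fit_transform(X))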
All of these methods are explained and demonstrated to give readers a guide to finding the suitable methods to use in different scenarios.
The thesis also provides a general guide on dealing with zeros in data, with decision flowcharts and more detailed descriptions presented for both compositional and standard data. General tips on getting better results when zeros are involved are also given and explained. This general guide was then applied to a dataset to show it in action.
Arrayed Waveguide Gratings
(2016)
The Arrayed Waveguide Grating (AWG) is a passive optical component that has found use in a wide range of photonic applications, including telecommunications and medicine. Silica-on-Silicon (SoS) based AWGs use a low refractive-index contrast between the core (waveguide) and the cladding, which leads to significant advantages such as low propagation losses and low coupling losses between the AWG waveguides and the fibres. Therefore, they are an attractive DWDM solution offering higher channel-count technology and good performance characteristics compared to other methods. However, the very low refractive-index contrast means the bending radius of the waveguides needs to be very large (on the order of several millimeters) and may not fall below a particular critical value if bending losses are to be suppressed. As a result, silica-based waveguide devices usually have a very large size, which limits the integration density of SiO2-based photonic integrated devices. High-index-contrast AWGs (such as silicon, silicon nitride or polymer-based waveguide devices) feature a much smaller waveguide size compared to low-index-contrast AWGs. Such compact devices can easily be implemented on a chip and have already found use in emerging applications such as optical sensors, devices for DNA diagnostics and optical spectrometers for infrared spectroscopy. In this work, we present the design, simulation, technological verification and applications of both low-index-contrast and high-index-contrast AWGs. For telecommunication applications, AWG multiplexers/demultiplexers with up to 128 channels will be presented. For medical applications, AWG spectrometers with up to 512 channels will be presented. This work was carried out in the framework of the projects ADOPT No. SK-AT-20-0012, NOVASiN No. SK-AT-20-0017 and AUTOPIC No. APVV-17-0662 from the Slovak Research and Development Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic; No. SK 07/2021 and SK 08/2021 from the Austrian Agency for International Cooperation in Education and Research (OeAD-GmbH); and project PASTEL, No. 2020-10-15-001, funded by SAIA.
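For reference, the channel routing of an AWG follows the standard grating condition: at the design center wavelength \lambda_0, the constant length increment \Delta L between adjacent arrayed waveguides satisfies (textbook relation, not quoted from the abstract)

n_c(\lambda_0)\, \Delta L = m\, \lambda_0,

where n_c is the effective index of the arrayed waveguides and m the diffraction order. The index-contrast trade-offs discussed above enter through n_c and the minimum permissible bending radius.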
Nowadays, the area of customer management strives for omni-channel and state-of-the-art CRM concepts, including Artificial Intelligence and the Customer Experience approach. As a result, modern CRM solutions are essential tools for supporting customer processes in Marketing, Sales and Service. AI-driven CRM accelerates sales cycles, improves lead generation and qualification, and enables highly personalized marketing. The focus of this thesis is to present the basics of Customer Relationship Management, to show the latest Gartner insights about CRM and CX, and to demonstrate an AI Business Framework that introduces the AI use cases used as a basis for the expert interviews conducted in an international B2B company. AI will transform CX through a better understanding of customer behavior. The following research questions are answered in this thesis: In which use cases can AI improve Sales and CRM? How can Customer Experience be improved with AI-driven CRM?
Assessing antecedents of entrepreneurial activities of academics at South African universities
(2016)
Companies develop and implement strategies with the aim of addressing the needs of their customers. Acquisition is one market expansion strategy that companies can use to gain new market access and technologies and/or to grow. In recent years, Chinese companies have been active in acquiring companies all over the globe to develop their strategic position. This has caused a certain counter-reaction in Europe, as well as in the Swiss media, against cross-border acquisitions of Swiss companies.
Swiss companies, and particularly the Swiss-MEM (Machinery, Electrical and Mechanical) industry, are highly export oriented, and their value proposition builds on attributes like knowledge, technology, and differentiating products. Among them are many “hidden champions” and niche players who successfully dominate their market segments.
As observed with Chinese companies, Indian companies have also started to become more active outside of their domestic markets over the last decades, increasing their foreign direct investments into Europe, Asia and North America. The lasting and good relationship between India and Switzerland might trigger the wish of Indian companies to acquire Swiss and particularly Swiss-MEM companies.
This Master’s Thesis assesses how often Indian investments into public and privately owned Swiss-MEM companies by acquisition happen, how attempts at acquisition are perceived by the stakeholders, and what measures Swiss and Swiss-MEM companies can take to protect themselves from being acquired. To access the research topic, several sub-questions are analysed with the aid of primary and secondary research.
The research topic is of particular interest to the author, since he spent over 20 years working in the Swiss-MEM industry, involved in international affairs and, in recent years, specifically with India. The observation of Chinese acquisition activities and insight into the size and potential of India were the drivers for researching whether India might follow China’s example.
In conclusion, Indian companies are not explicitly targeting Swiss and Swiss-MEM companies, but there are reasons to believe that it would make sense for Indian companies to look into acquiring them. The perception of such acquisitions varies, and there are arguments both for and against them. Companies must take strategic and organisational measures to prevent themselves from becoming the target of an acquisition. However, it is commonly held that the state should not interfere in the market, and a discussion at the political level on how to deal with cross-border acquisitions is needed.
Further areas for research based on this Master’s Thesis could be a review of what the targeting of Swiss and Swiss-MEM companies by Indian companies would look like, as well as the topic of succession planning in the Swiss secondary sector in conjunction with Indian acquisition interest. A third area of research might be the political aspects involved in the research questions.
The boom of information technology created a high demand for a skilled labour force in IT occupations. IT professionals install, test, build, repair or maintain hardware and software and can do the job from any location in the world.
Demand for this workforce significantly outstrips the global supply. In a situation of staff shortage, employers have to compete on local and global labour markets. The ability of a firm to attract and retain the best talent becomes a source of sustainable competitive advantage.
The aim of the study is to understand what influences IT professionals’ perception of employment attractiveness the most. This study intends to extend the existing knowledge about employees’ needs and the “psychological contract” concept.
The research was conducted with the participation of four IT and four HR English-speaking experts who live and work in Austria. In the study, the grounded theory approach and descriptive qualitative methods were applied.
The research findings explain which factors influence the decision of IT professionals to join, stay with or leave an employer. The results are discussed in relation to the talent attraction and retention practices of Austrian employers.
Photonic integrated circuits are required for the next generations of coherent terabit optical communications. Software tools for the automated adjustment and coupling of optical fiber arrays to photonic integrated circuits have been developed. The results obtained are needed in the final production phase of the photonic integrated circuit packaging process.
The usage of data gathered for Industry 4.0 and smart factory scenarios continues to be a problem for companies of all sizes. This is often because they aim to start with complicated and time-intensive Machine Learning scenarios. This work evaluates Process Capability Analysis (PCA) as a pragmatic, easy and quick way of leveraging the machine data gathered from the production process. The area of application considered is injection molding. After describing the required domain knowledge, the paper presents an approach for a continuous analysis of all parts produced. Applying PCA results in multiple key performance indicators that allow for fast and comprehensible process monitoring. The corresponding visualizations provide the quality department with a tool to efficiently choose where and when quality checks need to be performed. The presented case study indicates the benefit of analyzing whole process data instead of considering only selected production samples. The use of machine data enables additional insights to be drawn about process stability and the associated product quality.
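A minimal sketch of the capability indices such an analysis typically reports; the formulas for Cp and Cpk are standard, while the data and specification limits below are purely illustrative (the abstract does not list the paper's exact KPIs):

import numpy as np

def process_capability(samples, lsl, usl):
    """Cp compares process spread to the tolerance band; Cpk also penalizes
    a process that is off-center between the specification limits."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Illustrative data: a dimension with nominal 50.0 and tolerance +/- 2.0.
measurements = np.random.normal(loc=50.0, scale=0.5, size=500)
print(process_capability(measurements, lsl=48.0, usl=52.0))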
With digitalisation, and the increased connectivity between manufacturing systems emerging in this context, manufacturing is shifting towards decentralised, distributed concepts. Still, manufacturing scenarios require manual input or augmentation of data at system boundaries. Especially in distributed manufacturing environments, such as Cloud Manufacturing (CMfg) systems, constant changes to the available manufacturing resources and products pose challenges for establishing connections between them. We propose a feature-oriented representation of concepts, especially from the manufacturing domain, which serves as the basis for (semi-)automatically linking, e.g., manufacturing resources and products. This linking methodology, as well as knowledge inferred using it, is then used to support distributed manufacturing, especially in CMfg environments, and to enhance product development. The concepts and methodologies are to be evaluated in a real-world learning factory.
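A minimal sketch of the feature-oriented linking idea, modeling resources and product requirements as feature sets matched by set inclusion; the feature names and the matching rule are illustrative assumptions, not the methodology proposed in the paper:

# Hypothetical feature sets for illustration only.
resources = {
    "cnc_mill_1": {"milling", "aluminium", "tolerance_0.01mm"},
    "printer_2":  {"3d_printing", "pla"},
}
product_requirements = {"milling", "aluminium"}

# A resource is a candidate link if it offers every required feature.
matches = [name for name, features in resources.items()
           if product_requirements <= features]
print(matches)  # -> ['cnc_mill_1']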
Load shifting of resistive domestic hot water heaters has been done in Europe since the 1930s, primarily to ease the power supply during peak times. However, the pursued and already commenced energy transition in Europe changes the requirements for the underlying logic. In this more general context, demand side management is considered a viable approach to utilize the flexibility of thermal and electrochemical storage systems for buffering energy generated from renewables. In this work, an autonomous approach for demand side management of energy storage systems is developed, which is based on unidirectional communication of an incentive. This concept is then applied to the specific problem of resistive domestic hot water heaters.
The basic algorithms for an optimized operation are developed and evaluated based on simulation studies. The optimization problem considered maps the search for the optimal heating schedule while ensuring the defined temperature limits: firstly, a maximum, which is defined by the hysteresis set-point temperature; secondly, during hot water draw-offs, the outlet temperature should not fall below a set minimum. To establish this, the time series of hot water usage has to be predicted.
Depending on the complexity of the hot water heater model used, the formulation of the problem ranges from a linear to a non-linear optimization with discontinuous constraints. The simulation studies presented comprise a formulation as a binary linear optimization problem, as well as a solution based on a heuristic direct method to solve the non-linear version. In contrast to the first, linear approach, the latter takes stratification inside the tank into account. One-year simulations based on realistic hot water draw profiles are used to investigate the potential with respect to load shifting and energy efficiency improvements. In addition to assuming perfect prediction of user behavior, this work also considers the k-nearest neighbors algorithm to predict the time series. Compared to the usual night-tariff switched operation, assuming perfect prediction shows 30 % savings on the electricity market when stratification is taken into account. The proposed user prediction leads to 16 % cost savings, while 6 % of the electric energy is conserved.
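A compact sketch of a binary linear formulation of the kind described above, using simple one-node tank dynamics; the symbols are introduced here for illustration, and the thesis's stratified variant is more involved:

\min_{u_1,\dots,u_T \in \{0,1\}} \; \sum_{t=1}^{T} c_t \, P \, u_t \, \Delta t
\quad \text{s.t.} \quad
T_{t+1} = T_t + \frac{\Delta t}{C}\left( \eta P u_t - \dot{Q}^{\mathrm{loss}}_t - \dot{Q}^{\mathrm{draw}}_t \right),
\qquad T^{\min} \le T_t \le T^{\max},

where c_t is the electricity price, P the power of the resistive element, C the thermal capacity of the tank, and \dot{Q}^{\mathrm{draw}}_t the predicted hot water draw-off. Since the binary variables u_t enter linearly, the problem remains a binary linear program.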
Based on the linear approach, a prototype is developed and used in a field test. A microcomputer processes the sensor information for local data acquisition, receives electricity spot market prices up to 34 hours in advance, solves the optimization problem for this time horizon, and switches the power supply of the resistive heating element accordingly. Besides the temperature of the environment and the inlet and outlet temperatures, the temperature inside the tank is measured at five points, and the water volume flow rate and the electric power are recorded. Two test runs of 18 days each compare the night-tariff switched operation to the price-based optimization in a real-world environment. Results show a significant increase of 6 % in thermal efficiency during operation based on the developed algorithm, which can be attributed to the optimization accounting for the expected usage.
To facilitate the technical and economic feasibility of retrofittable implementations of the proposed method for autonomous demand side management, the number of sensors used must be kept to a minimum. At the same time, a sufficiently accurate state estimation of the storage has to be achieved to facilitate a useful model predictive control. Therefore, the last part of this work focuses on automated system identification and state estimation of resistive domestic hot water heaters. To that end, real hot water usage profiles and schedules gathered in a field test are used in a lab setup to collect data on the temperature distribution inside the tank under realistic operating conditions. Four different thermal models, common in the literature, are considered for state estimation and system identification. Based on the data collected in the lab, they are evaluated with respect to robustness, computational cost, and estimation accuracy. Based on the observations made in the experiments, an extension of the one-node model by a single additional parameter is proposed. By this adaptation, a linear temperature distribution in the lower part of the tank can be modeled during heating. The resulting model exhibits improved robustness and lower computational cost compared to the original model. At the same time, the average temperature in the storage tank is estimated nearly as accurately (6 % mean absolute percentage error) as with the roughly 50 times more computationally expensive multi-layer model (4 % mean absolute percentage error).
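For reference, the one-node tank model referred to above is commonly written as a single energy balance of the form below; the thesis's exact formulation and the proposed additional parameter are described in the thesis itself, so this is only the textbook baseline:

C \, \frac{dT}{dt} = P_{\mathrm{el}}(t) - UA\,\big(T - T_{\mathrm{amb}}\big) - \dot{m}(t)\, c_p \,\big(T - T_{\mathrm{in}}\big),

where C is the thermal capacity of the tank, UA the standby loss coefficient, \dot{m} the draw-off mass flow rate, T_{\mathrm{in}} the cold water inlet temperature, and T_{\mathrm{amb}} the ambient temperature.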
Demand-side management approaches that exploit the temporal flexibility of electric vehicles have attracted much attention in recent years due to their increasing market penetration. These demand-side management measures contribute to alleviating the burden on the power system, especially in distribution grids, where bottlenecks are more prevalent. Electric vehicles can be considered an attractive asset for distribution system operators, with the potential to provide grid services if properly managed. In this thesis, first, a systematic investigation is conducted of two typically employed demand-side management methods reported in the literature: a voltage droop control-based approach and a market-driven approach. Then, a control scheme of decentralized autonomous demand side management for electric vehicle charging scheduling is proposed, which relies on a unidirectionally communicated grid-induced signal. For all the topics considered, the implications for distribution grid operation are evaluated using a set of time-series load flow simulations performed for representative Austrian distribution grids. Droop control mechanisms, which require no communication, are discussed for electric vehicle charging control. The method provides an economically viable solution at all penetrations if electric vehicles charge at low nominal power rates. However, with current market trends in residential charging equipment, especially in the European context where most charging equipment is designed for 11 kW charging, the technical feasibility of the method in the long run is debatable. As electricity demand strongly correlates with energy prices, a linear optimization algorithm is proposed to minimize charging costs, using next-day market prices as the grid-induced incentive function under the assumption of perfect user predictions. The constraints on the state of charge guarantee that the energy required for driving is delivered without failure. An average energy cost saving of 30 % is realized at all penetrations. Nevertheless, the avalanche effect caused by simultaneous charging during low-price periods introduces new power peaks exceeding those of uncontrolled charging, which obstructs the grid-friendly integration of electric vehicles.
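A minimal statement of a cost-minimizing charging problem of the kind described above, under the stated assumption of perfectly predicted driving demand; the symbols are introduced here for illustration:

\min_{x_1,\dots,x_T} \; \sum_{t=1}^{T} p_t \, x_t \, \Delta t
\quad \text{s.t.} \quad
0 \le x_t \le P^{\max},
\qquad
E^{\min}_t \le E_0 + \sum_{\tau \le t} \eta \, x_\tau \, \Delta t - D_t \le E^{\max},

where p_t is the next-day market price, x_t the charging power, D_t the cumulative energy consumed by driving up to period t, and E^{\min}_t the minimum stored energy required for upcoming trips. Because price-following schedules concentrate charging in the cheapest periods, this formulation also illustrates the avalanche effect noted above.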