Refine
Year of publication
Document Type
- Conference Proceeding (307)
- Article (279)
- Part of a Book (53)
- Book (19)
- Doctoral Thesis (9)
- Report (6)
- Working Paper (4)
- Other (3)
- Periodical (3)
- Part of Periodical (3)
Institute
- Forschungszentrum Mikrotechnik (235)
- Forschungszentrum Business Informatics (149)
- Technik | Engineering & Technology (127)
- Department of Computer Science (dissolved at the end of 2021; integrated into the parent organizational unit Technik) (112)
- Wirtschaft (105)
- Forschungszentrum Energie (77)
- Didaktik (dissolved as of 31 March 2021; integrated into the TELL Center) (37)
- Forschungszentrum Human Centred Technologies (35)
- Soziales & Gesundheit (33)
- Josef Ressel Zentrum für Materialbearbeitung (27)
Language
- English (687)
Is part of the Bibliography
- yes (687)
Keywords
- Laser ablation (11)
- Y-branch splitter (11)
- arrayed waveguide gratings (11)
- photonics (8)
- Evolution strategy (7)
- Demand side management (6)
- Optimization (6)
- integrated optics (6)
- Arrayed waveguide gratings (5)
- Evolution Strategies (5)
In this paper, a 256-channel, 10-GHz arrayed waveguide grating (AWG) demultiplexer for ultra-dense wavelength division multiplexing was designed using an in-house developed tool called AWG-Parameters. The AWG demultiplexer was designed for a central wavelength of 1550 nm, and the structure was simulated in the PHASAR tool from Optiwave. Two different AWG designs were developed, and the influence of the design parameters on the AWG performance was studied.
This paper presents concepts of optical splitting using three-dimensional (3D) optical splitters based on the multimode interference (MMI) principle. It focuses on the design, fabrication and characterization of a 3D MMI splitter with formed output waveguides, based on IP-Dip polymer, for direct application on an optical fiber. The MMI optical splitter was simulated and fabricated using a direct laser writing process. Output characteristics were measured with a highly resolved near-field scanning optical microscope (NSOM) and compared with those of a 3D MMI splitter without output waveguides.
In this paper, we propose and simulate a new type of three-dimensional (3D) optical splitter based on multimode interference (MMI) for a wavelength of 1550 nm. The splitter was designed on a square base of 20 × 20 µm², using the IP-Dip polymer as a standard material for 3D laser lithography. We present the optical field distribution in the proposed MMI splitter and the possibility of its integration on an optical fiber. The design is aimed at fabrication by 3D laser lithography in forthcoming experiments.
In this paper, we document optical splitters based on the Y-branch and on the MMI splitting principle. The 1×4 Y-branch splitter was fabricated in 3D geometry entirely from polymer, approaching single-mode transmission at 1550 nm. We also prepared a new concept of a 1×4 MMI optical splitter. Their optical properties and the character of the output optical field were measured by a near-field scanning optical microscope. The splitting properties and optical outputs of both splitters are very promising and increase the attractiveness of the presented 3D technology and polymers.
We present a new concept of a 3D polymer-based 1 × 4 beam splitter for wavelength splitting around 1550 nm. The beam splitter consists of IP-Dip polymer as the core and polydimethylsiloxane (PDMS) Sylgard 184 as the cladding. The splitter was designed and simulated with two different photonics tools, and the results show a high splitting ratio for single-mode and multi-mode operation with low losses. Based on the simulations, a 3D beam splitter was designed and realized using a direct laser writing (DLW) process, with adaptation for coupling to a standard single-mode fiber. With respect to the technological limits, a multi-mode splitter with a core of (4 × 4) µm² was designed and fabricated together with a supporting stable mechanical construction. Splitting properties were investigated by intensity monitoring of the splitter outputs using optical microscopy and near-field scanning optical microscopy. In the development phase, the optical performance of the fabricated beam splitter was examined by splitting short visible wavelengths using a red light-emitting diode. Finally, the splitting of 1550 nm laser light was studied in detail by near-field measurements and compared with the simulated results. Nearly single-mode operation was observed, and the shape of the propagating mode and the mode field diameter were well resolved.
Power plant operators increasingly rely on predictive models to diagnose and monitor their systems. Data-driven prediction models are generally simple and can have high precision, making them superior to physics-based or knowledge-based models, especially for complex systems like thermal power plants. However, the accuracy of data-driven predictions depends on (1) the quality of the dataset, (2) a suitable selection of sensor signals, and (3) an appropriate selection of the training period. In some instances, redundancies and irrelevant sensors may even reduce the prediction quality.
We investigate ideal configurations for predicting the live steam production of a solid fuel-burning thermal power plant in the pulp and paper industry for different modes of operation. To this end, we benchmark four machine learning algorithms on two feature sets and two training sets to predict steam production. Our results indicate that with the best possible configuration, a coefficient of determination of R^2 = 0.95 and a mean absolute error of MAE = 1.2 t/h are reached, with an average steam production of 35.1 t/h. On average, using a dynamic dataset for training lowers the MAE by 32% compared to a static dataset. A feature set based on expert knowledge lowers the MAE by an additional 32%, compared to a simple feature set representing the fuel inputs. We conclude that, based on the static training set and the basic feature set, machine learning algorithms can identify long-term changes. When using a dynamic dataset, the performance parameters of thermal power plants are predicted with high accuracy, allowing short-term problems to be detected.
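The two reported error metrics can be computed as follows. This is a generic sketch with invented numbers, not the paper's benchmark pipeline; the variable names are illustrative only.

```python
# Minimal sketch: the two metrics reported above, R^2 and MAE, computed
# from scratch. All numbers are invented for illustration.

def mae(y_true, y_pred):
    # Mean absolute error, in the same unit as the target (here t/h).
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [34.0, 36.5, 35.2, 33.8, 36.0]   # measured steam production, t/h
y_pred = [33.5, 36.0, 35.6, 34.2, 35.5]   # hypothetical model output, t/h
print(round(mae(y_true, y_pred), 3), round(r2(y_true, y_pred), 3))
```

An R^2 close to 1 and an MAE small relative to the mean production (as in the reported 1.2 t/h against 35.1 t/h) together indicate a well-fitting model.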
The Digital Factory Vorarlberg is the youngest research center of Vorarlberg University of Applied Sciences. In the lab of the research center, a research and learning factory has been established for educating students and employees of industrial partners. Showcases and best-practice scenarios for various topics of digitalization in the manufacturing industry are demonstrated. In addition, novel methods and technologies for digital production, cloud-based manufacturing, data analytics, IT and OT security, and digital twins are being developed. The factory comprises only a minimal core of logistics and fabrication processes to guarantee manageability within an academic setup. As a product, fidget spinners are fabricated. A webshop allows customers to individually design their products and place orders directly with the factory. A centralized SCADA system is the core data hub of the factory. Various data analytics tools and methods and a novel database for IoT applications are connected to the SCADA system. As an alternative to on-premises manufacturing, orders can be pushed into a cloud-based manufacturing platform, which has been developed at the Digital Factory. A broker system allows fabrication in distributed facilities and offers various optimization services. Concepts such as outsourcing product configuration to customers or new types of engineering services in cloud-based manufacturing can be explored and demonstrated. In this paper, we present the basic concept of the Digital Factory Vorarlberg, as well as some of the newly developed topics.
A covariance matrix self-adaptation evolution strategy for optimization under linear constraints
(2018)
Purpose – The purpose of this study is to explore the exogenous and endogenous drivers of the high growth of Unicorn start-ups along their life cycle, with a particular focus on Unicorns in the fintech industry.
Design/methodology/approach – The study employs an explorative longitudinal analysis of a matched pair of two Unicorn start-ups with similar antecedent features to understand drivers holistically over the longer term.
Findings – High-growth patterns over the longer term are the result of a combined industry- and company-life-cycle perspective. Drivers and growth patterns vary significantly according to the time of entry into the industry and its development status. The findings are systematised within a set of propositions to be tested in future research.
Research limitations/implications – The limitations lie in the empirical evidence, as the analysis is limited to one matched pair. The revealed drivers of Unicorns' long-term growth might encourage future research to further investigate these drivers on a larger scale.
Practical implications – The study offers practical recommendations for start-ups with high-growth ambitions and advice to policy makers regarding the development of tailor-made support programs.
Originality/value – The study significantly extends extant work on growth and high-growth by examining endogenous and exogenous triggers over time and by linking the Unicorn-life cycle to the industry life cycle, an approach which has, to the best of the authors’ knowledge, not yet been applied.
A modified matrix adaptation evolution strategy with restarts for constrained real-world problems
(2020)
In combination with successful constraint handling techniques, a Matrix Adaptation Evolution Strategy (MA-ES) variant (the εMAg-ES) turned out to be a competitive algorithm on the constrained optimization problems proposed for the CEC 2018 competition on constrained single-objective real-parameter optimization. A subsequent analysis points to additional potential in terms of robustness and solution quality. The consideration of a restart scheme and adjustments in the constraint handling techniques put this into effect and simplify the configuration. The resulting BP-εMAg-ES algorithm is applied to the constrained problems proposed for the IEEE CEC 2020 competition on real-world single-objective constrained optimization. The novel MA-ES variant achieves improvements over the original εMAg-ES in terms of feasibility and effectiveness on many of the real-world benchmarks. The BP-εMAg-ES achieves a feasibility rate of 100% on 44 out of 57 real-world problems and improves the best-known solution in 5 cases.
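The MA-ES variants above use full matrix adaptation and ε-level constraint handling, which is beyond a short sketch. The underlying mutate-evaluate-select loop of an evolution strategy with constraint handling can, however, be illustrated with a toy (1+1)-ES using a simple penalty; all numbers and the test problem below are illustrative, not from the paper.

```python
import random

# Toy illustration only: a (1+1)-ES with the 1/5th success rule, minimizing
# the sphere function under the linear constraint x1 + x2 >= 1, handled by
# a penalty. The constrained optimum is (0.5, 0.5) with value 0.5.

random.seed(1)

def penalized(x):
    violation = max(0.0, 1.0 - (x[0] + x[1]))   # distance to x1 + x2 >= 1
    return sum(v * v for v in x) + 100.0 * violation

x = [3.0, -2.0]
sigma = 1.0
fx = penalized(x)
for _ in range(2000):
    y = [v + sigma * random.gauss(0, 1) for v in x]
    fy = penalized(y)
    if fy <= fx:          # success: accept the offspring, widen the search
        x, fx = y, fy
        sigma *= 1.22
    else:                 # failure: narrow the search
        sigma *= 0.95

print([round(v, 2) for v in x], round(fx, 3))
```

Real MA-ES variants replace the scalar step size by an adapted transformation matrix and use more sophisticated constraint handling, but the accept-if-not-worse loop is the same skeleton.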
A multi-recombinative active matrix adaptation evolution strategy for constrained optimization
(2019)
In engineering design, optimization methods are frequently used to improve the initial design of a product. However, the selection of an appropriate method is challenging since many methods exist, especially in the case of simulation-based optimization. This paper proposes a systematic procedure to support this selection process. Building upon quality function deployment, end-user and design use-case requirements can be systematically taken into account via a decision matrix. The design and construction of the decision matrix are explained in detail. The proposed procedure is validated on two engineering optimization problems arising in the design of box-type boom cranes. For each problem, the problem statement and the applied optimization methods are explained in detail. The results obtained by optimization validate the use of optimization approaches within the design process. The application of the decision matrix shows the successful incorporation of customer requirements into the algorithm selection.
A systemic-constructivist approach to the facilitation and debriefing of simulations and games
(2010)
Issues with professional conduct and discrimination against Lesbian, Gay, Bisexual, Transgender (LGBT+) people in health and social care continue to exist in most EU countries and worldwide.
The project IENE9, titled “Developing a culturally competent and compassionate LGBT+ curriculum in health and social care education”, aims to enable teachers/trainers of theory and practice to enhance their skills regarding LGBT+ issues and to develop teaching tools that support the inclusion of LGBT+ issues within health and social care curricula. The new culturally competent and compassionate LGBT+ curriculum will be delivered through a Massive Open Online Course (MOOC) aimed at health and social care workers, professionals and learners across Europe and worldwide.
We have identified educational policies and guidelines at institutions teaching in health and social care, which were taken into account when developing the learning/teaching resources. The MOOC will be an innovative training model based on the Papadopoulos (2014) model of “Culturally Competent Compassion”. The module provides a logical and easy-to-follow structure based on its four constructs: 'Culturally Aware and Compassionate Learning', 'Culturally Knowledgeable and Compassionate Learning', 'Culturally Sensitive and Compassionate Learning', and 'Culturally Competent and Compassionate Learning'.
Specific training may result in better knowledge and skills in the health and social care workforce, which helps to reduce inequalities, improve communication with LGBT+ people, and diminish the feelings of stigma or discrimination experienced.
Active demand side management with domestic hot water heaters using binary integer programming
(2013)
Creating a schedule to perform certain actions in a real-world environment typically involves multiple types of uncertainty. To create a plan that is robust to uncertainties, it must stay flexible while attempting to be reliable and as close to optimal as possible. A plan is reliable if an adjustment to accommodate a new requirement causes only a few disruptions. The system needs to be able to adapt the schedule if unforeseen circumstances make planned actions impossible, or if an unlikely event would enable the system to follow a better path. To handle uncertainties, the methods used need to be dynamic and adaptive. The planning algorithms must be able to re-schedule planned actions and adapt the previously created plan to accommodate new requirements without causing critical disruptions to other required actions.
Adaptive indirect field-oriented control of an induction machine in the armature control range
(2012)
Traditional power grids are mainly based on centralized power generation and subsequent distribution. The increasing penetration of distributed renewable energy sources and the growing number of electrical loads is creating difficulties in balancing supply and demand and threatens the secure and efficient operation of power grids. At the same time, households hold an increasing amount of flexibility, which can be exploited by demand-side management to decrease customer cost and support grid operation. Compared to the collection of individual flexibilities, aggregation reduces optimization complexity, protects households’ privacy, and lowers the communication effort. In mathematical terms, each flexibility is modeled by a set of power profiles, and the aggregated flexibility is modeled by the Minkowski sum of individual flexibilities. As the exact Minkowski sum calculation is generally computationally prohibitive, various approximations can be found in the literature. The main contribution of this paper is a comparative evaluation of several approximation algorithms in terms of novel quality criteria, computational complexity, and communication effort using realistic data. Furthermore, we investigate the dependence of selected comparison criteria on the time horizon length and on the number of households. Our results indicate that none of the algorithms perform satisfactorily in all categories. Hence, we provide guidelines on the application-dependent algorithm choice. Moreover, we demonstrate a major drawback of some inner approximations, namely that they may lead to situations in which not using the flexibility is impossible, which may be suboptimal in certain situations.
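The Minkowski-sum view of aggregation can be made concrete in one special case: assuming each household's flexibility is modeled as an independent power interval per time step (a box), the Minkowski sum is exact and cheap, because intervals add component-wise. The hardness the abstract refers to arises for general, coupled sets (e.g. with energy constraints linking time steps), which is where the compared approximation algorithms are needed. A minimal sketch with hypothetical numbers:

```python
# Sketch under a simplifying assumption: box-shaped flexibilities, i.e. an
# independent power interval (p_min, p_max) per time step and household.
# For boxes, the Minkowski sum reduces to interval-wise addition.

def aggregate_boxes(households):
    # households: list of per-time-step (p_min, p_max) interval lists.
    T = len(households[0])
    agg = []
    for t in range(T):
        lo = sum(h[t][0] for h in households)
        hi = sum(h[t][1] for h in households)
        agg.append((lo, hi))
    return agg

h1 = [(0.0, 2.0), (0.0, 2.0), (0.5, 1.5)]   # kW intervals, 3 time steps
h2 = [(0.0, 3.0), (1.0, 1.0), (0.0, 2.0)]
print(aggregate_boxes([h1, h2]))
```

Note that the aggregator only needs the summed intervals, not the individual profiles, which is the privacy argument made above.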
With cloud computing and multi-core CPUs, parallel computing resources are becoming more and more affordable and commonly available. Parallel programming should be just as easily accessible for everyone. Unfortunately, existing frameworks and systems are powerful but often very complex to use for anyone who lacks knowledge of the underlying concepts. This paper introduces a software framework and execution environment whose objective is to provide a system that is easily usable for everyone who could benefit from parallel computing. Some real-world examples are presented with an explanation of all the steps that are necessary for computing in a parallel and distributed manner.
An electrochemical study with three redox substances on a carbon based nanogap electrode array
(2020)
Bubble column humidifiers (BCHs) are frequently used for the humidification of air in various water treatment applications. A potential but not yet profoundly investigated application of such devices is the treatment of oily wastewater. To evaluate this application, the accumulation of an oil-water emulsion using a BCH is experimentally analyzed. The amount of evaporating water vapor can be evaluated by measuring the humidity ratio of the outlet air. However, humidity measurements are difficult in close-to-saturated conditions, as the formation of liquid droplets on the sensor impacts the measurement accuracy. We use a heating section after the humidifier so that no liquid droplets form on the sensor. This enables a more accurate humidity measurement. Two batch measurement runs are conducted with (1) tap water and (2) an oil-water emulsion as the respective liquid phase. The humidity measurement in high-humidity conditions is highly accurate, with an error margin below 3 %, and can be used to predict the oil concentration of the remaining liquid during operation. The measured humidity ratio corresponds to the removed amount of water vapor for both tap water and the accumulation of an oil-water emulsion. Our measurements show that the residual water content in the oil-water emulsion is below 4 %.
Vast amounts of oily wastewater are byproducts of the petrochemical and shipping industries and, to this day, are frequently discharged into water bodies either without or after insufficient treatment. To alleviate the resulting pollution, water treatment processes are in great demand. Bubble column humidifiers (BCHs) as part of humidification-dehumidification systems are predestined for such a task, since they are insensitive to different feed liquids, simple in design, and have low maintenance requirements. While humidification in a bubble column has been investigated extensively for desalination, a systematic investigation of oily wastewater treatment is missing in the literature. We fill this gap by analyzing the treatment of an oil-water emulsion experimentally to derive recommendations for the future design and operation of BCHs. Our humidity measurements indicate that the air stream is always saturated after humidification for a liquid height of only 10 cm. A residual water mass fraction of 3.5 wt% is measured after a batch run of six hours. Furthermore, continuous measurements show that an increase in oil mass fraction leads to a decrease in system productivity, especially at high oil mass fractions. This decrease is caused by the heterogeneity of the liquid temperature profile. A lower liquid height mitigates this heterogeneity, thereby decreasing the heat demand and improving the overall efficiency. The oil content of the produced condensate is below 15 ppm, allowing discharge into various water bodies. The results of our systematic investigation prove the suitability of BCHs for oily wastewater treatment and indicate strong future potential.
Analysis of the (μ/μI,λ)-CSA-ES with repair by projection applied to a conically constrained problem
(2019)
In contrast to fossil energy sources, the supply from renewable energy sources like wind and photovoltaics cannot be controlled. Therefore, flexibilities on the demand side of the electric power grid, like electro-chemical energy storage systems, are increasingly used to match electric supply and demand at all times. To control those flexibilities, we consider two algorithms that both lead to linear programming problems. These are solved autonomously on the demand side, i.e., by household computers. In the classic approach, an energy price signal is sent by the electric utility to the households, which, in turn, optimize the cost of consumption within their constraints. Instead of an energy price signal, we claim that an appropriate power signal, tracked by the household as closely as possible in the L1-norm, has favorable characteristics. We argue that an interior point of the household's feasibility region is never an optimal price-based point but can be an L1-norm optimal point. Thus, price signals cannot parametrize the complete feasibility region, which may prevent an optimal allocation of consumption. We compare the price and power tracking algorithms over a year on the basis of one-day optimizations regarding different information settings, using a large data set of daily household load profiles. The computational task constitutes an embarrassingly parallel problem. To this end, the performance of the two parallel computation frameworks DEF [1] and Ray [2] is investigated. The Ray framework is used to run the Python applications locally on several cores. With the DEF framework we execute our Python routines in parallel in a cloud. All in all, the results provide an understanding of when which computation framework and autonomous algorithm will outperform the other.
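The interior-versus-boundary argument can be illustrated in a stripped-down setting: with only per-time-step power bounds and no coupling constraints, the L1-optimal tracking schedule is obtained by clipping the reference signal into the feasible interval, and the result may lie in the interior of the feasible region. This is a simplified sketch with invented numbers, not the linear program solved in the work above:

```python
# Simplified L1 tracking: with independent bounds per time step, the
# schedule minimizing sum |p_t - r_t| is the element-wise clipping of the
# reference r into [p_min, p_max]. An interior value (strictly between the
# bounds) can be L1-optimal, whereas a linear price signal always selects
# boundary points of the feasible region.

def track_l1(reference, p_min, p_max):
    return [min(max(r, lo), hi) for r, lo, hi in zip(reference, p_min, p_max)]

reference = [0.5, 3.0, 1.2]          # power signal sent by the utility, kW
p_min     = [0.0, 0.0, 0.0]
p_max     = [2.0, 2.0, 2.0]
schedule = track_l1(reference, p_min, p_max)
print(schedule)
```

In the third time step the schedule sits strictly inside the bounds (0 < 1.2 < 2), which no cost-minimizing response to a price signal would produce.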
Activation of heat pump flexibilities is a viable solution to support balancing the grid via demand side management measures and to fulfill the need for flexibility options. Aggregators, as the interface between prosumers, distribution system operators and balance responsible parties, face the challenge of transforming prosumer information into aggregated available flexibility to enable trading thereof, given data privacy and technical restrictions. The literature lacks a generic, applicable and widely accepted flexibility estimation method for heat pumps that copes with reduced sensor and system information and accounts for system- and demand-dependent behaviour. In this paper, we adapt and extend a method from the literature by incorporating domain knowledge to overcome reduced sensor and system information. We apply data from five real-world heat pump systems, distinguish operation modes, estimate the power and energy flexibility of each single heat pump system, prove the transferability of the method, and aggregate the available flexibilities to showcase a small heat pump pool as a proof of concept.
Application of various tools to design, simulate and evaluate optical demultiplexers based on AWG
(2015)
Arrayed Waveguide Gratings
(2016)
An Arrayed Waveguide Grating (AWG) is a passive optical component which has found use in a wide range of photonic applications, including telecommunications and medicine. Silica-on-Silicon (SoS) based AWGs use a low refractive-index contrast between the core (waveguide) and the cladding, which leads to significant advantages such as low propagation losses and low coupling losses between the AWG waveguides and the fibers. Therefore, they are an attractive DWDM solution offering higher channel-count technology and good performance characteristics compared to other methods. However, the very low refractive-index contrast means the bending radius of the waveguides needs to be very large (on the order of several millimeters) and may not fall below a particular critical value if bending losses are to be suppressed. As a result, silica-based waveguide devices usually have a very large size, which limits the integration density of SiO2-based photonic integrated devices. High-index-contrast AWGs (such as silicon, silicon nitride or polymer-based waveguide devices) feature a much smaller waveguide size than low-index-contrast AWGs. Such compact devices can easily be implemented on a chip and have already found use in emerging applications such as optical sensors, devices for DNA diagnostics and optical spectrometers for infrared spectroscopy. In this work, we present the design, simulation, technological verification and applications of both low-index-contrast and high-index-contrast AWGs. For telecommunication applications, AWG multiplexers/demultiplexers with up to 128 channels will be presented. For medical applications, an AWG spectrometer with up to 512 channels will be presented. This work was carried out in the framework of the projects ADOPT No. SK-AT-20-0012, NOVASiN No. SK-AT-20-0017 and AUTOPIC No. APVV-17-0662 from the Slovak Research and Development Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic; Nos. SK 07/2021 and SK 08/2021 from the Austrian Agency for International Cooperation in Education and Research (OeAD-GmbH); and project PASTEL, No. 2020-10-15-001, funded by SAIA.
Assessing antecedents of entrepreneurial activities of academics at South African universities
(2016)
Photonic integrated circuits are required in the next generations of coherent terabit optical communications. Software tools for the automated adjustment and coupling of optical fiber arrays to photonic integrated circuits have been developed. The obtained results are needed in the final production phase of the photonic integrated circuit packaging process.
The usage of data gathered for Industry 4.0 and smart factory scenarios continues to be a problem for companies of all sizes. This is often the case because they aim to start with complicated and time-intensive Machine Learning scenarios. This work evaluates the Process Capability Analysis (PCA) as a pragmatic, easy and quick way of leveraging the gathered machine data from the production process. The area of application considered is injection molding. After describing all the required domain knowledge, the paper presents an approach for a continuous analysis of all parts produced. Applying PCA results in multiple key performance indicators that allow for fast and comprehensible process monitoring. The corresponding visualizations provide the quality department with a tool to efficiently choose where and when quality checks need to be performed. The presented case study indicates the benefit of analyzing whole process data instead of considering only selected production samples. The use of machine data enables additional insights to be drawn about process stability and the associated product quality.
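As a hedged illustration of what a Process Capability Analysis computes, the sketch below derives the standard indices Cp and Cpk for an invented series of part measurements against specification limits; the paper's continuous, injection-molding-specific pipeline is not reproduced here.

```python
import statistics

# Sketch of the core PCA indicators. The part weights and specification
# limits are invented for illustration.

def cp_cpk(samples, lsl, usl):
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)               # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # capability incl. centering
    return cp, cpk

weights = [10.02, 9.98, 10.01, 10.00, 9.99, 10.03, 9.97, 10.00]  # grams
cp, cpk = cp_cpk(weights, lsl=9.90, usl=10.10)
print(round(cp, 2), round(cpk, 2))
```

A Cpk noticeably smaller than Cp signals a process that is off-center within its tolerance band; tracking both continuously is what enables the process monitoring described above.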
With digitalisation, and the increased connectivity between manufacturing systems emerging in this context, manufacturing is shifting towards decentralised, distributed concepts. Still, manufacturing scenarios require manual input or augmentation of data at system boundaries. Especially in distributed manufacturing environments, like Cloud Manufacturing (CMfg) systems, constant changes to the available manufacturing resources and products pose challenges for establishing connections between them. We propose a feature-oriented representation of concepts, especially from the manufacturing domain, which serves as the basis for (semi-)automatically linking, e.g., manufacturing resources and products. This linking methodology, as well as knowledge inferred using it, is then used to support distributed manufacturing, especially in CMfg environments, and to enhance product development. The concepts and methodologies are to be evaluated in a real-world learning factory.
Load shifting of resistive domestic hot water heaters has been done in Europe since the 1930s, primarily to ease the power supply during peak times. However, the pursued and already commenced energy transition in Europe changes the requirements for the underlying logic. In this more general context, demand side management is considered a viable approach to utilize the flexibility of thermal and electrochemical storage systems for buffering energy generated from renewables. In this work, an autonomous approach for demand side management of energy storage systems is developed, which is based on unidirectional communication of an incentive. This concept is then applied to the specific problem of resistive domestic hot water heaters.
The basic algorithms for optimized operation are developed and evaluated in simulation studies. The optimization problem considered maps the search for the optimal heating schedule while ensuring the defined temperature limits: first, a maximum, defined by the hysteresis set-point temperature; second, during hot water draw-offs, the outlet temperature should not fall below a set minimum. To establish this, the time series of hot water usage has to be predicted.
Depending on the complexity of the hot water heater model used, the formulation of the problem ranges from a linear to a non-linear optimization with discontinuous constraints. The simulation studies presented comprise a formulation as a binary linear optimization problem, as well as a solution based on a heuristic direct method to solve the non-linear version. In contrast to the first, linear approach, the latter takes stratification inside the tank into account. One-year simulations based on realistic hot water draw profiles are used to investigate the potential for load shifting and energy efficiency improvements. In addition to assuming perfect prediction of user behavior, this work also considers the k-nearest neighbors algorithm to predict the time series. Compared to the usual night-tariff switched operation, assuming perfect prediction shows 30 % savings on the electricity market when stratification is taken into account. The proposed user prediction leads to 16 % cost savings, while 6 % of the electric energy is conserved.
Based on the linear approach, a prototype is developed and used in a field test. A microcomputer processes the sensor information for local data acquisition, receives electricity spot market prices up to 34 hours in advance, solves the optimization problem for this time horizon, and switches the power supply of the resistive heating element accordingly. Besides the ambient, inlet and outlet temperatures, the temperature inside the tank is measured at five points, and the water volume flow rate and the electric power are recorded. Two test runs of 18 days each compare the night-tariff switched operation to the price-based optimization in a real-world environment. Results show a significant increase of 6 % in thermal efficiency during operation based on the developed algorithm, which can be attributed to the optimization accounting for the expected usage.
To facilitate the technical and economic feasibility of retrofittable implementations of the proposed method for autonomous demand side management, the number of sensors used must be kept to a minimum. A sufficiently accurate state estimation of the storage has to be achieved to facilitate a useful model predictive control. Therefore, the last part of this work focuses on automated system identification and state estimation of resistive domestic hot water heaters. To that end, real hot water usage profiles and schedules gathered in a field test are used in a lab setup to collect data on the temperature distribution inside the tank during realistic operating conditions. Four different thermal models common in the literature are considered for state estimation and system identification. Based on the data collected in the lab, they are evaluated with respect to robustness, computational cost, and estimation accuracy. Based on the observations made in the experiments, an extension of the one-node model by a single additional parameter is proposed. By this adaptation, a linear temperature distribution in the lower part of the tank can be modeled during heating. The resulting model exhibits improved robustness and lower computational cost compared to the original model. At the same time, the average temperature in the storage tank is estimated nearly as accurately (6 % mean absolute percentage error) as with the roughly 50 times more computationally expensive multi-layer model (4 % mean absolute percentage error).
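To make the price-based scheduling idea concrete, the sketch below uses a greedy stand-in for the binary linear program described above: each hot water draw-off requires a number of heating slots completed before its deadline, and the cheapest admissible slots are selected. All prices, slot counts and demands are invented; the thesis solves the full binary program with temperature constraints instead.

```python
# Greedy stand-in for price-based heating scheduling (not the thesis' BIP).
# demands maps a deadline slot to the number of heating slots that must be
# completed before it; slots are chosen by ascending price.

def schedule_heating(prices, demands):
    chosen = set()
    for deadline in sorted(demands):
        candidates = [t for t in range(deadline) if t not in chosen]
        candidates.sort(key=lambda t: prices[t])   # cheapest first
        chosen.update(candidates[:demands[deadline]])
    return sorted(chosen)

prices = [30, 12, 45, 20, 15, 50, 10, 25]   # invented price per slot
demands = {4: 2, 8: 1}                      # e.g. a morning and an evening draw
print(schedule_heating(prices, demands))
```

The binary formulation generalizes this: one 0/1 variable per slot, cost as the price-weighted sum, and the deadline requirements as linear constraints, which also allows the temperature limits of the tank model to be encoded.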
Demand-side management approaches that exploit the temporal flexibility of electric vehicles have attracted much attention in recent years due to their increasing market penetration. These measures help alleviate the burden on the power system, especially in distribution grids, where bottlenecks are most prevalent. Properly managed, electric vehicles are an attractive asset for distribution system operators, with the potential to provide grid services. In this thesis, first, two demand-side management methods typically reported in the literature are systematically investigated: a voltage droop control-based approach and a market-driven approach. Then, a decentralized autonomous demand-side management scheme for electric vehicle charging scheduling is proposed that relies on a unidirectionally communicated grid-induced signal. For all topics considered, the implications for distribution grid operation are evaluated using time series load flow simulations performed for representative Austrian distribution grids. Droop control mechanisms, which require no communication, are discussed for electric vehicle charging control. The method provides an economically viable solution at all penetration levels if electric vehicles charge at low nominal power rates. However, given current market trends in residential charging equipment, especially in the European context where most equipment is designed for 11 kW charging, the long-run technical feasibility of the method is debatable. As electricity demand strongly correlates with energy prices, a linear optimization algorithm is proposed to minimize charging costs, using next-day market prices as the grid-induced incentive function under the assumption of perfect user predictions. Constraints on the state of charge guarantee that the energy required for driving is delivered without failure.
An average energy cost saving of 30 % is realized at all penetration levels. Nevertheless, the avalanche effect caused by simultaneous charging during low-price periods introduces new power peaks exceeding those of uncontrolled charging, which obstructs the grid-friendly integration of electric vehicles.
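The cost-minimizing scheduling described above can be illustrated with a simple stand-in: for constant prices per slot, the linear program reduces to filling the cheapest available slots until the driving energy requirement is met. The function below is such a hedged sketch; the price values, slot count, and 11 kW limit are illustrative assumptions, not data from the thesis. It also makes the avalanche effect visible: every vehicle facing the same prices picks the same cheap slots.

```python
# Greedy stand-in for the price-based charging optimization:
# fill the cheapest available slots until the required energy is delivered.
# Prices, availability, and power limit below are illustrative assumptions.

def schedule_charging(prices, available, p_max_kw, energy_req_kwh, dt_h=1.0):
    """Return a per-slot charging power plan (kW) and any unmet energy."""
    order = sorted((i for i in range(len(prices)) if available[i]),
                   key=lambda i: prices[i])          # cheapest slots first
    plan = [0.0] * len(prices)
    remaining = energy_req_kwh
    for i in order:
        if remaining <= 0.0:
            break
        e = min(p_max_kw * dt_h, remaining)          # cap by charger power
        plan[i] = e / dt_h
        remaining -= e
    return plan, remaining

prices = [40, 35, 22, 18, 19, 30, 45, 50]   # EUR/MWh, illustrative day-ahead prices
avail = [True] * 8                          # vehicle plugged in for all slots
plan, unmet = schedule_charging(prices, avail, p_max_kw=11.0, energy_req_kwh=30.0)
# charging concentrates in the three cheapest slots (indices 3, 4, 2)
```

Because all vehicles receive the identical incentive, their plans coincide, which is exactly the simultaneous-charging peak the abstract reports.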
The electricity demand due to the increasing number of EVs presents new challenges for the operation of the electricity network, especially for distribution grids. The existing grid infrastructure may not be sufficient to meet the new demands imposed by the integration of EVs; EV charging may therefore lead to reliability and stability issues, especially during peak demand periods. Demand side management (DSM) is a promising approach to mitigate the resulting impacts. In this work, we developed an autonomous DSM strategy for optimal charging of EVs that minimizes the charging cost, and we conducted a simulation study to evaluate the impacts on grid operation. The proposed approach only requires a one-way communicated incentive. Real profiles from an Austrian study on mobility behavior are used to simulate EV usage. Furthermore, real smart meter data are used to simulate the household base load profiles, and a real low voltage grid topology is considered in the load flow simulation. Day-ahead electricity stock market prices are used as the incentive driving the optimization. The optimum charging strategy is determined and compared to uncontrolled EV charging, showing a potential cost saving of about 30.8 %. Although autonomous DSM of EVs achieves the pursued load shift, it may substantially affect distribution grid operation: we show that, under real-time price driven operation, voltage drops and elevated peak-to-average powers result from the coincident charging of vehicles during favourable time slots.
A new software tool, called AWG-Channel-Spacing, is developed to calculate the accurate channel spacing of an arrayed waveguide gratings (AWG) optical multiplexer/demultiplexer. The tool was developed with the Qt application framework in C++ and evaluated with the design of a 20-channel, 200 GHz AWG. The simulated transmission characteristics achieved confirm the correct functionality of the tool.
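A core step in any such channel-spacing calculation is converting a frequency grid into a wavelength spacing. The snippet below is a minimal sketch of the standard first-order conversion, not the AWG-Channel-Spacing tool itself; it only reproduces the 20-channel, 200 GHz design point mentioned above.

```python
# First-order conversion of a frequency channel grid to wavelength spacing,
# d_lambda ≈ lambda^2 * d_f / c — a sketch, not the actual tool's algorithm.

C = 299_792_458.0  # speed of light in vacuum, m/s

def channel_spacing_nm(center_wavelength_nm, spacing_ghz):
    """Wavelength spacing (nm) for a given frequency spacing at a center wavelength."""
    lam_m = center_wavelength_nm * 1e-9
    return lam_m ** 2 * (spacing_ghz * 1e9) / C * 1e9  # convert back to nm

# 200 GHz spacing at 1550 nm, as in the 20-channel evaluation design:
dl = channel_spacing_nm(1550.0, 200.0)   # ≈ 1.60 nm per channel
```

Because the conversion depends on the square of the wavelength, the spacing in nanometres drifts slightly across a wide channel plan, which is why a dedicated tool computes it per channel rather than once.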
Using a simple femtosecond laser process, we fabricated metal-oxide/gold composite films for electrical and optical gas sensors. We designed a dripple wavelength AWG spectrometer matched to the plasma absorption wavelength region of the composite films. The H2/CO absorptions fit well with the AWG design for multi-gas detection sensor arrays.
A new software tool, called AWG-Wuckler, is developed to calculate the geometric parameters of arrayed waveguide grating (AWG) structures for telecommunication and medical applications. These parameters are crucial for an AWG layout that is subsequently created and simulated using commercial photonic design tools. The AWG design process is complex because the geometric dimensions depend on a large number of input design parameters. Often, geometric constraints require an adjustment of the input design parameters and vice versa. Calculating and adjusting the geometric parameters is a time-consuming process that is currently not fully supported by any commercial photonic tool. The AWG-Wuckler tool overcomes this issue and offers a fast and easy-to-use solution. It has already been applied in various AWG designs and is technologically well proven.
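One example of the geometric parameters such a tool derives is the constant path-length increment between adjacent arrayed waveguides. The sketch below shows only this single textbook relation under assumed values for the diffraction order and effective index; it is an illustration of the kind of calculation involved, not AWG-Wuckler's actual implementation.

```python
# Constant path-length increment between adjacent AWG arms:
# dL = m * lambda_c / n_c (textbook relation).
# The order and effective index below are illustrative assumptions.

def path_length_increment_um(order_m, center_wavelength_um, n_eff):
    """Path difference dL (µm) so adjacent arms differ by m wavelengths at lambda_c."""
    return order_m * center_wavelength_um / n_eff

dL = path_length_increment_um(order_m=30, center_wavelength_um=1.55, n_eff=1.45)
# ≈ 32.07 µm per arm for these assumed values
```

The interdependence the abstract describes shows up even here: changing the diffraction order or the waveguide's effective index immediately changes dL, and through it the free spectral range and overall chip footprint, so the inputs usually need several adjustment iterations.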
Back to the future of gaming
(2014)
Pooled data from published reports on infants with clinically diagnosed vitamin B12 (B12) deficiency were analyzed with the purpose of describing the presentation, diagnostic approaches, and risk factors for the condition to inform prevention strategies. An electronic (PubMed database) and manual literature search following the PRISMA approach was conducted (preregistration with the Open Science Framework, accessed on 15 February 2023). Data were described and analyzed using correlation analyses, Chi-square tests, ANOVAs, and regression analyses, and 102 publications (292 cases) were analyzed. The mean age at first symptoms (anemia, various neurological symptoms) was four months; the mean time to diagnosis was 2.6 months. Maternal B12 at diagnosis, exclusive breastfeeding, and a maternal diet low in B12 predicted infant B12, methylmalonic acid, and total homocysteine. Infant B12 deficiency is still not easily diagnosed. Methylmalonic acid and total homocysteine are useful diagnostic parameters in addition to B12 levels. Since maternal B12 status predicts infant B12 status, it would probably be advantageous to target women in early pregnancy or even preconceptionally to prevent infant B12 deficiency, rather than to rely on newborn screening that often does not reliably identify high-risk children.