Refine
Year of publication
Document Type
- Article (58)
- Conference Proceeding (19)
- Part of a Book (2)
- Working Paper (2)
- Book (1)
- Doctoral Thesis (1)
- Other (1)
Institute
- Forschungszentrum Mikrotechnik (25)
- Forschungszentrum Business Informatics (20)
- Forschungszentrum Energie (18)
- Technik | Engineering & Technology (12)
- Department of Computer Science (dissolved at the end of 2021; integrated into the parent organizational unit Technik | Engineering & Technology) (11)
- Forschungszentrum Human Centred Technologies (8)
- Soziales & Gesundheit (6)
- Forschungszentrum Digital Factory Vorarlberg (4)
- Josef Ressel Zentrum für Intelligente Thermische Energiesysteme (4)
- Forschung (2)
Language
- English (84)
Has Fulltext
- yes (84)
Is part of the Bibliography
- yes (84)
Keywords
- Global optimization (3)
- Peripheral arterial disease (3)
- Rastrigin function (3)
- Y-branch splitter (3)
- Bubble column humidifier (2)
- Cloud manufacturing (2)
- Demand response (2)
- Demand side management (2)
- Desalination (2)
- Distribution grids (2)
The utilization of lasers in dentistry has expanded greatly in recent years. For instance, fs-lasers are effective for both drilling and caries prevention, while cw-lasers are useful for adhesive hardening. A cutting-edge application of lasers in dentistry is the debonding of veneers. While there are pre-existing tools for this purpose, there is still potential for improvement. Initial efforts to investigate laser-assisted debonding mechanisms through measurements of the optical and mechanical properties of teeth and prosthetic ceramics are presented. Preliminary tests conducted with a commercially available laser system used for debonding showed differences between the output power set at the system's console and that measured at specified distances from the handpiece. Furthermore, the optical properties of the samples (human teeth and ceramics) were characterised. The optical properties of the ceramics should closely resemble those of teeth in terms of look and feel, but they also influence the laser-assisted debonding technique and thus must be taken into account. In addition, first attempts were made to investigate the mechanical properties of the samples by means of pump-probe elastography under a microscope. By analyzing the sample surface up to 20 ns after the impact of a fs-laser pulse, pressure and shock waves could be detected, which can be utilized to determine the elastic constants of specific materials. Together, such investigations form the basis for a purely optical approach to the debonding of veneers utilizing acoustic waves.
Power plant operators increasingly rely on predictive models to diagnose and monitor their systems. Data-driven prediction models are generally simple and can have high precision, making them superior to physics-based or knowledge-based models, especially for complex systems like thermal power plants. However, the accuracy of data-driven predictions depends on (1) the quality of the dataset, (2) a suitable selection of sensor signals, and (3) an appropriate selection of the training period. In some instances, redundancies and irrelevant sensors may even reduce the prediction quality.
We investigate ideal configurations for predicting the live steam production of a solid fuel-burning thermal power plant in the pulp and paper industry for different modes of operation. To this end, we benchmark four machine learning algorithms on two feature sets and two training sets to predict steam production. Our results indicate that with the best possible configuration, a coefficient of determination of R^2 = 0.95 and a mean absolute error of MAE = 1.2 t/h are reached at an average steam production of 35.1 t/h. On average, using a dynamic dataset for training lowers MAE by 32% compared to a static dataset for training. A feature set based on expert knowledge lowers MAE by an additional 32% compared to a simple feature set representing the fuel inputs. We conclude that based on the static training set and the basic feature set, machine learning algorithms can identify long-term changes. When using a dynamic dataset, the performance parameters of thermal power plants are predicted with high accuracy, allowing short-term problems to be detected.
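The two performance figures reported above follow the standard definitions of the mean absolute error and the coefficient of determination. A minimal sketch of both metrics, using illustrative steam-production values rather than the paper's data:

```python
def mae(y_true, y_pred):
    # mean absolute error: average magnitude of the residuals
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# illustrative steam-production values in t/h (not the paper's data)
y_true = [34.0, 35.5, 36.0, 33.5, 36.5]
y_pred = [33.2, 35.0, 36.8, 34.1, 35.9]
print(round(mae(y_true, y_pred), 2))  # 0.66
print(round(r2(y_true, y_pred), 3))   # 0.664
```

Any of the four benchmarked regressors can be scored with these two functions, which makes the comparison across feature sets and training sets directly reproducible.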
Highly-sensitive single-step sensing of levodopa by swellable microneedle-mounted nanogap sensors
(2023)
Microneedle (MN) sensing of biomarkers in interstitial fluid (ISF) can overcome the challenges of self-diagnosis of diseases by a patient, such as blood sampling, handling, and measurement analysis. However, MN sensing technologies still suffer from poor measurement accuracy due to the small amount of target molecules present in ISF, and require multiple steps of ISF extraction, ISF isolation from the MN, and measurement with additional equipment. Here, we present a swellable MN-mounted nanogap sensor that can be inserted into the skin tissue, absorb ISF rapidly, and measure biomarkers in situ by amplifying the measurement signals via redox cycling in nanogap electrodes. We demonstrate that the MN-nanogap sensor measures levodopa (LDA), a medication for Parkinson's disease, down to 100 nM in an aqueous solution, and 1 μM in both a skin-mimicking gelatin phantom and porcine skin.
Organic acidurias (OAs), urea-cycle disorders (UCDs), and maple syrup urine disease (MSUD) belong to the category of intoxication-type inborn errors of metabolism (IT-IEM). Liver transplantation (LTx) is increasingly utilized in IT-IEM. However, its impact has mainly been assessed on clinical outcome measures and rarely on health-related quality of life (HRQoL). The aim of the study was to investigate the impact of LTx on HRQoL in IT-IEM. This single-center prospective study involved 32 patients (15 OA, 11 UCD, 6 MSUD; median age at LTx 3.0 years, range 0.8–26.0). HRQoL was assessed pre/post transplantation by the PedsQL General Module 4.0 and by MetabQoL 1.0, a tool specifically designed for IT-IEM. PedsQL highlighted significant post-LTx improvements in total and physical functioning in both patients' and parents' scores. According to age at transplantation (≤3 vs. >3 years), younger patients showed higher post-LTx scores on Physical (p = 0.03), Social (p < 0.001), and Total (p = 0.007) functioning. MetabQoL confirmed significant post-LTx changes in Total and Physical functioning in both patients' and parents' scores (p ≤ 0.009). Differently from PedsQL, MetabQoL Mental (patients p = 0.013, parents p = 0.03) and Social scores (patients p = 0.02, parents p = 0.012) were significantly higher post-LTx. Significant improvements (p = 0.001–0.04) were also detected in both self- and proxy-reports for almost all MetabQoL subscales. This study shows the importance of assessing the impact of transplantation on HRQoL, a meaningful outcome reflecting patients' wellbeing. LTx is associated with significant improvements of HRQoL in both self- and parent reports. The comparison between PedsQL-GM and MetabQoL highlighted that MetabQoL demonstrated higher sensitivity in the assessment of disease-specific domains than the generic PedsQL tool.
Long-Term outcome of infantile onset pompe disease patients treated with enzyme replacement therapy
(2024)
Background: Enzyme replacement therapy (ERT) with recombinant human alglucosidase alfa (rhGAA) was approved in Europe in 2006. Nevertheless, data on the long-term outcome of infantile onset Pompe disease (IOPD) patients at school age is still limited.
Objective: We analyzed in detail cardiac, respiratory, motor, and cognitive function of 15 German-speaking patients aged 7 and older who started ERT at a median age of 5 months.
Results: Starting dose was 20 mg/kg biweekly in 12 patients, 20 mg/kg weekly in 2, and 40 mg/kg weekly in one patient. CRIM-status was positive in 13 patients (86.7%) and negative or unknown in one patient each (6.7%). Three patients (20%) received immunomodulation. Median age at last assessment was 9.1 (7.0–19.5) years. At last follow-up 1 patient (6.7%) had mild cardiac hypertrophy, 6 (42.9%) had cardiac arrhythmias, and 7 (46.7%) required assisted ventilation. Seven patients (46.7%) achieved the ability to walk independently and 5 (33.3%) were still ambulatory at last follow-up. Six patients (40%) were able to sit without support, while the remaining 4 (26.7%) were tetraplegic. Eleven patients underwent cognitive testing (Culture Fair Intelligence Test), while 4 were unable to meet the requirements for cognitive testing. Intelligence quotients (IQs) ranged from normal (IQ 117, 102, 96, 94) in 4 patients (36.4%) to mild developmental delay (IQ 81) in one patient (9.1%) to intellectual disability (IQ 69, 63, 61, 3x < 55) in 6 patients (54.5%). White matter abnormalities were present in 10 out of 12 cerebral MRIs from 7 patients.
Measuring what matters
(2023)
Patient-reported outcomes (PROs) are generally defined as 'any report of the status of a patient's health condition that comes directly from the patient, without interpretation of the patient's response by a clinician or anyone else'. A broader definition of PRO also includes 'any information on the outcomes of health care obtained directly from patients without modification by clinicians or other health care professionals'. Following this approach, PROs encompass patients' subjective perceptions of how they function or feel, not only in relation to a health condition but also to its treatment, as well as concepts such as health-related quality of life (HrQoL), information on the functional status of a patient, signs and symptoms, and symptom burden. PRO measurement instruments (PROMs) are mostly questionnaires and inform about what patients can do and how they feel. PROs and PROMs have not yet found unconditional acceptance and wide use in the field of inborn errors of metabolism. This review summarises the importance and usefulness of PROs in research, drug legislation and clinical care, and informs about quality standards, development, and potential methodological shortfalls of PROMs. Inclusion of PROs, measured with high-quality, well-selected PROMs, into clinical care, drug legislation, and research helps to identify unmet needs, improve quality of care, and define outcomes that are meaningful to patients. The field of IEM should open up to new methodological approaches, such as the definition of core sets of variables (including PROs) to be systematically assessed in specific metabolic conditions, and to new collaborations with PRO experts, such as psychologists, to facilitate the systematic collection of meaningful data.
X-ray microtomography is a nondestructive, three-dimensional inspection technique applied across a vast range of fields and disciplines, ranging from research to industrial, encompassing engineering, biology, and medical research. Phase-contrast imaging extends the domain of application of x-ray microtomography to classes of samples that exhibit weak attenuation, thus appearing with poor contrast in standard x-ray imaging. Notable examples are low-atomic-number materials, like carbon-fiber composites, soft matter, and biological soft tissues. We report on a compact and cost-effective system for x-ray phase-contrast microtomography. The system features high sensitivity to phase gradients and high resolution, requires a low-power sealed x-ray tube and a single optical element, and fits in a small footprint. It is compatible with standard x-ray detector technologies: in our experiments, we have observed that single-photon counting offered higher angular sensitivity, whereas flat panels provided a larger field of view. The system is benchmarked against known-material phantoms, and its potential for soft-tissue three-dimensional imaging is demonstrated on small-animal organs: a piglet esophagus and a rat heart. We believe that the simplicity of the setup we are proposing, combined with its robustness and sensitivity, will facilitate accessing quantitative x-ray phase-contrast microtomography as a research tool across disciplines, including tissue engineering, materials science, and nondestructive testing in general.
Pooled data from published reports on infants with clinically diagnosed vitamin B12 (B12) deficiency were analyzed with the purpose of describing the presentation, diagnostic approaches, and risk factors for the condition to inform prevention strategies. An electronic (PubMed database) and manual literature search following the PRISMA approach was conducted (preregistration with the Open Science Framework, accessed on 15 February 2023). Data from 102 publications (292 cases) were described and analyzed using correlation analyses, Chi-square tests, ANOVAs, and regression analyses. The mean age at first symptoms (anemia, various neurological symptoms) was four months; the mean time to diagnosis was 2.6 months. Maternal B12 at diagnosis, exclusive breastfeeding, and a maternal diet low in B12 predicted infant B12, methylmalonic acid, and total homocysteine. Infant B12 deficiency is still not easily diagnosed. Methylmalonic acid and total homocysteine are useful diagnostic parameters in addition to B12 levels. Since maternal B12 status predicts infant B12 status, it would probably be advantageous to target women in early pregnancy or even preconceptionally to prevent infant B12 deficiency, rather than to rely on newborn screening that often does not reliably identify high-risk children.
Grey-box models provide an important approach for control analysis in the Heating, Ventilation and Air Conditioning (HVAC) sector. Grey-box models consist of physical models whose parameters are estimated from data. Given the vast number of component models found in the literature, the question arises: which component models perform best on a given system or dataset? This question is investigated systematically using a test case system with real operational data. The test case system consists of an HVAC system containing an energy recovery unit (ER), a heating coil (HC) and a cooling coil (CC). For each component, several suitable model variants from the literature are adapted appropriately and implemented. Four model variants are implemented for the ER and five model variants each for the HC and CC. Furthermore, three global optimization algorithms and four local optimization algorithms for solving the nonlinear least-squares system identification problem are implemented, leading to a total of 700 combinations. The comparison of all variants shows that the global optimization algorithms do not provide significantly better solutions, while their runtimes are significantly higher. Analysis of the models shows a dependency of the model accuracy on the total number of parameters.
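The core of such a grey-box workflow is fitting the physical parameters of a component model to measured operating points in the least-squares sense. A minimal sketch for a hypothetical one-parameter heat-exchanger effectiveness model (illustrative data and model, not the paper's component variants): because the model is linear in its single parameter, the least-squares estimate has a closed form.

```python
def fit_effectiveness(t_in, t_hot, t_out):
    # grey-box model: T_out = T_in + eff * (T_hot - T_in)
    # least-squares estimate of eff (closed form, since the model
    # is linear in the parameter eff)
    num = sum((to - ti) * (th - ti) for ti, th, to in zip(t_in, t_hot, t_out))
    den = sum((th - ti) ** 2 for ti, th in zip(t_in, t_hot))
    return num / den

# hypothetical operating points in degrees C
t_in  = [10.0, 12.0, 8.0, 15.0]
t_hot = [60.0, 55.0, 58.0, 50.0]
t_out = [40.0, 38.0, 38.0, 36.0]
eff = fit_effectiveness(t_in, t_hot, t_out)  # roughly 0.60
```

For the multi-parameter variants compared in the paper there is no closed form, which is exactly where the benchmarked global and local nonlinear least-squares solvers come in.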
The production of liquid-gas mixtures with desired properties still places high demands on process technology and is usually realized in bubble columns. The physical calculation models used contain individual dimensionless factors which, depending on the application, are only valid for small ranges of flow velocity, nozzle geometry and test setup. An iterative but time-consuming design of such dispersion processes is used in industry for producing a liquid-gas mixture according to desired requirements. In the present investigation, we accelerate the necessary design loops by setting up a physical model consisting of several subsystems that are enriched by dedicated experiments, in order to realize liquid-gas dispersions with low volume fraction and small air bubble diameters in oil. Our approach allows the extraction of individual dimensionless factors from maps of the introduced subsystems. These maps allow for targeted corrective measures in a production process for maintaining quality. The calculation-based approach avoids the need for iterative design loops. Overall, this approach supports the controlled generation of liquid-gas mixtures.
Creating a schedule to perform certain actions in a real-world environment typically involves multiple types of uncertainties. To create a plan which is robust towards uncertainties, it must stay flexible while attempting to be reliable and as close to optimal as possible. A plan is reliable if an adjustment to accommodate a new requirement causes only a few disruptions. The system needs to be able to adapt the schedule if unforeseen circumstances make planned actions impossible, or if an unlikely event would enable the system to follow a better path. To handle uncertainties, the methods used need to be dynamic and adaptive. The planning algorithms must be able to re-schedule planned actions and need to adapt the previously created plan to accommodate new requirements without causing critical disruptions to other required actions.
A model is presented that allows for the calculation of the success probability by which a vanilla Evolution Strategy converges to the global optimizer of the Rastrigin test function. As a result, a population size scaling formula is derived that allows for an estimation of the population size needed to ensure high convergence security depending on the search space dimensionality.
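For readers unfamiliar with the setting, a minimal sketch of the two ingredients: the Rastrigin function and a plain (mu/mu_I, lambda)-ES with intermediate recombination. The fixed step size and all parameter values here are illustrative simplifications, not the paper's analyzed strategy.

```python
import math
import random

def rastrigin(x):
    # Rastrigin test function; global minimum 0 at the origin,
    # surrounded by a grid of local minima
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def evolution_strategy(dim=2, mu=10, lam=40, sigma=0.3, gens=100, seed=1):
    # (mu/mu_I, lam)-ES: recombine parents into a centroid, mutate,
    # keep the best mu of lam offspring (comma selection)
    rng = random.Random(seed)
    parents = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(gens):
        centroid = [sum(p[i] for p in parents) / mu for i in range(dim)]
        offspring = [[c + sigma * rng.gauss(0, 1) for c in centroid]
                     for _ in range(lam)]
        offspring.sort(key=rastrigin)
        parents = offspring[:mu]
    return min(rastrigin(p) for p in parents)

best = evolution_strategy()
```

Whether a run like this ends in the global basin or one of the many local ones is exactly the success probability the presented model quantifies, and the population size scaling formula predicts how large lambda must be to make global convergence likely as the dimensionality grows.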
Activation of heat pump flexibilities is a viable solution to support balancing the grid via Demand Side Management measures and fulfill the need for flexibility options. Aggregators, as the interface between prosumers, distribution system operators and balance responsible parties, face the challenge - due to data privacy and technical restrictions - of transforming prosumer information into aggregated available flexibility to enable trading thereof. Thereby, the literature lacks a generic, applicable and widely accepted flexibility estimation method for heat pumps which incorporates reduced sensor and system information as well as system- and demand-dependent behaviour. In this paper, we adapt and extend a method from the literature by incorporating domain knowledge to overcome reduced sensor and system information. We apply data of five real-world heat pump systems, distinguish operation modes, estimate the power and energy flexibility of each single heat pump system, prove the transferability of the method, and aggregate the available flexibilities to showcase a small HP pool as a proof of concept.
Open tracing tools
(2023)
Background: Coping with the rapidly growing complexity of contemporary software architecture, tracing has become an increasingly critical practice and has been adopted widely by software engineers. By adopting tracing tools, practitioners are able to monitor, debug, and optimize distributed software architectures easily. However, with an excessive number of valid candidates, researchers and practitioners have a hard time finding and selecting suitable tracing tools by systematically considering their features and advantages. Objective: To this purpose, this paper aims to provide an overview of popular open tracing tools via comparison. Methods: Herein, we first identified 30 tools in an objective, systematic, and reproducible manner adopting the Systematic Multivocal Literature Review protocol. Then, we characterized each tool looking at its 1) measured features, 2) popularity both in peer-reviewed literature and online media, and 3) benefits and issues. We used topic modeling and sentiment analysis to extract and summarize the benefits and issues. Specifically, we adopted ChatGPT to support the topic interpretation. Results: As a result, this paper presents a systematic comparison among the selected tracing tools in terms of their features, popularity, benefits and issues. Conclusion: The results mainly show that each tracing tool provides a unique combination of features with different pros and cons. The contribution of this paper is to provide practitioners with a better understanding of the tracing tools, facilitating their adoption.
Immersive educational spaces
(2023)
"If only we had had such opportunities to grasp history like this when I was young" - words by an almost 80-year-old woman holding an iPad on which both the buildings in the background and a tower in the form of a virtual 3D object appear within reach. To "grasp" history - what an apt use of this action-oriented word for an augmented reality application built on considerations of thinking and acting in history. This telling image emerged during the first test run of the app i.appear, which will be the focus of this article's considerations on the use of immersive learning environments. The application i.appear has been used in the city of Dornbirn (Austria) for a year now to teach historical content through location-based augmented reality and other interactive and multimedia technologies. After a brief description of the potential of such applications, the epistemological structure of the hosting app i.appear and its functionality will be outlined. This article will focus on the "Baroque Master Builders" tour of the hosting app, which was created and tested as part of the current research.
The production of liquid-gas dispersions places high demands on the process technology, which requires knowledge of the bubble formation mechanisms as well as the phase parameters of the media combinations used. To obtain the bubble sizes introduced into a flow without knowing the phase parameters, different process parameters are investigated, and their quality and applicability are evaluated. The results obtained make it possible to shorten lengthy design procedures for dispersion processes in manufacturing plants and to ensure the quality of the products manufactured by reducing waste.
In previous studies of linear rotary systems with active magnetic bearings, parametric excitation was introduced as an open-loop control strategy. The parametric excitation was realized by a periodic, in-phase variation of the bearing stiffness. At the difference between two of the eigenfrequencies of the system, a stabilizing effect, called anti-resonance, was found numerically and validated in experiments. In this work, preliminary results of further exploration of the parametric excitation are shared. A Jeffcott rotor with two active magnetic bearings and a disk is investigated. Using Floquet theory, a deeper insight into the dynamic behavior of the system is obtained. Aiming at a further increase of stability, a phase difference between excitation terms is introduced.
Vast amounts of oily wastewater are byproducts of the petrochemical and the shipping industry and to this day are frequently discharged into water bodies either without or after insufficient treatment. To alleviate the resulting pollution, water treatment processes are in great demand. Bubble column humidifiers (BCHs) as part of humidification–dehumidification systems are predestined for such a task, since they are insensitive to different feed liquids, simple in design and have low maintenance requirements. While humidification in a bubble column has been investigated extensively for desalination, a systematic investigation of oily wastewater treatment is missing in the literature. We filled this gap by analyzing the treatment of an oil–water emulsion experimentally to derive recommendations for the future design and operation of BCHs. Our humidity measurements indicate that the air stream is always saturated after humidification for a liquid height of only 10 cm. A residual water mass fraction of 3.5 wt% is measured after a batch run of six hours. Furthermore, continuous measurements show that an increase in oil mass fraction leads to a decrease in system productivity, especially for high oil mass fractions. This decrease is caused by the heterogeneity of the liquid temperature profile. A lower liquid height mitigates this heterogeneity, thereby decreasing the heat demand and improving the overall efficiency. The oil content of the produced condensate is below 15 ppm, allowing discharge into various water bodies. The results of our systematic investigation prove suitability and indicate a strong future potential for the use of BCHs in oily wastewater treatment.
Industrial demand side management has shown significant potential to increase the efficiency of industrial energy systems via flexibility management by model-driven optimization methods. We propose a grey-box model of an industrial food processing plant. The model relies on physical and process knowledge as well as mass and energy balances. The model parameters are estimated using a prediction error method. Optimization methods are applied to separately reduce the total energy consumption, the total energy costs and the peak electricity demand of the plant. A viable potential for demand side management in the plant is identified by increasing the energy efficiency, shifting cooling power to low-price periods or reducing peak load.
A trend from centralized to decentralized production is emerging in the manufacturing domain, leading to new and innovative approaches for long-established production methods. A technology supporting this trend is Cloud Manufacturing, which adapts technologies and concepts known from cloud computing to the manufacturing domain. A core aspect of Cloud Manufacturing is representing knowledge about manufacturing, e.g., machine capabilities, in a suitable form. This knowledge representation should be flexible and adaptable so that it fits across various manufacturing domains, but, at the same time, should also be specific and exhaustive. We identify three core capabilities that such a platform has to support, i.e., the product, the process and the production. We propose representing this knowledge in semantically specified knowledge graphs, essentially creating three ontologies, interconnected through features, each representing one facet of manufacturing. Finally, we present an exemplary implementation of a Cloud Manufacturing platform using this representation and its advantages.
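The idea of interconnected product, process, and production ontologies can be illustrated with subject-predicate-object triples. A minimal sketch with hypothetical node and edge names (not the platform's actual vocabulary):

```python
# illustrative triple store linking the three facets: a product requires a
# process, which is provided by a production resource
triples = {
    ("Bracket_A", "requiresProcess", "Milling"),
    ("Milling", "providedBy", "Machine_17"),
    ("Machine_17", "locatedAt", "Plant_1"),
}

def objects(subject, predicate, graph):
    # follow one edge type from a subject node
    return {o for s, p, o in graph if s == subject and p == predicate}

# which production resources can provide the process a product requires?
process = objects("Bracket_A", "requiresProcess", triples).pop()
machines = objects(process, "providedBy", triples)  # {"Machine_17"}
```

In practice such graphs would be expressed in a semantic web stack (e.g., RDF/OWL) so that capability matching can be delegated to standard reasoners rather than hand-written traversals.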
Bubble column humidifiers (BCHs) are frequently used for the humidification of air in various water treatment applications. A potential but not yet profoundly investigated application of such devices is the treatment of oily wastewater. To evaluate this application, the accumulation of an oil-water emulsion using a BCH is experimentally analyzed. The amount of evaporating water vapor can be evaluated by measuring the humidity ratio of the outlet air. However, humidity measurements are difficult in close-to-saturated conditions, as the formation of liquid droplets on the sensor impacts the measurement accuracy. We use a heating section after the humidifier, such that no liquid droplets are formed on the sensor. This enables a more accurate humidity measurement. Two batch measurement runs are conducted with (1) tap water and (2) an oil-water emulsion as the respective liquid phase. The humidity measurement in high humidity conditions is highly accurate, with an error margin of below 3 %, and can be used to predict the oil concentration of the remaining liquid during operation. The measured humidity ratio corresponds with the removed amount of water vapor for both tap water and the accumulation of an oil-water emulsion. Our measurements show that the residual water content in the oil-water emulsion is below 4 %.
Grid-scale electrical energy storage (EES) is a key component in cost-effective transition scenarios to renewable energy sources. The requirement of scalability favors EES approaches such as pumped-storage hydroelectricity (PSH) or compressed-air energy storage (CAES), which utilize the cheap and abundant storage materials water and air, respectively. To overcome the site restriction and low volumetric energy densities attributed to PSH and CAES, liquid-air energy storage (LAES) has been devised; however, it suffers from a rather small round-trip efficiency (RTE) and challenging storage conditions. Aiming to overcome these drawbacks, a novel system for EES is developed using solidified air (i.e., clathrate hydrate of air) as the storable phase of air. A reference plant for solidified-air energy storage (SAES) is conceptualized and modeled thermodynamically using the software CoolProp for water and air, as well as empirical data and first-order approximations for the solidified air (SA). The reference plant exhibits an RTE of 52% and a volumetric storage density of 47 kWh per m³ of SA. While this energy density amounts to only one half of that in LAES plants, the modeled RTE of SAES is already comparable. Since improved thermal management and the use of thermodynamic promoters can further increase the RTEs in SAES, the technical potential of SAES is already in place. Yet, for a successful implementation of the concept - in addition to economic aspects - questions regarding the stability of SA must first be clarified and challenges related to the processing of SA resolved.
The impact of global warming and climate change has forced countries to introduce strict policies and decarbonization goals toward sustainable development. To achieve the decarbonization of the economy, a substantial increase of renewable energy sources is required to meet energy demand and to transition away from fossil fuels. However, renewables are sensitive to environmental conditions, which may lead to imbalances between energy supply and demand. Battery energy storage systems are gaining more attention for balancing energy systems in existing grid networks at various levels, such as bulk power management, transmission and distribution, and for end-users. Integrating battery energy storage systems with renewables can also solve reliability issues related to transient energy production and provide a buffer source for electric vehicle fast charging. Despite these advantages, batteries are still expensive and typically built for a single application - either an energy- or a power-dense application - which limits economic feasibility and flexibility. This paper presents a theoretical approach to a hybrid energy storage system that utilizes both energy- and power-dense batteries serving multiple grid applications. The proposed system will employ second-use electric vehicle batteries in order to maximise their remaining potential and reduce battery waste. The approach is based on a survey of battery modelling techniques and control methods. It was found that equivalent circuit models as well as unified control methods are best suited for modelling hybrid energy storage for grid applications. This approach to hybrid modelling is intended to help accelerate the renewable energy transition by providing reliable energy storage.
Increasing electric vehicle penetration leads to undesirable peaks in power if no proper coordination of charging is implemented. We tested the feasibility of electric vehicles acting as flexible demands responding to power signals to minimize the system peaks. The proposed hierarchical autonomous demand side management algorithm is formulated as an optimal power tracking problem. The distribution grid operator determines a power signal for filling the valleys in the non-electric-vehicle load profile using the electric vehicle demand flexibility and sends it to all electric vehicle controllers. After receiving the control signal, each electric vehicle controller re-scales it to the expected individual electric vehicle energy demand and determines the optimal charging schedule to track the re-scaled signal. No information concerning the electric vehicles is reported back to the utility; hence the approach can be implemented using unidirectional communication with reduced infrastructural requirements. The achieved results show that the optimal power tracking approach has the potential to eliminate additional peak demands induced by electric vehicle charging and performs comparably to its central implementation. The reduced complexity and computational overhead also permit convenient deployment in practice.
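The controller-side logic described above can be sketched in a few lines: re-scale the broadcast signal to the vehicle's own energy demand, then follow it as closely as the charger limit allows. This is a simplified illustration with hypothetical numbers, not the paper's optimal tracking formulation (which would solve the scheduling as an optimization problem over the full horizon).

```python
def rescale_signal(signal, energy_demand):
    # re-scale the operator's valley-filling signal to one EV's
    # expected energy demand
    total = sum(signal)
    return [s * energy_demand / total for s in signal]

def track(signal, p_max):
    # greedy tracking: charge at the re-scaled signal level, capped by
    # the charger limit, carrying unmet energy over to later intervals
    schedule, carry = [], 0.0
    for s in signal:
        p = min(s + carry, p_max)
        carry += s - p
        schedule.append(p)
    return schedule

# hypothetical valley-filling signal (kW per interval) and an 11 kWh demand
signal = rescale_signal([2.0, 6.0, 8.0, 4.0], 11.0)
schedule = track(signal, p_max=4.0)
```

Note that the vehicle never sends anything back: both functions consume only the broadcast signal and local state, which is what makes unidirectional communication sufficient.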
Violation-mitigation-based method for PV hosting capacity quantification in low voltage grids
(2022)
Hosting capacity knowledge is of great importance for distribution utilities to assess the amount of PV capacity that can be accommodated without troubling the operation of the grid. In this paper, a novel method to quantify the hosting capacity of low voltage grids is presented. The method starts from a state of fully exploited building rooftop solar potential. A downward process is proposed - from this starting state, with expected violations of grid operation, to a state with no violations. In this process, the installed PV capacity is progressively reduced. The reductions are made sequentially and selectively, aiming to mitigate specific violations: node overvoltage, line overcurrent and transformer overloading. Evaluated on real data from fourteen low voltage grids in Austria, the proposed method exhibits benefits in terms of higher hosting capacities and lower computational costs compared to stochastic methods. Furthermore, it also quantifies hosting capacity expansions achievable by overcoming the effects of the violations. The usage of a potential other than rooftop solar is also presented, demonstrating that a user-defined potential makes it possible to quantify the hosting capacity in a more general setting with the proposed method.
Traditional power grids are mainly based on centralized power generation and subsequent distribution. The increasing penetration of distributed renewable energy sources and the growing number of electrical loads are creating difficulties in balancing supply and demand and threaten the secure and efficient operation of power grids. At the same time, households hold an increasing amount of flexibility, which can be exploited by demand-side management to decrease customer cost and support grid operation. Compared to the collection of individual flexibilities, aggregation reduces optimization complexity, protects households’ privacy, and lowers the communication effort. In mathematical terms, each flexibility is modeled by a set of power profiles, and the aggregated flexibility is modeled by the Minkowski sum of individual flexibilities. As the exact Minkowski sum calculation is generally computationally prohibitive, various approximations can be found in the literature. The main contribution of this paper is a comparative evaluation of several approximation algorithms in terms of novel quality criteria, computational complexity, and communication effort using realistic data. Furthermore, we investigate the dependence of selected comparison criteria on the time horizon length and on the number of households. Our results indicate that none of the algorithms performs satisfactorily in all categories. Hence, we provide guidelines on the application-dependent algorithm choice. Moreover, we demonstrate a major drawback of some inner approximations, namely that they may lead to situations in which not using the flexibility is impossible, which may be suboptimal in certain situations.
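To make the Minkowski-sum modeling concrete: in the special case where each household's flexibility is an axis-aligned box of power profiles (independent per-timestep power bounds), the Minkowski sum is exact and is obtained by adding the bounds elementwise. The sketch below illustrates only this easy case; the approximation algorithms compared in the paper are needed precisely because realistic flexibilities (e.g. with energy coupling across timesteps) are general polytopes, for which this shortcut does not apply.

```python
import numpy as np

def minkowski_sum_boxes(bounds):
    """Minkowski sum of axis-aligned boxes: add lower and upper
    power bounds elementwise across households."""
    lows = np.sum([lo for lo, _ in bounds], axis=0)
    highs = np.sum([hi for _, hi in bounds], axis=0)
    return lows, highs

# Hypothetical 2-timestep power bounds (kW) for two households.
hh1 = (np.array([0.0, 0.0]), np.array([2.0, 3.0]))
hh2 = (np.array([-1.0, 0.0]), np.array([1.0, 2.0]))
lo, hi = minkowski_sum_boxes([hh1, hh2])
```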
Bubble columns have recently been used for the humidification of air in water treatment systems and fuel cells. They are well suited for this purpose due to their excellent heat and mass transfer and their low technical complexity. To design and operate such devices with high efficiency, the humidification process and the impact of the operating parameters need to be understood to a sufficient degree. To extend this knowledge, we use a refined and novel method to determine the volumetric air–liquid heat and mass transfer coefficients and the humidifier efficiency for various parametric settings. The volumetric transfer coefficients increase with both the superficial air velocity and the liquid temperature. It is further shown that the decrease of vapor pressure with increasing salinity results in a corresponding decrease in the outlet humidity ratio. In contrast to previous studies, liquid heights smaller than 0.1 m are investigated, and significant changes in the humidifier efficiency are seen in this range. We present the expected humidifier efficiency with respect to the superficial air velocity and the liquid height in an efficiency chart, such that optimal operating conditions can be determined. Based on this efficiency chart, recommendations for industrial applications as well as future scientific challenges are derived.
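A minimal sketch of a humidifier efficiency of the kind charted above: the achieved rise in humidity ratio divided by the maximum possible rise (saturation at the liquid temperature). The Magnus saturation-pressure approximation and all numeric inputs are assumptions for illustration, not values from the paper.

```python
import math

def p_sat(T_c):
    """Saturation vapor pressure in Pa (Magnus approximation)."""
    return 611.2 * math.exp(17.62 * T_c / (243.12 + T_c))

def humidity_ratio(p_v, p=101325.0):
    """Humidity ratio (kg vapor per kg dry air) from vapor pressure."""
    return 0.622 * p_v / (p - p_v)

def efficiency(w_in, w_out, T_liquid_c):
    """Achieved humidity-ratio rise over the maximum possible rise."""
    w_sat = humidity_ratio(p_sat(T_liquid_c))
    return (w_out - w_in) / (w_sat - w_in)

w_in = humidity_ratio(p_sat(20.0)) * 0.4  # inlet air at 20 degC, ~40% RH (assumed)
w_out = 0.020                             # measured outlet humidity ratio (assumed)
eff = efficiency(w_in, w_out, T_liquid_c=30.0)
```

With these example values the efficiency comes out around two-thirds; the abstract's efficiency chart maps such values over superficial air velocity and liquid height.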
If left uncontrolled, electric vehicle charging poses severe challenges to distribution grid operation. The resulting issues are expected to be mitigated by charging control. In particular, voltage-based charging control, by relying only on local measurements of the voltage at the point of connection, provides an autonomous, communication-free solution. The controller, attached to the charging equipment, compares the measured voltage to a reference voltage and adapts the charging power using a droop control characteristic. We present a systematic study of the voltage-based droop control method for electric vehicles to establish the usability of the method for all currently available residential electric vehicle charging possibilities, considering a wide range of electric vehicle penetrations. Voltage limits are evaluated according to the international standard EN 50160, using long-term load flow simulations based on a real distribution grid topology and real load profiles. The results show that the voltage-based droop controller is able to mitigate undervoltage problems completely in distribution grids in cases either deploying low charging power levels or exhibiting low penetration rates. For high charging rates and high penetrations, the control mechanism improves the overall voltage profile, but it does not remedy the undervoltage problems completely. The evaluation also shows the controller’s ability to reduce the peak power at the transformer and indicates its impact on users due to the reduction in average charging rates. The outcomes of the paper provide distribution grid operators with insight into the voltage-based droop control mechanism for future grid planning and investments.
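The droop characteristic mentioned above can be sketched as a simple piecewise-linear map from measured voltage to charging power. The break points and power levels below are assumed example values, not the parameters used in the study.

```python
def droop_charging_power(v_pu, p_max=11.0, p_min=1.4,
                         v_low=0.90, v_high=0.95):
    """Voltage-based droop: full charging power above v_high, minimum
    power at or below v_low, linear interpolation in between.
    All parameters are illustrative assumptions (kW, p.u.)."""
    if v_pu >= v_high:
        return p_max
    if v_pu <= v_low:
        return p_min
    frac = (v_pu - v_low) / (v_high - v_low)
    return p_min + frac * (p_max - p_min)
```

Each charger evaluates this locally from its own voltage measurement, which is why the scheme needs no communication infrastructure.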
In recent years, ultrashort pulsed lasers have become increasingly suitable for industrial applications, as reliable femtosecond and picosecond laser sources with high output power are available on the market. Compared to conventional laser sources, they enable high-quality processing of a large number of material classes with different mechanical and optical properties. In the field of laser cutting, these properties enable the cutting of multilayer substrates with varying material properties. In this work, the femtosecond laser cutting of phosphor sheets is demonstrated. The substrate contains a 230 µm thick silicone layer filled with phosphor, which is embedded between two glass plates. Due to the softness and thermal sensitivity of the silicone layer in combination with the hard and brittle dielectric material, separating such a material combination is challenging for both mechanical separation processes and cutting with conventional laser sources. In our work, we show that the femtosecond laser is suitable for cutting the substrate with a high cutting edge quality. In addition to the experimental results of the laser dicing process, we present a universal model that allows the final cutting edge geometry of a multilayer substrate to be predicted.
Entangled photon generation at 1550 nm in the telecom C-band is of critical importance as it enables the realization of quantum communication protocols over long distance using deployed telecommunication infrastructure. InAs epitaxial quantum dots have recently enabled on-demand generation of entangled photons in this wavelength range. However, time-dependent state evolution, caused by the fine-structure splitting, currently limits the fidelity to a specific entangled state. Here, we show fine-structure suppression for InAs quantum dots using micromachined piezoelectric actuators and demonstrate generation of highly entangled photons at 1550 nm. At the lowest fine-structure setting, we obtain a maximum fidelity of 90.0 ± 2.7% (concurrence of 87.5 ± 3.1%). The concurrence remains high also for moderate (weak) temporal filtering, with values close to 80% (50%), corresponding to 30% (80%) of collected photons, respectively. The presented fine-structure control opens the way for exploiting entangled photons from quantum dots in fiber-based quantum communication protocols.
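The fidelity and concurrence figures quoted above are standard two-qubit entanglement measures. As a small illustration, Wootters' concurrence can be computed from a density matrix as follows; the example state is the ideal Bell state, not the measured quantum-dot emission.

```python
import numpy as np

def concurrence(rho):
    """Wootters' concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy          # spin-flipped state
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(evals.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Ideal Bell state (|00> + |11>)/sqrt(2): concurrence 1.
phi_plus = np.zeros(4)
phi_plus[[0, 3]] = 1 / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus)
```

A maximally mixed state gives concurrence 0, while the measured 87.5% value indicates a state close to, but not exactly, the ideal Bell state.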
Today, optics and photonics are widely regarded as among the most important key technologies for this century. Many experts even anticipate that the 21st century will be the century of the photon, much as the 20th century was the century of the electron. Optics and photonics technologies affect almost all areas of our life and cover a wide range of applications in science and industry, e.g. in information and communication technology, in medicine and life science engineering, as well as in energy and environmental technology. However, despite being so attractive, photonics is not well known to most people. To motivate the young generation in particular for optics and photonics, we developed a lecture about light for children aged eight to twelve years. We prepared many experiments to explain the nature of light and its applications in our everyday life. Finally, we focused on optical data transmission, i.e. how modern communication over optical networks works. To reach many children at home, we recorded this lecture and offered it as an online video within the children’s university at Vorarlberg University of Applied Sciences. By combining hands-on teaching with having fun while learning basic optics concepts, we aroused the interest of many children and received very positive feedback.
The increasing digitalisation of daily routines confronts people with frequent privacy decisions. However, obscure data processing often leads to tedious decision-making and results in unreflective choices that unduly compromise privacy. Serious Games could be applied to encourage teenagers and young adults to make more thoughtful privacy decisions. Creating a Serious Game (SG) that promotes privacy awareness while maintaining engaging gameplay requires, however, a carefully balanced game concept. This study explores the benefits of an online role-playing boardgame as a co-designing activity for creating SGs about privacy. In a between-subjects trial, student groups and educator/researcher groups took the roles of player, teacher, researcher, and designer to co-design a balanced privacy SG concept. Using predefined design proposal cards or creating their own, students and educators played the online boardgame during a video conference session to generate game ideas, resolve potential conflicts, and balance the different SG aspects. The comparative results of the present study indicate that students and educators alike perceive support from role-playing when ideating and balancing SG concepts and are satisfied with their playfully co-designed game concepts. Implications for supporting SG design with role-playing in remote collaboration scenarios are synthesised in conclusion.
With digitalisation, and the increased connectivity between manufacturing systems emerging in this context, manufacturing is shifting towards decentralised, distributed concepts. Still, manufacturing scenarios require manual input or augmentation of data at system boundaries. Especially in distributed manufacturing environments, such as Cloud Manufacturing (CMfg) systems, constant changes to the available manufacturing resources and products pose challenges for establishing connections between them. We propose a feature-oriented representation of concepts, especially from the manufacturing domain, which serves as the basis for (semi-)automatically linking, e.g., manufacturing resources and products. This linking methodology, as well as the knowledge inferred using it, is then used to support distributed manufacturing, especially in CMfg environments, and to enhance product development. The concepts and methodologies are to be evaluated in a real-world learning factory.
Background: Peripheral arterial disease (PAD) is a common and severe disease with highly increased cardiovascular morbidity and mortality. Due to the circulatory disorder and the associated undersupply of oxygen carriers in the lower limbs, the pain-free walking distance progressively decreases, with a significant reduction in patients’ quality of life. Studies including activity monitoring for patients with PAD are rare, and digital support to increase activity via mobile health technologies is mainly targeted at patients with cardiovascular disease in general. The special requirement of patients with PAD is the need to reach a certain pain level to improve the pain-free walking distance. Unfortunately, both poor adherence and limited availability of institutional resources are major problems in patient-centered care.
Objective: The objective of this trackPAD pilot study is to evaluate the feasibility of a mobile phone–based self-tracking app to promote physical activity and supervised exercise therapy (SET) in particular. We also aim for a subsequent patient-centered adjustment of the app prototype based on the results of the app evaluation and process evaluation.
Methods: This study was designed as a closed user group, assessor-blinded, parallel-group trial with face-to-face components for assessment and a follow-up of 3 months. Patients with symptomatic PAD (Fontaine stage IIa or IIb) and possession of a mobile phone were eligible. Eligible participants were randomly assigned to the study and control groups, stratified by their distance covered in the 6-min walk test, using the software TENALEA. Participants randomized to the study group received usual care and the mobile intervention (trackPAD) for the follow-up period of 3 months, whereas participants randomized to the control group received only usual care. TrackPAD records the frequency and duration of training sessions and the pain level using manual user input. Clinical outcome data were collected at baseline and after 3 months via validated tools (6-min walk test, ankle-brachial index, and duplex ultrasound of the lower limb arteries) and self-reported quality of life. Usability and quality of the app were determined using the user version of the Mobile Application Rating Scale.
Results: The study enrolled 45 participants with symptomatic PAD (44% male). Of these participants, 21 (47%) were randomized to the study group and 24 (53%) were randomized to the control group. The distance walked in the 6-min walk test was comparable in both groups at baseline (study group: mean 368.1 m [SD 77.6] vs control group: mean 394.6 m [SD 100.6]).
Conclusions: This is the first trial to test a mobile intervention called trackPAD that was designed especially for patients with PAD. Its results will provide important insights in terms of feasibility, effectiveness, and patient preferences of an app-based mobile intervention supporting SET for the conservative treatment of PAD.
Background: The development of mobile interventions for noncommunicable diseases has increased in recent years. However, there is a dearth of apps for patients with peripheral arterial disease (PAD), who frequently have an impaired ability to walk.
Objective: Using a patient-centered approach for the development of mobile interventions, we aim to describe the needs and requirements of patients with PAD regarding the overall care situation and the use of mobile interventions to perform supervised exercise therapy (SET).
Methods: A questionnaire survey was conducted in addition to a clinical examination at the vascular outpatient clinic of the West-German Heart and Vascular Center of the University Clinic Essen in Germany. Patients with diagnosed PAD were asked to answer questions on sociodemographic characteristics, PAD-related need for support, satisfaction with their health care situation, smartphone and app use, and requirements for the design of mobile interventions to support SET.
Results: Overall, a need for better support of patients with diagnosed PAD was identified. In total, 59.2% (n=180) expressed their desire for more support for their disease. Patients (n=304) had a mean age of 67 years and half of them (n=157, 51.6%) were smartphone users. We noted an interest in smartphone-supported SET, even for people who did not currently use a smartphone. “Information,” “feedback,” “choosing goals,” and “interaction with physicians and therapists” were rated the most relevant components of a potential app.
Conclusions: A need for the support of patients with PAD was determined. This was particularly evident with regard to disease literacy and the performance of SET. Based on a detailed description of patient characteristics, proposals for the design of mobile interventions adapted to the needs and requirements of patients can be derived.
Background: Mobile health interventions are intended to support complex health care needs in chronic diseases digitally, but they are mainly targeted at general health improvement and neglect disease-specific requirements. Therefore, we designed TrackPAD, a smartphone app to support supervised exercise training in patients with peripheral arterial disease.
Objective: This pilot study aimed to evaluate changes in the 6-minute walking distance (meters) as a primary outcome measure. The secondary outcome measures included changes in physical activity and assessing the patients’ peripheral arterial disease–related quality of life.
Methods: This was a pilot two-arm, single-blinded, randomized controlled trial. Patients with symptomatic PAD (Fontaine stage IIa/b) and access to smartphones were eligible. Eligible participants were randomly assigned to the intervention and control groups, stratified by the distance covered in the 6-minute walking test, using the TENALEA software. Participants randomized to the intervention group received usual care and the mobile intervention (TrackPAD) for the follow-up period of 3 months, whereas participants randomized to the control group received routine care only. TrackPAD records the frequency and duration of training sessions and pain levels using manual user input. Clinical outcome data were collected at baseline and after 3 months via validated tools (the 6-minute walk test and self-reported quality of life). The usability and quality of the app were determined using the Mobile Application Rating Scale user version.
Results: The intervention group (n=19) increased their mean 6-minute walking distance (83 meters, SD 72.2), while the control group (n=20) decreased their mean distance after 3 months of follow-up (–38.8 meters, SD 53.7; P=.01). The peripheral arterial disease–related quality of life increased significantly in terms of “symptom perception” and “limitations in physical functioning.” Users’ feedback showed increased motivation and a changed attitude toward performing supervised exercise training.
Conclusions: Besides being rated as a valuable support tool by the user group, the mobile intervention TrackPAD was linked to a change in prognosis-relevant outcome measures combined with enhanced coping with the disease. The influence of mobile interventions on long-term prognosis must be evaluated in the future.
In this paper, we propose and simulate a new type of three-dimensional (3D) optical splitter based on multimode interference (MMI) for the wavelength of 1550 nm. The splitter was designed on a square base with a footprint of 20 × 20 µm², using the IP-Dip polymer as a standard material for 3D laser lithography. We present the optical field distribution in the proposed MMI splitter and the possibility of integrating it on an optical fiber. The design is aimed at a fabrication process using 3D laser lithography for forthcoming experiments.
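For orientation, the characteristic length scale of an MMI splitter follows from the standard self-imaging formulas: the beat length L_pi = 4 n_r W_e^2 / (3 lambda_0), with two-fold images (for symmetric, center-fed excitation) at 3 L_pi / 8. The sketch below is a back-of-the-envelope estimate only; the IP-Dip refractive index at 1550 nm and the use of the physical width as the effective width W_e are assumptions, not values from the paper.

```python
def beat_length(n_r, w_eff, wavelength):
    """MMI beat length L_pi = 4 * n_r * W_e^2 / (3 * lambda_0)."""
    return 4.0 * n_r * w_eff**2 / (3.0 * wavelength)

def two_fold_image_length(n_r, w_eff, wavelength):
    """Symmetric (center-fed) interference: N-fold images at
    3 * L_pi / (4 * N); here N = 2 for a 1x2 split."""
    return 3.0 * beat_length(n_r, w_eff, wavelength) / (4 * 2)

# Assumed parameters: n_r ~ 1.53 for IP-Dip near 1550 nm, 20 um width.
L = two_fold_image_length(n_r=1.53, w_eff=20e-6, wavelength=1.55e-6)  # metres
```

This yields a two-fold imaging length on the order of 200 µm for the assumed parameters, indicating the device-length scale such a design must accommodate.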
The goal of this paper is to design a low-loss 1 × 32 Y-branch optical splitter for optical transmission systems, using two different design tools employing the Beam Propagation Method. As a first step, a conventional 1 × 32 Y-branch splitter was designed and simulated in the two-dimensional environment of the OptiBPM photonic tool. The simulated optical properties feature high loss, a highly asymmetric splitting ratio, and a large size of the designed structure. In the second step of this work, we propose an optimization of the conventional splitter design, suppressing the asymmetric splitting ratio to one-third of its initial value and improving the losses by nearly 2 dB. In addition, a 50% size reduction of the designed structure was achieved. This length-optimized low-loss splitter was then modelled in the three-dimensional environment of the RSoft photonic tool, and the simulated results confirm the strong improvement of the optical properties.
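The figures of merit discussed here (loss and splitting-ratio asymmetry) are typically computed from the simulated output powers as shown below. The power values are made up for illustration and are not the paper's results.

```python
import math

def insertion_loss_db(p_in, p_outs):
    """Total insertion loss: -10*log10(sum of output powers / input power)."""
    return -10.0 * math.log10(sum(p_outs) / p_in)

def non_uniformity_db(p_outs):
    """Splitting asymmetry: 10*log10(max output / min output)."""
    return 10.0 * math.log10(max(p_outs) / min(p_outs))

# 32 hypothetical simulated output powers (arbitrary units), input power 1.0.
p_outs = [0.0280, 0.0291] + [0.0285] * 30
il = insertion_loss_db(1.0, p_outs)
nu = non_uniformity_db(p_outs)
```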
In this paper, low-loss Y-branch splitters with splitting ratios up to 1 × 128 are designed, simulated, and optimized using the 2D beam propagation method in the OptiBPM tool by Optiwave. For the optical waveguide, a silica-on-silicon material platform is used. The splitters were designed as planar structures for the telecommunication operating wavelength of 1.55 µm. According to the minimum insertion loss and minimum non-uniformity, the optimum length for each Y-branch is determined. The influence of the pre-defined S-bend waveguide shapes (Arc, Cosine, Sine) and of the waveguide core size reduction on the splitter performance has also been studied. The obtained simulation results of all designed splitters with different S-bend waveguide shapes and different waveguide core sizes are discussed and compared with each other.
In the regime of incentive-based autonomous demand response, time-dependent prices are typically used as signals from a system operator to consumers. However, this approach has been shown to be problematic from various perspectives. We clarify these shortcomings in a geometric way and thereby motivate the use of power signals instead of price signals. The main contribution of this paper consists of demonstrating, in a standard setting, that power tracking signals can control flexibilities more efficiently than real-time price signals. For comparison by simulation, German renewable energy production and German standard load profiles are used for daily production and demand profiles, respectively. As for flexibility, an energy storage system with realistic efficiencies is considered. Most critically, the new approach is able to induce consumption patterns on the demand side that real-time pricing is unable to induce. Moreover, the pricing approach is outperformed with regard to imbalance energy, peak consumption, storage variation, and storage losses, without the need for additional communication or computation efforts. It is further shown that the advantages of the optimal power tracking approach over the pricing approach increase with the extent of the flexibility. The results indicate that autonomous flexibility control by optimal power tracking is able to integrate renewable energy production efficiently, has additional benefits, and has potential for enhancements. The latter include data uncertainties, systems of flexibilities, and economic implementation.
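A minimal greedy sketch of the storage flexibility tracking a broadcast power signal (positive = charge), respecting capacity limits and charge/discharge efficiencies. This is a simplified illustration with assumed parameters, not the optimal power tracking formulation of the paper, which solves an optimization problem instead.

```python
def track(signal, cap=10.0, soc0=5.0, p_max=5.0,
          eta_c=0.95, eta_d=0.95, dt=1.0):
    """Greedy storage dispatch following a power signal.
    All parameters (kWh, kW, efficiencies) are assumed example values."""
    soc, realised = soc0, []
    for p in signal:
        p = max(-p_max, min(p_max, p))
        if p >= 0:  # charging: limited by remaining capacity
            p = min(p, (cap - soc) / (eta_c * dt))
            soc += eta_c * p * dt
        else:       # discharging: limited by stored energy
            p = max(p, -soc * eta_d / dt)
            soc += p * dt / eta_d
        realised.append(p)
    return realised, soc

realised, soc = track([4.0, 4.0, -3.0, -3.0])  # hypothetical signal (kW)
```

The realised profile deviates from the signal exactly where state-of-charge or efficiency limits bind, which is the kind of mismatch the imbalance-energy comparison in the paper quantifies.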
One goal of the project described in this paper is to create learning algorithms for machines and robots that lack a precise virtual controller for correct simulations. Using a digital twin approach, the developed mixed reality application aims to overlay a virtual robot model with its real-world counterpart using Microsoft HoloLens 2 smart glasses. The application should give users an inside look into the results of the learning algorithm and thereby let them supervise and improve those results. The main focus of this paper is the visual representation of the digital twin on the smart glasses. One of the challenges is the level of abstraction and the specific use of shaders (program code defining material attributes) to help the user differentiate between virtual and real objects. Therefore, different presentation methods are described and evaluated. Study results with 48 participants show that the most abstract representation (wireframe) scores lowest, whereas a half-transparent model works best.