1.
Front Big Data; 7: 1371518, 2024.
Article in English | MEDLINE | ID: mdl-38946939

ABSTRACT

Introduction: Hyperdimensional Computing (HDC) is a brain-inspired, lightweight machine learning method. It has received significant attention in the literature as a candidate for the wearable Internet of Things, near-sensor artificial intelligence applications, and on-device processing. HDC is computationally less complex than traditional deep learning algorithms and typically achieves moderate to good classification performance. A key aspect that determines the performance of HDC is how the input data are encoded to the hyperdimensional (HD) space. Methods: This article proposes a novel lightweight approach, relying only on native HD arithmetic vector operations, to encode binarized images; it preserves the similarity of patterns at nearby locations by using point-of-interest selection and local linear mapping. Results: The method reaches an accuracy of 97.92% on the MNIST test set and 84.62% on the Fashion-MNIST test set. Discussion: These results outperform other studies using native HDC with different encoding approaches and are on par with more complex hybrid HDC models and lightweight binarized neural networks. The proposed encoding approach also demonstrates higher robustness to noise and blur than the baseline encoding.
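For readers unfamiliar with the native vector operations this abstract relies on, below is a minimal sketch of generic HDC primitives: binding by elementwise multiplication, bundling by the sign of the sum, and a linearly interpolated position code whose neighbouring positions share most components. The dimensionality, the `position_hv` interpolation, and the value/position vocabulary are illustrative assumptions, not the paper's exact encoder.

```python
import numpy as np

D = 10_000                       # hypervector dimensionality (assumed)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector, the basic HDC symbol."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (elementwise multiplication) associates two hypervectors."""
    return a * b

def bundle(hvs):
    """Bundling (sign of the sum) superimposes hypervectors;
    ties at 0 are left as 0 for simplicity."""
    return np.sign(np.sum(hvs, axis=0))

def position_hv(i, n, lo, hi):
    """Local linear mapping: position i mixes two anchor vectors, so
    nearby positions share most of their components and stay similar."""
    split = int(D * i / max(n - 1, 1))
    return np.concatenate([hi[:split], lo[split:]])

# Encode one binarized image row as a bundle of bound (position, value) pairs.
lo, hi = random_hv(), random_hv()
value = {0: random_hv(), 1: random_hv()}
row = rng.integers(0, 2, size=28)
row_hv = bundle([bind(position_hv(i, 28, lo, hi), value[int(v)])
                 for i, v in enumerate(row)])
```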

2.
Sensors (Basel); 24(8), 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676032

ABSTRACT

Over the past few years, the scale of sensor networks has greatly expanded. This generates extended spatiotemporal datasets, which form a crucial information resource in numerous fields, ranging from sports and healthcare to environmental science and surveillance. Unfortunately, these datasets often contain missing values due to systematic or inadvertent sensor misoperation. This incompleteness hampers the subsequent data analysis, yet addressing these missing observations forms a challenging problem. This is especially the case when both the temporal correlation of timestamps within a single sensor and the spatial correlation between sensors are important. Here, we apply and evaluate 12 imputation methods to complete the missing values in a dataset originating from large-scale environmental monitoring. As part of a large citizen science project, IoT-based microclimate sensors were deployed for six months in 4400 gardens across the region of Flanders, generating 15-min recordings of temperature and soil moisture. Methods based on spatial recovery as well as time-based imputation were evaluated, including Spline Interpolation, MissForest, MICE, MCMC, M-RNN, BRITS, and others. The performance of these imputation methods was evaluated for different proportions of missing data (ranging from 10% to 50%), as well as a realistic missing value scenario. Techniques leveraging the spatial features of the data tend to outperform the time-based methods, with matrix completion techniques providing the best performance. Our results therefore provide a tool to maximize the benefit from costly, large-scale environmental monitoring efforts.
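As a hedged illustration of the comparison described above, this sketch imputes artificially removed values in a toy sensor table with one time-based method (spline interpolation along each sensor's own series) and one naive spatial stand-in (the cross-sensor mean at the same timestamp, a crude proxy for matrix completion). The table shape, sensor names, and 20% missingness are assumptions for the example, not the study's data.

```python
import numpy as np
import pandas as pd  # spline interpolation additionally requires scipy

# Hypothetical wide table: one column per sensor, 15-min timestamps.
idx = pd.date_range("2021-04-01", periods=96, freq="15min")
rng = np.random.default_rng(1)
truth = pd.DataFrame(rng.normal(15, 2, (96, 5)), index=idx,
                     columns=[f"sensor_{i}" for i in range(5)])
mask = rng.random(truth.shape) < 0.2           # knock out 20% of readings
observed = truth.mask(mask)

# Time-based baseline: cubic spline along each sensor's own series.
time_imp = observed.interpolate(method="spline", order=3,
                                limit_direction="both")

# Naive spatial stand-in: fill a gap with the mean of the other
# sensors at the same timestamp.
spatial_imp = observed.apply(lambda col: col.fillna(observed.mean(axis=1)))

for name, imp in [("time", time_imp), ("spatial", spatial_imp)]:
    rmse = np.sqrt(np.mean((imp.values[mask] - truth.values[mask]) ** 2))
    print(f"{name} RMSE: {rmse:.3f}")
```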

3.
Front Neurosci; 18: 1360300, 2024.
Article in English | MEDLINE | ID: mdl-38680445

ABSTRACT

Spiking neural networks (SNNs) distinguish themselves from artificial neural networks (ANNs) by their inherent temporal processing and spike-based computations, enabling power-efficient implementation in neuromorphic hardware. In this study, we demonstrate that data processing with spiking neurons can be enhanced by co-learning the synaptic weights with two other biologically inspired neuronal features: (1) a set of parameters describing neuronal adaptation processes and (2) synaptic propagation delays. The former allows a spiking neuron to learn how to specifically react to incoming spikes based on its past. The trained adaptation parameters result in neuronal heterogeneity, which leads to a greater variety of available spike patterns and is also found in the brain. The latter enables the network to learn explicit correlations between spike trains that are temporally distanced. Synaptic delays reflect the time an action potential requires to travel from one neuron to another. We show that each of the co-learned features separately leads to an improvement over the baseline SNN and that the combination of both leads to state-of-the-art SNN results on all speech recognition datasets investigated with a simple 2-hidden-layer feed-forward network. Our SNN outperforms the benchmark ANN on the neuromorphic datasets (Spiking Heidelberg Digits and Spiking Speech Commands), even with fewer trainable parameters. On the 35-class Google Speech Commands dataset, our SNN also outperforms a GRU of similar size. Our study presents brain-inspired improvements to SNNs that enable them to excel over an equivalent ANN of similar size on tasks with rich temporal dynamics.
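To make the two co-learned features concrete, here is a minimal forward-pass sketch of an adaptive leaky integrate-and-fire neuron with a per-neuron adaptation strength and per-synapse integer delays realised with a circular spike buffer. All constants, shapes, and the discretisation are illustrative assumptions; the paper trains these quantities with gradient-based learning, which is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, T, max_d = 4, 2, 100, 5

w     = rng.normal(0, 0.5, (n_out, n_in))      # synaptic weights (learned)
delay = rng.integers(0, max_d, (n_out, n_in))  # per-synapse delays (learned)
beta  = np.full(n_out, 0.2)                    # adaptation strength (learned)
tau_v, tau_a, v_th = 20.0, 100.0, 1.0          # illustrative time constants

v = np.zeros(n_out)                            # membrane potentials
a = np.zeros(n_out)                            # adaptation variables
buf = np.zeros((max_d, n_in))                  # circular delayed-spike buffer
in_spikes = (rng.random((T, n_in)) < 0.1).astype(float)

for t in range(T):
    buf[t % max_d] = in_spikes[t]
    # each synapse reads the input spike that arrived `delay` steps ago
    delayed = buf[(t - delay) % max_d, np.arange(n_in)]
    i_syn = (w * delayed).sum(axis=1)
    v += (-v + i_syn) / tau_v                  # leaky integration
    a += -a / tau_a                            # adaptation decay
    out = (v - beta * a) > v_th                # adaptive effective threshold
    a += out                                   # spike-triggered adaptation
    v[out] = 0.0                               # reset after spiking
```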

4.
Sensors (Basel); 23(23), 2023 Dec 03.
Article in English | MEDLINE | ID: mdl-38067961

ABSTRACT

Within the broader context of improving interactions between artificial intelligence and humans, the question has arisen whether auditory and rhythmic support could increase attention for visual stimuli that do not stand out clearly from an information stream. To this end, we designed an experiment inspired by pip-and-pop but more appropriate for eliciting attention and P3a event-related potentials (ERPs). In this study, the aim was to distinguish between targets and distractors based on the subject's electroencephalography (EEG) data. We achieved this objective by employing different machine learning (ML) methods for both individual-subject (IS) and cross-subject (CS) models. Finally, we investigated which EEG channels and time points the model used to make its predictions, using saliency maps. We successfully performed the aforementioned classification task for both the IS and CS scenarios, reaching classification accuracies up to 76%. In accordance with the literature, the model primarily used the parietal-occipital electrodes between 200 ms and 300 ms after the stimulus to make its prediction. The findings from this research contribute to the development of more effective P300-based brain-computer interfaces. Furthermore, they validate the EEG data collected in our experiment.
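The target-versus-distractor task above is, at its core, binary classification on epoched EEG. A minimal sketch of such a pipeline follows, using random stand-in data; the epoch dimensions, the flatten-and-scale featurisation, and the logistic-regression classifier are assumptions for illustration, not the paper's specific ML methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical epoched EEG: (trials, channels, samples);
# label 1 = target, 0 = distractor.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32, 128))
y = rng.integers(0, 2, size=200)

# Flatten each epoch, standardize features, fit a linear classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X.reshape(len(X), -1), y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```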


Subject(s)
Artificial Intelligence; Electroencephalography; Humans; Acoustic Stimulation; Attention; Event-Related Potentials, P300; Evoked Potentials
5.
Neural Comput; 35(12): 2006-2023, 2023 Nov 07.
Article in English | MEDLINE | ID: mdl-37844327

ABSTRACT

Hyperdimensional computing (HDC) has become popular for lightweight and energy-efficient machine learning, suitable for wearable Internet-of-Things devices and near-sensor or on-device processing. HDC is computationally less complex than traditional deep learning algorithms and achieves moderate to good classification performance. This letter proposes to extend the training procedure in HDC by taking into account not only wrongly classified samples but also samples that are correctly classified by the HDC model but with low confidence. We introduce a confidence threshold that can be tuned for each data set to achieve the best classification accuracy. The proposed training procedure is tested on the UCIHAR, CTG, ISOLET, and HAND data sets, for which the performance consistently improves compared to the baseline across a range of confidence threshold values. The extended training procedure also results in a shift toward higher confidence values for the correctly classified samples, making the classifier not only more accurate but also more confident about its predictions.
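A hedged sketch of the proposed extension: a classic HDC retraining pass updates class hypervectors on misclassified samples, and additionally on correctly classified samples whose confidence margin (top-1 minus top-2 similarity) falls below the tunable threshold. The cosine similarity, margin definition, and update rule are common HDC conventions assumed here, not necessarily the letter's exact formulation.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def retrain_epoch(class_hvs, samples, labels, conf_threshold=0.05):
    """One retraining pass over encoded samples.

    class_hvs: (n_classes, D) array of class hypervectors, updated in place.
    """
    for hv, y in zip(samples, labels):
        sims = np.array([cosine(hv, c) for c in class_hvs])
        pred = int(np.argmax(sims))
        margin = sims[pred] - np.partition(sims, -2)[-2]  # top1 - top2
        if pred != y:
            class_hvs[y] += hv           # classic mistake-driven update
            class_hvs[pred] -= hv
        elif margin < conf_threshold:    # correct but low-confidence sample
            class_hvs[y] += hv           # reinforce the true class
    return class_hvs
```

Sweeping `conf_threshold` per data set, as the abstract describes, would then select the value giving the best held-out accuracy.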

6.
J Sports Sci; 41(3): 298-306, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37139786

ABSTRACT

In this study, we investigated the relationship between age and performance in professional road cycling. We considered 1864 male riders present in the yearly top 500 ranking of ProCyclingStats (PCS) from 1993 until 2021 with more than 700 PCS points. We applied a data-driven approach to find natural clusters of rider speciality (General Classification, One Day, Sprinter or All-Rounder). For each cluster, we divided the riders into the top 50% and bottom 50% based on their total number of PCS points. The athlete's yearly performance was defined as the average number of points collected per race. Age-performance models were constructed using polynomial regression, and we found that the top 50% of the riders in each cluster peak at a significantly (p < 0.05) older age. Considering the best 50% of the riders, general classification riders peak at an older age than the other rider types (p < 0.05). For those top riders, we found ages of peak performance of 26.3, 26.5, 26.2 and 27.5 years for sprinters, all-rounders, one-day specialists and general classification riders, respectively. Our findings can be used for scouting purposes, assisting coaches in designing long-term training programmes and benchmarking the athletes' performance development.
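The age-of-peak estimate from a polynomial age-performance model reduces to finding the vertex of the fitted curve. A minimal sketch with made-up (age, points-per-race) data and a quadratic fit; the paper's degree choice and data are not reproduced here.

```python
import numpy as np

# Hypothetical (age, average points per race) pairs for one rider cluster.
age = np.array([22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32])
perf = np.array([8, 12, 17, 21, 24, 25, 24, 22, 19, 15, 11])

coeffs = np.polyfit(age, perf, deg=2)     # quadratic age-performance curve
peak_age = -coeffs[1] / (2 * coeffs[0])   # vertex of the parabola
print(f"estimated age of peak performance: {peak_age:.1f}")
```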


Subject(s)
Athletic Performance; Bicycling; Humans; Male
7.
Front Neurosci; 16: 1023470, 2022.
Article in English | MEDLINE | ID: mdl-36389242

ABSTRACT

A liquid state machine (LSM) is a biologically plausible model of a cortical microcircuit. It consists of a random, sparse reservoir of recurrently connected spiking neurons with fixed synapses and a trainable readout layer. The LSM exhibits low training complexity and enables backpropagation-free learning in a powerful, yet simple computing paradigm. In this work, the liquid state machine is enhanced with a set of bio-inspired extensions to create the extended liquid state machine (ELSM), which is evaluated on a set of speech data sets. Firstly, we ensure excitatory/inhibitory (E/I) balance to enable the LSM to operate in an edge-of-chaos regime. Secondly, spike-frequency adaptation (SFA) is introduced in the LSM to improve its memory capabilities. Lastly, neuronal heterogeneity, by means of a differentiation in time constants, is introduced to extract a richer dynamical response from the LSM. By including E/I balance, SFA, and neuronal heterogeneity, we show that the ELSM consistently improves upon the LSM while retaining the benefits of the straightforward LSM structure and training procedure. The proposed extensions led to up to a 5.2% increase in accuracy while decreasing the number of spikes in the ELSM by up to 20.2% on benchmark speech data sets. On some benchmarks, the ELSM can even attain performance similar to the current state-of-the-art in spiking neural networks. Furthermore, we illustrate that the ELSM input-liquid and recurrent synaptic weights can be reduced to 4-bit resolution without any significant loss in classification performance. We thus show that the ELSM is a powerful, biologically plausible and hardware-friendly spiking neural network model that can attain near state-of-the-art accuracy on speech recognition benchmarks for spiking neural networks.
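For readers new to the paradigm, this is a toy LSM skeleton: a fixed random reservoir with an 80/20 excitatory/inhibitory split driven by an input spike train, whose recorded states feed a trainable linear readout. The sizes, connection density, weight scales, and E/I ratio are illustrative assumptions; none of the ELSM extensions (SFA, heterogeneous time constants) are shown.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 200
exc = rng.random(N) < 0.8                       # 80/20 E/I split (assumed)
W = (rng.random((N, N)) < 0.1) * rng.normal(0.5, 0.1, (N, N))  # sparse, fixed
W[:, ~exc] *= -4.0                              # inhibitory columns
W_in = rng.normal(0, 1.0, N)                    # fixed input weights

tau, v_th = 20.0, 1.0
v = np.zeros(N)
states = np.zeros((T, N))                       # liquid state trajectory
u = rng.random(T) < 0.2                         # one toy input spike train

for t in range(T):
    spikes = v > v_th                           # threshold crossing
    v[spikes] = 0.0                             # reset
    v += (-v + W @ spikes + W_in * u[t]) / tau  # leaky integration
    states[t] = spikes

# Only the readout is trained: e.g., a linear model on the
# time-averaged liquid state of each utterance.
features = states.mean(axis=0)
```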

8.
Sensors (Basel); 22(7), 2022 Apr 02.
Article in English | MEDLINE | ID: mdl-35408346

ABSTRACT

Road weather conditions such as ice, snow, or heavy rain can have a significant impact on driver safety. In this paper, we present an approach to continuously monitor road conditions in real time by equipping a fleet of vehicles with sensors. Based on the observed conditions, a physical road weather model is used to forecast the conditions for the following hours. This can be used to deliver timely warnings to drivers about potentially dangerous road conditions. To optimally process the large data volumes, we show how artificial intelligence is used to (1) calibrate the sensor measurements and (2) retrieve relevant weather information from camera images. The output of the road weather model is compared to forecasts at road weather station locations to validate the approach.

9.
Front Sports Act Living; 3: 714107, 2021.
Article in English | MEDLINE | ID: mdl-34693282

ABSTRACT

Professional road cycling is a very competitive sport, and many factors influence the outcome of a race. These factors can be internal to the rider (e.g., psychological preparedness, physiological profile, and fitness) or external (e.g., the weather or team strategy), or even completely unpredictable (e.g., crashes or mechanical failure). This variety makes perfectly predicting the outcome of a given race impossible, and the sport all the more interesting. Nonetheless, before each race, journalists, ex-pro cyclists, websites and cycling fans try to predict the possible top 3, 5, or 10 riders. In this article, we use easily accessible data on road cycling from the past 20 years and the Machine Learning technique Learn-to-Rank (LtR) to predict the top 10 contenders for 1-day road cycling races. We accomplish this by mapping each finishing place in the first 10 positions to a relevance weight. We assess the performance of this approach on the 2018, 2019, and 2021 editions of six spring classic 1-day races. Finally, we compare the output of the framework with a mass fan prediction using the Normalized Discounted Cumulative Gain (NDCG) metric and the number of correct top 10 guesses. We found that our model, on average, performs slightly better on both metrics than the mass fan prediction. We also analyze which variables of our model have the most influence on the prediction of each race. This approach can give interesting insights to fans before a race but can also help sports coaches predict how a rider might perform compared to riders outside of the team.
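The evaluation metric mentioned above is easy to state in code. Below is a standard NDCG@k implementation applied to one hypothetical race ranking; the relevance weights (10 for the winner down to 1 for 10th place) are an illustrative choice, not necessarily the weights used in the article.

```python
import numpy as np

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one race: `relevances` holds the ground-truth weight of
    each rider in the order the model ranked them."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, len(rel) + 2))
    dcg = np.sum(rel / discounts)
    idcg = np.sum(np.sort(rel)[::-1] / discounts)  # ideal ordering
    return dcg / idcg if idcg > 0 else 0.0

# Example: the model ranked the true winner (weight 10) second.
print(ndcg_at_k([9, 10, 8, 7, 6, 5, 4, 3, 2, 1]))
```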

10.
PLoS One; 16(9): e0257215, 2021.
Article in English | MEDLINE | ID: mdl-34559812

ABSTRACT

Topological data analysis is a recent and fast-growing field that approaches the analysis of datasets using techniques from (algebraic) topology. Its main tool, persistent homology (PH), has seen a notable increase in applications in the last decade. The stability theorems, which give theoretical guarantees about noise robustness, are often cited as the most favourable property of PH and the main reason for its practical success, since real data is typically contaminated with noise or measurement errors. However, little attention has been paid to what these stability theorems mean in practice. To gain some insight into this question, we evaluate the noise robustness of PH on the MNIST dataset of greyscale images. More precisely, we investigate to what extent PH changes under typical forms of image noise, and quantify the loss of performance in classifying the MNIST handwritten digits when noise is added to the data. The results show that the sensitivity to noise of PH is influenced by the choice of filtrations and persistence signatures (respectively the input and output of PH), and in particular, that PH features are often not robust to noise in a classification task.
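As a hedged illustration of the kind of experiment described, the sketch below computes the sublevel-set persistence of a greyscale image before and after adding noise and measures how far the 0-dimensional diagrams move. It assumes the GUDHI library is installed, uses random stand-in images rather than MNIST digits, and shows only one filtration choice among the several the paper compares.

```python
import numpy as np
import gudhi  # assumes the GUDHI Python package is available

rng = np.random.default_rng(0)
img = rng.random((28, 28))              # stand-in for an MNIST digit
noisy = np.clip(img + rng.normal(0, 0.1, img.shape), 0.0, 1.0)

def h0_diagram(image):
    """Finite 0-dimensional sublevel-set persistence pairs of an image
    (one filtration choice; the paper compares several)."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=image)
    pairs = cc.persistence()
    return [p for dim, p in pairs if dim == 0 and p[1] != float("inf")]

d_clean, d_noisy = h0_diagram(img), h0_diagram(noisy)
print("H0 bottleneck distance:", gudhi.bottleneck_distance(d_clean, d_noisy))
```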


Subject(s)
Artifacts; Diagnostic Imaging/instrumentation; Image Processing, Computer-Assisted/methods; Algorithms; Animals; Humans; Mathematics; Models, Theoretical; Normal Distribution; Reproducibility of Results; Sensitivity and Specificity
11.
Sensors (Basel); 21(15), 2021 Jul 29.
Article in English | MEDLINE | ID: mdl-34372388

ABSTRACT

While IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) networks should be equipped to deal with the hard wireless challenges of industrial environments, the sensor networks are often still limited by the characteristics of the physical (PHY) layer in use. The TSCH community has therefore recently started shifting research efforts to the support of multiple PHY layers to overcome this limitation. Integrating such multi-PHY support implies, on the one hand, dealing with the PHY characteristics to fit the resource allocation in the TSCH schedule and, on the other hand, defining policies for selecting the appropriate PHY for each network link. We first propose a heuristic as a step towards a distributed PHY and parent selection mechanism for slot-bonding multi-PHY TSCH sensor networks. Additionally, we present a proposal on how this heuristic can be implemented in the IPv6 over the TSCH mode of IEEE 802.15.4e (6TiSCH) protocol stack and its Routing Protocol for Low-power and Lossy Networks (RPL) layer. Slot bonding allows the creation of different-sized bonded slots with a duration adapted to the data rate of each chosen PHY. We then propose a TSCH slot bonding implementation in the latest version of the Contiki-NG Industrial Internet of Things (IIoT) operating system. Finally, via extensive simulation results and by deploying the slot bonding implementation on a real sensor node testbed, we show that the computationally efficient parent and PHY selection mechanism approximates the packet delivery ratio (PDR) results of a near-optimal, but computationally complex, centralized scheduler.
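The core slot-bonding idea lends itself to a small arithmetic sketch: a slower PHY needs a longer bonded slot to fit the same frame, expressed as a multiple of the base timeslot. The slot length, overhead budget, and PHY rates below are illustrative placeholders, not values from the paper.

```python
# Toy slot-bonding arithmetic: how many base slots must be bonded so a
# maximum-size frame (plus turnaround overhead) fits at each PHY rate?
BASE_SLOT_MS = 10            # classic TSCH timeslot template (assumed)
FRAME_BITS = 127 * 8         # max IEEE 802.15.4 frame
OVERHEAD_MS = 4              # guard time, ACK turnaround, processing (assumed)

phys = {"O-QPSK-250k": 250_000, "FSK-50k": 50_000, "OFDM-800k": 800_000}

for name, rate_bps in phys.items():
    airtime_ms = FRAME_BITS / rate_bps * 1000 + OVERHEAD_MS
    bonded = int(-(-airtime_ms // BASE_SLOT_MS))   # ceiling division
    print(f"{name}: {airtime_ms:.1f} ms -> {bonded} bonded slot(s)")
```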

12.
Sensors (Basel); 21(13), 2021 Jun 24.
Article in English | MEDLINE | ID: mdl-34202649

ABSTRACT

IEEE 802.11 (Wi-Fi) is one of the technologies that provides high performance with a high density of connected devices, supporting emerging demanding services such as virtual and augmented reality. However, in highly dense deployments, Wi-Fi performance is severely affected by interference. This problem is even worse in newer standards, such as 802.11n/ac, where features such as Channel Bonding (CB) are introduced to increase network capacity, at the cost of using wider spectrum channels. Finding the best channel assignment in dense deployments under dynamic environments with CB is challenging, given its combinatorial nature. The use of analytical or system models to predict Wi-Fi performance after potential changes (e.g., dynamic channel selection with CB, or the deployment of new devices) is therefore not suitable, due to either low accuracy or high computational cost. This paper presents a novel, data-driven approach to speed up this process, using a Graph Neural Network (GNN) model that exploits the information carried in the deployment's topology and the intricate wireless interactions to predict Wi-Fi performance with high accuracy. The evaluation results show that preserving the graph structure in the learning process yields a 64% improvement versus a naive approach, and around 55% compared to other Machine Learning (ML) approaches, when using all training features.
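To show what "preserving the graph structure" means mechanically, here is a single message-passing layer in plain NumPy: each node (AP or STA) updates its representation from its own features and the mean of its neighbours' features over the interference graph, followed by a per-node readout. The architecture, feature meanings, and random weights are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, f_in, f_hid = 6, 4, 8

# Interference graph (symmetric) and node features (channel, power, ...).
A = (rng.random((n_nodes, n_nodes)) < 0.4)
A = np.maximum(A, A.T).astype(float)
X = rng.normal(size=(n_nodes, f_in))

W_self = rng.normal(size=(f_in, f_hid))
W_neigh = rng.normal(size=(f_in, f_hid))
W_out = rng.normal(size=(f_hid, 1))

deg = A.sum(axis=1, keepdims=True) + 1e-9
H = np.tanh(X @ W_self + (A @ X / deg) @ W_neigh)  # one message-passing layer
throughput_pred = H @ W_out                        # per-node performance readout
```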


Subject(s)
Machine Learning; Neural Networks, Computer
13.
Sensors (Basel); 21(3), 2021 Jan 20.
Article in English | MEDLINE | ID: mdl-33498389

ABSTRACT

With the emergence of 5G networks and the stringent Quality of Service (QoS) requirements of Mission-Critical Applications (MCAs), co-existing networks are expected to deliver higher-speed connections, enhanced reliability, and lower latency. IEEE 802.11 networks, which co-exist with 5G, continue to be the access choice for indoor networks. However, traditional IEEE 802.11 networks lack sufficient reliability and have non-deterministic latency. To dynamically control resources in IEEE 802.11 networks, in this paper we propose a delay-aware approach for Medium Access Control (MAC) management via airtime-based network slicing and traffic shaping, as well as user association using Multi-Criteria Decision Analysis (MCDA). To fulfill the QoS requirements, we use Software-Defined Networking (SDN) for airtime-based network slicing and seamless handovers at the Software-Defined Radio Access Network (SD-RAN), while traffic shaping is done at the Stations (STAs). In addition to throughput, channel utilization, and signal strength, our approach monitors the queueing delay at the Access Points (APs) and uses it for centralized network management. We evaluate our approach in a testbed composed of APs controlled by SD-RAN and SDN controllers, with STAs under different workload combinations. Our results show that, in addition to load balancing flows across APs, our approach avoids the ping-pong effect while enhancing QoS delivery at runtime. Under varying traffic demands, our approach maintains the queueing delay requirement of 5 ms for most of the experiment run, hence drawing closer to MCA requirements.
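Airtime-based slicing can be illustrated with a small piece of arithmetic: a slice that owns a share of the AP's airtime at a given PHY rate can carry at most roughly that share times the rate, which is what a traffic shaper would enforce. The slice names, shares, and PHY rates are made-up numbers, not the paper's configuration.

```python
# Illustrative airtime-slicing arithmetic (not the paper's controller):
# translate each slice's airtime share into a rate cap for the shaper.
slices = {                      # slice -> (airtime share, PHY rate in Mbit/s)
    "mission_critical": (0.50, 120.0),
    "video":            (0.30, 240.0),
    "best_effort":      (0.20, 60.0),
}

for name, (share, phy_rate) in slices.items():
    cap = share * phy_rate      # goodput upper bound, MAC overhead ignored
    print(f"{name}: shaper cap ~ {cap:.0f} Mbit/s")
```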

14.
Sensors (Basel); 18(11), 2018 Nov 20.
Article in English | MEDLINE | ID: mdl-30463346

ABSTRACT

Software Defined Networking (SDN) centralizes network control to improve network programmability and flexibility. Contrary to wired settings, it is unclear how to support SDN in low-power and lossy networks like typical Internet of Things (IoT) ones. Challenges encompass providing reliable in-band connectivity between the centralized controller and out-of-range nodes, and coping with the physical limitations of the highly resource-constrained IoT devices. In this work, we present Whisper, an enabler for SDN in low-power and lossy networks. The centralized Whisper controller of a network remotely controls the nodes' forwarding and cell allocation. To do so, the controller sends carefully computed routing and scheduling messages that are fully compatible with the protocols run in the network. This mechanism ensures the best possible in-band connectivity between the controller and all network nodes, capitalizing on an interface that is already supported by network devices. Whisper's internal algorithms further reduce the number of messages sent by the controller, to make the exerted control as lightweight as possible for the devices. Beyond detailing Whisper's design, we discuss compelling use cases that Whisper unlocks, including rerouting around low-battery devices and providing runtime defense against jamming attacks. We also describe how to implement Whisper in current IoT open standards (RPL and 6TiSCH) without modifying IoT devices' firmware. This shows that Whisper can implement SDN-like control for distributed low-power networks with no specific support for SDN, from legacy to next-generation IoT devices. Our testbed experiments show that Whisper successfully controls the network in both the scheduling and routing planes, with significantly less overhead than other SDN-IoT solutions, no additional latency and no packet loss.

15.
Sensors (Basel); 18(2), 2018 Feb 02.
Article in English | MEDLINE | ID: mdl-29393900

ABSTRACT

The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
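The state-based structure of such an energy model is simple to express: tally how long the device spends in each CPU/radio state during a slot and multiply by that state's current draw and the supply voltage. The per-state currents and slot timings below are hypothetical placeholders, not the measured OpenMote values from the paper.

```python
VOLTAGE = 3.0  # supply voltage in volts (assumed)

STATE_CURRENT_MA = {   # hypothetical per-state current draws
    "cpu_active": 6.0,
    "radio_rx": 20.0,
    "radio_tx": 24.0,
    "sleep": 0.002,
}

def slot_energy_uj(durations_ms):
    """Energy in microjoules for one TSCH slot, given the time (ms)
    spent in each state: mA * ms * V = microjoules."""
    return sum(STATE_CURRENT_MA[state] * ms * VOLTAGE
               for state, ms in durations_ms.items())

# Illustrative breakdown of a transmit slot with ACK reception.
tx_slot = {"cpu_active": 2.5, "radio_tx": 4.3, "radio_rx": 1.2, "sleep": 2.0}
print(f"TX slot: {slot_energy_uj(tx_slot):.1f} uJ")
```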

16.
Sensors (Basel); 17(7), 2017 Jul 04.
Article in English | MEDLINE | ID: mdl-28677617

ABSTRACT

IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) that adapts the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change its transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT). It first estimates the packet transmission interval of each station, based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can significantly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks.
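A hedged sketch of the estimation-then-grouping flow: from the packet arrivals an AP observed per station during the last beacon interval, estimate each station's transmission period, then spread stations across RAW slots. The averaging estimator, the round-robin grouping rule, and the data structures are illustrative simplifications, not TAROA's actual parameter logic.

```python
from collections import defaultdict

def estimate_intervals(rx_log):
    """rx_log: {station: [packet arrival times in ms] seen by the AP
    during the last beacon interval}. Returns mean inter-arrival gaps."""
    est = {}
    for sta, times in rx_log.items():
        if len(times) >= 2:
            gaps = [b - a for a, b in zip(times, times[1:])]
            est[sta] = sum(gaps) / len(gaps)
    return est

def assign_raw_slots(intervals, n_slots=4):
    """Spread stations over RAW slots, busiest (shortest interval) first,
    so frequent transmitters do not pile into one slot."""
    slots = defaultdict(list)
    for i, sta in enumerate(sorted(intervals, key=intervals.get)):
        slots[i % n_slots].append(sta)
    return dict(slots)

log = {"sta1": [0, 100, 200], "sta2": [5, 505], "sta3": [0, 50, 100, 150]}
print(assign_raw_slots(estimate_intervals(log)))
```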

17.
BMC Med Inform Decis Mak; 14: 97, 2014 Dec 04.
Article in English | MEDLINE | ID: mdl-25476007

ABSTRACT

BACKGROUND: The ultimate ambient-intelligent care room contains numerous sensors and devices to monitor the patient, sense and adjust the environment and support the staff. This sensor-based approach results in a large amount of data, which can be processed by current and future applications, e.g., task management and alerting systems. Today, nurses are responsible for coordinating all these applications and the information they supply, which reduces the added value and slows down the adoption rate. The aim of the presented research is the design of a pervasive and scalable framework that is able to optimize continuous care processes by intelligently reasoning on the large amount of heterogeneous care data. METHODS: The developed Ontology-based Care Platform (OCarePlatform) consists of modular components that each perform a specific reasoning task. Consequently, they can easily be replicated and distributed. Complex reasoning is achieved by combining the results of different components. To ensure that the components only receive information that is of interest to them at that time, they are able to dynamically generate and register filter rules with a Semantic Communication Bus (SCB). This SCB semantically filters all the heterogeneous care data according to the registered rules by using a continuous care ontology. The SCB can be distributed, and a cache can be employed to ensure scalability. RESULTS: A prototype implementation is presented, consisting of a new-generation nurse call system supported by a localization and a home automation component. The amount of data that is filtered and the performance of the SCB are evaluated by testing the prototype in a living lab. The delay introduced by processing the filter rules is negligible when 10 or fewer rules are registered. CONCLUSIONS: The OCarePlatform allows disseminating relevant care data to the different applications and additionally supports composing complex applications from a set of smaller, independent components. This way, the platform significantly reduces the amount of information that needs to be processed by the nurses. The delay resulting from processing the filter rules is linear in the number of rules. Distributed deployment of the SCB and the use of a cache allow further improvement of these performance results.


Subject(s)
Artificial Intelligence; Controlled Environment; Environmental Monitoring/instrumentation; Monitoring, Physiologic/instrumentation; Patients' Rooms/standards; Biological Ontologies; Environmental Monitoring/methods; Humans; Monitoring, Physiologic/methods; Semantics
18.
BMC Med Inform Decis Mak; 13: 120, 2013 Oct 27.
Article in English | MEDLINE | ID: mdl-24160892

ABSTRACT

BACKGROUND: As the amount of information in electronic health care systems increases, data operations become more complicated and time-consuming. Intensive care platforms require timely processing of data retrievals to guarantee the continuous display of recent patient data. Physicians and nurses rely on these data for their decision making. Manual optimization of query executions has become difficult to handle due to the increased number of queries across multiple sources. Hence, more automated management is necessary to increase the performance of database queries. The autonomic computing paradigm promises an approach in which the system adapts itself and acts as a self-managing entity, thereby limiting human interventions and taking action autonomously. Despite the usage of autonomic control loops in network and software systems, this approach has not been applied so far to health information systems. METHODS: We extend the COSARA architecture, an infection surveillance and antibiotic management service platform for the Intensive Care Unit (ICU), with self-managed components to increase the performance of data retrievals. We used real-life ICU COSARA queries to analyse slow performance and measure the impact of optimizations. Each day, more than 2 million COSARA queries are executed. Three control loops, which monitor the executions and take action, are proposed: reactive, deliberative and reflective control loops. We focus on improvements in the execution time of microbiology queries directly related to the visual display of patients' data on the bedside screens. RESULTS: The results show that autonomic control loops are beneficial for optimizing data executions in the ICU. The application of the reactive control loop results in a reduction of 8.61% in the average execution time of microbiology results. The combined application of the reactive and deliberative control loops results in an average query time reduction of 10.92%, and the combination of reactive, deliberative and reflective control loops provides a reduction of 13.04%. CONCLUSIONS: We found that a controlled reduction of query executions can improve performance for the end-user. The implementation of autonomic control loops in an existing health platform, COSARA, has a positive effect on timely data visualization for the physician and nurse.
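A generic monitor-analyse-act loop in the spirit of the reactive control loop described above: watch a query's execution time and back off its refresh interval when it runs slow. This is an illustration of the pattern only; COSARA's actual loops, thresholds, and actions are not reproduced here.

```python
import time

SLOW_MS = 500  # hypothetical threshold for flagging a query as slow

def reactive_loop(run_query, state, iterations=100):
    """state: dict holding the 'interval_s' refresh interval the loop tunes."""
    for _ in range(iterations):
        t0 = time.monotonic()
        run_query()                                   # execute the retrieval
        elapsed_ms = (time.monotonic() - t0) * 1000   # monitor
        if elapsed_ms > SLOW_MS:                      # analyse
            # act: halve the refresh rate, capped at a 60 s interval
            state["interval_s"] = min(state["interval_s"] * 2, 60)
        time.sleep(state["interval_s"])
```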


Subject(s)
Database Management Systems/standards; Health Information Systems/standards; Information Storage and Retrieval/standards; Intensive Care Units/standards; Humans