Results 1 - 20 of 73
1.
Sensors (Basel) ; 24(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38610407

ABSTRACT

The Internet of Things (IoT) consists of millions of devices deployed over hundreds of thousands of different networks, providing an ever-expanding resource to improve our understanding of and interactions with the physical world. Global service discovery is key to realizing the opportunities of the IoT, spanning disparate networks and technologies to enable the sharing, discovery, and utilisation of services and data outside of the context in which they are deployed. In this paper, we present Decentralised Service Registries (DSRs), a novel trustworthy decentralised approach to global IoT service discovery and interaction, building on DSF-IoT to allow users to simply create and share public and private service registries, to register and query for relevant services, and to access both current and historical data published by the services they discover. In DSR, services are registered and discovered using signed objects that are cryptographically associated with the registry service, linked into a signature chain, and stored and queried for using a novel verifiable DHT overlay. In contrast to existing centralised and decentralised approaches, DSRs decouple registries from supporting infrastructure, provide privacy and multi-tenancy, and support the verification of registry entries and history, service information, and published data to mitigate risks of service impersonation or the alteration of data. This decentralised approach is demonstrated through the creation and use of a DSR to register and search for real-world IoT devices and their data as well as qualified using a scalable cluster-based testbench for the high-fidelity emulation of peer-to-peer applications. DSRs are evaluated against existing approaches, demonstrating the novelty and utility of DSR to address key IoT challenges and enable the sharing, discovery, and use of IoT services.
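
The signature-chain idea summarized above can be illustrated with a small, self-contained sketch. This is not the DSR implementation or its object format: the names (Registry, register, verify_chain, prev, digest) are hypothetical, and a real deployment would store and query such entries through the verifiable DHT overlay described in the paper rather than a local list.

```python
import json, hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

class Registry:
    def __init__(self):
        self._key = ed25519.Ed25519PrivateKey.generate()   # registry signing key
        self.public_key = self._key.public_key()
        self.chain = []                                     # signature-linked entries

    def register(self, service_info: dict) -> dict:
        prev = self.chain[-1]["digest"] if self.chain else "0" * 64
        body = json.dumps({"service": service_info, "prev": prev}, sort_keys=True).encode()
        entry = {
            "body": body.decode(),
            "digest": hashlib.sha256(body).hexdigest(),     # links the next entry to this one
            "sig": self._key.sign(body).hex(),              # binds the entry to the registry
        }
        self.chain.append(entry)
        return entry

def verify_chain(chain, public_key) -> bool:
    prev = "0" * 64
    for entry in chain:
        body = entry["body"].encode()
        public_key.verify(bytes.fromhex(entry["sig"]), body)   # raises if tampered
        if json.loads(entry["body"])["prev"] != prev or \
           hashlib.sha256(body).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

reg = Registry()
reg.register({"name": "air-quality-sensor", "endpoint": "coap://example.local/aq"})
reg.register({"name": "door-lock", "endpoint": "coap://example.local/lock"})
print(verify_chain(reg.chain, reg.public_key))   # True unless an entry was altered
```

Any modification of a registered service description breaks either the signature check or the hash link to the following entry, which is the property the registry history relies on.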

2.
Healthc Inform Res ; 30(1): 3-15, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38359845

ABSTRACT

OBJECTIVES: Medical artificial intelligence (AI) has recently attracted considerable attention. However, training medical AI models is challenging due to privacy-protection regulations. Among the proposed solutions, federated learning (FL) stands out. FL involves transmitting only model parameters without sharing the original data, making it particularly suitable for the medical field, where data privacy is paramount. This study reviews the application of FL in the medical domain. METHODS: We conducted a literature search using the keywords "federated learning" in combination with "medical," "healthcare," or "clinical" on Google Scholar and PubMed. After reviewing titles and abstracts, 58 papers were selected for analysis. These FL studies were categorized based on the types of data used, the target disease, the use of open datasets, the local model of FL, and the neural network model. We also examined issues related to heterogeneity and security. RESULTS: In the investigated FL studies, the most commonly used data type was image data, and the most studied target diseases were cancer and COVID-19. The majority of studies utilized open datasets. Furthermore, 72% of the FL articles addressed heterogeneity issues, while 50% discussed security concerns. CONCLUSIONS: FL in the medical domain appears to be in its early stages, with most research using open data and focusing on specific data types and diseases for performance verification purposes. Nonetheless, medical FL research is anticipated to be increasingly applied and to become a vital component of multi-institutional research.
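
As background for readers unfamiliar with FL, a minimal federated-averaging sketch (numpy only) shows the pattern the reviewed studies share: sites exchange model parameters, never raw records. The logistic-regression model, synthetic data, and hyperparameters are illustrative and not taken from any reviewed study.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site: a few epochs of logistic-regression gradient descent on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # local predictions
        w -= lr * X.T @ (p - y) / len(y)          # gradient step on local data
    return w, len(y)

def fed_avg(site_results):
    """Server: average locally trained parameters, weighted by local sample counts."""
    total = sum(n for _, n in site_results)
    return sum(n * w for w, n in site_results) / total

rng = np.random.default_rng(0)
global_w = np.zeros(3)
sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(4)]
for _ in range(10):                                # communication rounds
    global_w = fed_avg([local_update(global_w, X, y) for X, y in sites])
print(global_w)
```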

3.
PeerJ Comput Sci ; 9: e1682, 2023.
Article in English | MEDLINE | ID: mdl-38077549

ABSTRACT

The integration of Internet of Things (IoT) technologies, particularly the Internet of Medical Things (IoMT), with wireless sensor networks (WSNs) has revolutionized the healthcare industry. However, despite the undeniable benefits of WSNs, their limited communication capabilities and network congestion have emerged as critical challenges in the context of healthcare applications. This research addresses these challenges through a dynamic and on-demand route-finding protocol called P2P-IoMT, based on LOADng, for point-to-point routing in the IoMT. To reduce congestion, dynamic composite routing metrics allow nodes to select the optimal parent based on the application requirements during the route discovery phase. Nodes running the proposed routing protocol use the multi-criteria decision-making Skyline technique for parent selection. Experimental evaluation results show that the P2P-IoMT protocol outperforms its best rivals in the literature in terms of residual network energy and packet delivery ratio. Network lifetime is extended by 4% while achieving a packet delivery ratio and communication delay comparable to LRRE. These gains come on top of P2P-IoMT's dynamic path selection and configurable routing metrics.
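
Skyline-based parent selection can be sketched with plain Pareto dominance: a candidate parent is kept only if no other candidate is at least as good on every metric and strictly better on at least one. The metrics below (hop count, delay, energy drain) and their values are illustrative assumptions; the paper's composite metrics and weighting are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    node_id: str
    hop_count: int          # lower is better
    delay_ms: float         # lower is better
    energy_drain: float     # lower is better (e.g., 1 - residual energy)

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if a is no worse than b on every metric and strictly better on at least one."""
    no_worse = (a.hop_count <= b.hop_count and a.delay_ms <= b.delay_ms
                and a.energy_drain <= b.energy_drain)
    strictly_better = (a.hop_count < b.hop_count or a.delay_ms < b.delay_ms
                       or a.energy_drain < b.energy_drain)
    return no_worse and strictly_better

def skyline(candidates):
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]

neighbours = [
    Candidate("n1", hop_count=2, delay_ms=18.0, energy_drain=0.40),
    Candidate("n2", hop_count=3, delay_ms=12.0, energy_drain=0.10),
    Candidate("n3", hop_count=3, delay_ms=25.0, energy_drain=0.55),
]
print([c.node_id for c in skyline(neighbours)])   # n3 is dominated by n1 and dropped
```

The application can then pick its preferred parent from the surviving Skyline set according to its own priorities.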

4.
Front Big Data ; 6: 1220348, 2023.
Article in English | MEDLINE | ID: mdl-37576115

ABSTRACT

The modern maritime industry is producing data at an unprecedented rate. The capturing and processing of such data is integral to create added value for maritime companies and other maritime stakeholders, but their true potential can only be unlocked by innovative technologies such as extreme-scale analytics, AI, and digital twins, given that existing systems and traditional approaches are unable to effectively collect, store, and process big data. Such innovative systems are not only projected to effectively deal with maritime big data but to also create various tools that can assist maritime companies, in an evolving and complex environment that requires maritime vessels to increase their overall safety and performance and reduce their consumption and emissions. An integral challenge for developing these next-generation maritime applications lies in effectively combining and incorporating the aforementioned innovative technologies in an integrated system. Under this context, the current paper presents the architecture of VesselAI, an EU-funded project that aims to develop, validate, and demonstrate a novel holistic framework based on a combination of the state-of-the-art HPC, Big Data and AI technologies, capable of performing extreme-scale and distributed analytics for fuelling the next-generation digital twins in maritime applications and beyond.

5.
Minds Mach (Dordr) ; 33(2): 293-319, 2023.
Article in English | MEDLINE | ID: mdl-37456615

ABSTRACT

The debate around the notions of a priori knowledge and a posteriori knowledge has proven crucial for the development of many fields in philosophy, such as metaphysics, epistemology, metametaphysics etc. We advocate that the recent debate on the two notions is also fruitful for man-made distributed computing systems and for the epistemic analysis thereof. Following a recently proposed modal and fallibilistic account of a priori knowledge, we elaborate the corresponding concept of a priori belief: We propose a rich taxonomy of types of a priori beliefs and their role for the different agents that participate in the system engineering process, which match the existing view exceedingly well and are particularly promising for explaining and dealing with unexpected behaviors in fault-tolerant distributed systems. Developing such a philosophical foundation will provide a sound basis for eventually implementing our ideas in a suitable epistemic reasoning and analysis framework and, hence, constitutes a mandatory first step for developing methods and tools to cope with the various challenges that emerge in such systems.

6.
Sensors (Basel) ; 23(9)2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37177501

ABSTRACT

Crude oil leakages and spills (OLS) are among the problems attributed to pipeline failures in the oil and gas industry's midstream sector. Consequently, they are monitored via several leakage detection and localisation techniques (LDTs), comprising classical methods and, more recently, Internet of Things (IoT)-based systems built on wireless sensor networks (WSNs). Although the latter techniques have proven more efficient, they are susceptible to other types of failures, such as high false-alarm rates or a single point of failure (SPOF), due to their centralised implementations. Therefore, in this work, we present a hybrid distributed leakage detection and localisation technique (HyDiLLEch) that combines multiple classical LDTs. The technique is implemented in two versions, a single-hop and a double-hop version. The evaluation of the results is based on resilience to SPOFs, the accuracy of detection and localisation, and communication efficiency. The results obtained from the placement strategy and the distributed spatial data correlation include increased sensitivity of leakage detection and localisation and elimination of the SPOF associated with centralised LDTs, achieved by increasing the number of nodes detecting and localising (NDL) leakages to four and six in the single-hop and double-hop versions, respectively. In addition, leakage localisation accuracy of 0 to 32 m is achieved by nodes physically close to the leakage points, while keeping the communication overhead minimal.
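
For intuition, one classical localisation ingredient that hybrid techniques of this kind build on is pressure-gradient intersection: a leak splits the pipeline's pressure profile into two roughly linear segments (steeper upstream, shallower downstream), and their crossing point estimates the leak position. The sketch below is a simplified, centralised illustration with synthetic readings; it is not the HyDiLLEch protocol or its single-/double-hop message exchange.

```python
import numpy as np

def locate_leak(positions, pressures, split_index):
    """Fit a line to readings before and after a suspected segment; return the crossing point."""
    m1, c1 = np.polyfit(positions[:split_index], pressures[:split_index], 1)
    m2, c2 = np.polyfit(positions[split_index:], pressures[split_index:], 1)
    return (c2 - c1) / (m1 - m2)          # x where the two fitted lines intersect

positions = np.array([0, 200, 400, 600, 800, 1000], dtype=float)   # sensor locations (m)
true_leak = 450.0
# Synthetic pressure profile: steeper drop upstream of the leak than downstream of it.
pressures = np.where(positions < true_leak,
                     50.0 - 0.020 * positions,
                     50.0 - 0.020 * true_leak - 0.008 * (positions - true_leak))
pressures += np.random.default_rng(1).normal(0, 0.05, positions.size)  # sensor noise
print(locate_leak(positions, pressures, split_index=3))   # close to 450 m
```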

7.
Healthc Inform Res ; 29(2): 168-173, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37190741

ABSTRACT

OBJECTIVES: Since protecting patients' privacy is a major concern in clinical research, there has been a growing need for privacy-preserving data analysis platforms. For this purpose, a federated learning (FL) method based on the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) was implemented, and its feasibility was demonstrated. METHODS: We implemented an FL platform on FeederNet, a distributed clinical data analysis platform based on the OMOP CDM in Korea. On this platform, we trained an artificial neural network (ANN) using data from patients who received steroid prescriptions or injections, with the aim of predicting the occurrence of side effects depending on the prescribed dose. The ANN was trained using the FL platform with the OMOP CDMs of Kyung Hee University Medical Center (KHMC) and Ajou University Hospital (AUH). RESULTS: The areas under the receiver operating characteristic curve (AUROCs) for predicting bone fracture, osteonecrosis, and osteoporosis using only data from each hospital were 0.8426, 0.6920, and 0.7727 for KHMC and 0.7891, 0.7049, and 0.7544 for AUH, respectively. In contrast, when using FL, the corresponding AUROCs were 0.8260, 0.7001, and 0.7928 for KHMC and 0.7912, 0.8076, and 0.7441 for AUH, respectively. In particular, FL led to a 14% improvement in performance for osteonecrosis at AUH. CONCLUSIONS: FL can be performed with the OMOP CDM, and FL often shows better performance than using only a single institution's data. Research using the OMOP CDM has therefore expanded from statistical analysis to machine learning, enabling researchers to conduct more diverse studies.

8.
J Med Internet Res ; 25: e43006, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37126398

ABSTRACT

BACKGROUND: The proliferation of mobile health (mHealth) applications is partly driven by the advancements in sensing and communication technologies, as well as the integration of artificial intelligence techniques. Data collected from mHealth applications, for example, on sensor devices carried by patients, can be mined and analyzed using artificial intelligence-based solutions to facilitate remote and (near) real-time decision-making in health care settings. However, such data often sit in data silos, and patients are often concerned about the privacy implications of sharing their raw data. Federated learning (FL) is a potential solution, as it allows multiple data owners to collaboratively train a machine learning model without requiring access to each other's raw data. OBJECTIVE: The goal of this scoping review is to gain an understanding of FL and its potential in dealing with sensitive and heterogeneous data in mHealth applications. Through this review, various stakeholders, such as health care providers, practitioners, and policy makers, can gain insight into the limitations and challenges associated with using FL in mHealth and make informed decisions when considering implementing FL-based solutions. METHODS: We conducted a scoping review following the guidelines of PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews). We searched 7 commonly used databases. The included studies were analyzed and summarized to identify the possible real-world applications and associated challenges of using FL in mHealth settings. RESULTS: A total of 1095 articles were retrieved during the database search, and 26 articles that met the inclusion criteria were included in the review. The analysis of these articles revealed 2 main application areas for FL in mHealth, that is, remote monitoring and diagnostic and treatment support. More specifically, FL was found to be commonly used for monitoring self-care ability, health status, and disease progression, as well as in diagnosis and treatment support of diseases. The review also identified several challenges (eg, expensive communication, statistical heterogeneity, and system heterogeneity) and potential solutions (eg, compression schemes, model personalization, and active sampling). CONCLUSIONS: This scoping review has highlighted the potential of FL as a privacy-preserving approach in mHealth applications and identified the technical limitations associated with its use. The challenges and opportunities outlined in this review can inform the research agenda for future studies in this field, to overcome these limitations and further advance the use of FL in mHealth.


Subject(s)
Mobile Applications, Telemedicine, Humans, Administrative Personnel, Artificial Intelligence, Communication, Factual Databases, Disease Progression
9.
Stud Health Technol Inform ; 302: 362-363, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203685

ABSTRACT

The AKTIN Emergency Department Registry is a federated, distributed health data network that uses a two-step process for local approval of received data queries and for result transmission. For distributed research infrastructures that are currently being established, we present our lessons learned from five years of established operations.


Asunto(s)
Servicio de Urgencia en Hospital , Sistema de Registros
10.
Sensors (Basel) ; 23(8)2023 Apr 11.
Article in English | MEDLINE | ID: mdl-37112221

ABSTRACT

As technology continues to evolve, our society is becoming enriched with more intelligent devices that help us perform our daily activities more efficiently and effectively. One of the most significant technological advancements of our time is the Internet of Things (IoT), which interconnects various smart devices (such as smart mobiles, intelligent refrigerators, smartwatches, smart fire alarms, smart door locks, and many more), allowing them to communicate with each other and exchange data seamlessly. We now use IoT technology to carry out our daily activities, for example, transportation. In particular, the field of smart transportation has intrigued researchers due to its potential to revolutionize the way we move people and goods. IoT provides drivers in a smart city with many benefits, including traffic management, improved logistics, efficient parking systems, and enhanced safety measures. Smart transportation is the integration of all these benefits into applications for transportation systems. However, as a way of further improving the benefits provided by smart transportation, other technologies have been explored, such as machine learning, big data, and distributed ledgers. Examples of their application include the optimization of routes, parking, street lighting, accident prevention, detection of abnormal traffic conditions, and maintenance of roads. In this paper, we aim to provide a detailed understanding of the developments in the applications mentioned earlier and to examine current research that builds on these sectors. We conduct a self-contained review of the different technologies used in smart transportation today and their respective challenges. Our methodology encompassed identifying and screening articles on smart transportation technologies and their applications. To identify articles addressing our topic of review, we searched four major databases: IEEE Xplore, the ACM Digital Library, Science Direct, and Springer. We then examined the communication mechanisms, architectures, and frameworks that enable these smart transportation applications and systems. We also explored the communication protocols enabling smart transportation, including Wi-Fi, Bluetooth, and cellular networks, and how they contribute to seamless data exchange, and we delved into the different architectures and frameworks used in smart transportation, including cloud computing, edge computing, and fog computing. Lastly, we outline current challenges in the smart transportation field and suggest potential future research directions, including data privacy and security issues, network scalability, and interoperability between different IoT devices.

11.
Sensors (Basel) ; 23(7)2023 Apr 06.
Article in English | MEDLINE | ID: mdl-37050823

ABSTRACT

An Open Brain-Computer Interface (OpenBCI) provides unparalleled freedom and flexibility through open-source hardware and firmware at a low implementation cost. It exploits robust hardware platforms and powerful software development kits to create customized drivers with advanced capabilities. Still, several restrictions may significantly reduce the performance of OpenBCI. These limitations include the need for more effective communication between computers and peripheral devices and for more flexibility in quickly configuring specific protocols for neurophysiological data. This paper describes a flexible and scalable OpenBCI framework for electroencephalographic (EEG) data experiments using the Cyton acquisition board with updated drivers that maximize the hardware benefits of ADS1299 platforms. The framework handles distributed computing tasks and supports multiple sampling rates, communication protocols, free electrode placement, and single-marker synchronization. As a result, the OpenBCI system delivers real-time feedback and controlled execution of EEG-based clinical protocols, implementing the steps of neural recording, decoding, stimulation, and real-time analysis. In addition, the system incorporates automatic background configuration and user-friendly widgets for stimulus delivery. A motor imagery task is used to test the closed-loop BCI, which is designed to enable real-time streaming within the required latency and jitter ranges. The presented framework therefore offers a promising solution for tailored neurophysiological data processing.
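
Independently of the Cyton/ADS1299 driver code (whose API is not reproduced here), the kind of timing check a closed-loop BCI needs can be sketched generically: compare the timestamps of streamed samples against the nominal sampling rate and report jitter. The simulated_stream generator below is a stand-in for the real acquisition callback, and the sampling rate is an assumed nominal value.

```python
import time
import numpy as np

FS = 250.0                    # assumed nominal sampling rate in Hz
EXPECTED_DT = 1.0 / FS

def simulated_stream(n_samples):
    """Stand-in for a driver callback: yields (timestamp, sample) pairs."""
    rng = np.random.default_rng(0)
    t = time.monotonic()
    for _ in range(n_samples):
        t += EXPECTED_DT + rng.normal(0, 2e-4)    # small synthetic timing noise
        yield t, rng.normal(size=8)               # 8 EEG channels

timestamps = np.array([t for t, _ in simulated_stream(1000)])
dt = np.diff(timestamps)
print(f"mean interval: {dt.mean()*1e3:.3f} ms "
      f"(expected {EXPECTED_DT*1e3:.3f} ms), jitter std: {dt.std()*1e6:.1f} us")
```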


Asunto(s)
Interfaces Cerebro-Computador , Electroencefalografía/métodos , Programas Informáticos , Imágenes en Psicoterapia , Electrodos
12.
Elife ; 122023 04 17.
Article in English | MEDLINE | ID: mdl-37067884

ABSTRACT

Ant colonies regulate foraging in response to their collective hunger, yet the mechanism behind this distributed regulation remains unclear. Previously, by imaging food flow within ant colonies we showed that the frequency of foraging events declines linearly with colony satiation (Greenwald et al., 2018). Our analysis implied that as a forager distributes food in the nest, two factors affect her decision to exit for another foraging trip: her current food load and its rate of change. Sensing these variables can be attributed to the forager's individual cognitive ability. Here, new analyses of the foragers' trajectories within the nest imply a different way to achieve the observed regulation. Instead of an explicit decision to exit, foragers merely tend toward the depth of the nest when their food load is high and toward the nest exit when it is low. Thus, the colony shapes the forager's trajectory by controlling her unloading rate, while she senses only her current food load. Using an agent-based model and mathematical analysis, we show that this simple mechanism robustly yields emergent regulation of foraging frequency. These findings demonstrate how the embedding of individuals in physical space can reduce their cognitive demands without compromising their computational role in the group.
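
A toy agent-based sketch can illustrate the mechanism described above. The constants, nest geometry, and unloading rule are our own simplifications, not the paper's fitted model; the point is only that coupling the unloading rate to colony satiation makes exit (foraging) frequency fall as the colony fills up, without the forager deciding anything beyond drifting by her current load.

```python
import numpy as np

def foraging_frequency(colony_satiation, n_steps=50_000, nest_depth=50, seed=0):
    rng = np.random.default_rng(seed)
    unloading_rate = 0.05 * (1.0 - colony_satiation)    # hungrier colonies take food faster
    pos, load, exits = nest_depth // 2, 1.0, 0
    for _ in range(n_steps):
        load = max(0.0, load - unloading_rate * rng.random())   # hand food to nestmates
        drift = 1 if load > 0.5 else -1                         # deeper if full, outward if empty
        pos = int(np.clip(pos + drift + rng.integers(-1, 2), 0, nest_depth))
        if pos == 0:                                            # reached the exit: new trip
            exits += 1
            pos, load = nest_depth // 2, 1.0                    # returns with a full crop
    return exits / n_steps

for s in (0.1, 0.5, 0.9):
    print(f"colony satiation {s:.1f} -> foraging frequency {foraging_frequency(s):.4f}")
```

Running the sketch shows the exit frequency dropping as satiation rises, because a satiated colony unloads the forager slowly and keeps her drifting deep in the nest for longer.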


Asunto(s)
Hormigas , Conducta Alimentaria , Humanos , Animales , Conducta Alimentaria/fisiología , Hormigas/fisiología , Alimentos , Cognición
13.
Sensors (Basel) ; 23(4)2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36850514

ABSTRACT

With the development of the Internet and communication technologies, the types of services provided by multitier Web systems are becoming more diverse and complex compared to those of the past. Ensuring the continuous availability of business services is crucial for multitier Web system providers, as service performance issues immediately affect customer experience and satisfaction. Large companies attempt to monitor a system performance indicator (SPI) that summarizes the status of multitier Web systems in order to detect performance anomalies at an early stage. However, current anomaly detection methods are designed to monitor a single specific SPI. Moreover, existing approaches treat performance anomaly detection and its root cause analysis separately, thereby aggravating the burden of resolving the performance anomaly. To support the system provider in diagnosing performance anomalies, we propose an advanced causative metric analysis (ACMA) framework. First, we extract 191 performance metrics (PMs) closely related to the target SPI. Among these PMs, ACMA determines 62 vital PMs that have the most influence on the variance of the target SPI using several statistical methods. Then, we implement a performance anomaly detection model that identifies the causative metrics (CMs) among the vital PMs using random forest regression. Even if the target SPI changes, our detection model does not require any change in its model structure and can derive the PMs closely related to the new target SPI. In experiments in which we applied ACMA to business services in an enterprise system, we observed that the proposed ACMA could correctly detect various performance anomalies and their CMs.
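
The causative-metric step can be sketched in its generic form: regress the target SPI on candidate performance metrics with a random forest and rank the metrics by feature importance. The synthetic data, metric count, and model settings below are illustrative assumptions; the ACMA framework's 191 PMs, statistical pre-selection, and thresholds are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_samples, n_metrics = 2000, 20
pm = rng.normal(size=(n_samples, n_metrics))                 # candidate performance metrics
spi = 3.0 * pm[:, 2] - 2.0 * pm[:, 7] + rng.normal(0, 0.3, n_samples)  # synthetic target SPI

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(pm, spi)
ranked = np.argsort(model.feature_importances_)[::-1]
print("most influential metrics:", ranked[:5])               # metrics 2 and 7 should lead
```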

14.
Food Secur ; 15(1): 59-75, 2023.
Article in English | MEDLINE | ID: mdl-36186417

ABSTRACT

Resilience, defined as the ability of a system to adapt in the presence of a disruptive event, has been of great interest in the study of food systems for some time now. The goal of this research was to build understanding about resilient food systems that will withstand and recover from disruptions in a way that ensures a sufficient supply of food for all. In large, developed countries such as the USA and Canada, the food supply chain relies on a complex web of interconnected systems, such as water and energy systems, and food production and distribution are still very labor-intensive. Thanks to economies of scale and effective use of limited resources, potential cost savings support a push towards a more centralized system. However, distributed systems tend to be more resilient. Although distributed production systems may not be as economically justifiable as centralized ones, they may provide a more resilient alternative. This study focused on the supply-side aspects of the food system and on the water, energy, and workforce disruptions to be considered in a resilience assessment for the USA, with an example for the state of Texas. After the degree of centralization (DoC) was calculated, the resilience of the food system was measured. Next, the relationship between labor intensity and the production of six major food groups was formulated. The example for Texas showed that decentralization of food systems improves their resilience in responding to energy and water disruptions. A 40 percent reduction in water supply could decrease food system performance by 28%, and a negative correlation was found between resilience and DoC for energy disruption scenarios: a 40 percent reduction in energy supply could decrease food system performance by 34%. Thus, the decentralization of food systems can, in fact, improve their resilience in responding to disruptions in the energy and water inputs. In contrast, achieving a more resilient food system in responding to labor shortages supports a push towards a more centralized system. Supplementary Information: The online version contains supplementary material available at 10.1007/s12571-022-01321-9.

15.
J Digit Imaging ; 36(2): 700-714, 2023 04.
Article in English | MEDLINE | ID: mdl-36417024

ABSTRACT

Current AI-driven research in radiology requires resources and expertise that are often inaccessible to small and resource-limited labs. The clinicians who are able to participate in AI research are frequently well-funded, well-staffed, and either have significant experience with AI and computing, or have access to colleagues or facilities that do. Current imaging data is clinician-oriented and is not easily amenable to machine learning initiatives, resulting in inefficient, time consuming, and costly efforts that rely upon a crew of data engineers and machine learning scientists, and all too often preclude radiologists from driving AI research and innovation. We present the system and methodology we have developed to address infrastructure and platform needs, while reducing the staffing and resource barriers to entry. We emphasize a data-first and modular approach that streamlines the AI development and deployment process while providing efficient and familiar interfaces for radiologists, such that they can be the drivers of new AI innovations.


Asunto(s)
Inteligencia Artificial , Radiología , Humanos , Radiólogos , Radiología/métodos , Aprendizaje Automático , Diagnóstico por Imagen
16.
Sensors (Basel) ; 22(21)2022 Oct 22.
Article in English | MEDLINE | ID: mdl-36365795

ABSTRACT

Multi-Agent Systems (MAS) have been seen as an attractive area of research for computer science and civil engineering professionals, as a way to tackle complex problems by subdividing them into smaller assignments. Each agent has individual responsibilities and selects the best action based on its activity history, interactions with neighboring agents, and its objective. MAS are used to model complex systems, smart grids, and computer networks. Despite their extensive use, MAS still face challenges in agent coordination, security, and work distribution. This study reviews MAS definitions, characteristics, applications, issues, communications, and evaluation, classifies MAS applications and difficulties, and compiles research references; it should be a helpful resource for MAS researchers and practitioners. MAS approaches to controlling smart grids are examined, including energy management, energy marketing, pricing, energy scheduling, reliability, network security, fault-handling capability, agent-to-agent communication, SG-electrical cars, SG-building energy systems, and soft grids. More than 100 MAS-based smart grid control publications have been reviewed, categorized, and compiled.


Asunto(s)
Redes de Comunicación de Computadores , Electricidad , Reproducibilidad de los Resultados , Asignación de Recursos
17.
Sensors (Basel) ; 22(21)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36366082

ABSTRACT

Currently, researchers are working to contribute to the emerging fields of cloud computing, edge computing, and distributed systems, with a major focus on examining and understanding their performance. Globally leading companies such as Google, Amazon, ONLIVE, Giaki, and eBay are deeply concerned about the impact of energy consumption. These cloud computing companies operate huge data centers, consisting of virtual machines positioned worldwide, that incur exceptionally high power costs to maintain. The increased energy demand of IT firms has posed many challenges for cloud computing companies with respect to power expenses. Energy utilization depends on numerous factors, for example, the service level agreement, the technique for selecting virtual machines, the applied optimization strategies and policies, and the kind of workload. This paper addresses energy-saving challenges in gaming data centers with the help of dynamic voltage and frequency scaling (DVFS) techniques, and evaluates DVFS against non-power-aware and static threshold detection techniques. The findings will help service providers meet quality-of-service and quality-of-experience constraints while fulfilling service level agreements. For this purpose, the CloudSim platform is used to simulate a scenario in which game traces are employed as the workload. The findings show that a suitable choice of technique can help gaming servers conserve energy expenditure while sustaining the best quality of service for consumers located worldwide. The originality of this research lies in examining which procedure performs best (dynamic, static, or non-power-aware). The findings confirm that the dynamic voltage and frequency scaling method uses less energy, with fewer service level agreement violations and better quality of service and experience, compared with static threshold consolidation or the non-power-aware technique.
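
To make the comparison concrete, here is a toy energy model (ours, not CloudSim's): dynamic power under DVFS is assumed to scale roughly with the cube of the operating frequency, while a non-power-aware host always runs at maximum frequency. All constants and the synthetic game-like load curve are illustrative.

```python
import numpy as np

P_STATIC, P_DYN_MAX = 80.0, 120.0          # watts at idle and at full frequency (assumed)
utilisation = np.clip(np.sin(np.linspace(0, 6, 600)) * 0.5 + 0.5, 0.05, 1.0)  # synthetic load
dt = 1.0                                   # seconds per sample

def energy(freq_fraction):
    power = P_STATIC + P_DYN_MAX * freq_fraction ** 3      # simple cubic dynamic-power model
    return np.sum(power * dt)                              # joules over the trace

e_static = energy(np.ones_like(utilisation))      # non-power-aware: always at f_max
e_dvfs = energy(utilisation)                      # DVFS: frequency follows demand
print(f"non-power-aware: {e_static/3600:.2f} Wh, DVFS: {e_dvfs/3600:.2f} Wh")
```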


Asunto(s)
Nube Computacional , Carga de Trabajo , Fenómenos Físicos
18.
Entropy (Basel) ; 24(11)2022 Nov 05.
Article in English | MEDLINE | ID: mdl-36359705

ABSTRACT

The empirical entropy is a key statistical measure of data frequency vectors, enabling one to estimate how diverse the data are. From the computational point of view, it is important to quickly compute, approximate, or bound the entropy. In a distributed system, the representative ("global") frequency vector is the average of the "local" frequency vectors, each residing in a distinct node. Typically, the trivial solution of aggregating the local vectors and computing their average incurs a huge communication overhead. Hence, the challenge is to approximate, or bound, the entropy of the global vector, while reducing communication overhead. In this paper, we develop algorithms which achieve this goal.
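
For context, two standard information-theoretic facts already bound the global entropy from cheap local summaries: entropy is concave, so the entropy of the averaged vector is at least the average of the local entropies, and with k equally weighted nodes it exceeds that average by at most log k. The paper's algorithms go beyond this baseline; the sketch below only illustrates these textbook bounds on synthetic vectors, with each node shipping a single scalar.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
k, d = 5, 100                                       # number of nodes, vocabulary size
local = rng.dirichlet(np.ones(d) * 0.3, size=k)     # local (normalised) frequency vectors

global_vec = local.mean(axis=0)                     # the "global" averaged vector
lower = np.mean([entropy(p) for p in local])        # concavity (Jensen) lower bound
upper = lower + np.log(k)                           # mixture upper bound for uniform weights
print(f"{lower:.4f} <= H(global) = {entropy(global_vec):.4f} <= {upper:.4f}")
```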

19.
Innov Syst Softw Eng ; 18(3): 455-469, 2022.
Article in English | MEDLINE | ID: mdl-36118299

ABSTRACT

In contrast to the breakthroughs in reactive synthesis of monolithic systems, distributed synthesis is not yet practical. Compositional approaches can be a key technique for scalable algorithms. Here, the challenge is to decompose a specification of the global system into local requirements on the individual processes. In this paper, we present and extend a sound and complete compositional synthesis algorithm that constructs for each process, in addition to the strategy, a certificate that captures the necessary interface between the processes. The certificates define an assume-guarantee contract that allows for formulating individual process requirements. By bounding the size of the certificates, we then bias the synthesis procedure towards solutions that are desirable in the sense that they have a small interface. We have implemented our approach and evaluated it on scalable benchmarks: It is much faster than standard methods for distributed synthesis as long as reasonably small certificates exist. Otherwise, the overhead of synthesizing additional certificates is small.

20.
Sensors (Basel) ; 22(16)2022 Aug 10.
Article in English | MEDLINE | ID: mdl-36015735

ABSTRACT

Magnetoresistive angle position sensors are, besides Hall-effect sensors, especially suitable for use within servo systems due to their reliability, longevity, and resilience to unfavorable environmental conditions. The proposed distributed method for self-calibration of a magnetoresistive angular position sensor uses data collected during shaft movement at the highest allowed speed to identify the parameters of the measurement process model. Data acquisition and initial data processing are realized as part of the control process of the servo system, whereas the identification of the model parameters is a service of an application server. Minimizing the sum of algebraic distances between the sensor readings and the parametrized model is used to identify the parameters of the linear compensation, whereas the average shaft rotation speed serves as a high-precision reference for identifying the parameters of the harmonic compensation. In addition to fast convergence, the proposed method improves measurement accuracy by an order of magnitude. The experimentally obtained measurement uncertainty was better than 0.5°, with a residual variance of less than 0.02°, comparable to the sensor resolution.
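
A much-simplified sketch of the linear-compensation idea (offsets and gains only; the paper's full parametrized model, harmonic compensation, and distributed identification are not reproduced): readings from a constant-speed sweep are used to estimate per-channel offset and amplitude, after which a corrected angle follows from the two normalised channels. All signal parameters below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
true_angle = np.linspace(0, 4 * np.pi, 2000)                       # constant-speed sweep
x = 1.10 * np.cos(true_angle) + 0.08 + rng.normal(0, 0.002, true_angle.size)  # channel A
y = 0.95 * np.sin(true_angle) - 0.05 + rng.normal(0, 0.002, true_angle.size)  # channel B

# Identify linear-compensation parameters from the full-speed sweep.
x0, y0 = x.mean(), y.mean()                                        # channel offsets
ax, ay = (x - x0).std() * np.sqrt(2), (y - y0).std() * np.sqrt(2)  # sinusoid amplitudes
angle = np.unwrap(np.arctan2((y - y0) / ay, (x - x0) / ax))        # compensated angle

err = np.degrees(angle - true_angle)
print(f"max angle error after compensation: {np.abs(err - err.mean()).max():.3f} deg")
```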
