Results 1 - 20 of 126
1.
SciELO Preprints; Aug. 2024.
Preprint in English | SciELO Preprints | ID: pps-9489

ABSTRACT

The limited temporal completeness and taxonomic accuracy of species lists, made available in a traditional manner in scientific publications, have always represented a problem. These lists are invariably limited to a few taxonomic groups and do not represent up-to-date knowledge of all species and classification. In this context, the Brazilian megadiverse fauna is no exception, and the Catálogo Taxonômico da Fauna do Brasil (CTFB) (http://fauna.jbrj.gov.br/), made public in 2015, represents a database on biodiversity anchored on a list of valid and expertly recognized scientific names of animals in Brazil. The CTFB is updated in near real time by a team of more than 800 specialists. By January 1, 2024, the CTFB compiled 133,691 nominal species, with 125,138 considered valid. Most of the valid species were arthropods (82.3%, with more than 102,000 species) and chordates (7.69%, with over 11,000 species). These taxa were followed by a cluster composed of Mollusca (3,567 species), Platyhelminthes (2,292 species), Annelida (1,833 species), and Nematoda (1,447 species). All remaining groups had fewer than 1,000 species reported in Brazil, with Cnidaria (831 species), Porifera (628 species), Rotifera (606 species), and Bryozoa (520 species) representing those with more than 500 species. Analysis of the CTFB database can facilitate and direct efforts towards the discovery of new species in Brazil, but it is also fundamental in providing the best available list of valid nominal species to users, including those in science, health, conservation efforts, and any initiative involving animals. The importance of the CTFB is evidenced by the elevated number of citations in the scientific literature in diverse areas of biology, law, anthropology, education, forensic science, and veterinary science, among others.

2.
Curr Protoc ; 4(6): e1065, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38857087

ABSTRACT

The European Bioinformatics Institute (EMBL-EBI)'s Job Dispatcher framework provides access to a wide range of core databases and analysis tools that are of key importance in bioinformatics. As well as providing web interfaces to these resources, web services are available using REST and SOAP protocols that enable programmatic access and allow their integration into other applications and analytical workflows and pipelines. This article describes the various options available to researchers and bioinformaticians who would like to use our resources via the web interface employing RESTful web services clients provided in Perl, Python, and Java or who would like to use Docker containers to integrate the resources into analysis pipelines and workflows. © 2024 The Authors. Current Protocols published by Wiley Periodicals LLC. Basic Protocol 1: Retrieving data from EMBL-EBI using Dbfetch via the web interface Alternate Protocol 1: Retrieving data from EMBL-EBI using WSDbfetch via the REST interface Alternate Protocol 2: Retrieving data from EMBL-EBI using Dbfetch via RESTful web services with Python client Support Protocol 1: Installing Python REST web services clients Basic Protocol 2: Sequence similarity search using FASTA search via the web interface Alternate Protocol 3: Sequence similarity search using FASTA via RESTful web services with Perl client Support Protocol 2: Installing Perl REST web services clients Basic Protocol 3: Sequence similarity search using NCBI BLAST+ RESTful web services with Python client Basic Protocol 4: Sequence similarity search using HMMER3 phmmer REST web services with Perl client and Docker Support Protocol 3: Installing Docker and running the EMBL-EBI client container Basic Protocol 5: Protein functional analysis using InterProScan 5 RESTful web services with the Python client and Docker Alternate Protocol 4: Protein functional analysis using InterProScan 5 RESTful web services with the Java client Support Protocol 4: Installing Java 
web services clients Basic Protocol 6: Multiple sequence alignment using Clustal Omega via web interface Alternate Protocol 5: Multiple sequence alignment using Clustal Omega with Perl client and Docker Support Protocol 5: Exploring the RESTful API with OpenAPI User Interface.
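As a sketch of the programmatic access described above, the snippet below builds a Dbfetch REST request URL using only the Python standard library. The parameter names (`db`, `id`, `format`, `style`) follow the public Dbfetch endpoint, but the EMBL-EBI documentation remains the authoritative reference; the example accession is illustrative.

```python
from urllib.parse import urlencode

DBFETCH_BASE = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"

def dbfetch_url(db: str, ids: str, fmt: str = "fasta", style: str = "raw") -> str:
    """Build a Dbfetch REST request URL for one or more entry identifiers."""
    query = urlencode({"db": db, "id": ids, "format": fmt, "style": style})
    return f"{DBFETCH_BASE}?{query}"

# Fetching would then be a plain HTTP GET, e.g. with urllib.request:
# with urllib.request.urlopen(dbfetch_url("uniprotkb", "P12345")) as resp:
#     fasta_text = resp.read().decode()
```

The same URL works from a browser, `curl`, or any HTTP client, which is what makes integration into pipelines straightforward.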


Subject(s)
Internet, Software, Computational Biology/methods, User-Computer Interface
3.
Bioengineering (Basel) ; 10(10)2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37892864

ABSTRACT

The fusion of machine learning and biomedical research offers novel ways to understand, diagnose, and treat various health conditions. However, the complexities of biomedical data, coupled with the intricate process of developing and deploying machine learning solutions, often pose significant challenges to researchers in these fields. Our pivotal achievement in this research is the introduction of the Automatic Semantic Machine Learning Microservice (AIMS) framework. AIMS addresses these challenges by automating various stages of the machine learning pipeline, with a particular emphasis on the ontology of machine learning services tailored to the biomedical domain. This ontology encompasses everything from task representation, service modeling, and knowledge acquisition to knowledge reasoning and the establishment of a self-supervised learning policy. Our framework has been crafted to prioritize model interpretability, integrate domain knowledge effortlessly, and handle biomedical data with efficiency. Additionally, AIMS boasts a distinctive feature: it leverages self-supervised knowledge learning through reinforcement learning techniques, paired with an ontology-based policy recording schema. This enables it to autonomously generate, fine-tune, and continually adapt machine learning models, especially when faced with new tasks and data. Our work makes two standout contributions: demonstrating that machine learning processes in the biomedical domain can be automated while integrating a rich domain knowledge base, and providing machines with a self-learning ability that ensures they handle new tasks effectively. To showcase AIMS in action, we have highlighted its prowess in three case studies of biomedical tasks. These examples emphasize how our framework can simplify research routines, uplift the caliber of scientific exploration, and set the stage for notable advances.

4.
Animals (Basel) ; 13(20)2023 Oct 18.
Article in English | MEDLINE | ID: mdl-37893978

ABSTRACT

The health and welfare of livestock are significant for ensuring the sustainability and profitability of the agricultural industry. Finding efficient ways to monitor and report the health status of individual cows is critical to prevent outbreaks and maintain herd productivity. The purpose of the study is to develop a machine learning (ML) model to classify the health status of milk cows into three categories. In this research, data are collected from existing non-invasive IoT devices and tools in a dairy farm, monitoring the micro- and macroenvironment of the cow in combination with particular information on age, days in milk, lactation, and more. A workflow of various data-processing methods is systematized and presented to create a complete, efficient, and reusable roadmap for data processing, modeling, and real-world integration. Following the proposed workflow, the data were prepared, and five different ML algorithms were trained and tested to select the most descriptive one to monitor the health status of individual cows. The highest result for health status assessment is obtained by the random forest classifier (RFC), with an accuracy of 0.959, recall of 0.954, and precision of 0.97. To increase the security, speed, and reliability of the work process, a cloud architecture of services is presented to integrate the trained model as an additional functionality in the Amazon Web Services (AWS) environment. The classification results of the ML model are visualized in a newly created interface in the client application.
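Accuracy, recall, and precision figures like those reported above can be reproduced from raw predictions. A minimal standard-library sketch follows; the three health-status labels are invented for illustration, and a multi-class model is normally summarized with macro-averaged precision and recall, as here.

```python
def macro_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision and recall for a multi-class task."""
    labels = sorted(set(y_true) | set(y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls = [], []
    for c in labels:
        tp = sum(t == p == c for t, p in zip(y_true, y_pred))
        fp = sum(p == c and t != c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return acc, sum(precisions) / len(labels), sum(recalls) / len(labels)

# Hypothetical three-class labels, e.g. "healthy" / "watch" / "sick":
truth = ["healthy", "sick", "watch", "healthy"]
preds = ["healthy", "sick", "healthy", "healthy"]
acc, prec, rec = macro_metrics(truth, preds)
```

In practice a library implementation (e.g. scikit-learn's `classification_report`) would be used; the sketch only makes the definitions concrete.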

5.
J Orofac Orthop ; 2023 Sep 29.
Article in English | MEDLINE | ID: mdl-37773456

ABSTRACT

INTRODUCTION: This study aimed to investigate whether the facial soft tissue changes of individuals who had undergone surgically assisted rapid maxillary expansion (SARME) would be detected by three different well-known facial biometric recognition applications. METHODS: To calculate similarity scores, the pre- and postsurgical photographs of 22 patients who had undergone SARME treatment were examined using three prominent cloud computing-based facial recognition application programming interfaces (APIs): AWS Rekognition (Amazon Web Services, Seattle, WA, USA), Microsoft Azure Cognitive (Microsoft, Redmond, WA, USA), and Face++ (Megvii, Beijing, China). The pre- and post-SARME photographs of the patients (relaxed, smiling, profile, and semiprofile) were used to calculate similarity scores using the APIs. Friedman's two-way analysis of variance and the Wilcoxon signed-rank test were used to compare the similarity scores obtained from the photographs of the different aspects of the face before and after surgery using the different programs. The relationship between measurements on lateral and posteroanterior cephalograms and the similarity scores was evaluated using the Spearman rank correlation. RESULTS: The similarity scores were found to be lower with the Face++ program. When looking at the photo types, it was observed that the similarity scores were higher in the smiling photos. A statistically significant difference in the similarity scores (P < 0.05) was found between the relaxed and smiling photographs using the different programs. The correlation between the cephalometric and posteroanterior measurements and the similarity scores was not significant (P > 0.05). CONCLUSION: SARME treatment caused a significant change in the similarity scores calculated with the help of three different facial recognition programs. The highest similarity scores were found in the smiling photographs, whereas the lowest scores were found in the profile photographs.
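The Spearman rank correlation used above to relate cephalometric measurements to similarity scores is Pearson's correlation computed on ranks (with average ranks for ties). A compact standard-library sketch, for illustration only; `scipy.stats.spearmanr` is the usual production choice.

```python
def _ranks(xs):
    """1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A coefficient near zero, as reported in the study, indicates no monotonic relationship between the skeletal measurements and the similarity scores.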

6.
BMC Bioinformatics ; 24(1): 221, 2023 May 31.
Article in English | MEDLINE | ID: mdl-37259021

ABSTRACT

BACKGROUND: As genome sequencing becomes better integrated into scientific research, government policy, and personalized medicine, the primary challenge for researchers is shifting from generating raw data to analyzing these vast datasets. Although much work has been done to reduce compute times using various configurations of traditional CPU computing infrastructures, Graphics Processing Units (GPUs) offer opportunities to accelerate genomic workflows by orders of magnitude. Here we benchmark one GPU-accelerated software suite called NVIDIA Parabricks on Amazon Web Services (AWS), Google Cloud Platform (GCP), and an NVIDIA DGX cluster. We benchmarked six variant calling pipelines, including two germline callers (HaplotypeCaller and DeepVariant) and four somatic callers (Mutect2, MuSE, LoFreq, SomaticSniper). RESULTS: We achieved up to 65× acceleration with germline variant callers, bringing HaplotypeCaller runtimes down from 36 h to 33 min on AWS, 35 min on GCP, and 24 min on the NVIDIA DGX. Somatic callers exhibited more variation between the number of GPUs and computing platforms. On cloud platforms, GPU-accelerated germline callers resulted in cost savings compared with CPU runs, whereas some somatic callers were more expensive than CPU runs because their GPU acceleration was not sufficient to overcome the increased GPU cost. CONCLUSIONS: Germline variant callers scaled well with the number of GPUs across platforms, whereas somatic variant callers exhibited more variation in the number of GPUs with the fastest runtimes, suggesting that, at least with the version of Parabricks used here, these workflows are less GPU optimized and require benchmarking on the platform of choice before being deployed at production scales. Our study demonstrates that GPUs can be used to greatly accelerate genomic workflows, bringing urgent societal advances in the areas of biosurveillance and personalized medicine closer within reach.
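The headline 65× figure follows directly from the reported runtimes, assuming the 36 h number is the CPU baseline for the same AWS pipeline (the paper does not state a separate CPU baseline per platform):

```python
CPU_BASELINE_MIN = 36 * 60  # HaplotypeCaller CPU runtime: 36 h in minutes
GPU_RUNTIMES_MIN = {"AWS": 33, "GCP": 35, "NVIDIA DGX": 24}

# Speedup = CPU minutes / GPU minutes on each platform
speedups = {p: CPU_BASELINE_MIN / m for p, m in GPU_RUNTIMES_MIN.items()}
# 2160 / 33 on AWS gives roughly the reported 65x acceleration
```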


Asunto(s)
Gráficos por Computador , Programas Informáticos , Flujo de Trabajo , Genómica
7.
J Pathol Inform ; 14: 100303, 2023.
Article in English | MEDLINE | ID: mdl-36941960

ABSTRACT

Background: Reflexive laboratory testing workflows can improve the assessment of patients receiving pain medications chronically, but complex workflows requiring pathologist input and interpretation may not be well-supported by traditional laboratory information systems. In this work, we describe the development of a web application that improves the efficiency of pathologists and laboratory staff in delivering actionable toxicology results. Method: Before designing the application, we set out to understand the entire workflow including the laboratory workflow and pathologist review. Additionally, we gathered requirements and specifications from stakeholders. Finally, to assess the performance of the implementation of the application, we surveyed stakeholders and documented the approximate amount of time that is required in each step of the workflow. Results: A web-based application was chosen for the ease of access for users. Relevant clinical data was routinely received and displayed in the application. The workflows in the laboratory and during the interpretation process served as the basis of the user interface. With the addition of auto-filing software, the return on investment was significant. The laboratory saved the equivalent of one full-time employee in time by automating file management and result entry. Discussion: Implementation of a purpose-built application to support reflex and interpretation workflows in a clinical pathology practice has led to a significant improvement in laboratory efficiency. Custom- and purpose-built applications can help reduce staff burnout, reduce transcription errors, and allow staff to focus on more critical issues around quality.

8.
IEEE Trans Serv Comput ; 16(1): 162-176, 2023.
Article in English | MEDLINE | ID: mdl-36776787

ABSTRACT

The emergence of cloud and edge computing has enabled rapid development and deployment of Internet-centric distributed applications. There are many platforms and tools that can facilitate users to develop distributed business process (BP) applications by composing relevant service components in a plug and play manner. However, there is no guarantee that a BP application developed in this way is fault-free. In this paper, we formalize the problem of collaborative BP fault resolution which aims to utilize information from existing fault-free BPs that use similar services to resolve faults in a user developed BP. We present an approach based on association analysis of pairwise transformations between a faulty BP and existing BPs to identify the smallest possible set of transformations to resolve the fault(s) in the user developed BP. An extensive experimental evaluation over both synthetically generated faulty BPs and real BPs developed by users shows the effectiveness of our approach.
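The pairwise-transformation idea can be approximated with a sequence diff. The toy sketch below (service names invented) picks, from a set of fault-free reference BPs, the one that requires the fewest edit operations to reach from the faulty BP; the actual approach uses association analysis over many BPs, which this sketch does not attempt.

```python
from difflib import SequenceMatcher

def candidate_fixes(faulty, references):
    """Return (best_reference, edit_ops): the reference BP reachable from the
    faulty BP with the fewest insert/delete/replace operations on services."""
    best = None
    for ref in references:
        ops = [op for op in SequenceMatcher(a=faulty, b=ref).get_opcodes()
               if op[0] != "equal"]
        if best is None or len(ops) < len(best[1]):
            best = (ref, ops)
    return best

# Hypothetical BPs as ordered lists of service names:
faulty = ["auth", "fetch", "pay"]
refs = [["auth", "validate", "fetch", "pay"],   # one insertion away
        ["login", "fetch", "ship"]]             # two replacements away
best_ref, ops = candidate_fixes(faulty, refs)
```

Here the minimal repair is the single insertion of the missing `validate` service.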

9.
J Intell Manuf ; 34(6): 2765-2781, 2023.
Article in English | MEDLINE | ID: mdl-35669337

ABSTRACT

The development of modern manufacturing requires key solutions to enhance the intelligence of manufacturing, such as digitalization, real-time monitoring, or simulation techniques. For smart robotic manufacturing, the modern approach to robot programming and process planning aims for both high efficiency and energy awareness. During the design and manufacturing stages, optimization becomes crucial and can be fulfilled by means of appropriate digital manufacturing tools. This paper presents the development of a Digital Twin for a robotic deburring workcell along with the process planning and robot programming. Considering a large workpiece, a new robot programming solution was implemented, based on image processing, to safely re-machine only areas where burrs could not be completely removed in the main deburring routine. The work also covers the development of a new web platform to remotely monitor the robotic workcell, to trigger alerts for unexpected events, and to allow control by authorized personnel, enabled by robot web services following a RESTful architectural style that establishes a communication link to the robot virtual controller. The aim of this research is to integrate the Digital Twin with the innovative proposals of Industry 4.0, offering a project-based model of smart robotic manufacturing and a novel approach to concepts such as Cyber-Physical Systems, digitalization, data acquisition, continuous monitoring, and intelligent solutions. Furthermore, the work covers energy consumption strategies for energy-aware robotic manufacturing. Finally, the results of energy-efficient motion planning along with signal-based scheduling optimization of the robotic deburring cell are discussed.

10.
Front Med (Lausanne) ; 10: 1305415, 2023.
Article in English | MEDLINE | ID: mdl-38259836

ABSTRACT

The growing interest in data-driven medicine, in conjunction with the formation of initiatives such as the European Health Data Space (EHDS) has demonstrated the need for methodologies that are capable of facilitating privacy-preserving data analysis. Distributed Analytics (DA) as an enabler for privacy-preserving analysis across multiple data sources has shown its potential to support data-intensive research. However, the application of DA creates new challenges stemming from its distributed nature, such as identifying single points of failure (SPOFs) in DA tasks before their actual execution. Failing to detect such SPOFs can, for example, result in improper termination of the DA code, necessitating additional efforts from multiple stakeholders to resolve the malfunctions. Moreover, these malfunctions disrupt the seamless conduct of DA and entail several crucial consequences, including technical obstacles to resolve the issues, potential delays in research outcomes, and increased costs. In this study, we address this challenge by introducing a concept based on a method called Smoke Testing, an initial and foundational test run to ensure the operability of the analysis code. We review existing DA platforms and systematically extract six specific Smoke Testing criteria for DA applications. With these criteria in mind, we create an interactive environment called Development Environment for AuTomated and Holistic Smoke Testing of Analysis-Runs (DEATHSTAR), which allows researchers to perform Smoke Tests on their DA experiments. We conduct a user-study with 29 participants to assess our environment and additionally apply it to three real use cases. The results of our evaluation validate its effectiveness, revealing that 96.6% of the analyses created and (Smoke) tested by participants using our approach successfully terminated without any errors. 
Thus, by incorporating Smoke Testing as a fundamental method, our approach helps identify potential malfunctions early in the development process, ensuring smoother data-driven research within the scope of DA. Through its flexibility and adaptability to diverse real use cases, our solution enables more robust and efficient development of DA experiments, which contributes to their reliability.
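A smoke test in this sense is just a cheap pre-flight run of the analysis code before distributed execution. One way to sketch the core check with the standard library (this is an illustration of the concept, not the DEATHSTAR implementation):

```python
import subprocess
import sys
import tempfile

def smoke_test(analysis_code: str, timeout: int = 30) -> bool:
    """Run analysis code in a fresh interpreter; pass if it exits cleanly.

    A non-zero exit status (or an exception) flags a single point of failure
    before the code is shipped to the distributed data sources.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(analysis_code)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, timeout=timeout)
    return result.returncode == 0
```

A real smoke-testing environment would additionally provide representative dummy data and check resource limits, but the pass/fail contract is the same.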

11.
Proc Int Conf Distrib Comput Syst ; 2022: 1306-1309, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36506615

ABSTRACT

Cloud computing and the Internet-ware software paradigm have enabled rapid development of distributed business process (BP) applications. Several tools are available to facilitate automated/semi-automated development and deployment of such distributed BPs by orchestrating relevant service components in a plug-and-play fashion. However, the BPs developed using such tools are not guaranteed to be fault-free. In this demonstration, we present a tool called BP-DEBUG for debugging and automated repair of faulty BPs. BP-DEBUG implements our Collaborative Fault Resolution (CFR) approach, which utilizes the knowledge of existing BPs that use a similar set of web services for fault detection and resolution in a given user BP. Essentially, CFR attempts to determine any semantic and structural differences between a faulty BP and related BPs and computes a minimum set of transformations which can be used to repair the faulty BP. Demo url: https://youtu.be/mf49oSekLOA.

13.
Earth Sci Inform ; 15(3): 1513-1525, 2022.
Article in English | MEDLINE | ID: mdl-36003898

ABSTRACT

GeoGateway (http://geo-gateway.org) is a web-based interface for analysis and modeling of geodetic imaging data and for supporting response to related disasters. Geodetic imaging data products currently supported by GeoGateway include Global Navigation Satellite System (GNSS) daily position time series, with derived velocities and displacements, and airborne Interferometric Synthetic Aperture Radar (InSAR) from NASA's UAVSAR platform. GeoGateway allows users to layer data products in a web map interface and extract information with various tools. Extracted products can be downloaded for further analysis. GeoGateway includes overlays of California fault traces, seismicity from user-selected search parameters, and user-supplied map files. GeoGateway also provides earthquake nowcasts and hazard maps, as well as products created for response to related natural disasters. A user guide is available in the GeoGateway interface. The GeoGateway development team is also growing the user base through workshops, webinars, and video tutorials. GeoGateway is used in the classroom and for research by experts and non-experts, including students.

14.
Sensors (Basel) ; 22(14)2022 Jul 08.
Article in English | MEDLINE | ID: mdl-35890820

ABSTRACT

The use of software and IoT services is increasing significantly among people with special needs, who constitute 15% of the world's population. However, selecting appropriate services to create a composite assistive service based on the evolving needs and context of disabled user groups remains a challenging research endeavor. Our research applies a scenario-based design technique to contribute (1) an inclusive disability ontology for assistive service selection, (2) semi-synthetic generated disability service datasets, and (3) a machine learning (ML) framework to choose services adaptively to suit the dynamic requirements of people with special needs. The ML-based selection framework is applied in two complementary phases. In the first phase, all available atomic tasks are assessed to determine their appropriateness to the user goal and profiles, whereas in the subsequent phase, the list of service providers is narrowed by matching their quality-of-service factors against the context and characteristics of the disabled person. Our methodology is centered around a myriad of user characteristics, including their disability profile, preferences, environment, and available IT resources. To this end, we extended the widely used QWS V2.0 and WS-DREAM web services datasets with a fusion of selected accessibility features. To ascertain the validity of our approach, we compared its performance against common multi-criteria decision making (MCDM) models, namely AHP, SAW, PROMETHEE, and TOPSIS. The findings demonstrate superior service selection accuracy in contrast to the other methods while ensuring accessibility requirements are satisfied.
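Of the MCDM baselines named above, SAW (simple additive weighting) is the simplest: normalize each quality-of-service criterion, then rank providers by their weighted sum. A standard-library sketch with invented QoS data, where reliability is a benefit criterion (higher is better) and latency a cost criterion (lower is better):

```python
def saw_rank(providers, weights, benefit):
    """Rank providers by SAW score.

    providers: {name: (criterion_1, criterion_2, ...)}
    weights:   one weight per criterion (summing to 1)
    benefit:   benefit[i] is True when higher values of criterion i are better
    """
    cols = list(zip(*providers.values()))  # values per criterion
    scored = {}
    for name, qos in providers.items():
        score = 0.0
        for i, w in enumerate(weights):
            lo, hi = min(cols[i]), max(cols[i])
            if hi == lo:
                norm = 1.0  # criterion does not discriminate
            elif benefit[i]:
                norm = (qos[i] - lo) / (hi - lo)
            else:
                norm = (hi - qos[i]) / (hi - lo)
            score += w * norm
        scored[name] = score
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical providers: (reliability, latency_ms)
ranking = saw_rank({"A": (0.9, 200.0), "B": (0.8, 100.0)},
                   weights=(0.7, 0.3), benefit=(True, False))
```

Weighting reliability at 0.7 makes the more reliable but slower provider win; the accessibility-aware framework in the paper adds user-profile matching on top of this kind of QoS scoring.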


Subject(s)
Disabled Persons, Humans, Machine Learning
15.
Expert Syst Appl ; 210: 118227, 2022 Dec 30.
Article in English | MEDLINE | ID: mdl-35880010

ABSTRACT

COVID-19 is a global pandemic that mostly affects patients' respiratory systems, and the only way to protect oneself against the virus at present is to diagnose the illness, isolate the patient, and provide immunization. In the present situation, the testing used to predict COVID-19 is inefficient and results in more false positives. This difficulty can be solved by developing a remote medical decision support system that detects illness using CT scans or X-ray images with less manual interaction and is less prone to errors. State-of-the-art techniques mainly use complex deep learning architectures which are not very effective when deployed on resource-constrained edge devices. To overcome this problem, a multi-objective Modified Heat Transfer Search (MOMHTS) optimized hybrid Random Forest Deep Learning (HRFDL) classifier is proposed in this paper. The MOMHTS algorithm optimizes the deep learning model in the HRFDL architecture by tuning its hyperparameters to suit resource-constrained edge devices. To evaluate the efficiency of this technique, extensive experimentation is conducted on two real-time datasets, namely the COVID-19 lung CT scan dataset and the Chest X-ray Images (Pneumonia) dataset. The proposed methodology offers increased speed for communication between the IoT devices and COVID-19 detection via the MOMHTS-optimized HRFDL classifier, which is modified to support devices with only minimal computation and storage. It achieves an accuracy of 99% on both datasets with minimal computational time, cost, and storage. Based on the simulation outcomes, we conclude that the proposed methodology is an appropriate fit for edge computing detection of COVID-19 and pneumonia with high detection accuracy.

16.
JAMIA Open ; 5(2): ooac038, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35651522

ABSTRACT

Objective: Facilitate the multi-appointment scheduling problems (MASPs) characteristic of longitudinal clinical research studies. Additional goals include: reducing management time, optimizing clinical resources, and securing personally identifiable information. Materials and methods: Following a model view controller architecture, we developed a web-based tool written in Python 3. Results: Smart Scheduling (SMASCH) system facilitates clinical research and integrated care programs in Luxembourg, providing features to better manage MASPs and speed up management tasks. It is available both as a Linux package and Docker image (https://smasch.pages.uni.lu). Discussion: The long-term requirements of longitudinal clinical research studies justify the employment of flexible and well-maintained frameworks and libraries through an iterative software life-cycle suited to respond to rapidly changing scenarios. Conclusions: SMASCH is a free and open-source scheduling system for clinical studies able to satisfy recent data regulations providing features for better data accountability. Better scheduling systems can help optimize several metrics that ultimately affect the success of clinical studies.

17.
Sensors (Basel) ; 22(7)2022 Mar 23.
Article in English | MEDLINE | ID: mdl-35408089

ABSTRACT

This paper presents a novel solution in the field of the integration of the Smart Grid and the Internet of Things. A web platform able to offer generic users a RESTful interface to IEC 61850 servers is proposed. The web platform enables the mapping of information maintained by an IEC 61850 server into MQTT messages. Suitable mechanisms to enable interoperable exchange of information were defined. The paper presents the main features offered by the proposed platform, and the originality of the proposal is highlighted by comparison with the current literature. A prototype was realized; the software implementation choices are described, and the main results of its evaluation are presented.
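A mapping from IEC 61850 data objects to MQTT messages might look like the following sketch. The topic scheme (IED/logical device/logical node/data object) and the JSON payload fields are illustrative assumptions, not the paper's actual design, and a real client library (e.g. paho-mqtt) would handle the publishing.

```python
import json

def to_mqtt(ied: str, ld: str, ln: str, do: str, value, quality: str = "good"):
    """Map an IEC 61850 data object reference to an MQTT topic + JSON payload.

    Topic mirrors the object hierarchy; payload carries value and quality,
    so generic IoT subscribers need no IEC 61850 knowledge.
    """
    topic = f"iec61850/{ied}/{ld}/{ln}/{do}"
    payload = json.dumps({"value": value, "quality": quality})
    return topic, payload

# e.g. total active power from a measurement logical node (names invented):
topic, payload = to_mqtt("IED1", "LD0", "MMXU1", "TotW", 1523.5)
```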


Subject(s)
Computer Systems, Technology, Software
18.
BMC Bioinformatics ; 23(1): 69, 2022 Feb 14.
Article in English | MEDLINE | ID: mdl-35164667

ABSTRACT

BACKGROUND: Gene ontology (GO) enrichment analysis is frequently undertaken during exploration of various -omics data sets. Despite the wide array of tools available to biologists to perform this analysis, meaningful visualisation of the overrepresented GO terms in a manner which is easy to interpret is still lacking. RESULTS: Monash Gene Ontology (MonaGO) is a novel web-based visualisation system that provides an intuitive, interactive and responsive interface for performing GO enrichment analysis and visualising the results. MonaGO supports gene lists as well as GO terms as inputs. Visualisation results can be exported as high-resolution images or restored in new sessions, allowing reproducibility of the analysis. An extensive comparison between MonaGO and 11 state-of-the-art GO enrichment visualisation tools based on 9 features revealed that MonaGO is a unique platform that simultaneously allows interactive visualisation within one single output page, directly accessible through a web browser with customisable display options. CONCLUSION: MonaGO combines dynamic clustering and interactive visualisation as well as customisation options to assist biologists in obtaining meaningful representation of overrepresented GO terms, producing simplified outputs in an unbiased manner. MonaGO will facilitate the interpretation of GO analyses and assist biologists in representing their results.
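GO enrichment of the kind MonaGO visualises is typically scored with a one-sided hypergeometric test (equivalently, Fisher's exact test): given N genes in the background, K annotated with a GO term, and a study list of n genes of which k carry the term, the enrichment p-value is P(X ≥ k). A standard-library sketch (Python ≥ 3.8 for `math.comb`):

```python
from math import comb

def enrichment_p(N: int, K: int, n: int, k: int) -> float:
    """Hypergeometric upper tail P(X >= k): probability of drawing at least
    k term-annotated genes in a study list of n, from N genes of which K
    are annotated."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy example: background of 10 genes, 5 annotated; study list of 2,
# both annotated -> P = C(5,2)/C(10,2) = 10/45
p = enrichment_p(10, 5, 2, 2)
```

Real tools additionally correct these p-values for testing many GO terms at once (e.g. Benjamini-Hochberg).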


Subject(s)
Software, Cluster Analysis, Gene Ontology, Probability, Reproducibility of Results
19.
Entropy (Basel) ; 24(2)2022 Feb 05.
Article in English | MEDLINE | ID: mdl-35205537

ABSTRACT

Web services have the advantage of being able to generate new value-added services based on existing services. To effectively compose Web services, the composition process necessitates that the services that will participate in a given composite service be more trustworthy than those that provide similar functionality. The trust mechanism appears to be a promising way of determining service selection and composition. Existing trust evaluation approaches do not take customer expectations into account. Based on fuzzy set theory and probability theory, this work proposes a unique Web service trust evaluation approach that is notable for its ability to provide personalized service selection based on customer expectations and preferences. The proposed approach defines trust as a fuzzy notion that is related to prior experiences and ratings, and expresses trust in two different forms. This work mainly solves two key issues in Web service trust architectures: bootstrapping trust for newcomer services and deriving trust for composite services. The proposed approach combines the solutions to numerous issues in a natural way. The case study and comparison of approaches demonstrate that the proposed approach is feasible.

20.
Comput Methods Biomech Biomed Engin ; 25(10): 1180-1194, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35174762

ABSTRACT

In recent years, cardiovascular disease has become a prominent cause of death. Web services connect medical equipment and computers via the internet, enabling data to be exchanged and combined in novel ways. Accurate prediction of heart disease is important to protect cardiac patients before a heart attack occurs. The main obstacle is the delay in identifying heart disease at an early stage. This objective is achieved by using machine learning methods with rich healthcare information on heart diseases. In this paper, a smart healthcare method is proposed for the prediction of heart disease using the Biogeography optimization algorithm and Mexican hat wavelet to enhance Dragonfly algorithm optimization with a mixed-kernel-based extreme learning machine (BMDA-MKELM) approach. Here, data are gathered from two sources: sensor nodes and electronic medical records. An Android-based design is utilized to gather the patient data, with a reliable cloud-based scheme for data storage. For further evaluation of the prediction of heart disease, data are gathered from cloud computing services. Finally, the BMDA-MKELM-based prediction scheme is able to classify cardiovascular diseases. In addition, the proposed prediction scheme is compared with another method with respect to measures such as accuracy, precision, specificity, and sensitivity. The experimental results show that the proposed approach achieves better results for the prediction of heart disease when compared with other methods.


Subject(s)
Heart Diseases, Machine Learning, Algorithms, Amines, Delivery of Health Care, Humans