ABSTRACT
This paper outlines a protocol for deploying a cloud-based universal medical image repository system. The proposal covers not only the deployment but also the automatic expansion of the platform, incorporating Artificial Intelligence (AI) for the analysis of medical imaging examinations. The methodology encompasses efficient data management through a universal database, along with the deployment of various AI models designed to assist diagnostic decision-making. The protocol addresses technical challenges that affect every phase of the workflow, from data management to the deployment of AI models in the healthcare sector, including ethical considerations, compliance with legal regulations, establishing user trust, and ensuring data security. The system has been deployed, with a tested and validated proof of concept, and is capable of receiving thousands of images daily and sustaining the ongoing deployment of new AI models to expedite the analysis of medical imaging exams.
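To make the ingestion-plus-AI workflow concrete, here is a minimal Python sketch (illustrative only, not the deployed system; the model registry and SQLite storage are assumptions) of a pipeline that stores incoming images by content hash and fans each exam out to pluggable AI models.

import hashlib
import sqlite3

class ImageRepository:
    """Toy stand-in for the cloud repository: stores image bytes by content
    hash and dispatches each new exam to every registered AI model."""
    def __init__(self, db_path: str = ":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute("CREATE TABLE IF NOT EXISTS exams (sha256 TEXT PRIMARY KEY, modality TEXT)")
        self.db.execute("CREATE TABLE IF NOT EXISTS findings (sha256 TEXT, model TEXT, result TEXT)")
        self.models = {}  # name -> callable(bytes) -> str

    def register_model(self, name, fn):
        self.models[name] = fn  # new models can be plugged in at any time

    def ingest(self, image_bytes: bytes, modality: str) -> str:
        digest = hashlib.sha256(image_bytes).hexdigest()
        self.db.execute("INSERT OR IGNORE INTO exams VALUES (?, ?)", (digest, modality))
        for name, fn in self.models.items():  # fan out to all registered models
            self.db.execute("INSERT INTO findings VALUES (?, ?, ?)",
                            (digest, name, fn(image_bytes)))
        self.db.commit()
        return digest

repo = ImageRepository()
repo.register_model("size-check", lambda b: f"{len(b)} bytes")  # placeholder "model"
print(repo.ingest(b"\x00" * 1024, modality="CR"))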
Subject(s)
Artificial Intelligence , Cloud Computing , Humans , Diagnostic Imaging/methods , Public Health , Pilot Projects , Databases, Factual , Computer Security , Data Management/methods
ABSTRACT
This work investigated the annual variations of the dry snow radar zone (DSRZ) and wet snow radar zone (WSRZ) in the north of the Antarctic Peninsula between 2015 and 2023. A dedicated code for snow zone detection on Sentinel-1 images was created in Google Earth Engine, combining the CryoSat-2 digital elevation model and air temperature data from ERA5. Regions with backscatter coefficient (σ°) values exceeding -6.5 dB were considered the extent of surface melt occurrence, and the dry snow line was taken to coincide with the -11 °C isotherm of the average annual air temperature. The annual variation in WSRZ exhibited moderate correlations with annual average air temperature, total precipitation, and the sum of annual degree-days. However, statistical tests indicated low coefficients of determination and no significant trends relating DSRZ behavior to atmospheric variables. The reduction in DSRZ area in 2019/2020 and 2020/2021 compared with 2018/2019 indicated an upward shift of the dry snow line in this AP region. The methodology demonstrated its efficacy for both quantitative and qualitative analyses of data obtained in digital processing environments, allowing large-scale monitoring of spatial and temporal variations and a better understanding of changes in glacier mass loss.
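As an illustration of the backscatter-threshold step, the following Google Earth Engine Python sketch (not the authors' code; the area of interest, season window, and configured Earth Engine credentials are assumptions) maps the wet-snow zone as pixels whose mean σ° exceeds -6.5 dB.

import ee

ee.Initialize()  # assumes Earth Engine credentials are already configured

aoi = ee.Geometry.Rectangle([-64.0, -65.5, -56.0, -63.0])  # hypothetical AOI

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2019-11-01', '2020-03-31')  # one hypothetical melt season
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'HH'))
      .select('HH'))

mean_sigma0 = s1.mean()          # GRD backscatter is already in dB
wet_snow = mean_sigma0.gt(-6.5)  # 1 where surface melt is inferred

# Wet-snow radar zone area in m^2 (divide by 1e6 for km^2)
area = (wet_snow.multiply(ee.Image.pixelArea())
        .reduceRegion(reducer=ee.Reducer.sum(), geometry=aoi,
                      scale=100, maxPixels=1e13))
print(area.getInfo())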
Subject(s)
Cloud Computing , Radar , Snow , Antarctic Regions , Seasons , Environmental Monitoring/methods , Temperature
ABSTRACT
The widespread adoption of cloud computing necessitates privacy-preserving techniques that allow information to be processed without disclosure. This paper proposes a method to increase the accuracy and performance of privacy-preserving Convolutional Neural Networks with Homomorphic Encryption (CNN-HE) via Self-Learning Activation Functions (SLAF). SLAFs are polynomials with trainable coefficients that are updated during training, together with the synaptic weights, independently for each polynomial, to learn task-specific and CNN-specific features. We theoretically prove that SLAFs can approximate any continuous activation function to a desired error as a function of the SLAF degree. Two CNN-HE models are proposed: CNN-HE-SLAF and CNN-HE-SLAF-R. In the first model, all activation functions are replaced by SLAFs, and the CNN is trained to find both weights and coefficients. In the second, the CNN is trained with the original activation, the weights are then fixed, the activation is substituted by SLAF, and the CNN is briefly re-trained to adapt the SLAF coefficients. We show that such self-learning can achieve the same accuracy (99.38%) as non-polynomial ReLU over non-homomorphic CNNs, and leads to higher accuracy (99.21%) and higher performance (6.26 times faster) than the state-of-the-art CNN-HE CryptoNets on the MNIST optical character recognition benchmark.
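A minimal PyTorch sketch of the idea, under stated assumptions (degree-2 polynomial, near-identity initialization; not the paper's implementation): a polynomial activation whose coefficients are learned jointly with the network weights, making it a drop-in, HE-friendly replacement for ReLU.

import torch
import torch.nn as nn

class PolyActivation(nn.Module):
    """y = c0 + c1*x + ... + cd*x^d, with trainable coefficients c_i."""
    def __init__(self, degree: int = 2):
        super().__init__()
        coeffs = torch.zeros(degree + 1)
        coeffs[1] = 1.0  # start near the identity so early training is stable
        self.coeffs = nn.Parameter(coeffs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stack x^0..x^d on a trailing axis, weight by coefficients, and sum.
        powers = torch.stack([x ** i for i in range(self.coeffs.numel())], dim=-1)
        return (powers * self.coeffs).sum(dim=-1)

# Usage: replace ReLU in a small CNN; coefficients train with the weights.
layer = nn.Sequential(nn.Conv2d(1, 8, 3), PolyActivation(degree=2))
out = layer(torch.randn(4, 1, 28, 28))
print(out.shape)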
Subject(s)
Computer Security , Neural Networks, Computer , Privacy , Humans , Algorithms , Cloud Computing
ABSTRACT
Volume abnormalities in subcortical structures, including the hippocampus, amygdala, thalamus, caudate, putamen, and globus pallidus, have been observed in schizophrenia (SZ) and bipolar disorder (BD); however, not all individuals with these disorders exhibit such changes, and the specific patterns and severity of volume changes may vary between individuals and at different stages of the disease. This study aims to compare the volumes of these subcortical structures between healthy subjects and individuals diagnosed with SZ or BD. Volumetric measurements of the lateral ventricle, globus pallidus, caudate, putamen, hippocampus, and amygdala were made by MRI in 52 healthy subjects (HS), 33 patients with SZ, and 46 patients with BD. Automatic segmentation methods were used to analyze the MR images with VolBrain and MRICloud. Hippocampus, amygdala, and lateral ventricle volumes were increased in SZ and BD patients compared with control subjects using MRICloud. Globus pallidus and caudate volumes were increased in SZ and BD patients compared with control subjects using VolBrain. We suggest that these results will contribute to the assessment of subcortical progression, pathology, and anomalies of subcortical brain structures in SZ and BD patients. In patients with psychiatric disorders, VolBrain and MRICloud can detect subtle structural differences in the brain.
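For illustration only (synthetic numbers, not the study's data): once VolBrain/MRICloud outputs are tabulated, a group comparison of a segmented volume reduces to a simple statistical test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hs_hippocampus = rng.normal(3.9, 0.4, size=52)  # cm^3, healthy subjects (n=52)
sz_hippocampus = rng.normal(4.2, 0.5, size=33)  # cm^3, SZ group (n=33)

# Welch's t-test does not assume equal variances between groups.
t, p = stats.ttest_ind(hs_hippocampus, sz_hippocampus, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")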
Subject(s)
Humans , Male , Female , Adult , Middle Aged , Schizophrenia/diagnostic imaging , Bipolar Disorder/diagnostic imaging , Magnetic Resonance Imaging/methods , Organ Size , Schizophrenia/pathology , Bipolar Disorder/pathology , Cross-Sectional Studies , Retrospective Studies , Cloud Computing
ABSTRACT
This work presents Chameleon, a cloud computing (CC) Industry 4.0 (I4) neutron spectrum unfolding code. The code was designed in the Python programming language using the Streamlit® framework, and it is executed in the cloud, as an I4 CC technology, over the internet using mobile devices with internet connectivity and a web browser. In its first version, as a proof of concept, the SPUNIT algorithm was implemented. The main functionalities and the preliminary tests performed to validate the code are presented. Chameleon solves the neutron spectrum unfolding problem and is easy, friendly, and intuitive to use. It can be applied successfully in various workplaces. More validation tests are in progress. Future implementations will include improving the graphical user interface, adding other algorithms, such as GRAVEL, MAXED, and neural networks, and implementing an algorithm to estimate uncertainties in the calculated integral quantities.
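A skeleton of such a Streamlit app is sketched below (illustrative, not the Chameleon source; spunit_unfold is a hypothetical multiplicative-update stand-in for SPUNIT, and the response-matrix file name is assumed).

import numpy as np
import streamlit as st

def spunit_unfold(counts: np.ndarray, response: np.ndarray,
                  iterations: int = 500) -> np.ndarray:
    """Placeholder iterative unfolding (multiplicative update, SPUNIT-style)."""
    spectrum = np.ones(response.shape[1])
    for _ in range(iterations):
        predicted = response @ spectrum
        ratio = counts / np.maximum(predicted, 1e-12)
        spectrum *= (response.T @ ratio) / np.maximum(
            response.T @ np.ones_like(ratio), 1e-12)
    return spectrum

st.title("Neutron spectrum unfolding")
uploaded = st.file_uploader("Upload sphere count rates (CSV)")
if uploaded is not None:
    counts = np.loadtxt(uploaded, delimiter=",")
    response = np.loadtxt("response_matrix.csv", delimiter=",")  # assumed file
    st.line_chart(spunit_unfold(counts, response))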
Subject(s)
Algorithms , Cloud Computing , Neural Networks, Computer , Internet , Neutrons
ABSTRACT
Startups are companies whose business models are characterized by innovation, speed, flexibility, and a high capacity to adapt to markets. Operating in different socioeconomic sectors, they promise to create and transform products and services. The emergence and dissemination of these companies occur at a historical moment of changes that began in the 1970s, marked by the crises generated by the exhaustion of the paradigm of urban-industrial society. In Brazil, this business model has expanded significantly, reaching 13,374 companies in the last five years. Against this background, the objective of this research was to understand how subjects, groups, and institutions attribute meaning to the experience of working in so-called startups. The theoretical part analyzes, from a critical perspective, the social and economic conditions that enabled the emergence and dissemination of startups. The empirical part presents entrepreneurs' accounts of the general context of working in startups. The article concludes that there is a capitalistic instrumentalization of specific subjective components, selected and put into circulation to strengthen the financialized capitalist mode of production. (AU)
Subject(s)
Humans , Male , Female , Personal Satisfaction , Psychology, Social , Work , Organizations , Capitalism , Organization and Administration , Organizational Innovation , Peer Group , Personality , Politics , Professional Corporations , Professional Practice , Psychology , Public Relations , Risk Management , Safety , Salaries and Fringe Benefits , Social Adjustment , Social Change , Social Values , Technology , Thinking , Work Hours , Decision Making, Organizational , Competitive Bidding , Capital Financing , Artificial Intelligence , Consensus Development Conferences as Topic , Organizational Culture , Health , Administrative Personnel , Occupational Health , Planning Techniques , Adolescent , Entrepreneurship , Employment, Supported , Private Sector , Models, Organizational , Interview , Total Quality Management , Time Management , Efficiency, Organizational , Competitive Behavior , Natural Resources , Consumer Behavior , Contract Services , Benchmarking , Patents , Outsourced Services , Cultural Evolution , Marketing , Diffusion of Innovation , Economic Competition , Efficiency , Employment , Scientific and Educational Events , Product Commercialization , Evaluation Studies as Topic , Agribusiness , Planning , High-Throughput Screening Assays , Small Business , Social Networking , Financial Management , Inventions , Crowdsourcing , Cloud Computing , Work-Life Balance , Stakeholder Participation , Sustainable Growth , Freedom , Big Data , Facilities and Services Utilization , Electronic Commerce , Blockchain , Universal Design , Augmented Reality , Intelligence , Health Investments , Mass Media , Occupations
ABSTRACT
Recently, the number of vehicles equipped with wireless connections has increased considerably. The impact of that growth on areas such as telecommunications, infotainment, and automated driving is enormous. More and more drivers want to be part of a vehicular network, despite the implications and risks that, for instance, the openness of wireless communications, its dynamic topology, and its considerable size may bring. Undoubtedly, this trend is due to the benefits the vehicular network can offer. Generally, a vehicular network has two modes of communication (V2I and V2V). The advantage of V2I over V2V is the roadside units' high computational and transmission power, which assures the functioning of early-warning and driving-guidance services. This paper aims to identify the principal vulnerabilities and challenges in V2I communications, the tools and methods to mitigate those vulnerabilities, the evaluation metrics used to measure the effectiveness of those tools and methods, and, based on those metrics, the methods or tools that provide the best results. Researchers have identified non-resistance to attacks, the regular updating and exposure of keys, and the high dependence on certification authorities as the main vulnerabilities. Accordingly, the authors found attack-resistant schemes, authentication schemes, privacy protection models, and intrusion detection and prevention systems. Of the security solutions analyzed in this review, the authors determined that most use metrics such as computational cost and communication overhead to measure their performance. Additionally, they determined that solutions using emerging technologies such as fog/edge/cloud computing present better results than the rest. Finally, they established that the principal challenge in V2I communication is to provide and maintain a safe and reliable communication channel to prevent adversaries from taking control of the medium.
Subject(s)
Computer Security , Confidentiality , Cloud Computing , Computer Communication Networks , Communication
ABSTRACT
The rise of digitalization, sensory devices, cloud computing, and internet of things (IoT) technologies enables the design of novel digital product lifecycle management (DPLM) applications for use cases such as the manufacturing and delivery of digital products. Verifying the fulfillment or violation of agreements defined in digital contracts is a key task in digital business transactions. However, this verification represents a challenge when validating both the integrity of digital product content and the transactions performed during the multiple stages of the DPLM. This paper presents a traceability method for DPLM that integrates online and offline verification mechanisms, based on blockchain and fingerprinting, respectively. A blockchain lifecycle registration model allows organizations to register the exchange of digital products in the cloud with partners and/or consumers throughout the DPLM stages, as well as to verify the fulfillment of agreements at each stage. The fingerprinting scheme is used for offline verification of digital product integrity and for registering the DPLM logs within digital products, which is useful in dispute or agreement-violation scenarios. We built a DPLM service prototype based on this method, implemented as a cloud computing service. A case study based on the DPLM of audio files was conducted to evaluate this prototype. The experimental evaluation revealed the ability of this method to be applied to DPLM in real scenarios in an efficient manner.
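A minimal Python sketch of the two mechanisms (illustrative only; a hash-chained list stands in for the blockchain, and the file name is hypothetical):

import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """SHA-256 digest of a digital product's content (offline integrity check)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class LifecycleLog:
    """Hash-chained records, one per DPLM stage (a toy stand-in for a blockchain)."""
    def __init__(self):
        self.entries = []

    def register(self, product: str, stage: str, digest: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {"product": product, "stage": stage, "digest": digest,
                "ts": time.time(), "prev": prev}
        # Each entry commits to its predecessor, making tampering detectable.
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

# Hypothetical usage: register an audio master at the manufacturing stage,
# then verify its integrity offline at a later stage.
log = LifecycleLog()
# log.register("song.wav", "manufacturing", fingerprint("song.wav"))
# assert fingerprint("song.wav") == log.entries[-1]["digest"]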
Subject(s)
Blockchain , Internet of Things , Computer Security , Cloud Computing , Technology
ABSTRACT
Cloud storage has become a keystone for organizations managing the large volumes of data produced by sensors at the edge as well as the information produced by deep learning and machine learning applications. Nevertheless, the latency produced by geographically distributed systems deployed on the edge, the fog, or the cloud leads to delays that end-users observe as high response times. In this paper, we present an efficient scheme for the management and storage of Internet of Things (IoT) data in edge-fog-cloud environments. In our proposal, entities called data containers are logically coupled with nano/microservices deployed on the edge, the fog, or the cloud. The data containers implement a hierarchical cache file system, with storage levels including in-memory, file system, and cloud services, for transparently managing the input/output data operations produced by nano/microservices (e.g., a sensor hub collecting data from sensors at the edge or machine learning applications processing data at the edge). Data containers are interconnected through a secure and efficient content delivery network, which transparently and automatically performs the continuous delivery of data through the edge-fog-cloud. A prototype of the proposed scheme was implemented and evaluated in a case study based on the management of electrocardiogram sensor data. The results reveal the suitability and efficiency of the proposed scheme.
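The hierarchical cache idea behind the data containers can be sketched as follows (an illustrative simplification, not the authors' implementation; the cloud tier is a user-supplied callback and the capacities are assumptions):

import os
from collections import OrderedDict

class HierarchicalStore:
    """Two local levels (in-memory LRU over local files) with a cloud fallback,
    mirroring the in-memory / file system / cloud hierarchy described above."""
    def __init__(self, cache_dir: str, mem_capacity: int = 128, cloud_get=None):
        self.mem = OrderedDict()      # level 1: in-memory LRU
        self.cache_dir = cache_dir    # level 2: local file system
        self.cloud_get = cloud_get    # level 3: user-supplied cloud fetch
        self.mem_capacity = mem_capacity
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, key: str) -> str:
        return os.path.join(self.cache_dir, key)

    def put(self, key: str, value: bytes) -> None:
        self.mem[key] = value
        self.mem.move_to_end(key)
        if len(self.mem) > self.mem_capacity:
            old_key, old_val = self.mem.popitem(last=False)  # evict LRU entry
            with open(self._path(old_key), "wb") as f:       # demote to disk
                f.write(old_val)

    def get(self, key: str) -> bytes:
        if key in self.mem:                      # level 1 hit
            self.mem.move_to_end(key)
            return self.mem[key]
        if os.path.exists(self._path(key)):      # level 2 hit
            with open(self._path(key), "rb") as f:
                value = f.read()
        elif self.cloud_get is not None:         # level 3: fetch from cloud
            value = self.cloud_get(key)
        else:
            raise KeyError(key)
        self.put(key, value)                     # promote to faster levels
        return value

store = HierarchicalStore("dc_cache", mem_capacity=2)
store.put("ecg-001", b"\x01\x02")
print(store.get("ecg-001"))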
Subject(s)
Cloud Computing , Computer Communication Networks , Electrocardiography , Internet
Subject(s)
COVID-19 , Brazil/epidemiology , Cloud Computing , Critical Care , Humans , Registries
ABSTRACT
Introduction: Information management speeds up processes in different work environments, using systems able to gather, organize, and link stored information. The cloud is based on existing technologies, such as virtualization and web services; it constitutes a computing milestone and adapts to diverse scenarios and contexts. Objective: To support the management of information related to polymerase chain reaction databases and accreditation processes at the University of Medical Sciences of Santiago de Cuba through the Infomed Santiago cloud. Methods: An applied technological-development study was carried out at the Provincial Information Center of Medical Sciences, the University of Medical Sciences, and the Provincial Center of Hygiene, Epidemiology and Microbiology in Santiago de Cuba, from June to December 2021. Surveys were administered to 19 professionals in the sector who worked directly on managing information related to the aforementioned databases. Results: All 19 respondents (100.0%) affirmed that they used email, the integrated chat, the cloud, and the other facilities this tool offers to store and share information. Conclusions: The cloud enabled the management of information related to healthcare and academic processes during the most critical period of COVID-19.
Subject(s)
Information Management , Cloud Computing , COVID-19
ABSTRACT
In this paper we present the design of an open-source and low-cost buoy prototype for remote monitoring of water quality variables in fish farming. The designed battery-powered system periodically measures temperature, pH and dissolved oxygen, transmitting the information locally through a low-power wide-area network protocol to a gateway connected to a cloud service for data storage and visualization. We provide a novel buoy design that can be easily constructed with off-the-shelf materials, delivering a stable anchored float for the IoT device and the probes immersed in the water pond. The prototype was tested at an operating fish farm, showing promising results for a low-cost remote monitoring tool that enables automatic data acquisition and storage in fish farming scenarios. All the elements of this design, including hardware and software designs, are freely available under permissive licenses as an open-source project.
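The buoy's duty cycle might look like the following Python sketch (illustrative; the probe drivers, the LoRaWAN uplink call, and the sampling period are placeholders, not the project's actual firmware):

import random
import struct
import time

def read_temperature_c() -> float:   # placeholder for a temperature probe driver
    return 24.0 + random.uniform(-0.5, 0.5)

def read_ph() -> float:              # placeholder for an analog pH probe driver
    return 7.1 + random.uniform(-0.1, 0.1)

def read_dissolved_oxygen_mgl() -> float:  # placeholder for a DO probe driver
    return 6.5 + random.uniform(-0.3, 0.3)

def lora_send(payload: bytes) -> None:
    """Stand-in for the LoRaWAN stack's uplink call to the gateway."""
    print(f"uplink {payload.hex()} ({len(payload)} bytes)")

SAMPLE_PERIOD_S = 15 * 60  # battery-friendly duty cycle (assumed)

while True:
    # Pack the three readings as little-endian floats to keep the uplink small.
    payload = struct.pack("<fff", read_temperature_c(), read_ph(),
                          read_dissolved_oxygen_mgl())
    lora_send(payload)
    time.sleep(SAMPLE_PERIOD_S)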
Subject(s)
Fisheries , Water Quality , Cloud Computing
ABSTRACT
The amount of available data is continuously growing. This phenomenon promotes a new concept, named big data. The flagship technologies related to big data are cloud computing (infrastructure) and Not Only SQL (NoSQL) databases (data storage). In addition, for data analysis, machine learning algorithms such as decision trees, support vector machines, artificial neural networks, and clustering techniques present promising results. In a biological context, big data has many applications due to the large number of biological databases available. Some limitations of biological big data relate to the inherent features of these data, such as high degrees of complexity and heterogeneity, since biological systems provide information from the atomic level up to interactions between organisms and their environment. Such characteristics make most bioinformatics applications difficult to build, configure, and maintain. Although the rise of big data is relatively recent, it has contributed to a better understanding of the underlying mechanisms of life. The main goal of this article is to provide a concise and reliable survey of the application of big data-related technologies in biology. To that end, some fundamental concepts of information technology, including storage resources, analysis, and data sharing, are described along with their relation to biological data.
Subject(s)
Big Data , Data Mining , Cloud Computing , Data Mining/methods , Machine Learning , Neural Networks, Computer
ABSTRACT
BACKGROUND: Industry 4.0 technologies have been widely used in the railway industry, focusing mainly on the maintenance and control tasks necessary in railway infrastructure. Given the great potential these technologies offer, the scientific community has used them in varied ways to solve a wide range of problems such as train failures, train station security, rail system control, and communication in hard-to-reach areas, among others. For this reason, this paper aims to answer the following research questions: what are the main issues in the railway transport industry, what technological strategies are currently being used to solve these issues, and which Industry 4.0 technologies are used in the railway transport industry to solve the aforementioned issues? METHODS: This study adopts a systematic literature review approach. We searched the Science Direct and Web of Science databases from January 2017 to November 2021. Studies published in conferences or journals and written in English or Spanish were included for initial evaluation. The initially included papers were analyzed by the authors and selected based on whether they helped answer the proposed research questions. RESULTS: Of the 515 retrieved articles, 109 were eligible, from which we identified three main application domains in the railway industry: monitoring, decision-making and planning techniques, and communication and security. Regarding Industry 4.0 technologies, we identified nine technologies applied in the reviewed studies: Artificial Intelligence (AI), Internet of Things (IoT), Cloud Computing, Big Data, Cybersecurity, Modelling and Simulation, Smart Decision Support Systems (SDSS), Computer Vision, and Virtual Reality (VR). This study is, to our knowledge, one of the first to show how Industry 4.0 technologies are currently being used to tackle railway industry problems and current application trends in the scientific community, which is highly useful for the development of future studies and more advanced solutions. FUNDING: Colombian national organizations Minciencias and the Mining-Energy Planning Unit.
Subject(s)
Artificial Intelligence , Internet of Things , Big Data , Cloud Computing , Technology
ABSTRACT
Cloud computing has been widely adopted over the years by practitioners and companies with a variety of requirements. With a strong economic appeal, cloud computing makes possible the idea of computing as a utility, in which computing resources can be consumed and paid for with the same convenience as electricity. One of the main characteristics of the cloud as a service is elasticity, supported by auto-scaling capabilities. The auto-scaling cloud mechanism allows resources to be adjusted dynamically to meet multiple demands. The elasticity service is best represented in critical web trading and transaction systems that must satisfy a certain service level agreement (SLA), such as maximum response time limits for different types of inbound requests. Nevertheless, existing cloud infrastructures maintained by different cloud providers often offer different cloud service costs for equivalent SLAs, depending on several factors such as contract type, VM type, auto-scaling configuration parameters, and incoming workload demand. Identifying a combination of parameters that results in SLA compliance directly in the system is often complex, and manual analysis is prone to errors due to the huge number of possibilities. This paper proposes modeling auto-scaling mechanisms in a typical cloud infrastructure using a stochastic Petri net (SPN), and employing a well-established adaptive search metaheuristic (GRASP) to discover critical trade-offs between performance and cost in cloud services. The proposed SPN models enable cloud designers to estimate the metrics of cloud services in accordance with each required SLA, such as the best configuration, cost, system response time, and throughput. The auto-scaling SPN model was extensively validated with 95% confidence against a real test-bed scenario with 18,000 samples. A case study of cloud services was used to investigate the viability of this method and to evaluate its adoptability in practice. In addition, the proposed optimization algorithm enables the identification of economical system configurations and parameterizations that satisfy the required SLA and budget constraints. The adoption of the GRASP metaheuristic and the modeling of auto-scaling mechanisms in this work can help search for quality-optimized solutions and support operational management of cloud services in practice.
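The GRASP side of the method can be sketched generically (an illustrative skeleton under simplifying assumptions, using complete candidate configurations rather than incremental construction; not the paper's implementation):

import random

def grasp(candidates, cost, neighbors, iterations=100, alpha=0.3, seed=0):
    """GRASP loop: greedy randomized construction plus local search.
    candidates: complete parameter settings (e.g., auto-scaling configs);
    cost: solution -> float; neighbors: solution -> iterable of solutions."""
    rng = random.Random(seed)
    ranked = sorted(candidates, key=cost)
    rcl = ranked[: max(1, int(alpha * len(ranked)))]  # restricted candidate list
    best = None
    for _ in range(iterations):
        solution = rng.choice(rcl)   # randomized construction (simplified)
        improved = True
        while improved:              # first-improvement local search
            improved = False
            for nb in neighbors(solution):
                if cost(nb) < cost(solution):
                    solution, improved = nb, True
                    break
        if best is None or cost(solution) < cost(best):
            best = solution
    return best

# Hypothetical usage over (min_vms, max_vms, scale_out_threshold) tuples:
# best = grasp(configs, cost=lambda c: sla_penalty(c) + infra_cost(c),
#              neighbors=adjacent_configs)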
Subject(s)
Algorithms , Cloud Computing , Workload
ABSTRACT
Containers have emerged as a more portable and efficient solution than virtual machines for cloud infrastructure, providing a flexible way to build and deploy applications. Quality of service, security, performance, and energy consumption, among others, are essential aspects of their deployment, management, and orchestration. Inappropriate resource allocation can lead to resource contention, entailing reduced performance, poor energy efficiency, and other potentially damaging effects. In this paper, we present a set of online job allocation strategies to optimize quality of service, energy savings, and completion time, considering contention for shared on-chip resources. We treat job allocation as a multilevel dynamic bin-packing problem, yielding a lightweight runtime solution that minimizes contention and energy consumption while maximizing utilization. The proposed strategies are based on two- and three-level scheduling policies with container selection, capacity distribution, and contention-aware allocation. The energy model considers the joint execution of applications of different types on shared resources, generalized by the job concentration paradigm. We provide an experimental analysis of eighty-six scheduling heuristics with scientific workloads of memory- and CPU-intensive jobs. The proposed techniques outperform classical solutions in terms of quality of service, energy savings, and completion time by 21.73-43.44%, 44.06-92.11%, and 16.38-24.17%, respectively, leading to cost-efficient resource allocation for cloud infrastructures.
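The contention-aware allocation concept can be illustrated with a simplified two-resource, first-fit sketch (hypothetical capacities and contention budget; the paper's eighty-six heuristics are far richer):

from dataclasses import dataclass, field

@dataclass
class Job:
    cpu: float            # cores requested
    mem_intensity: float  # 0..1, pressure on shared cache/memory bandwidth

@dataclass
class Machine:
    cpu_capacity: float
    jobs: list = field(default_factory=list)

    def contention(self, job: Job) -> float:
        # Co-locating memory-intensive jobs raises the contention estimate.
        return sum(j.mem_intensity for j in self.jobs) * job.mem_intensity

    def fits(self, job: Job, max_contention: float = 0.5) -> bool:
        used = sum(j.cpu for j in self.jobs)
        return (used + job.cpu <= self.cpu_capacity
                and self.contention(job) <= max_contention)

def allocate(jobs, machines):
    """First fit: place each job on the first machine where it fits within
    both CPU capacity and the contention budget; else open a new machine."""
    for job in jobs:
        target = next((m for m in machines if m.fits(job)), None)
        if target is None:
            target = Machine(cpu_capacity=8.0)
            machines.append(target)
        target.jobs.append(job)
    return machines

# Two memory-heavy jobs are separated; the CPU-heavy one co-locates: prints 2.
print(len(allocate([Job(2, 0.8), Job(2, 0.9), Job(4, 0.1)], [Machine(8.0)])))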
Subject(s)
Algorithms , Cloud Computing
ABSTRACT
MOTIVATION: Large-scale cancer genome projects have generated genomic, transcriptomic, epigenomic, and clinicopathological data from thousands of samples in almost every human tumor site. Although most omics data and their associated resources are publicly available, their full integration and interpretation to dissect the sources of gene expression modulation require specialized knowledge and software. RESULTS: We present Multiomix, an interactive cloud-based platform that allows biologists to identify genetic and epigenetic events associated with the transcriptional modulation of cancer-related genes through the analysis of multi-omics data available in public functional genomic databases or user-uploaded datasets. Multiomix consists of an integrated set of functions, pipelines, and a graphical user interface that allows the retrieval, aggregation, analysis, and visualization of different omics data sources. After the user provides the data to be analyzed, Multiomix identifies all significant correlations between mRNAs and non-mRNA genomic features (e.g., miRNA, DNA methylation, and CNV) across the genome, the predicted sequence-based interactions (e.g., miRNA-mRNA), and their associated prognostic values. AVAILABILITY AND IMPLEMENTATION: Multiomix is available at https://www.multiomix.org. The source code is freely available at https://github.com/omics-datascience/multiomix. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
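The core operation that such a platform automates can be approximated in a few lines (an illustrative sketch with synthetic data, not the Multiomix code; the gene and feature names are hypothetical):

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
samples = [f"S{i}" for i in range(40)]
mrna = pd.Series(rng.normal(size=40), index=samples, name="GATA3")  # hypothetical gene
mirnas = pd.DataFrame(rng.normal(size=(40, 100)), index=samples,
                      columns=[f"miR-{i}" for i in range(100)])     # hypothetical features

# Correlate the mRNA profile against every non-mRNA feature across samples.
results = [(m, *stats.spearmanr(mrna, mirnas[m])) for m in mirnas.columns]
table = pd.DataFrame(results, columns=["feature", "rho", "p"]).sort_values("p")
table = table.reset_index(drop=True)

# Benjamini-Hochberg adjustment over the m tests, then keep significant hits.
m = len(table)
raw = table["p"] * m / (table.index + 1)
table["p_adj"] = raw[::-1].cummin()[::-1].clip(upper=1.0)
print(table[table["p_adj"] < 0.05])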
Subject(s)
MicroRNAs , Neoplasms , Humans , Epigenomics , Cloud Computing , Genomics , Neoplasms/genetics , Software , MicroRNAs/genetics , Transcriptome , Oncogenes
ABSTRACT
The internet of things has maintained continuous growth in recent years, and its potential uses in different fields have been widely documented. Its effective use in the field of health can bring improvements in the efficiency of medical treatments, prevent risky situations, help raise the quality of service, and provide support for decision-making. The present review delves into core aspects of its use in order to explore the main trends and challenges related to the growing use of the internet of things in health, paying particular attention to the architectures used to deploy internet of things systems in this field, the security management of these systems, and the decision-support tools employed. Document analysis was used to show the main characteristics of these systems, as well as their architecture, the tools used for managing the captured data, and the security mechanisms. The use of the internet of things in the health field has great impact, improving the lives of millions of people around the world and providing great opportunities for the development of intelligent health systems. (AU)
Subject(s)
Humans , Male , Female , Medical Informatics , Health Systems , Cloud Computing/trends , Blockchain/trends , Internet of Things/trends
ABSTRACT
The recent growth of Internet of Things services and applications has increased data processing and storage requirements. The Edge computing concept aims to leverage the processing capabilities of IoT and other devices placed at the edge of the network. One embodiment of this paradigm is Fog computing, which provides an intermediate and often hierarchical processing tier between the data sources and the remote Cloud. Among the major benefits of this concept, end-to-end latency can be decreased, favoring time-sensitive applications. Moreover, the data traffic at the network core and the Cloud computing workload can be reduced. Combining the Fog computing paradigm with Complex Event Processing (CEP) and data fusion techniques has excellent potential for generating valuable knowledge and aiding decision-making processes in Internet of Things systems. In this context, we propose a multi-tier complex event processing approach (sensor node, Fog, and Cloud) that promotes fast decision making based on information with 98% accuracy. The experiments show a 77% reduction in the average message-delivery time in the network. In addition, we achieved an 82% reduction in data traffic.
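An edge-tier CEP rule of this kind can be sketched as follows (illustrative; the window size, threshold, and event fields are assumptions, not the paper's rules). Only condensed, decision-relevant events travel toward the Fog and Cloud tiers, which is where the traffic savings come from.

from collections import deque
from statistics import mean

class EdgeCEP:
    """Sliding-window rule: emit an event only when the window average
    crosses a threshold, instead of forwarding every raw reading."""
    def __init__(self, window: int = 10, threshold: float = 30.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def on_reading(self, value: float):
        self.readings.append(value)
        if len(self.readings) == self.readings.maxlen:
            avg = mean(self.readings)
            if avg > self.threshold:
                self.readings.clear()
                return {"event": "HIGH_AVG", "avg": round(avg, 2)}
        return None  # nothing forwarded: traffic saved at the edge

edge = EdgeCEP(window=5, threshold=30.0)
for v in [28, 29, 31, 33, 35, 36, 34]:
    event = edge.on_reading(v)
    if event:
        print("forward to fog:", event)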
Subject(s)
Internet of Things , Agriculture , Cloud Computing
ABSTRACT
The high demand for data processing in web applications has grown in recent years due to the increased supply of computing infrastructure as a service in the cloud computing ecosystem. This ecosystem offers benefits such as broad network access, elasticity, and resource sharing, among others. However, properly exploiting these benefits requires optimized provisioning of computational resources in the target infrastructure. Several studies in the literature improve the quality of this management, which involves enhancing the scalability of the infrastructure, either through cost management policies or strategies aimed at resource scaling. However, few studies adequately explore performance evaluation mechanisms. In this context, we present MoHRiPA (Management of Hybrid Resources in Private cloud Architecture). MoHRiPA has a modular design encompassing scheduling algorithms, virtualization tools, and monitoring tools. The proposed architecture allows assessing the overall system's performance using full factorial experimental design to identify the general behavior of the architecture under high request demand. It also evaluates workload behavior and the number of virtualized resources, and provides an elastic resource manager. A composite metric is also proposed and adopted as the criterion for resource scaling. This work presents a performance evaluation using formal techniques, covering the architecture's scheduling algorithms, bottleneck analysis, average response time, and latency. In summary, the proposed MoHRiPA resource-mapping algorithm (HashRefresh) showed significant improvements over the analyzed competitor, decreasing the uniform average by about 7% compared to ListScheduling (LS).
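A composite scaling metric of the kind adopted as the scaling criterion can be sketched as follows (illustrative; the weights, SLA target, and thresholds are assumptions, not MoHRiPA's actual parameterization):

from dataclasses import dataclass

@dataclass
class Sample:
    cpu_util: float         # 0..1, utilization across the VM pool
    response_time_s: float  # observed average response time

def composite(sample: Sample, sla_rt_s: float = 0.5,
              w_cpu: float = 0.5, w_rt: float = 0.5) -> float:
    """Weighted pressure score; values near or above 1 indicate SLA stress."""
    return w_cpu * sample.cpu_util + w_rt * (sample.response_time_s / sla_rt_s)

def decide(sample: Sample, vms: int, scale_out_at=0.85, scale_in_at=0.4) -> int:
    score = composite(sample)
    if score > scale_out_at:
        return vms + 1          # provision one more VM
    if score < scale_in_at and vms > 1:
        return vms - 1          # release an idle VM
    return vms

print(decide(Sample(cpu_util=0.9, response_time_s=0.6), vms=3))  # -> 4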