Results 1 - 20 of 28
1.
Phys Imaging Radiat Oncol ; 30: 100584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38803466

ABSTRACT

Background and purpose: Even with most breathing-controlled four-dimensional computed tomography (4DCT) algorithms, image artifacts caused by single, significantly longer breathing cycles still occur, with negative consequences for radiotherapy. Our study presents the first phantom examinations of a new, optimized raw-data selection and binning algorithm that aims to improve image quality and geometric accuracy without additional dose exposure.
Materials and methods: To validate the new approach, phantom measurements were performed to assess geometric accuracy (volume fidelity, root mean square error, Dice coefficient of volume overlap) for one- and three-dimensional tumor motion trajectories, with and without motion hysteresis effects. Scans without significantly longer breathing cycles served as references.
Results: Median volume deviations between the optimized approach and the reference were at most 1% across all movements. In comparison, the standard reconstruction yielded median deviations of 9%, 21% and 12% for one-dimensional, three-dimensional, and hysteresis motion, respectively. Averaged over all measurements for the optimized selection, one- and three-dimensional motions reached median Dice coefficients of 0.970 ± 0.013 and 0.975 ± 0.012, respectively, but only 0.918 ± 0.075 for hysteresis motions. For the standard reconstruction, the median Dice coefficients were 0.845 ± 0.200, 0.868 ± 0.205 and 0.915 ± 0.075 for one-dimensional, three-dimensional and hysteresis motions, respectively. Median root mean square errors for the optimized algorithm were 30 ± 16 HU² and 120 ± 90 HU² for three-dimensional and hysteresis motions, compared to 212 ± 145 HU² and 130 ± 131 HU² for the standard reconstruction.
Conclusions: The algorithm was shown to reduce 4DCT artifacts caused by missing projection data without further dose exposure. The better image quality can be expected to improve radiotherapy treatment planning.
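For readers who want to reproduce the two geometric-accuracy metrics quoted above, the following Python sketch shows how the Dice coefficient of volume overlap and the root mean square error can be computed on volume data. The synthetic arrays are placeholders, not the phantom data, and note that the abstract quotes squared values in HU², while the function below returns the conventional RMSE in HU.

```python
# Minimal sketch (not the authors' code) of the Dice coefficient and RMSE
# between a reconstructed and a reference CT volume.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap of two boolean volumes: 2|A intersect B| / (|A| + |B|)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def rmse(volume: np.ndarray, reference: np.ndarray) -> float:
    """Root mean square error between a test and a reference CT volume (HU)."""
    diff = volume.astype(float) - reference.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Synthetic stand-ins for a reconstructed and a reference 4DCT phase volume.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 50.0, size=(64, 64, 64))        # HU values
test = reference + rng.normal(0.0, 10.0, size=reference.shape)
print(f"Dice: {dice_coefficient(reference > 0, test > 0):.3f}")
print(f"RMSE: {rmse(test, reference):.1f} HU")
```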

2.
Stud Health Technol Inform ; 310: 840-844, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269927

ABSTRACT

Telehealth services are becoming increasingly popular, leading to a growing amount of data to be monitored by health professionals. Machine learning can support them in managing these data, but the right machine learning algorithms need to be applied to the right data. We implemented and validated different algorithms for selecting optimal time instances from time-series data derived from a diabetes telehealth service. Intrinsic, supervised, and unsupervised instance selection algorithms were analysed. Instance selection had a substantial impact on the accuracy of our random forest model for dropout prediction. The best results were achieved with a One-Class Support Vector Machine, which improved the area under the receiver operating characteristic curve of the original algorithm from 69.91% to 75.88%. We conclude that, although hardly mentioned in the telehealth literature so far, instance selection can significantly improve the accuracy of machine learning algorithms.
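As a hedged illustration of the winning approach, a One-Class Support Vector Machine can act as an unsupervised instance selector in front of a random forest. The synthetic features, labels, and hyperparameters below are assumptions for illustration, not the study's configuration.

```python
# Sketch: drop training instances the One-Class SVM flags as outliers,
# then fit the dropout classifier on the remaining inliers.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 12))     # stand-in for telehealth time-series features
y = (X[:, 0] + rng.normal(scale=2, size=1000) > 1).astype(int)  # stand-in label

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

selector = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(X_train)
inliers = selector.predict(X_train) == 1        # +1 = inlier, -1 = outlier

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train[inliers], y_train[inliers])
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"ROC AUC after instance selection: {auc:.3f}")
```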


Subject(s)
Algorithms; Telemedicine; Humans; Health Personnel; Machine Learning; Support Vector Machine
3.
Bioengineering (Basel) ; 10(3)2023 Feb 26.
Article in English | MEDLINE | ID: mdl-36978685

ABSTRACT

The World Health Organization (WHO) highlights that cardiovascular diseases (CVDs) are one of the leading causes of death globally, with deaths estimated to rise to over 23.6 million by 2030. This alarming trend can be attributed to unhealthy lifestyles and a lack of attention to early CVD diagnosis. Traditional cardiac auscultation, in which a highly qualified cardiologist listens to the heart sounds, is a crucial diagnostic method but is not always feasible or affordable. Developing accessible and user-friendly CVD recognition solutions can therefore encourage individuals to integrate regular heart screenings into their routine. Although many automatic CVD screening methods have been proposed, most rely on complex preprocessing steps and heart-cycle segmentation. In this work, we introduce a simple and efficient approach for recognizing normal and abnormal phonocardiogram (PCG) signals using PhysioNet data. We employ data selection techniques such as kernel density estimation (KDE) for signal duration extraction, the signal-to-noise ratio (SNR), and GMM clustering to improve the performance of 17 pretrained Keras CNN models. Our results indicate that using KDE to select the appropriate signal duration and fine-tuning the VGG19 model yields excellent classification performance, with an overall accuracy of 0.97, sensitivity of 0.946, precision of 0.944, and specificity of 0.946.
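An illustrative sketch of the KDE-based duration selection mentioned above: estimate the density of recording lengths and keep a window around the mode. The tolerance and the synthetic durations are assumptions, not the paper's values.

```python
# Sketch: pick the modal PCG recording duration via kernel density
# estimation and keep recordings near that duration.
import numpy as np
from scipy.stats import gaussian_kde

durations = np.random.default_rng(1).lognormal(mean=3.0, sigma=0.4, size=500)  # seconds

kde = gaussian_kde(durations)
grid = np.linspace(durations.min(), durations.max(), 1000)
mode = grid[np.argmax(kde(grid))]        # most common recording duration

# Keep recordings within +/-25% of the modal duration (placeholder tolerance).
keep = (durations > 0.75 * mode) & (durations < 1.25 * mode)
print(f"Modal duration ~ {mode:.1f} s; kept {keep.sum()} of {len(durations)} recordings")
```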

4.
Polymers (Basel) ; 14(21)2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36365761

ABSTRACT

Data-driven soft sensors have increasingly been applied to quality measurement in industrial polymerization processes in recent years. However, owing to the costly assay process, the limited labeled data available still pose significant obstacles to building accurate models. In this study, a novel soft sensor, the selective Wasserstein generative adversarial network with gradient-penalty-based support vector regression (SWGAN-SVR), is proposed to enhance quality prediction with limited training samples. Specifically, the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) is employed to capture the distribution of the available limited labeled data and to generate virtual candidates. Subsequently, an effective data-selection strategy is developed to alleviate the problem of varied-quality samples caused by the unstable training of the WGAN-GP. The selection strategy has two parts: a centroid metric criterion and a statistical characteristic criterion. An SVR model is constructed on the qualified augmented training data to evaluate prediction performance. The superiority of SWGAN-SVR is demonstrated on a numerical example and an industrial polyethylene process.
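A minimal sketch of a centroid-metric selection criterion of the kind described above: generated virtual samples are kept only if they lie close to the centroid of the real labeled data. The quantile-based threshold rule and the synthetic data are assumptions, not the authors' exact criterion.

```python
# Sketch: filter WGAN-GP candidates by distance to the real-data centroid.
import numpy as np

def centroid_filter(real: np.ndarray, generated: np.ndarray, quantile: float = 0.9):
    """Keep generated samples no farther from the real-data centroid than
    the given quantile of the real samples' own distances."""
    centroid = real.mean(axis=0)
    threshold = np.quantile(np.linalg.norm(real - centroid, axis=1), quantile)
    gen_dist = np.linalg.norm(generated - centroid, axis=1)
    return generated[gen_dist <= threshold]

rng = np.random.default_rng(7)
real = rng.normal(size=(30, 5))                    # limited labeled samples
generated = rng.normal(scale=1.5, size=(500, 5))   # stand-in WGAN-GP candidates
qualified = centroid_filter(real, generated)
print(f"{len(qualified)} of {len(generated)} virtual samples pass the criterion")
```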

5.
Sensors (Basel) ; 22(20)2022 Oct 16.
Article in English | MEDLINE | ID: mdl-36298201

ABSTRACT

To accurately record helicopter entry and departure times and reduce the incidence of general aviation accidents, this paper proposes a helicopter entry and departure recognition method based on a self-learning mechanism, supported by a lightweight object detection module and an image classification module. The original image data obtained from the lightweight object detection module are used to construct an Automatic Selector of Data (Auto-SD) and an Adjustment Evaluator of Data Bias (Ad-EDB), whereby Auto-SD automatically generates a pseudo-clustering of the original image data. Ad-EDB then performs the adjustment evaluation and selects the best-matching module for image classification. The self-learning mechanism constructed in this paper is applied to the helicopter entry and departure recognition scenario, and the ResNet18 residual network is selected for state classification. On the self-built helicopter entry and departure data set, the accuracy reaches 97.83%, which is 6.51% better than the bounding-box detection method. This largely removes the strong reliance on manual annotation in helicopter entry and departure status classification, and the data auto-selector is continuously optimized using the preceding classification results, establishing a circular learning loop in the algorithm.


Subject(s)
Aircraft; Algorithms; Cluster Analysis
6.
Sensors (Basel) ; 22(19)2022 Oct 08.
Article in English | MEDLINE | ID: mdl-36236728

ABSTRACT

As the core link of the "Internet + Recycling" process, value identification for sorting centers is a great challenge because of their small and imbalanced data sets. This paper uses transfer fuzzy c-means to improve the value-assessment accuracy of the sorting center by transferring knowledge from customer clustering. To ensure the transfer effect, an inter-class balanced data selection method is proposed to select a balanced and better-qualified subset of the source domain. Furthermore, an improved RFM (Recency, Frequency, and Monetary) model, named GFMR (Gap, Frequency, Monetary, and Repeat), is presented to attain a more reasonable attribute description for sorting centers and consumers. An application to electronic-waste recycling shows the effectiveness and advantages of the proposed method.
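As a rough illustration (not the authors' code), an inter-class balanced selection can be as simple as drawing the same number of source-domain instances per class, so the transferred clustering knowledge is not dominated by the majority class; a real implementation would also score instance quality.

```python
# Sketch: equal-per-class sampling of a source-domain data set.
import numpy as np

def balanced_subset(X: np.ndarray, y: np.ndarray, per_class: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=per_class, replace=False)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

X = np.random.default_rng(1).normal(size=(200, 4))            # source-domain customers
y = np.r_[np.zeros(150, dtype=int), np.ones(50, dtype=int)]   # imbalanced classes
X_bal, y_bal = balanced_subset(X, y, per_class=50)
print(np.bincount(y_bal))                                     # [50 50]
```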


Asunto(s)
Administración de Residuos , Análisis por Conglomerados , Internet , Reciclaje , Administración de Residuos/métodos
7.
Sensors (Basel) ; 22(19)2022 Oct 09.
Article in English | MEDLINE | ID: mdl-36236761

ABSTRACT

A trunk-twisting posture is strongly associated with physical discomfort, so measuring joint kinematics to assess physical exposure to injury risk is important. However, using a single Kinect sensor to track upper-limb joint angle trajectories during twisting tasks in the workplace is challenging because of sensor-view occlusions. This study provides and validates a simple method for optimally selecting upper-limb joint angle data from two Kinect sensors at different viewing angles during the twisting task, so that trajectory estimation errors can be reduced. Twelve healthy participants performed a rightward twisting task. The tracking errors of the upper-limb joint angle trajectories from the two Kinect sensors were estimated against concurrent data collected with a conventional motion-tracking system. The error values were used to generate error trendlines for the two Kinect sensors using third-order polynomial regressions, and the intersections between the two trendlines define the optimal data selection points for data integration. The findings indicate that integrating the outputs of the two Kinect sensors with the proposed method can be more robust than using a single sensor for upper-limb joint angle trajectory estimation during twisting tasks.
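A hedged sketch of the trendline-intersection rule: fit a third-order polynomial to each sensor's tracking error and use the real intersections of the two fits as switch points for data integration. The error samples below are synthetic placeholders, not Kinect data.

```python
# Sketch: third-order error trendlines for two sensors and their crossover.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 100)                    # normalized task time
err1 = 5 + 20 * t**2 + rng.normal(0, 1, t.size)   # sensor 1 error (deg), synthetic
err2 = 15 - 10 * t + rng.normal(0, 1, t.size)     # sensor 2 error (deg), synthetic

p1 = Polynomial.fit(t, err1, deg=3)
p2 = Polynomial.fit(t, err2, deg=3)

# Real roots of (p1 - p2) inside the task window are the crossover points.
crossings = [r.real for r in (p1 - p2).roots()
             if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0]
print("Optimal data selection point(s):", crossings)
```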


Asunto(s)
Articulaciones , Postura , Fenómenos Biomecánicos , Humanos , Rango del Movimiento Articular , Extremidad Superior
8.
Front Artif Intell ; 5: 855184, 2022.
Article in English | MEDLINE | ID: mdl-35664508

ABSTRACT

We present a custom implementation of a 2D Convolutional Neural Network (CNN) as a viable application for real-time data selection in high-resolution and high-rate particle imaging detectors, making use of hardware acceleration in high-end Field Programmable Gate Arrays (FPGAs). To meet FPGA resource constraints, a two-layer CNN is optimized for accuracy and latency with KerasTuner, and network quantization is further used to minimize the computing resource utilization of the network. We use "High Level Synthesis for Machine Learning" (hls4ml) tools to test CNN deployment on a Xilinx UltraScale+ FPGA, which is an FPGA technology proposed for use in the front-end readout system of the future Deep Underground Neutrino Experiment (DUNE) particle detector. We evaluate network accuracy and estimate latency and hardware resource usage, and comment on the feasibility of applying CNNs for real-time data selection within the currently planned DUNE data acquisition system. This represents the first-ever exploration of employing 2D CNNs on FPGAs for DUNE.
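To make the network scale concrete, here is a schematic two-layer Keras CNN of the kind the abstract describes. The input shape and filter counts are illustrative assumptions, and the KerasTuner optimization, quantization, and hls4ml conversion steps used in the paper are omitted.

```python
# Schematic two-layer CNN for binary selection of detector-image regions.
# Layer sizes are placeholders, not the tuned DUNE network.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),           # placeholder image patch
    layers.Conv2D(4, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(8, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),     # keep/reject score
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```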

9.
Sensors (Basel) ; 21(22)2021 Nov 12.
Article in English | MEDLINE | ID: mdl-34833608

ABSTRACT

Ranking-oriented cross-project defect prediction (ROCPDP), which ranks the software modules of a new target industrial project by predicted defect number or density, has been suggested in the literature. A major concern in ROCPDP is the distribution difference between the source project (a.k.a. within-project) data and the target project (a.k.a. cross-project) data, which evidently degrades prediction performance. To investigate the impact of training data selection methods on the performance of ROCPDP models, we examined the practical effects of nine training data selection methods, including a global filter that does not filter out any cross-project data. Additionally, the prediction performances of ROCPDP models trained on cross-project data filtered with these selection methods were compared with those of ranking-oriented within-project defect prediction (ROWPDP) models trained on sufficient and on limited within-project data. Eleven available defect datasets from industrial projects were considered and evaluated using two ranking performance measures, FPA and Norm(Popt). The results showed no statistically significant differences among the nine training data selection methods in terms of FPA and Norm(Popt). ROCPDP models trained on filtered cross-project data could not match ROWPDP models trained on sufficient historical within-project data; however, they achieved better performance than ROWPDP models trained on limited historical within-project data. We therefore recommend that software quality teams exploit other projects' datasets for ROCPDP when no or only limited within-project data are available.
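For reference, one of the two ranking measures, FPA (fault-percentile-average), can be sketched as below. This follows the commonly cited definition (the average fraction of actual defects captured by every top-m cut of the predicted ranking) and should be checked against the paper before reuse.

```python
# Hedged sketch of the fault-percentile-average (FPA) ranking measure:
# rank modules by predicted defect count, then average the fraction of
# actual defects captured in every top-m cut. Higher is better.
import numpy as np

def fpa(actual: np.ndarray, predicted: np.ndarray) -> float:
    order = np.argsort(-predicted)              # most predicted defects first
    captured = np.cumsum(actual[order]) / actual.sum()
    return float(captured.mean())

actual = np.array([0, 5, 1, 0, 3])              # defects found per module
predicted = np.array([0.2, 4.1, 0.9, 0.1, 2.5])
print(f"FPA = {fpa(actual, predicted):.3f}")
```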


Asunto(s)
Aprendizaje Automático , Programas Informáticos , Investigación Empírica
10.
Entropy (Basel) ; 23(5)2021 May 10.
Article in English | MEDLINE | ID: mdl-34068635

ABSTRACT

Although commercial motion-capture systems are widely used in various applications, their complex setup limits the application scenarios for ordinary consumers. To overcome this drawback of wearability, human posture reconstruction from a few wearable sensors has been actively studied in recent years. In this paper, we propose a deep-learning-based human posture reconstruction method using sparse inertial sensors. The method uses a bidirectional recurrent neural network (Bi-RNN) to build an a priori model of human motion from a large motion dataset, so that low-dimensional motion measurements are mapped to whole-body posture. To improve motion reconstruction performance for specific application scenarios, two fundamental problems in the model construction are investigated: training data selection and sparse sensor placement. The training data selection problem is to select independent and identically distributed (IID) data for a given scenario from an accumulated, imbalanced motion dataset with sufficient information. We formulate data selection as an optimization problem that seeks continuous, IID data segments complying with a small reference dataset collected from the target scenario, and propose a two-step heuristic algorithm to solve it. The optimal sensor placement problem, in turn, is studied to exploit the most information from partial observations of human movement. A method for evaluating the motion information content of any group of wearable inertial sensors based on mutual information is proposed, and a greedy search is adopted to obtain an approximately optimal placement for a given number of sensors, achieving maximum motion information with minimum redundancy. Finally, posture reconstruction performance is evaluated with different training data and sensor placement selection methods; experimental results show that the proposed method offers advantages in both posture reconstruction accuracy and model training time. In the six-sensor configuration, the posture reconstruction errors of our model for walking, running, and playing basketball are 7.25°, 8.84°, and 14.13°, respectively.
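An illustrative greedy sensor-placement search driven by mutual information, in the spirit of the abstract. The criterion below (sum of per-channel MI with each posture dimension) is a simplified stand-in: unlike the paper's criterion, it does not penalize redundancy between channels.

```python
# Sketch: greedily add the channel whose inclusion maximizes summed MI
# between the selected channels and the posture targets.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(5)
channels = rng.normal(size=(400, 10))            # candidate IMU channels (synthetic)
posture = channels[:, [0, 4, 7]] @ rng.normal(size=(3, 5))  # synthetic posture targets

def greedy_placement(X, Y, k):
    chosen = []
    for _ in range(k):
        scores = {}
        for j in set(range(X.shape[1])) - set(chosen):
            cols = chosen + [j]
            scores[j] = sum(mutual_info_regression(X[:, cols], Y[:, d]).sum()
                            for d in range(Y.shape[1]))
        chosen.append(max(scores, key=scores.get))
    return chosen

print("Selected channels:", greedy_placement(channels, posture, k=4))
```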

11.
Sensors (Basel) ; 21(5)2021 Mar 09.
Article in English | MEDLINE | ID: mdl-33803121

ABSTRACT

Understanding people's eating habits plays a crucial role in interventions promoting a healthy lifestyle. This requires objective measurement of when a meal takes place, its duration, and what the individual eats. Smartwatches and similar wrist-worn devices are an emerging technology offering practical, real-time eating monitoring in an unobtrusive, accessible, and affordable way. To this end, we present a novel approach for detecting eating segments with a wrist-worn device by fusing deep and classical machine learning. It integrates a novel data selection method for creating the training dataset and a method that incorporates knowledge from raw and virtual sensor modalities for training with highly imbalanced datasets. The proposed method was evaluated on data from 12 subjects recorded in the wild, without any restriction on the type of meal consumed, the cutlery used, or the location where the meal took place. The recordings consist of accelerometer and gyroscope data. The experiments show that our method detects eating segments with a precision of 0.85, recall of 0.81, and F1-score of 0.82 in a person-independent manner. These results indicate that reliable eating detection from data recorded in the wild is possible with wrist-worn wearable sensors.

12.
J Vestib Res ; 30(5): 305-317, 2020.
Article in English | MEDLINE | ID: mdl-33044206

ABSTRACT

BACKGROUND: It has not yet been tested whether averaged gain values and the presence of pathological saccades are significantly altered by manual data selection, or whether data selection performed only by the built-in software detection algorithms provides a reliable data set after v-HIT testing. OBJECTIVE: The primary endpoint was to evaluate whether the averaged gain values of all six semicircular canals (SCCs) are significantly altered by manual data selection with two different v-HIT systems. METHODS: 120 subjects with no previous vestibular or neurological disorders underwent four separate tests of all six SCCs with either EyeSeeCam® or ICS Impulse®. All v-HIT test reports underwent manual data selection by an experienced ENT specialist, with deletion of any noise and/or artifacts. Generalized estimating equations were used to compare averaged gain values based on the unsorted data with those based on the sorted data. RESULTS: EyeSeeCam®: horizontal SCCs: the estimates and p-values (in parentheses) for the right and left lateral SCCs were 0.00004 (0.95) and 0.00087 (0.70), respectively; vertical SCCs: the estimates varied from -0.00858 to 0.00634, with p-values ranging from 0.31 to 0.78. ICS Impulse®: horizontal SCCs: the estimates and p-values for the right and left lateral SCCs were 0.00159 (0.18) and 0.00071 (0.38), respectively; vertical SCCs: the estimates varied from 0.00217 to 0.01357, with p-values ranging from 0.00 to 0.17. Based on the averaged gain value of the individual SCC being tested, 148 tests before and 127 tests after manual data selection were considered pathological. CONCLUSION: Neither of the two v-HIT systems revealed any clinically important effect of manual data selection. However, 21 fewer tests were considered pathological after manual data selection.


Asunto(s)
Análisis de Datos , Dispositivos de Protección de los Ojos , Prueba de Impulso Cefálico/métodos , Canales Semicirculares/fisiología , Grabación en Video/métodos , Adulto , Estudios Transversales , Femenino , Prueba de Impulso Cefálico/instrumentación , Humanos , Masculino , Persona de Mediana Edad , Estudios Prospectivos , Grabación en Video/instrumentación
13.
Acta Crystallogr D Struct Biol ; 76(Pt 7): 636-652, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-32627737

ABSTRACT

Phasing by single-wavelength anomalous diffraction (SAD) from multiple crystallographic data sets can be particularly demanding because of the weak anomalous signal and possible non-isomorphism. The identification and exclusion of non-isomorphous data sets by suitable indicators is therefore indispensable. Here, simple and robust data-selection methods are described. A multi-dimensional scaling procedure is first used to identify data sets with large non-isomorphism relative to clusters of other data sets. Within each cluster that it identifies, further selection is based on the weighted ΔCC1/2, a quantity representing the influence of a set of reflections on the overall CC1/2 of the merged data. The anomalous signal is further improved by optimizing the scaling protocol. The success of iterating the selection and scaling steps was verified by substructure determination and subsequent structure solution. Three serial synchrotron crystallography (SSX) SAD test cases with hundreds of partial data sets and one test case with 62 complete data sets were analyzed. Structure solution was dramatically simplified with this procedure, enabling solution of the structures after a few selection/scaling iterations. To explore the limits, the procedure was tested with far fewer data than originally required and could still solve the structure in several cases. In addition, the target of an SSX data challenge, to minimize the number of (simulated) data sets necessary to solve the structure, was beaten by a significant margin.
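As background for the ΔCC1/2 criterion: CC1/2 is the Pearson correlation between intensities merged from two random half-sets of the measurements, and ΔCC1/2 of a data set is the change in CC1/2 when that set is excluded. A minimal sketch, with an assumed, simplified data layout, follows.

```python
# Sketch: CC1/2 as the correlation of mean intensities from two random
# half-sets of the repeated observations of each reflection.
import numpy as np
from scipy.stats import pearsonr

def cc_half(obs_per_reflection: list, seed: int = 0) -> float:
    """obs_per_reflection[i]: 1D array of repeated intensities of reflection i."""
    rng = np.random.default_rng(seed)
    half1, half2 = [], []
    for obs in obs_per_reflection:
        perm = rng.permutation(len(obs))
        mid = len(obs) // 2
        half1.append(obs[perm[:mid]].mean())
        half2.append(obs[perm[mid:]].mean())
    return pearsonr(half1, half2)[0]

rng = np.random.default_rng(1)
true_I = rng.gamma(2.0, 100.0, size=300)                    # synthetic intensities
data = [I + rng.normal(0, 20, size=8) for I in true_I]      # 8 noisy observations each
print(f"CC1/2 ~ {cc_half(data):.3f}")
```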


Asunto(s)
Cristalografía por Rayos X , Análisis de Datos , Modelos Moleculares , Proteínas/química , Conjuntos de Datos como Asunto , Conformación Proteica
14.
Article in English | MEDLINE | ID: mdl-32351945

ABSTRACT

Human movements are characterized by highly non-linear and multi-dimensional interactions within the motor system. The future of human movement analysis therefore requires procedures that enhance the classification of movement patterns into relevant groups and support practitioners in their decisions. In this regard, data-driven techniques seem particularly suitable for generating classification models, and the recent emphasis on machine-learning applications has contributed significantly, e.g., to increased classification performance. To ensure the generalizability of machine-learning models, various data preprocessing steps are usually carried out on the measured raw data before classification. In the past, various methods have been used for each of these preprocessing steps, but there are hardly any standard procedures or systematic comparisons of these methods and their impact on classification performance. The aim of this analysis is therefore to compare different combinations of commonly applied data preprocessing steps and to test their effects on the classification performance of gait patterns. A publicly available dataset on intra-individual changes of gait patterns was used. Forty-two healthy participants performed six sessions of 15 gait trials on a single day. For each trial, two force plates recorded the three-dimensional ground reaction forces (GRFs). The data were preprocessed with the following steps: GRF filtering, time derivative, time normalization, data reduction, weight normalization and data scaling. Combinations of all methods from each preprocessing step were then analyzed by comparing their prediction performance in a six-session classification using Support Vector Machines, Random Forest classifiers, Multi-Layer Perceptrons, and Convolutional Neural Networks. The results indicate that filtering the GRF data and a supervised data reduction (e.g., using Principal Components Analysis) lead to increased prediction performance of the machine-learning classifiers. Interestingly, weight normalization and the number of data points in the time normalization (above a certain minimum) do not have a substantial effect. In conclusion, the present results provide first domain-specific recommendations for commonly applied data preprocessing methods and may help to build more comparable and more robust classification models based on machine learning that are suitable for practical applications.
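A compact sketch of one preprocessing/classification combination of the kind the study compares: scaled features reduced with PCA and classified with an SVM in a cross-validated pipeline. The synthetic arrays stand in for the public gait dataset, and the component count is an illustrative assumption.

```python
# Sketch: scaling -> PCA data reduction -> SVM, evaluated by cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
X = rng.normal(size=(630, 300))    # e.g. time-normalized 3D GRF per trial (placeholder)
y = rng.integers(0, 6, size=630)   # six sessions to classify

pipe = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(pipe, X, y, cv=5)
print(f"Session classification accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```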

15.
Comput Softw Big Sci ; 4(1): 7, 2020.
Article in English | MEDLINE | ID: mdl-33385105

ABSTRACT

We describe a fully GPU-based implementation of the first-level trigger for the upgrade of the LHCb detector, due to start data taking in 2021. We demonstrate that our implementation, named Allen, can process the 40 Tbit/s data rate of the upgraded LHCb detector and perform a wide variety of pattern-recognition tasks. These include finding the trajectories of charged particles, finding proton-proton collision points, identifying particles as hadrons or muons, and finding the displaced decay vertices of long-lived particles. We further demonstrate that Allen can be implemented on around 500 scientific or consumer GPU cards, that it is not I/O bound, and that it can be operated at the full LHC collision rate of 30 MHz. Allen is the first complete high-throughput GPU trigger proposed for a HEP experiment.

16.
Wellcome Open Res ; 5: 137, 2020.
Article in English | MEDLINE | ID: mdl-35265750

ABSTRACT

In their recent analysis, Hanlon et al. set out to estimate the years of life lost (YLL) in people who have died with COVID-19, following and expanding on the WHO standard approach. We welcome this research as an attempt to draw a more accurate picture of the mortality burden of this disease, which has been involved in the deaths of more than 300,000 people worldwide as of May 2020. However, we argue that the obtained YLL estimates (13 years for men and 11 years for women) are interpreted in a misleading way. Even with the presented efforts to control for the role of multimorbidity in COVID-19 deaths, these estimates cannot be interpreted to imply "how long someone who died from COVID-19 might otherwise have been expected to live". Using an example, we analyze the underlying problem of data selection bias, which, in the context of COVID-19, renders such an interpretation of YLL estimates impossible, and we outline potential approaches to control for the problem.

17.
Biol Cybern ; 113(5-6): 495-513, 2019 12.
Article in English | MEDLINE | ID: mdl-31562544

ABSTRACT

Active inference is an approach to understanding behaviour that rests on the idea that the brain uses an internal generative model to predict incoming sensory data. The fit between this model and the data may be improved in two ways. The brain could optimise probabilistic beliefs about the variables in the generative model (i.e. perceptual inference). Alternatively, by acting on the world, it could change the sensory data such that they are more consistent with the model. This implies a common objective function (variational free energy) for action and perception that scores the fit between an internal model and the world. We compare two free energy functionals for active inference in the framework of Markov decision processes. The first (expected free energy) is a functional of beliefs (i.e. probability distributions) about states and policies, but a function of observations; the second (generalised free energy) is a functional of beliefs about all three. In the former, prior beliefs about outcomes are not part of the generative model, because they are absorbed into the prior over policies. Conversely, in the latter, priors over outcomes become an explicit component of the generative model. When using the expected free energy, which is blind to future observations, we equip the generative model with a prior over policies that ensures preferred outcomes (i.e. priors over outcomes) are realised. In other words, if we expect to encounter a particular kind of outcome, this lends plausibility to those policies for which this outcome is a consequence. In addition, this formulation ensures that selected policies minimise uncertainty about future outcomes by minimising the free energy expected in the future. When using the generalised free energy, which effectively treats future observations as hidden states, we show that policies are inferred or selected that realise prior preferences by minimising the free energy of future expectations. Interestingly, the form of the posterior beliefs about policies (and the associated belief updating) turns out to be identical under both formulations, but the quantities used to compute them are not.
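For orientation, a common textbook form of the two quantities contrasted here is sketched below; this is standard notation from the active-inference literature, not necessarily the authors' exact formulation.

```latex
% Variational free energy: a functional of beliefs Q(s) about hidden states s,
% scoring the fit between the generative model P(o,s) and observations o.
F = \mathbb{E}_{Q(s)}\big[\ln Q(s) - \ln P(o,s)\big]
  = D_{\mathrm{KL}}\big[Q(s)\,\|\,P(s \mid o)\big] - \ln P(o)

% Expected free energy of a policy \pi: the analogous score averaged over
% predicted outcomes, so future observations enter as random variables.
G(\pi) = \sum_{\tau} \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}
         \big[\ln Q(s_\tau \mid \pi) - \ln P(o_\tau, s_\tau)\big]
```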


Asunto(s)
Conducta/fisiología , Encéfalo/fisiología , Modelos Neurológicos , Animales , Humanos , Cadenas de Markov
18.
Chem Pharm Bull (Tokyo) ; 67(11): 1183-1190, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-31423003

ABSTRACT

For rational drug design, it is essential to predict the binding mode of protein-ligand complexes. Although various machine learning-based models using convolutional neural networks (deep learning) have been reported to predict binding modes from three-dimensional structures, there are few detailed reports on how best to construct and use datasets. Here, we examined how different datasets affect the prediction of the CYP3A4 binding mode by a three-dimensional neural network when the number of crystal structures for the target protein is limited. We used four training datasets: one large, general dataset containing various protein complexes and three smaller, more specific datasets containing complexes with CYP3A4-like pockets, complexes with CYP3A4-binding ligands, and complexes with CYP protein family members. We then trained models with different combinations of datasets, with or without subsequent fine-tuning, and evaluated the binding mode prediction performance of each model. The model with the best area under the receiver operating characteristic curve (ROC AUC) was obtained by training with a combination of the general protein and CYP family datasets. However, the model best balancing ROC AUC and recall was obtained by training with this combination of datasets followed by fine-tuning with the CYP3A4-binding ligands dataset. Our results suggest that datasets balancing protein functionality and data size are important for optimizing binding mode prediction performance. In addition, datasets with large median binding pocket sizes may be important for binding mode prediction specifically for CYP3A4.


Asunto(s)
Citocromo P-450 CYP3A/química , Aprendizaje Profundo , Sitios de Unión , Citocromo P-450 CYP3A/metabolismo , Bases de Datos Factuales , Humanos , Ligandos
19.
Adv Exp Med Biol ; 1137: 17-43, 2019.
Article in English | MEDLINE | ID: mdl-31183818

ABSTRACT

This chapter starts with an example of how text can be retrieved, with every step done manually. It then describes, step by step, how to automate each step of the example using shell script commands, which are introduced and explained as they are required. The goal is to equip the reader with a basic set of skills to retrieve data from any online database and to follow links to retrieve further information from other sources, such as the literature.
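Although the chapter works in shell script, the same kind of retrieval can be sketched in Python with only the standard library. The example below queries the real NCBI E-utilities efetch endpoint for the PubMed abstract of this very entry, using the PMID shown in its header as an illustrative target.

```python
# Sketch: fetch one PubMed abstract as plain text via NCBI E-utilities.
from urllib.request import urlopen
from urllib.parse import urlencode

params = urlencode({
    "db": "pubmed",
    "id": "31183818",        # PMID of this chapter's PubMed record
    "rettype": "abstract",
    "retmode": "text",
})
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?{params}"
with urlopen(url) as response:
    print(response.read().decode("utf-8"))
```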


Asunto(s)
Bases de Datos Factuales , Almacenamiento y Recuperación de la Información , Lenguajes de Programación , Internet
20.
Integr Environ Assess Manag ; 15(6): 880-894, 2019 Nov.
Article in English | MEDLINE | ID: mdl-29917303

ABSTRACT

Most alternatives assessments (AAs) published to date are largely hazard-based rankings that ignore potential differences in human and/or ecosystem exposures; as such, they may not represent a fully informed consideration of the advantages and disadvantages of possible alternatives. Building on the 2014 US National Academy of Sciences recommendations to improve AA decisions by including comparative exposure assessment in AAs, the Health and Environmental Sciences Institute's (HESI) Sustainable Chemical Alternatives Technical Committee, which comprises scientists from academia, industry, government, and nonprofit organizations, developed a qualitative comparative exposure approach. Conducting such a comparison can screen for alternatives that are expected to have higher or different routes of human or environmental exposure potential, which, together with the hazard assessment, could trigger a higher-tiered, more quantitative exposure assessment of the alternatives being considered, minimizing the likelihood of regrettable substitution. This article outlines an approach for including chemical ingredient- and product-related exposure information in a qualitative comparison. A classification approach was developed for ingredient and product parameters to support comparisons between alternatives, together with a methodology to address exposure parameter relevance and data quality. The ingredient parameters include a range of physicochemical properties that can affect routes and magnitude of exposure, whereas the product parameters include aspects such as product-specific exposure pathways, use information, accessibility, and disposal. Two case studies demonstrate the application of the methodology, and key learnings and future research needs are summarized.


Asunto(s)
Exposición a Riesgos Ambientales/análisis , Monitoreo del Ambiente/métodos , Contaminantes Químicos del Agua/análisis , Toma de Decisiones , Ecotoxicología/métodos , Medición de Riesgo/métodos