Results 1 - 8 of 8
1.
Data Brief ; 55: 110601, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38993233

ABSTRACT

The dataset contains eye-tracking data obtained while 55 volunteers solved 3 distinct neuropsychological tests on a screen inside a closed room. Among the 55 volunteers, 22 were women and 33 were men, all aged between 9 and 50, and 5 of whom were diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) [1]. The eye-tracker used for data collection was an EyeTribe, which has a sampling rate of 60 Hz and an average visual-angle error between 0.5 and 1 degree, corresponding to an on-screen error between 0.5 and 1 cm (approx. 0.197 to 0.394 inches) when the distance to the user is around 60 cm (23.62 in) [2], as was the case during the collection of these data. The neuropsychological tests were implemented in software named NEURO-INNOVA KIDS® [3] and were the following: a domino test adapted from the D-48 intelligence test [4], an adaptation of the MASMI test consisting of unfolded cubes [5], a figure-series completion test adapted from [6], and the Poppelreuter figures test [7]. Before each test, a calibration process was performed, ensuring that the on-screen error was less than or equal to 0.5 cm (0.197 in), which is considered an acceptable calibration. The four tests together took a mean of 20 minutes to administer. This dataset holds significant promise for reuse because these neuropsychological assessments are widely used by healthcare practitioners to evaluate diverse cognitive faculties, and poor performance on them has been empirically associated with attention deficits [8].
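
As a rough illustration of the visual-angle figures quoted above, the following Python sketch converts an angular error to an on-screen error with the small-angle relation error = distance × tan(angle). The 60 cm viewing distance comes from the abstract; the helper name and the use of this particular formula are assumptions for illustration only.

```python
import math

def visual_angle_to_onscreen_error(angle_deg: float, distance_cm: float = 60.0) -> float:
    """Convert an eye-tracker's visual-angle error (degrees) to the
    approximate on-screen error (cm) at a given viewing distance."""
    return distance_cm * math.tan(math.radians(angle_deg))

# At the ~60 cm distance used during collection, a 0.5-1.0 degree error
# maps to roughly 0.5-1.0 cm on screen, as stated in the abstract.
for angle in (0.5, 1.0):
    print(f"{angle:.1f} deg -> {visual_angle_to_onscreen_error(angle):.2f} cm")
```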

2.
Diagnostics (Basel) ; 12(12)2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36553037

ABSTRACT

Glaucoma is an eye disease that gradually deteriorates vision. Much research focuses on extracting information from the optic disc and optic cup, the structures used for measuring the cup-to-disc ratio. These structures are commonly segmented with deep-learning techniques, primarily Encoder-Decoder models, which are hard to train and time-consuming. Object detection models based on convolutional neural networks can extract features from fundus retinal images with good precision; however, the superiority of one model over another for a specific task is still undetermined. The main goal of our approach is to compare the performance of object detection models for automated segmentation of cups and discs in fundus images. The novelty of this study is the comparison of different object detection models for detection and segmentation of the optic disc and optic cup (Mask R-CNN, MS R-CNN, CARAFE, Cascade Mask R-CNN, GCNet, SOLO, Point_Rend), evaluated on the Retinal Fundus Images for Glaucoma Analysis (REFUGE) and G1020 datasets. Reported metrics were Average Precision (AP), F1-score, IoU, and AUCPR. Several models achieved the highest AP with a perfect 1.000 when the IoU threshold was set at 0.50 on REFUGE, and the lowest was Cascade Mask R-CNN with an AP of 0.997. On the G1020 dataset, the best model was Point_Rend with an AP of 0.956, and the worst was SOLO with 0.906. It was concluded that the methods reviewed achieved excellent performance with high precision and recall values, showing efficiency and effectiveness. The question of how many images are needed was addressed with an initial value of 100, with excellent results. Data augmentation, multi-scale handling, and anchor box size brought improvements, and the ability to transfer knowledge from one database to another also shows promising results.
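
The AP figures above count a predicted disc or cup mask as correct when its overlap with the ground truth reaches the stated IoU threshold. A minimal NumPy sketch of that criterion follows; it is a generic mask-IoU computation for illustration, not the evaluation code used in the study, and the toy masks are made up.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty
    return np.logical_and(pred, gt).sum() / union

# A detection counts as a true positive at AP@0.50 when IoU >= 0.50.
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt   = np.zeros((64, 64), dtype=np.uint8); gt[15:45, 15:45] = 1
print(mask_iou(pred, gt) >= 0.50)
```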

3.
Rev. mex. ing. bioméd ; 43(2): 1246, May.-Aug. 2022. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1409795

ABSTRACT

Deep learning (DL) techniques achieve high performance in the detection of illnesses in retina images, but most models are trained on different databases to solve one specific task. Consequently, there are currently no solutions that can detect or segment a variety of retinal illnesses with a single model. This research uses Transfer Learning (TL) to take advantage of the knowledge generated while training an illness-detection model in order to segment lesions with encoder-decoder Convolutional Neural Networks (CNN), where the encoders are classical models such as VGG-16 and ResNet50 or variants with attention modules. This shows that it is possible to use a general methodology with a single fundus-image database for the detection/segmentation of a variety of retinal diseases while achieving state-of-the-art results. In practice, such a model could be more valuable because it can be trained with a more realistic database containing a broad spectrum of diseases and detect/segment illnesses without sacrificing performance. TL can help achieve fast convergence if the samples in the main task (classification) and the sub-task (segmentation) are similar; if this requirement is not fulfilled, the parameters essentially start training from scratch.
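
To make the encoder-decoder transfer-learning idea concrete, here is a minimal PyTorch/torchvision sketch (assuming torchvision >= 0.13) that reuses an ImageNet-pretrained VGG-16 backbone as the encoder and adds a small decoder for binary lesion masks. The decoder layout, layer widths, and the choice to freeze the encoder are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn
import torchvision

class VGG16EncoderDecoder(nn.Module):
    def __init__(self, freeze_encoder: bool = True):
        super().__init__()
        # Encoder: VGG-16 convolutional backbone pretrained on ImageNet,
        # reused so the segmentation task starts from already-learned features.
        self.encoder = torchvision.models.vgg16(weights="IMAGENET1K_V1").features
        if freeze_encoder:
            for p in self.encoder.parameters():
                p.requires_grad = False

        # Decoder: upsample the 512-channel, 1/32-resolution feature map
        # back to input resolution and predict a single-channel lesion mask.
        def up(cin, cout):
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, kernel_size=2, stride=2),
                nn.Conv2d(cout, cout, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
        self.decoder = nn.Sequential(
            up(512, 256), up(256, 128), up(128, 64), up(64, 32), up(32, 16),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

model = VGG16EncoderDecoder()
mask = model(torch.randn(1, 3, 224, 224))  # -> (1, 1, 224, 224) lesion probability map
```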



4.
J Air Waste Manag Assoc ; 72(10): 1095-1112, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35816429

ABSTRACT

Atmospheric pollution refers to the presence of substances in the air, such as particulate matter (PM), which have a negative impact on the health of the exposed population, making this a topic of current interest. Because the geographic characteristics of the Metropolitan Zone of the Valley of Mexico do not allow proper ventilation, and because of its population density, a significant number of poor air quality events are registered there. This paper proposes a methodology to improve the forecasting of PM10 and PM2.5 in largely populated areas, using a recurrent long short-term memory (LSTM) network optimized by the Ant Colony Optimization (ACO) algorithm. The experimental results show improved performance, reducing the error by around 13.00% in RMSE and 14.82% in MAE, using as reference the averaged results obtained by the LSTM deep neural network. Overall, the current study proposes a methodology to be studied further for improving different forecasting techniques in real-life applications where there is no need to respond in real time. Implications: This contribution presents a methodology to deal with the highly non-linear modeling of airborne particulate matter (both PM10 and PM2.5). Most linear approaches to this modeling problem are often not accurate enough for this type of data. In addition, most machine learning methods require extensive training or have problems dealing with noise embedded in time-series data. The proposed methodology handles the data in three stages: preprocessing, modeling, and optimization. In the preprocessing stage, the data are acquired and missing values are imputed; this keeps the modeling process robust even when the acquired data contain errors, are invalid, or are missing. In the modeling stage, a recurrent deep neural network called LSTM (Long Short-Term Memory) is used, and the resulting model shows accurate and robust results regardless of the monitoring station and the geographical characteristics of the site. The optimization stage then enhances the data modeling by using swarm intelligence algorithms (Ant Colony Optimization, in this case). The results presented in this study were compared with works that used traditional algorithms, such as multi-layer perceptrons, traditional deep neural networks, and common spatiotemporal models, which shows the feasibility of the methodology presented in this contribution. Lastly, the advantages of using this methodology are highlighted.
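
The 13.00% and 14.82% figures above are relative reductions of RMSE and MAE against the baseline LSTM. A minimal NumPy sketch of those standard metrics and the percent-reduction calculation follows; the PM10 values in the example are hypothetical and only exercise the functions.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observations and forecasts."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error between observations and forecasts."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def percent_reduction(baseline, improved):
    """How much an error metric drops relative to the baseline model (percent)."""
    return 100.0 * (baseline - improved) / baseline

# Hypothetical PM10 values (ug/m3), just to exercise the functions.
obs      = [45.0, 52.0, 60.0, 58.0]
baseline = [40.0, 58.0, 66.0, 50.0]   # plain LSTM forecast
tuned    = [43.0, 55.0, 62.0, 55.0]   # ACO-optimized forecast
print(percent_reduction(rmse(obs, baseline), rmse(obs, tuned)),
      percent_reduction(mae(obs, baseline), mae(obs, tuned)))
```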


Subject(s)
Air Pollutants , Particulate Matter , Air Pollutants/analysis , Environmental Monitoring/methods , Intelligence , Neural Networks, Computer , Particulate Matter/analysis
5.
Comput Methods Programs Biomed ; 143: 97-111, 2017 May.
Article in English | MEDLINE | ID: mdl-28391823

ABSTRACT

BACKGROUND AND OBJECTIVE: There are many works related to segmentation techniques, including nearest-neighbor algorithms, fuzzy rules, morphological filters, image entropy, thresholding, machine learning, wavelet analysis, and so on. Such methods carry out the segmentation but require considerable processing time, modify the content of the image or show discrimination problems in homogeneous areas, and each segmentation technique is designed to work efficiently only with the methods it is paired with. This paper presents a method to segment mammograms that separates the breast area from the pectoral muscle while avoiding bright areas that produce noise, thereby reducing false positives. METHODS: The proposed methodology is divided into four sections: 1) Pre-processing to acquire the image and decrease its size. 2) Improving image quality through thresholding and histogram equalization. 3) Localization of regions of interest (ROI) by applying the Scale-Invariant Feature Transform to find the image's descriptors; clustering methods were implemented to determine the best number of clusters and which of these represent the most significant breast area, and the coordinates of the found ROIs were compared with the positions of abnormalities diagnosed by the Mammographic Image Analysis Society. 4) Microcalcification (mcc) detection using the wavelet transform; to enhance its performance, different high-pass filters and high-frequency emphasis filters were evaluated. Symlet wavelets Sym8 and Sym16 were used with different decomposition levels; the resulting images from both processes are compared, and only the elements they have in common are detected as microcalcifications. RESULTS: Muscle remnants in the corners of the regions of interest were removed using fuzzy c-means clustering. The best results in terms of sensitivity (91.27), false positives per image (80.25), and precision (74.38) are compared with previous work. CONCLUSIONS: The results show that the breast area can be discriminated from the pectoral muscle by avoiding bright areas that produce false positives. Moreover, because the image size is reduced, computer processing time decreases. This segmentation stage can be an addition to mammogram analysis broadly, not only to find mcc but also abnormalities such as circumscribed masses, spiculated masses, and architectural distortion. It is also useful for automatically creating an unsupervised segmentation of mammograms without a training stage.
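
A minimal sketch of the two-wavelet agreement rule described in step 4, assuming PyWavelets and NumPy: each wavelet's detail-only reconstruction is thresholded, and only pixels flagged by both Sym8 and Sym16 are kept. The decomposition level and the 99th-percentile threshold are illustrative assumptions, not the paper's tuned parameters, and the high-pass/high-frequency emphasis filtering mentioned in the abstract is omitted here.

```python
import numpy as np
import pywt

def highfreq_candidates(image: np.ndarray, wavelet: str, level: int = 3) -> np.ndarray:
    """Binary map of strong high-frequency responses from a detail-only reconstruction."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])            # drop the approximation band
    detail = pywt.waverec2(coeffs, wavelet)
    detail = detail[:image.shape[0], :image.shape[1]]
    return detail > np.percentile(detail, 99)       # keep only the brightest responses

def detect_microcalcifications(image: np.ndarray) -> np.ndarray:
    # Only pixels flagged by both wavelets are kept as microcalcification candidates.
    return highfreq_candidates(image, "sym8") & highfreq_candidates(image, "sym16")
```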


Subject(s)
Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Mammography/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Calcinosis , Cluster Analysis , False Positive Reactions , Female , Fuzzy Logic , Humans , Image Processing, Computer-Assisted , Models, Statistical , Muscles , Pectoralis Muscles/diagnostic imaging , Predictive Value of Tests , Reproducibility of Results , Sensitivity and Specificity , Wavelet Analysis
6.
Sensors (Basel) ; 13(11): 14367-97, 2013 Oct 24.
Article in English | MEDLINE | ID: mdl-24284770

ABSTRACT

This work presents an improved method to align the measurement scale mark in the immersion hydrometer calibration system of CENAM, the National Metrology Institute (NMI) of Mexico. The proposed method uses a vision system that implements image processing algorithms to align the scale mark of the hydrometer with the surface of the liquid in which it is immersed. This approach reduces the variability in the apparent mass determination during hydrostatic weighing in the calibration process, thereby decreasing the relative uncertainty of calibration.
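
As an illustration of the kind of image processing such a system might rely on, the sketch below locates the strongest near-horizontal edge in two column regions of a grayscale frame (the hydrometer stem with its scale mark and the open liquid surface) and reports the row offset between them. The row-of-maximum-gradient heuristic and the function names are assumptions for illustration only, not CENAM's actual algorithm.

```python
import numpy as np

def strongest_horizontal_edge_row(gray: np.ndarray) -> int:
    """Row index of the strongest near-horizontal edge in a grayscale region."""
    grad = np.abs(np.diff(gray.astype(float), axis=0))  # vertical intensity gradient
    return int(np.argmax(grad.mean(axis=1)))            # row with the strongest mean edge

def alignment_offset_px(gray: np.ndarray, stem_cols: slice, surface_cols: slice) -> int:
    """Row offset between the scale mark (stem region) and the liquid surface
    (open-surface region); zero means the mark is aligned with the surface."""
    return (strongest_horizontal_edge_row(gray[:, stem_cols])
            - strongest_horizontal_edge_row(gray[:, surface_cols]))
```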

7.
Int J Med Robot ; 7(2): 225-36, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21538771

ABSTRACT

BACKGROUND: A needle placement system using a serial robot arm for the manipulation of biopsy and/or treatment needles is introduced, together with a method for fast calibration of the robot and preliminary accuracy tests of the robotic system. METHODS: The setup consists of a DLR/KUKA Light Weight Robot III, especially designed for safe human/robot interaction, mounted on a mobile platform, a robot-driven angiographic C-arm system, and a navigation system. RESULTS: Calibration of the robot with the navigation system has a residual error of 0.23 mm (rms) with a standard deviation of ± 0.1 mm. Needle targeting accuracy with different trajectories was 1.2 mm (rms) with a standard deviation of ± 0.4 mm. CONCLUSIONS: Robot absolute positioning error was reduced to the level of the navigation camera accuracy. The approach includes control strategies that may be very useful for interventional applications.
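
For context on the rms and standard-deviation figures above, here is a minimal NumPy sketch of how such residual statistics can be computed from paired 3D target positions (navigation-measured versus robot-predicted, in mm); the point values are hypothetical.

```python
import numpy as np

def residual_stats(measured: np.ndarray, predicted: np.ndarray):
    """Per-target Euclidean errors summarized as (rms, standard deviation), in mm."""
    errors = np.linalg.norm(measured - predicted, axis=1)
    rms = float(np.sqrt(np.mean(errors ** 2)))
    return rms, float(errors.std())

measured  = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
predicted = np.array([[0.2, 0.1, 0.0], [10.1, -0.1, 0.1], [0.0, 10.2, -0.1]])
print(residual_stats(measured, predicted))
```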


Subject(s)
Angiography/instrumentation , Biopsy, Needle/instrumentation , Biopsy/instrumentation , Needles , Surgery, Computer-Assisted/instrumentation , Tomography, X-Ray Computed/methods , Angiography/methods , Biopsy/methods , Biopsy, Needle/methods , Calibration , Computer Graphics , Equipment Design , Humans , Image Processing, Computer-Assisted , Reproducibility of Results , Robotics , Surgery, Computer-Assisted/methods
8.
Sensors (Basel) ; 9(12): 10326-40, 2009.
Article in English | MEDLINE | ID: mdl-22303176

ABSTRACT

An improved method is presented that uses Fourier- and wavelet-transform-based analysis to infer and extract 3D information from an object by projecting fringes onto it. The method requires a single image containing a sinusoidal white-light fringe pattern projected onto the object; the pattern has a known spatial frequency, and this information is used to avoid discontinuities in high-frequency fringes. Several computer simulations and experiments were carried out to verify the analysis, and the comparison between numerical simulations and experiments proves the validity of the proposed method.
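
The Fourier part of such an analysis typically isolates the known carrier frequency in the spectrum and recovers the phase that encodes depth. Below is a minimal NumPy sketch of that step under the stated assumption of a known spatial frequency f0 along the x axis; the Gaussian band-pass width, the row-wise unwrapping, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def wrapped_phase_from_fringes(image: np.ndarray, f0: float, bandwidth: float = 0.02):
    """Wrapped phase of a sinusoidal fringe image with carrier frequency f0 (cycles/pixel)."""
    spectrum = np.fft.fft(image.astype(float), axis=1)        # row-wise FFT
    freqs = np.fft.fftfreq(image.shape[1])                    # cycles per pixel
    window = np.exp(-0.5 * ((freqs - f0) / bandwidth) ** 2)   # keep only the +f0 carrier lobe
    analytic = np.fft.ifft(spectrum * window, axis=1)         # complex fringe signal
    return np.angle(analytic)                                 # phase wrapped to [-pi, pi]

def unwrapped_phase(image: np.ndarray, f0: float) -> np.ndarray:
    """Continuous phase map after removing the linear carrier ramp."""
    phase = np.unwrap(wrapped_phase_from_fringes(image, f0), axis=1)
    x = np.arange(image.shape[1])
    return phase - 2 * np.pi * f0 * x
```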
