Results 1 - 20 of 5,653
1.
J Med Eng Technol ; : 1-30, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39282826

ABSTRACT

Early detection of lung tumors is critical for better treatment outcomes, and CT scans can reveal lung nodules that are too small to be detected by conventional X-rays. CT imaging has clear advantages, but it also exposes the patient to ionizing radiation, which raises the possibility of malignancy each time the imaging procedure is performed. Access to high-quality CT scanners and the associated sophisticated analysis tools may be restricted in resource-limited settings because of their high cost and limited availability. Overcoming these weaknesses will require a range of technological innovations. This paper aims to design a heuristic- and deep learning-aided lung cancer classification method using CT images. The collected images undergo segmentation, performed by a Shuffling Atrous Convolutional (SAC) based ResUnet++ (SACRUnet++). Lung cancer classification is then performed by an Adaptive Residual Attention Network (ARAN) that takes the segmented images as input. The parameters of the ARAN are optimally tuned using the Improved Garter Snake Optimization Algorithm (IGSOA). The performance of the developed lung cancer classifier was compared with that of conventional classification models and showed high accuracy.
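
To make the pipeline ordering concrete, the following is a minimal sketch of the segment-then-classify flow described above; the segment and classify functions are hypothetical stand-ins (a threshold and a trivial rule), not the paper's SACRUnet++ segmenter or IGSOA-tuned ARAN classifier.

    import numpy as np

    def segment(ct_volume):
        # stand-in: a trained SACRUnet++ would return a lung/nodule mask here
        return (ct_volume > ct_volume.mean()).astype(np.uint8)

    def classify(masked_volume):
        # stand-in: a trained, IGSOA-tuned ARAN would return the cancer class here
        return int(masked_volume.sum() > 0)

    ct = np.random.rand(64, 64, 64).astype(np.float32)  # dummy CT volume
    mask = segment(ct)                                   # stage 1: segmentation
    label = classify(ct * mask)                          # stage 2: classification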

2.
Sensors (Basel) ; 24(17)2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39275384

ABSTRACT

Accurate 6DoF (degrees of freedom) pose and focal length estimation are important in extended reality (XR) applications, enabling precise object alignment and projection scaling and thereby enhancing user experiences. This study focuses on improving 6DoF pose estimation from single RGB images with unknown camera metadata. Estimating the 6DoF pose and focal length from an uncontrolled RGB image obtained from the internet is challenging because such an image often lacks crucial metadata. Existing methods such as FocalPose and FocalPose++ have made progress in this domain but still face challenges due to the projection scale ambiguity between the translation of an object along the z-axis (tz) and the camera's focal length. To overcome this, we propose a two-stage strategy that decouples the projection scaling ambiguity in the estimation of z-axis translation and focal length. In the first stage, tz is set arbitrarily and we predict all other pose parameters and the focal length relative to the fixed tz. In the second stage, we predict the true value of tz while scaling the focal length based on the tz update. The proposed two-stage method reduces projection scale ambiguity in RGB images and improves pose estimation accuracy. Iterative update rules constrained to the first stage and tailored loss functions, including a Huber loss in the second stage, enhance the accuracy of both 6DoF pose and focal length estimation. Experimental results on benchmark datasets show significant improvements in median rotation and translation errors, as well as better projection accuracy, compared with existing state-of-the-art methods. In an evaluation across the Pix3D datasets (chair, sofa, table, and bed), the proposed two-stage method improves projection accuracy by approximately 7.19%. Additionally, the incorporation of the Huber loss reduced translation and focal length errors by 20.27% and 6.65%, respectively, compared with the FocalPose++ method.
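
The scale ambiguity the two-stage strategy exploits can be illustrated with the pinhole model: scaling the z-translation and the focal length by the same factor leaves the projection of a point at the object's reference depth unchanged. The snippet below is only an illustrative sketch of that relation, not the authors' network or training code, and the numbers are arbitrary.

    import numpy as np

    def project(points_cam, f):
        # pinhole projection of 3-D camera-frame points with focal length f
        x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
        return np.stack([f * x / z, f * y / z], axis=1)

    tz_fixed, f_stage1 = 1.0, 600.0           # stage 1: arbitrary tz, relative focal length
    tz_true = 2.5                             # stage 2: predicted true tz
    f_stage2 = f_stage1 * tz_true / tz_fixed  # rescale f with the tz update

    pt = np.array([[0.1, 0.2, tz_fixed]])     # point at the object's reference depth
    pt_moved = pt.copy()
    pt_moved[:, 2] = tz_true                  # object pushed back along z only
    print(project(pt, f_stage1), project(pt_moved, f_stage2))  # identical projections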

3.
Sensors (Basel) ; 24(17)2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39275408

ABSTRACT

Precise measurement of fiber diameter in animal and synthetic textiles is crucial for quality assessment and pricing; however, traditional methods often struggle with accuracy, particularly when fibers are densely packed or overlapping. Current computer vision techniques, while useful, have limitations in addressing these challenges. This paper introduces a novel deep-learning-based method to automatically generate distance maps of fiber micrographs, enabling more accurate fiber segmentation and diameter calculation. Our approach utilizes a modified U-Net architecture, trained on both real and simulated micrographs, to regress distance maps. This allows for the effective separation of individual fibers, even in complex scenarios. The model achieves a mean absolute error (MAE) of 0.1094 and a mean square error (MSE) of 0.0711, demonstrating its effectiveness in accurately measuring fiber diameters. This research highlights the potential of deep learning to revolutionize fiber analysis in the textile industry, offering a more precise and automated solution for quality control and pricing.
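
A common way to turn a regressed distance map into separated fibers is a watershed seeded at the distance peaks, with each fiber's diameter taken as twice its peak distance value. This post-processing is an assumption for illustration rather than necessarily the authors' exact procedure, and dist_map below stands in for the modified U-Net's output.

    import numpy as np
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed
    from skimage.measure import regionprops

    dist_map = np.zeros((128, 128), dtype=np.float32)    # stand-in for the U-Net output

    mask = dist_map > 0.5                                # foreground fibers
    peaks = peak_local_max(dist_map, labels=mask.astype(int), min_distance=5)
    markers = np.zeros(mask.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist_map, markers, mask=mask)    # split touching fibers

    for region in regionprops(labels, intensity_image=dist_map):
        # diameter estimate: twice the peak distance value inside each fiber
        print(region.label, 2.0 * region.intensity_max)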

4.
Sensors (Basel) ; 24(17)2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39275590

ABSTRACT

Inspecting and maintaining power lines is essential for ensuring the safety, reliability, and efficiency of electrical infrastructure. This process involves regular assessments to identify hazards such as damaged wires, corrosion, or vegetation encroachment, followed by timely maintenance to prevent accidents and power outages. By conducting routine inspections and maintenance, utilities can comply with regulations, enhance operational efficiency, and extend the lifespan of power lines and equipment. Unmanned Aerial Vehicles (UAVs) can play a relevant role in this process by increasing efficiency through rapid coverage of large areas and access to difficult-to-reach locations, enhancing safety by minimizing risks to personnel in hazardous environments, and reducing costs compared to traditional methods. UAVs equipped with sensors such as visual and thermographic cameras enable the accurate collection of high-resolution data, facilitating early detection of defects and other potential issues. To ensure the safety of the autonomous inspection process, UAVs must be capable of performing onboard processing, particularly for the detection of power lines and obstacles. In this paper, we address the development of a deep learning approach based on YOLOv8 for power line detection in visual and thermographic images. The developed solution was validated with a UAV during a power line inspection mission, obtaining mAP@0.5 results of over 90.5% on visible images and over 96.9% on thermographic images.
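
For reference, training and validating a YOLOv8 detector with the Ultralytics API typically looks like the sketch below; the dataset file "powerlines.yaml", the model size, and the training settings are placeholders, not the configuration used in the paper.

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                          # pretrained nano model
    model.train(data="powerlines.yaml", epochs=100, imgsz=640)
    metrics = model.val()                               # COCO-style detection metrics
    print(metrics.box.map50)                            # mAP@0.5, the metric reported above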

5.
Sensors (Basel) ; 24(17)2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39275687

ABSTRACT

Underwater image enhancement technology is crucial for human exploration and exploitation of marine resources. The visibility of underwater images is affected by the attenuation of visible light. This paper proposes an image reconstruction method based on the decomposition and fusion of multi-channel luminance data to enhance the visibility of underwater images. The proposed method is a single-image approach, designed to cope with the fact that paired underwater images are difficult to obtain. The original image is first divided into its three RGB channels. To reduce artifacts and inconsistencies in the fused images, a multi-resolution fusion process based on a Laplacian-Gaussian pyramid guided by a weight map is employed. Image saliency analysis and mask sharpening methods are also introduced to color-correct the fused images. The results indicate that the proposed method effectively enhances the visibility of dark regions in the original image and globally improves its color, contrast, and sharpness compared with current state-of-the-art methods. Our method can enhance underwater images in engineering practice, laying the foundation for in-depth research on underwater imagery.
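
The weight-guided multi-resolution fusion step can be sketched as blending Laplacian pyramids of the inputs with Gaussian pyramids of their weight maps, as below. This is a generic OpenCV implementation of that idea under the assumption that the weight maps sum to one per pixel; it omits the paper's channel decomposition, saliency analysis, and mask sharpening steps.

    import cv2
    import numpy as np

    def fuse_pyramids(images, weights, levels=4):
        # blend Laplacian pyramids of the images with Gaussian pyramids of the weights
        fused = None
        for img, w in zip(images, weights):
            gi, gw = [img.astype(np.float32)], [w.astype(np.float32)]
            for _ in range(levels):
                gi.append(cv2.pyrDown(gi[-1]))
                gw.append(cv2.pyrDown(gw[-1]))
            lap = [gi[i] - cv2.pyrUp(gi[i + 1], dstsize=gi[i].shape[1::-1])
                   for i in range(levels)] + [gi[-1]]
            blended = [lp * (gwt[..., None] if lp.ndim == 3 else gwt)
                       for lp, gwt in zip(lap, gw)]
            fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
        out = fused[-1]
        for lev in range(levels - 1, -1, -1):            # collapse the fused pyramid
            out = cv2.pyrUp(out, dstsize=fused[lev].shape[1::-1]) + fused[lev]
        return np.clip(out, 0, 255).astype(np.uint8)

    img = np.zeros((256, 256, 3), np.uint8)              # dummy inputs
    w = np.full((256, 256), 0.5, np.float32)
    print(fuse_pyramids([img, img], [w, w]).shape)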

6.
Sensors (Basel) ; 24(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39275711

ABSTRACT

As a fundamental element of the transportation system, traffic signs are widely used to guide traffic behavior. In recent years, drones have emerged as an important tool for monitoring the condition of traffic signs. However, existing image processing techniques are heavily reliant on image annotations, and it is time-consuming to build a high-quality dataset with diverse training images and human annotations. In this paper, we introduce the use of Vision-Language Models (VLMs) for the traffic sign detection task. Without the need for discrete image labels, rapid deployment is achieved through multi-modal learning and large-scale pretrained networks. First, we compile a keyword dictionary to describe traffic signs, with the Chinese national standard used to supply the shape and color information; Bootstrapping Language-Image Pretraining v2 (BLIPv2) is applied to translate representative images into text descriptions. Second, a Contrastive Language-Image Pretraining (CLIP) framework is applied to characterize both drone images and text descriptions, using the pretrained encoder networks to create visual features and word embeddings. Third, the category of each traffic sign is predicted according to the similarity between drone images and keywords, with cosine distance and a softmax function used to calculate the class probability distribution. To evaluate the performance, we apply the proposed method in a practical application: drone images captured in Guyuan, China, are employed to record the condition of traffic signs. Further experiments include two widely used public datasets. The results indicate that our vision-language model-based method achieves acceptable prediction accuracy with a low training cost.
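
A zero-shot classification step of this kind can be sketched with an off-the-shelf CLIP model: image and text are encoded, cosine-scaled logits are computed, and a softmax gives the class probabilities. The prompts and checkpoint below are illustrative placeholders, not the paper's keyword dictionary, BLIPv2 descriptions, or Chinese-standard classes.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    labels = ["a red circular prohibition sign",
              "a blue circular mandatory sign",
              "a yellow triangular warning sign"]         # illustrative keyword prompts
    image = Image.new("RGB", (224, 224))                  # stand-in for a drone image

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    probs = out.logits_per_image.softmax(dim=-1)          # class probability distribution
    print(dict(zip(labels, probs.squeeze().tolist())))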

7.
J Am Stat Assoc ; 119(546): 798-810, 2024.
Article in English | MEDLINE | ID: mdl-39280355

ABSTRACT

Medical imaging is a form of technology that has revolutionized the medical field over the past decades. Digital pathology imaging, which captures histological details at the cellular level, is rapidly becoming a routine clinical procedure for cancer diagnosis support and treatment planning. Recent developments in deep-learning methods have facilitated tumor region segmentation from pathology images, and the traditional shape descriptors that characterize tumor boundary roughness at the anatomical level are no longer suitable; new statistical approaches to modeling tumor shapes are urgently needed. In this paper, we consider the problem of modeling a tumor boundary as a closed polygonal chain. A Bayesian landmark-based shape analysis model is proposed that partitions the polygonal chain into mutually exclusive segments, accounting for boundary roughness. Our Bayesian inference framework provides uncertainty estimates for both the number and the locations of landmarks, while outputting metrics that can be used to quantify boundary roughness. The performance of our model is comparable with that of a recently developed landmark detection model for planar elastic curves. In a case study of 143 consecutive patients with stage I to IV lung cancer, we demonstrate that the heterogeneity of tumor boundary roughness derived from our model effectively predicted patient prognosis (p-value < 0.001).

8.
Cureus ; 16(8): e66851, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39280515

ABSTRACT

BACKGROUND: Tentorium resection and detachment from the oculomotor nerve are sometimes required for surgical clipping of unruptured posterior communicating artery (PCoA) aneurysms. Using T2-weighted 3D images, we aimed to identify the preoperative radiological features required to determine the necessity of these additional procedures. METHODS: We reviewed 30 patients with unruptured PCoA aneurysms who underwent surgical clipping and preoperative simulation using T2-weighted 3D images for measurement of the distance between the tentorium and aneurysm. Aneurysms were classified into superior type (superior to the tentorium) and inferior type (inferior to the tentorium). RESULTS: Seven patients (23%) underwent tentorium resection; all had the inferior type (superior vs. inferior, 0% vs. 33%, p = 0.071). In the 21 patients with the inferior type, the distance from the tentorium to the aneurysmal neck was 2.2 ± 1.1 mm and 0.0 ± 0.5 mm without and with tentorium resection (p < 0.01), respectively. An optimal cutoff value of ≤ +0.84 mm was identified for tentorium resection (area under the curve (AUC) = 0.96). Furthermore, 17 patients (57%) showed tight aneurysm attachment to the oculomotor nerve; all had the inferior type (0% vs. 81%, p < 0.01). The distance from the aneurysm tip to the tentorium was 1.1 ± 1.2 mm and -1.7 ± 1.4 mm without and with attachment (p < 0.01). The optimal cutoff value was ≤ +0.45 mm (AUC = 0.92). CONCLUSIONS: Measurement of the distance between the tentorium and aneurysmal neck or tip with T2-weighted 3D images is effective for preoperative simulation for surgical clipping of PCoA aneurysms.
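
As an illustration of how such a distance cutoff and AUC can be derived, the sketch below uses scikit-learn's ROC utilities with Youden's J statistic to pick the threshold; the choice of Youden's J and the toy distances and outcomes are assumptions, not the study's data or exact statistical procedure.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    needed_resection = np.array([1, 1, 0, 0, 0, 1, 0])           # toy binary outcomes
    distance_mm = np.array([0.1, -0.2, 2.4, 1.9, 1.2, 0.3, 2.8]) # toy distances (mm)

    scores = -distance_mm                               # smaller distance predicts resection
    fpr, tpr, thresholds = roc_curve(needed_resection, scores)
    auc = roc_auc_score(needed_resection, scores)
    cutoff = -thresholds[np.argmax(tpr - fpr)]          # Youden's J statistic
    print(f"AUC = {auc:.2f}, cutoff: distance <= {cutoff:.2f} mm")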

9.
Med Eng Phys ; 131: 104224, 2024 09.
Article in English | MEDLINE | ID: mdl-39284646

ABSTRACT

This study aimed to measure trunk rotation angles from images using a single camera combined with a posture mirror and to examine the reliability and validity of this approach. We applied a trunk rotation angle model using a tripod and markers to simulate trunk rotation. We compared two methods of trunk rotation angle measurement: the conventional method, measured from the superior aspect with a manual goniometer, and a novel method using images from a digital camera and a posture mirror. Measurement error was calculated as the average absolute error between the angle measured by the goniometer and that calculated from the camera and mirror image. The intraclass correlation coefficients ICC(1,1) and ICC(2,1) were calculated to assess intra-rater reliability and agreement between the measurement angles of the two methods, respectively. Systematic errors between the angles measured by the two methods were examined with a Bland-Altman analysis. The mean (SD) of the mean absolute error was 1.17° (0.71°), ICC(1,1) was 0.978, and ICC(2,1) was 0.991; the Bland-Altman analysis showed no systematic errors. The results support the validity and accuracy of our novel method for measuring the angle of trunk rotation, which does not require high-cost equipment or a special environment.


Subject(s)
Posture, Torso, Rotation, Torso/physiology, Posture/physiology, Reproducibility of Results, Humans, Computer-Assisted Image Processing
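
The agreement statistics reported above can be reproduced on paired angle measurements as in the sketch below (mean absolute error plus Bland-Altman bias and 95% limits of agreement); the angle values are dummy data, and the ICC computation is omitted.

    import numpy as np

    goniometer = np.array([10.0, 20.0, 30.0, 40.0, 50.0])      # reference angles (deg)
    camera_mirror = np.array([11.2, 19.1, 30.8, 39.4, 51.0])   # image-based angles (deg)

    mae = np.mean(np.abs(camera_mirror - goniometer))          # mean absolute error
    diff = camera_mirror - goniometer
    bias = diff.mean()                                         # Bland-Altman bias
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)                 # 95% limits of agreement
    print(f"MAE = {mae:.2f} deg, bias = {bias:.2f} deg, LoA = {loa}")
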
10.
Basic Clin Neurosci ; 15(1): 117-130, 2024.
Article in English | MEDLINE | ID: mdl-39291083

ABSTRACT

Introduction: This study investigated the effect of autobiographical brand images on false memory formation in adults using the category associates procedure. The study also applied the event-related potential (ERP) approach to explore neural correlates of false memory and gender differences in false memory recall of brand images. Methods: Eight categories of autobiographical brand images were employed in a category associates procedure to investigate false memory recall. ERP data were obtained from 24 participants (12 females and 12 males) using a 32-channel amplifier while subjects performed the memory task. Subsequently, gender effects on behavioral responses and on the neural correlates of false and true memory recall were statistically compared using the peak amplitude and latency of the P300, late positive complex, and FN400 components. Results: The results showed that left frontal areas were more activated in response to false memories in women than in men; however, men's brain responses were faster. In addition, men's brain responses to false memories were widely distributed, mainly over frontal, parietal, and occipital areas. Conclusion: Males and females process autobiographical brand images differently. Nevertheless, this differential neural processing may not influence their recognition rate or response time.

11.
Med Biol Eng Comput ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39292382

ABSTRACT

Atherosclerosis causes heart disease by forming plaques in arterial walls. IVUS imaging provides a high-resolution cross-sectional view of coronary arteries and plaque morphology. Healthcare professionals diagnose and quantify atherosclerosis manually or using VH-IVUS software. Since manual or VH-IVUS software-based diagnosis is time-consuming, automated plaque characterization tools are essential for accurate atherosclerosis detection and classification. Recently, deep learning (DL) and computer vision (CV) approaches have emerged as promising tools for automatically classifying plaques in IVUS images. With this motivation, this manuscript proposes an automated atherosclerotic plaque classification method using a hybrid Ant Lion Optimizer with Deep Learning (AAPC-HALODL) technique on IVUS images. The AAPC-HALODL technique uses a Faster Region-based Convolutional Neural Network (Faster RCNN) segmentation approach to identify diseased regions in the IVUS images. Next, the ShuffleNet-v2 model generates a useful set of feature vectors from the segmented IVUS images, with its hyperparameters optimally selected using the HALO technique. Finally, an average ensemble classification process comprising a stacked autoencoder (SAE) and a deep extreme learning machine (DELM) model is utilized. The MICCAI Challenge 2011 dataset was used for the AAPC-HALODL simulation analysis. A detailed comparative study showed that the AAPC-HALODL approach outperformed other DL models with a maximum accuracy of 98.33%, precision of 97.87%, sensitivity of 98.33%, and F-score of 98.10%.
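
The feature-extraction stage can be sketched with torchvision's stock ShuffleNet-V2 backbone, as below; the segmented IVUS patch is a dummy tensor, and the HALO hyperparameter search and the SAE/DELM ensemble classifier are not reproduced.

    import torch
    from torchvision import models

    backbone = models.shufflenet_v2_x1_0(weights="DEFAULT")
    backbone.fc = torch.nn.Identity()                 # drop the classification head
    backbone.eval()

    ivus_patch = torch.rand(1, 3, 224, 224)           # dummy segmented IVUS image
    with torch.no_grad():
        features = backbone(ivus_patch)               # 1024-dimensional feature vector
    print(features.shape)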

12.
Article in English | MEDLINE | ID: mdl-39289317

ABSTRACT

PURPOSE: Ultrasound imaging has emerged as a promising cost-effective, portable, non-irradiating modality for the diagnosis and follow-up of diseases. Motion analysis can be performed by segmenting anatomical structures of interest and then tracking them over time. However, doing so robustly is challenging because ultrasound images often display low contrast and blurry boundaries. METHODS: In this paper, a robust descriptor inspired by the fractal dimension is presented to locally characterize the gray-level variations of an image. This descriptor is an adaptive grid pattern whose scale varies locally with the gray-level variations of the image. Robust features, which are more likely to be tracked consistently over time despite the presence of noise, are then located based on these gray-level variations. RESULTS: The method was validated on three datasets: segmentation of the left ventricle on simulated echocardiography (Dice coefficient, DC), accuracy of diaphragm motion tracking for healthy subjects (mean sum of distances, MSD), and diaphragm motion tracking for a scoliosis patient (root mean square error, RMSE). Results show that the method segments the left ventricle accurately (DC = 0.84) and robustly tracks the diaphragm motion for healthy subjects (MSD = 1.10 mm) and for the scoliosis patient (RMSE = 1.22 mm). CONCLUSIONS: This method has the potential to segment structures of interest according to their texture in an unsupervised fashion, as well as to help analyze the deformation of tissues. Possible applications are not limited to ultrasound images; the same principle could also be applied to other medical imaging modalities such as MRI or CT scans.
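
For intuition about the fractal-dimension idea the descriptor builds on, the sketch below estimates a plain box-counting dimension of a binary map on dummy data; the paper's descriptor is an adaptive, gray-level variant, so this is only a simplified illustration of the underlying concept.

    import numpy as np

    def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
        counts = []
        for s in sizes:
            h, w = binary_img.shape
            grid = binary_img[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
            counts.append(grid.any(axis=(1, 3)).sum())      # occupied boxes at scale s
        # slope of log(count) vs log(1/size) estimates the fractal dimension
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    edges = np.random.rand(128, 128) > 0.95                 # dummy sparse binary map
    print(box_counting_dimension(edges))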

13.
Wiad Lek ; 77(7): 1490-1495, 2024.
Article in English | MEDLINE | ID: mdl-39241150

ABSTRACT

OBJECTIVE: The paper aims to examine superconscious processes as mental images of a higher order in the context of telezombification. MATERIALS AND METHODS: The authors used an interpretive research paradigm, psychoanalysis, basic principles of hermeneutics, and a phenomenological approach, along with general scientific methods such as induction, deduction, and generalization. CONCLUSION: With the beginning of the russian full-scale attack on Ukraine and the russian atrocities in Bucha, Mariupol, and other cities and villages of the country, many Ukrainian citizens asked what had happened to russian society and the state authorities, who set the goal of destroying Ukraine as a state and all its inhabitants as a nation. Ukrainians then labelled the invaders and the authorities of Russia as non-humans, and this is a fair name for them. The fact is that these occupiers and their neo-Nazi leaders have a destroyed, distorted consciousness, as a result of which they became incapable of realizing their own thought processes. The consciousness of such persons gradually degrades towards animal thinking, so-called proto-thinking. This is one direction toward the failure to realize one's intentions and actions at the level of both subconscious and partially conscious analysis of primary mental images (images of the first and second orders). The second direction is the role of superconscious processes, in particular mental images of a higher level, which also form the worldview positions of an individual in the process of viewing and listening to certain information, while remaining unconscious until a certain time. Together, these directions form a person's attitude to existing social and worldview problems.


Subject(s)
Consciousness, Humans, Ukraine, Russian Federation
14.
Comput Methods Programs Biomed ; 257: 108373, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39276667

ABSTRACT

Breast cancer is one of the most prevalent causes of death for women and is rapidly becoming the leading cause of mortality among women globally. Early detection allows patients to obtain appropriate therapy, increasing their probability of survival, and the adoption of 3-Dimensional (3D) mammography for identifying breast abnormalities has reduced the number of deaths dramatically. Accurate detection and classification of lumps in 3D mammography nevertheless remain difficult because of factors such as inadequate contrast and normal fluctuations in tissue density, and several Computer-Aided Diagnosis (CAD) solutions are under development to help radiologists classify breast abnormalities accurately. In this paper, a breast cancer diagnosis model is implemented to detect breast cancer in patients and reduce death rates. The 3D mammogram images are gathered from the internet and passed to a preprocessing phase, in which a median filter smooths out noise and irregularities and an image scaling method adjusts the size and resolution of the images for analysis. The preprocessed images are then segmented using an Adaptive Thresholding with Region Growing Fusion Model (AT-RGFM), which combines the advantages of thresholding and region-growing techniques to identify and delineate structures within the image; the Modified Garter Snake Optimization Algorithm (MGSOA) is used to optimize the segmentation parameters. Finally, tumor detection is performed on the segmented images by a Vision Transformer-based Multiscale Adaptive EfficientNetB7 (ViT-MAENB7) model, which analyzes the image at multiple levels of detail, with the MGSOA again used to tune the model's parameters. The proposed diagnosis pipeline was compared with conventional cancer diagnosis models and showed high accuracy: the developed MGSOA-ViT-MAENB7 achieves an accuracy of 96.6 %, whereas RNN, LSTM, EffNet, and ViT-MAENet models achieve 90.31 %, 92.79 %, 94.46 %, and 94.75 %, respectively. The model's ability to analyze images at multiple scales, combined with MGSOA-based optimization, improves diagnostic accuracy and can help healthcare professionals tailor treatment plans to individual patients.
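
The preprocessing step described above (median filtering followed by image scaling) can be sketched with OpenCV as below; the kernel size, the target resolution, and the dummy image are assumptions rather than the paper's settings.

    import cv2
    import numpy as np

    mammogram = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # dummy slice
    denoised = cv2.medianBlur(mammogram, 5)                            # 5x5 median filter
    scaled = cv2.resize(denoised, (224, 224), interpolation=cv2.INTER_AREA)
    print(scaled.shape)                                                # ready for segmentation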

15.
Waste Manag ; 190: 63-73, 2024 Sep 14.
Article in English | MEDLINE | ID: mdl-39277917

ABSTRACT

In recent years, the rapid accumulation of marine waste not only endangers the ecological environment but also causes seawater pollution. Traditional manual salvage methods often have low efficiency and pose safety risks to human operators, making automatic underwater waste recycling a mainstream approach. In this paper, we propose a lightweight multi-scale cross-level network for underwater waste segmentation based on sonar images that provides pixel-level location information and waste categories for autonomous underwater robots. In particular, we introduce hybrid perception and multi-scale attention modules to capture multi-scale contextual features and enhance high-level critical information, respectively. At the same time, we use sampling attention modules and cross-level interaction modules to achieve feature down-sampling and fuse detailed features and semantic features, respectively. Relevant experimental results indicate that our method outperforms other semantic segmentation models and achieves 74.66 % mIoU with only 0.68 M parameters. In particular, compared with the representative PIDNet Small model based on the convolutional neural network architecture, our method can improve the mIoU metric by 1.15 percentage points and can reduce model parameters by approximately 91 %. Compared with the representative SeaFormer T model based on the transformer architecture, our approach can improve the mIoU metric by 2.07 percentage points and can reduce model parameters by approximately 59 %. Our approach maintains a satisfactory balance between model parameters and segmentation performance. Our solution provides new insights into intelligent underwater waste recycling, which helps in promoting sustainable marine development.
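
The mIoU figure quoted above is a per-class intersection-over-union averaged over classes; a minimal reference computation on dummy label maps is sketched below (the class count and data are illustrative).

    import numpy as np

    def mean_iou(pred, target, num_classes):
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            if union > 0:
                ious.append(inter / union)                # skip classes absent from both maps
        return float(np.mean(ious))

    pred = np.random.randint(0, 4, (64, 64))              # dummy predicted labels
    target = np.random.randint(0, 4, (64, 64))            # dummy ground truth
    print(mean_iou(pred, target, num_classes=4))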

16.
Sci Rep ; 14(1): 21431, 2024 09 13.
Article in English | MEDLINE | ID: mdl-39271720

ABSTRACT

In the field of spinal pathology, the sagittal balance of the spine is usually judged from the spatial structure and morphology of the pelvis, which can be represented by pelvic parameters. Pelvic parameters, including pelvic incidence, pelvic tilt, and sacral slope, are therefore essential for the diagnosis and treatment of spinal disorders; however, measuring these parameters with traditional methods is a time-consuming and laborious procedure. In this paper, an automatic measurement framework for pelvic CT images is proposed to calculate three-dimensional (3D) pelvic parameters with the support of deep learning technology. Pelvic images are first preprocessed, and 3D reconstruction is then performed to obtain a 3D pelvic model using the Visualization Toolkit. DRINet is trained to segment the femoral head region in the pelvic images, and 3D sphere fitting is performed to locate the femoral heads. In addition, VGG16 is adopted to recognize images containing the superior sacral endplate, and a plane-growth algorithm is used to fit the plane so that the midpoint and normal vector of the superior sacral endplate can be obtained. Finally, 3D pelvic parameters are automatically calculated and compared with manual measurements for 15 patients. The proposed framework automatically generates 3D pelvic models and calculates two-dimensional (2D) and 3D pelvic parameters from continuous CT images. Experiments demonstrated that the framework greatly speeds up the calculation of pelvic parameters and that these parameters are accurate when compared with manual measurements. In conclusion, the proposed framework demonstrates good performance on automatic pelvimetry by incorporating deep learning technology and can well replace traditional methods for pelvic parameter measurement.


Subject(s)
Deep Learning, Three-Dimensional Imaging, Pelvis, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Three-Dimensional Imaging/methods, Pelvis/diagnostic imaging, Pelvimetry/methods, Algorithms, Female, Male, Adult, Middle Aged, Femoral Head/diagnostic imaging
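
The 3D sphere fitting used to locate the femoral heads can be done with a linear least-squares (algebraic) fit, as sketched below on synthetic surface points; the algebraic formulation is an assumption, since the paper does not specify the exact fitting algorithm.

    import numpy as np

    def fit_sphere(points):
        # algebraic least squares: x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d
        A = np.c_[2 * points, np.ones(len(points))]
        b = (points ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = sol[:3]
        radius = np.sqrt(sol[3] + (center ** 2).sum())
        return center, radius

    rng = np.random.default_rng(0)                      # synthetic femoral-head surface
    d = rng.normal(size=(500, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    pts = np.array([10.0, 20.0, 30.0]) + 24.0 * d + rng.normal(scale=0.2, size=(500, 3))
    print(fit_sphere(pts))
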
17.
Metab Eng ; 86: 1-11, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39233197

ABSTRACT

There have been significant advances in literature mining, allowing the extraction of target information from the literature. However, biological literature often includes biological pathway images that are difficult to extract in an easily editable format. To address this challenge, this study develops a machine learning framework called Extraction of Biological Pathway Information (EBPI). The framework automates the search for relevant publications, extracts biological pathway information, including genes, enzymes, and metabolites, from images within the literature, and generates the output in a tabular format. To do so, the framework determines the direction of biochemical reactions and detects and classifies text within biological pathway images. The performance of EBPI was evaluated by comparing the extracted pathway information with manually curated pathway maps. EBPI will be useful for extracting biological pathway information from the literature in a high-throughput manner and can be used for pathway studies, including metabolic engineering.

18.
Foods ; 13(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39272595

ABSTRACT

The variety and content of high-quality proteins in sunflower seeds are higher than those in other cereals. However, sunflower seeds can suffer from abnormalities, such as breakage and deformity, during planting and harvesting, which hinder the development of the sunflower seed industry. Traditional methods such as manual sensory inspection and machine sorting are highly subjective and cannot detect the internal characteristics of sunflower seeds. The development of spectral imaging technology has facilitated the application of terahertz waves to the quality inspection of sunflower seeds, owing to their non-destructive penetration and fast imaging. This paper proposes a novel terahertz image classification model, MobileViT-E, which is trained and validated on a self-constructed dataset of sunflower seeds. The results show that the overall recognition accuracy of the proposed model reaches 96.30%, which is 4.85%, 3%, 7.84%, and 1.86% higher than that of the ResNet-50, EfficientNet, MobileOne, and MobileViT models, respectively. Performance indices such as recognition accuracy, recall, and F1-score are also effectively improved. Therefore, the MobileViT-E model proposed in this study improves the classification and identification of normal, damaged, and deformed sunflower seeds and provides technical support for the non-destructive detection of sunflower seed quality.

19.
Diagnostics (Basel) ; 14(17)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39272664

ABSTRACT

Artificial intelligence (AI) is making notable advancements in the medical field, particularly in bone fracture detection. This systematic review compiles and assesses existing research on AI applications aimed at identifying bone fractures through medical imaging, encompassing studies from 2010 to 2023. It evaluates the performance of various AI models, such as convolutional neural networks (CNNs), in diagnosing bone fractures, highlighting their superior accuracy, sensitivity, and specificity compared to traditional diagnostic methods. Furthermore, the review explores the integration of advanced imaging techniques like 3D CT and MRI with AI algorithms, which has led to enhanced diagnostic accuracy and improved patient outcomes. The potential of Generative AI and Large Language Models (LLMs), such as OpenAI's GPT, to enhance diagnostic processes through synthetic data generation, comprehensive report creation, and clinical scenario simulation is also discussed. The review underscores the transformative impact of AI on diagnostic workflows and patient care, while also identifying research gaps and suggesting future research directions to enhance data quality, model robustness, and ethical considerations.

20.
Diagnostics (Basel) ; 14(17)2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39272688

ABSTRACT

The integrity of the reconstructed human epidermis generated in vitro can be assessed using histological analyses combined with immunohistochemical staining of keratinocyte differentiation markers. Technical differences during the preparation and capture of stained images may influence the outcome of computational methods. Due to the specific nature of the analyzed material, no annotated datasets or dedicated methods are publicly available. Using a dataset with 598 unannotated images showing cross-sections of in vitro reconstructed human epidermis stained with DAB-based immunohistochemistry reaction to visualize four different keratinocyte differentiation marker proteins (filaggrin, keratin 10, Ki67, HSPA2) and counterstained with hematoxylin, we developed an unsupervised method for the detection and quantification of immunohistochemical staining. The pipeline consists of the following steps: (i) color normalization; (ii) color deconvolution; (iii) morphological operations; (iv) automatic image rotation; and (v) clustering. The most effective combination of methods includes (i) Reinhard's normalization; (ii) Ruifrok and Johnston color-deconvolution method; (iii) proposed image-rotation method based on boundary distribution of image intensity; and (iv) k-means clustering. The results of the work should enhance the performance of quantitative analyses of protein markers in reconstructed human epidermis samples and enable the comparison of their spatial distribution between different experimental conditions.
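
Steps (ii) and (v) of the pipeline can be sketched with scikit-image's implementation of the Ruifrok and Johnston deconvolution (the HED transform) followed by k-means on the DAB channel, as below; the Reinhard normalization, morphology, and rotation steps are omitted, and the input is a dummy array rather than an epidermis micrograph.

    import numpy as np
    from skimage.color import rgb2hed
    from sklearn.cluster import KMeans

    rgb = np.random.rand(64, 64, 3)                      # stand-in for a normalized RGB image
    hed = rgb2hed(rgb)                                   # Hematoxylin / Eosin / DAB channels
    dab = hed[:, :, 2].reshape(-1, 1)                    # DAB (immunostaining) channel

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(dab)
    stained = int(np.argmax(kmeans.cluster_centers_))    # cluster with the stronger DAB signal
    stained_fraction = float((kmeans.labels_ == stained).mean())
    print(f"DAB-positive pixel fraction: {stained_fraction:.2%}")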
