1.
Heliyon ; 9(7): e17651, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37449128

ABSTRACT

Accurate segmentation of the mandibular canal is essential in dental implant and maxillofacial surgery, as it can help prevent nerve or vascular damage inside the mandibular canal. Achieving this is challenging because of the low contrast of CBCT scans and the small scale of mandibular canal areas. Several innovative methods have been proposed for mandibular canal segmentation with promising performance. However, most of these methods segment the mandibular canal based on sliding patches, which may adversely affect the morphological integrity of the tubular structure. In this study, we propose whole mandibular canal segmentation using a transformed dental CBCT volume in the Frenet frame. Considering the connectivity of the mandibular canal, we transform the CBCT volume to obtain a sub-volume containing the whole mandibular canal based on the Frenet frame, ensuring complete 3D structural information. Moreover, to further improve the performance of mandibular canal segmentation, we use clDice to preserve the structural integrity of the mandibular canal during segmentation. Experimental results on our CBCT dataset show that integrating the proposed transformed volume in the Frenet frame into other state-of-the-art methods achieves a 0.5%-12.1% improvement in Dice performance. Our proposed method achieves a Dice value of 0.865 (±0.035) and a clDice value of 0.971 (±0.020), suggesting that it can segment the mandibular canal with superior performance.
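For reference, a minimal sketch (not the authors' code) of the two overlap metrics quoted above, assuming binary numpy arrays pred and gt; skeletonize stands in for whatever centerline extraction the paper uses (on older scikit-image versions, 3D volumes need skeletonize_3d).

import numpy as np
from skimage.morphology import skeletonize  # use skeletonize_3d on older scikit-image for 3D

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Volumetric Dice overlap between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))

def cl_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Centerline Dice: agreement between each mask and the other's skeleton."""
    skel_pred = skeletonize(pred.astype(bool))
    skel_gt = skeletonize(gt.astype(bool))
    # topology precision: fraction of the predicted skeleton lying inside the ground truth
    tprec = np.logical_and(skel_pred, gt).sum() / (skel_pred.sum() + eps)
    # topology sensitivity: fraction of the ground-truth skeleton lying inside the prediction
    tsens = np.logical_and(skel_gt, pred).sum() / (skel_gt.sum() + eps)
    return float(2.0 * tprec * tsens / (tprec + tsens + eps))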

2.
Heliyon ; 9(2): e13694, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36852021

ABSTRACT

Background: Manual segmentation of the inferior alveolar canal (IAC) in panoramic images requires considerable time and labor, even for dental experts with extensive experience. The objective of this study was to evaluate the performance of automatic IAC segmentation with ambiguity classification in panoramic images using a deep learning method. Methods: Among 1366 panoramic images, 1000 were selected as the training dataset and the remaining 336 were assigned to the testing dataset. Radiologists divided the testing dataset into four groups according to the quality of the visible segments of the IAC. The segmentation time, Dice similarity coefficient (DSC), precision, and recall rate were calculated to evaluate the efficiency and segmentation performance of deep learning-based automatic segmentation. Results: Automatic segmentation achieved a DSC of 85.7% (95% confidence interval [CI] 75.4%-90.3%), precision of 84.1% (95% CI 78.4%-89.3%), and recall of 87.7% (95% CI 77.7%-93.4%). Compared with manual annotation (5.9 s per image), automatic segmentation significantly increased the efficiency of IAC segmentation (33 ms per image). The DSC and precision values of group 4 (most visible) were significantly better than those of group 1 (least visible), and the recall values of groups 3 and 4 were significantly better than those of group 1. Conclusions: The deep learning-based method achieved high performance for IAC segmentation in panoramic images under different visibilities, and its performance was positively correlated with IAC image clarity.
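The abstract reports 95% confidence intervals around DSC, precision, and recall; one common way to obtain such intervals (the authors' exact procedure is not stated) is a nonparametric bootstrap over per-image scores, sketched below under the assumption that the input is an array of per-image metric values. The resample count and the sample scores are placeholders.

import numpy as np

def bootstrap_ci(per_image_scores: np.ndarray, n_boot: int = 2000, alpha: float = 0.05):
    """95% percentile bootstrap CI of the mean of per-image metric values."""
    rng = np.random.default_rng(0)
    means = np.array([rng.choice(per_image_scores, size=per_image_scores.size,
                                 replace=True).mean() for _ in range(n_boot)])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

scores = np.array([0.84, 0.88, 0.81, 0.90, 0.86, 0.87])   # hypothetical per-image DSC values
print(bootstrap_ci(scores))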

3.
Comput Med Imaging Graph ; 105: 102186, 2023 04.
Article in English | MEDLINE | ID: mdl-36731328

ABSTRACT

Bone suppression aims to suppress the bone components superimposed over the soft tissues within the lung area of chest X-rays (CXRs), which is potentially useful for subsequent lung disease diagnosis by radiologists as well as computer-aided systems. Although bone suppression for frontal CXRs has been well studied, it remains challenging for lateral CXRs because of the limited and imperfect dual-energy subtraction (DES) datasets containing paired lateral CXR and soft-tissue/bone images and the more complex anatomical structures in the lateral view. In this work, we propose a bone suppression method for lateral CXRs that leverages a two-stage distillation learning strategy and a specific data correction method. Specifically, a primary model is first trained on a real DES dataset with limited samples. The bone-suppressed results produced by the primary model on a relatively large lateral CXR dataset are then improved by a designed gradient correction method. Second, the corrected results serve as training samples for the distilled model. By automatically learning knowledge from both the primary model and the extra correction procedure, the distilled model is expected to outperform the primary model while omitting the tedious correction procedure. We adopt an ensemble model named MsDd-MAP for both the primary and distilled models, which learns complementary multi-scale and dual-domain (i.e., intensity and gradient) information and fuses it in a maximum-a-posteriori (MAP) framework. Our method is evaluated on a two-exposure lateral DES dataset of 46 subjects and a lateral CXR dataset of 240 subjects. The experimental results suggest that our method is superior to other competing methods on the quantitative evaluation metrics. Furthermore, subjective evaluation by three experienced radiologists indicates that the distilled model produces more visually appealing soft-tissue images than the primary model, even comparable to real DES imaging for lateral CXRs.
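A minimal sketch, assuming a PyTorch setting, of the two-stage idea described above: the primary (teacher) model produces bone-suppressed pseudo-targets on unlabeled lateral CXRs, a placeholder gradient_correct step stands in for the paper's gradient correction, and the distilled (student) model is trained on the corrected pairs. The network classes, loss, and optimizer are assumptions, not the authors' MsDd-MAP implementation.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def gradient_correct(cxr: torch.Tensor, soft: torch.Tensor) -> torch.Tensor:
    """Placeholder for the paper's gradient correction step (identity here)."""
    return soft

def distill(primary: nn.Module, student: nn.Module,
            unlabeled_cxr_loader: DataLoader, epochs: int = 20) -> nn.Module:
    primary.eval()
    optim = torch.optim.Adam(student.parameters(), lr=1e-4)
    l1 = nn.L1Loss()
    for _ in range(epochs):
        for cxr in unlabeled_cxr_loader:               # lateral CXRs without DES pairs
            with torch.no_grad():
                pseudo_soft = primary(cxr)             # stage 1: teacher prediction
                pseudo_soft = gradient_correct(cxr, pseudo_soft)  # data correction (placeholder)
            pred = student(cxr)                        # stage 2: train the distilled model
            loss = l1(pred, pseudo_soft)
            optim.zero_grad()
            loss.backward()
            optim.step()
    return student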


Subject(s)
Thoracic Radiography, Thorax, Humans, Thoracic Radiography/methods, X-Rays, Radiography, Bones
4.
Dis Markers ; 2022: 3443891, 2022.
Article in English | MEDLINE | ID: mdl-36133434

ABSTRACT

Objectives: This in vitro study is aimed at assessing the oral all-ceramic materials energy transmission and temperature changes after Er:YAG laser irradiation of monolithic zirconia all-ceramic materials with varying optical properties. Materials and Methods: Two monolithic zirconia materials, Zenostar T and X-CERA TT (monolithic Zirconia), were studied. Specimens were divided into four groups, with a thickness of 1.0, 1.5, 2.0, and 2.5 mm, respectively. The chemical elemental composition of the two materials was determined using X-ray spectroscopy and Fourier transform infrared spectroscopy. The light transmittance of specimens with different thicknesses was measured using a spectrophotometer at three wavelength ranges: 200-380, 380-780, and 780-2500 nm. Irradiation with Er:YAG laser was performed, and the resultant temperature changes were measured using a thermocouple thermometer. Results: Compositional analysis indicated that Si content in X-CERA TT was higher than that in Zenostar T. The light transmittance of both materials decreased as specimen thickness increased. Er:YAG laser irradiation led to temperature increase at both Zenostar T (26.4°C-81.7°C) and X-CERA TT (23.9°C-53.5°C) specimens. Both optical transmittance and temperature changes after Er:YAG laser irradiation were consistent with exponential distribution against different thickness levels. Conclusion: Er:YAG laser penetration energy and resultant temperature changes were mainly determined by the thickness and composition of the examined monolithic zirconia materials.


Subject(s)
Solid-State Lasers, Ceramics/chemistry, Humans, Temperature, Zirconium/chemistry, Zirconium/radiation effects
5.
Med Phys ; 49(7): 4494-4507, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35338781

ABSTRACT

PURPOSE: Automated retinal vessel segmentation is crucial to the early diagnosis and treatment of ophthalmological diseases. Many deep-learning-based methods have shown exceptional success in this task. However, current approaches are still inadequate for challenging vessels (e.g., thin vessels) and rarely focus on the connectivity of vessel segmentation. METHODS: We propose using an error discrimination network (D) to judge whether the vessel pixel predictions of the segmentation network (S) are correct, and S is trained so that D detects fewer of its errors. Our method is similar to, but not the same as, a generative adversarial network. Three types of vessel samples and corresponding error masks are used to train D: (1) vessel ground truth; (2) vessels segmented by S; and (3) artificial thin-vessel error samples that further improve the sensitivity of D to erroneous small vessels. As an auxiliary loss function of S, D strengthens the supervision of difficult vessels. Optionally, the errors predicted by D can be used to correct the segmentation result of S. RESULTS: Compared with state-of-the-art methods, our method achieves the highest sensitivity (86.19%, 86.26%, and 86.53%) and G-mean (91.94%, 91.30%, and 92.76%) scores on three public datasets, namely STARE, DRIVE, and HRF, and remains competitive on the other metrics. On the STARE dataset, the F1-score and area under the receiver operating characteristic curve (AUC) of our method rank second and first, respectively, reaching 84.51% and 98.97%. The top scores on the three topology-relevant metrics (Conn, Inf, and Cor) demonstrate that the vessels extracted by our method have excellent connectivity. We also validate the effectiveness of error discrimination supervision and artificial error sample training through ablation experiments. CONCLUSIONS: The proposed method provides an accurate and robust solution for difficult vessel segmentation.
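A minimal sketch, assuming PyTorch and that S ends in a sigmoid, of how an error-discrimination network's output can act as an auxiliary loss on the segmentation network; D's input (image concatenated with the prediction), its architecture, and the weighting lam are assumptions, not the paper's exact design.

import torch
import torch.nn as nn

bce = nn.BCELoss()

def segmentation_step(S: nn.Module, D: nn.Module, image: torch.Tensor,
                      gt: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """One training loss for S: ordinary BCE plus a penalty on errors flagged by D."""
    pred = S(image)                               # vessel probability map in [0, 1]
    seg_loss = bce(pred, gt)
    err_map = D(torch.cat([image, pred], dim=1))  # D judges where pred is wrong
    aux_loss = err_map.mean()                     # S is pushed to produce fewer flagged errors
    return seg_loss + lam * aux_loss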


Subject(s)
Neural Networks (Computer), Retinal Vessels, Algorithms, Computer-Assisted Image Processing/methods, ROC Curve, Retinal Vessels/diagnostic imaging
6.
Comput Methods Programs Biomed ; 216: 106631, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35123347

ABSTRACT

BACKGROUND AND OBJECTIVE: Conjunctival microcirculation has been used to quantitatively assess microvascular changes due to systemic disorders. The space between red blood cell clusters in conjunctival microvessels is essential for assessing hemodynamics; however, it causes discontinuities in vessel segmentation and makes automatic blood velocity measurement more difficult. In this study, we developed a deep learning-based EVA system to maintain vessel segmentation continuity and automatically measure blood velocity. METHODS: The EVA system sequentially performs image registration, vessel segmentation, diameter measurement, and blood velocity measurement on conjunctival images. A U-Net model optimized with a connectivity-preserving loss function was used to address the discontinuities in vessel segmentation. An automatic measurement algorithm based on line segment detection was then proposed to obtain accurate blood velocity. Finally, the EVA system assessed hemodynamic parameters based on the measured blood velocity in each vessel segment. RESULTS: The EVA system was validated on 23 videos of conjunctival microcirculation captured using functional slit-lamp microscopy. The U-Net model produced the longest average vessel segment length, 158.03 ± 181.87 µm, followed by the adaptive threshold method and Frangi filtering, which produced lengths of 120.05 ± 151.47 µm and 99.94 ± 138.12 µm, respectively. The proposed method and a cross-correlation-based method were validated for blood velocity measurement on a dataset of 30 vessel segments. Bland-Altman analysis showed that, compared with the cross-correlation method (bias: 0.36, SD: 0.32), the results of the proposed method were more consistent with a manual measurement-based gold standard (bias: -0.04, SD: 0.14). CONCLUSIONS: The proposed EVA system provides an automatic and reliable solution for the quantitative assessment of hemodynamics in conjunctival microvascular images and can potentially be applied to hypoglossal microcirculation images.
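For reference, the Bland-Altman bias and SD quoted above can be computed as sketched below; the velocity arrays are hypothetical, not the study's data.

import numpy as np

def bland_altman(method: np.ndarray, reference: np.ndarray):
    """Return bias, SD of differences, and 95% limits of agreement."""
    diff = method - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

auto = np.array([0.52, 0.61, 0.48, 0.70, 0.55])      # mm/s, hypothetical automatic measurements
manual = np.array([0.55, 0.63, 0.50, 0.74, 0.58])    # manual gold-standard measurements
bias, sd, loa = bland_altman(auto, manual)
print(f"bias={bias:.3f}, SD={sd:.3f}, 95% LoA={loa}")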


Subject(s)
Microvessels, Blood Flow Velocity, Hemodynamics, Microcirculation, Microvessels/diagnostic imaging
7.
Exp Biol Med (Maywood) ; 246(20): 2222-2229, 2021 10.
Article in English | MEDLINE | ID: mdl-34308658

ABSTRACT

Vascular tortuosity, as an indicator of retinal vascular morphological changes, can be quantitatively analyzed and used as a biomarker for the early diagnosis of relevant diseases such as diabetes. Although various methods have been proposed to evaluate retinal vascular tortuosity, the main obstacle limiting their clinical application is their poor consistency with experts' evaluations. In this research, we applied a multiple subdivision-based algorithm for vessel-segment tortuosity analysis, combined with a learning-curve function of the number of vessel curvature inflection points, to emulate human assessment by focusing not only on global but also on local vascular features. Our algorithm achieved high correlation coefficients with the clinical grading of extracted retinal vessels: 0.931 for arteries and 0.925 for veins. For prognostic performance against experts' prediction on retinal fundus images from diabetic patients, the area under the receiver operating characteristic curve reached 0.968, indicating good consistency with experts' prediction in full retinal vascular network evaluation.
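A minimal sketch, under stated assumptions, of two ingredients of such tortuosity measures: signed curvature along a sampled vessel centerline and the count of curvature sign changes (inflection points). This is not the authors' exact multiple-subdivision formulation; the centerline below is a toy sinusoid.

import numpy as np

def curvature(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Signed curvature of a planar curve sampled as (x[i], y[i])."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

def inflection_count(x: np.ndarray, y: np.ndarray) -> int:
    """Number of sign changes of curvature along the vessel segment."""
    k = curvature(x, y)
    signs = np.sign(k[np.abs(k) > 1e-6])          # ignore near-zero curvature samples
    return int(np.count_nonzero(np.diff(signs) != 0))

t = np.linspace(0, 4 * np.pi, 400)                # toy sinusoidal "vessel" centerline
print(inflection_count(t, 0.2 * np.sin(t)))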


Subject(s)
Algorithms, Diabetes Mellitus/diagnosis, Fundus Oculi, Microvessels/pathology, Retinal Vessels/pathology, Biomarkers, Computed Tomography Angiography/methods, Diabetes Mellitus/pathology, Early Diagnosis, Humans, Microvessels/anatomy & histology, Optical Coherence Tomography/methods
8.
J Healthc Eng ; 2020: 7156408, 2020.
Article in English | MEDLINE | ID: mdl-32377330

ABSTRACT

Mosaicking of retinal images is potentially useful for ophthalmologists and computer-aided diagnostic schemes. Vascular bifurcations can be used as features for matching and stitching retinal images. A fully convolutional network model is employed to segment vascular structures in retinal images, and bifurcations are then extracted from the vascular mask as feature points by a robust and efficient approach. Transformation parameters for stitching are estimated from the correspondence of vascular bifurcations. The proposed feature detection and mosaicking method is evaluated on 62 retinal images of 14 different eyes. The proposed method achieves a considerably higher average recall rate of matching for paired images than speeded-up robust features (SURF) and the scale-invariant feature transform (SIFT), and its running time is also lower than that of the other methods. The results produced by the proposed method are superior to those of AutoStitch, the Photomerge function in Photoshop CS6, and ICE, demonstrating that accurate matching of detected vascular bifurcations can lead to high-quality mosaics of retinal images.
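As an illustration (the paper's own estimator may differ), a stitching transform can be estimated from matched bifurcation coordinates with RANSAC in scikit-image; the point sets below are synthetic placeholders, not detected bifurcations.

import numpy as np
from skimage.transform import SimilarityTransform
from skimage.measure import ransac

rng = np.random.default_rng(0)
src = rng.uniform(0, 500, size=(40, 2))                   # bifurcation coordinates in image A
true_tf = SimilarityTransform(rotation=0.05, translation=(30, -12))
dst = true_tf(src) + rng.normal(0, 0.5, size=src.shape)   # noisy matches in image B

model, inliers = ransac((src, dst), SimilarityTransform,
                        min_samples=3, residual_threshold=2, max_trials=1000)
print("estimated rotation:", model.rotation, "translation:", model.translation)
# warped = skimage.transform.warp(image_b, model.inverse)  # then blend into the mosaic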


Subject(s)
Computer-Assisted Image Processing/methods, Retina/diagnostic imaging, Retinal Vessels/diagnostic imaging, Algorithms, Computer-Assisted Diagnosis, Humans, Computer-Assisted Image Interpretation, Neural Networks (Computer)
9.
Comput Methods Programs Biomed ; 180: 105014, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31430596

ABSTRACT

BACKGROUND AND OBJECTIVE: In chest radiographs (CXRs), bones and soft tissues overlap with each other, which makes CXRs difficult for radiologists to read and interpret. Delineating the ribs and clavicles helps suppress them from chest radiographs so that their effect on chest radiography analysis can be reduced. However, delineating ribs and clavicles automatically is difficult for methods that do not use deep learning models, and few such methods can delineate the anterior ribs effectively because of their faint edges in posterior-anterior (PA) CXRs. METHODS: In this work, we present an effective deep learning method for automatically delineating posterior ribs, anterior ribs, and clavicles using a fully convolutional DenseNet (FC-DenseNet) as the pixel classifier. We use a pixel-weighted loss function to mitigate the uncertainty introduced during manual delineation and obtain robust predictions. RESULTS: We conduct a comparative analysis with two other fully convolutional networks for edge detection and a state-of-the-art method that does not use deep learning models. The proposed method significantly outperforms these methods in terms of quantitative evaluation metrics and visual perception. On the test dataset, the proposed method achieves an average recall, precision, and F-measure of 0.773 ± 0.030, 0.861 ± 0.043, and 0.814 ± 0.023, respectively, with a mean boundary distance (MBD) of 0.855 ± 0.642 pixels. The proposed method also performs well on the JSRT and NIH Chest X-ray datasets, indicating its generalizability across multiple databases. In addition, a preliminary result of suppressing the bone components of CXRs has been produced using our delineation system. CONCLUSIONS: The proposed method can automatically delineate ribs and clavicles in CXRs and produce accurate edge maps.
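A minimal sketch of a pixel-weighted binary cross-entropy, assuming PyTorch; the idea of down-weighting pixels near uncertain manual boundaries follows the abstract, but the specific weight-map construction is an assumption, not the paper's formulation.

import torch
import torch.nn.functional as F

def pixel_weighted_bce(logits: torch.Tensor, target: torch.Tensor,
                       weight: torch.Tensor) -> torch.Tensor:
    """logits/target/weight: (N, 1, H, W); weight in [0, 1] down-weights uncertain pixels."""
    loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weight * loss).sum() / weight.sum().clamp(min=1.0)

# One possible (hypothetical) weight map: lower weight in a band around the annotated edge,
# e.g. built from scipy.ndimage.distance_transform_edt on the inverse edge mask.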


Subject(s)
Automation, Clavicle/diagnostic imaging, Computer-Assisted Image Processing, Ribs/diagnostic imaging, Deep Learning, Humans, Thoracic Radiography/methods
10.
Appl Bionics Biomech ; 2019: 9806464, 2019.
Article in English | MEDLINE | ID: mdl-31341514

ABSTRACT

BACKGROUND AND OBJECTIVE: When radiologists diagnose lung diseases in chest radiography, they can miss lung nodules that overlap with ribs or clavicles. Dual-energy subtraction (DES) imaging performs well because it can produce soft-tissue images in which the bone components of chest radiographs are almost completely suppressed while the visibility of nodules and lung vessels is maintained. However, most routinely available X-ray machines do not possess the DES function. Thus, we present a data-driven decomposition model that performs a virtual DES function by decomposing a single conventional chest radiograph into soft-tissue and bone images. METHODS: For a given chest radiograph, similar chest radiographs with corresponding DES soft-tissue and bone images are selected from the training database as exemplars for decomposition. The corresponding fields between the observed chest radiograph and the exemplars are solved by a hierarchically dense matching algorithm. Then, nonparametric priors of the soft-tissue and bone components are constructed by sampling image patches from the selected soft-tissue and bone images according to the corresponding fields. Finally, these nonparametric priors are integrated into our decomposition model, whose energy function is efficiently optimized by an iteratively reweighted least-squares (IRLS) scheme. RESULTS: The decomposition method is evaluated on a dataset of posterior-anterior DES radiographs (503 cases) as well as on the JSRT dataset. The proposed method can produce soft-tissue and bone images similar to those produced by an actual DES system. CONCLUSIONS: The proposed method can markedly reduce the visibility of bony structures in chest radiographs and shows potential to enhance diagnosis.
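For context, a generic iteratively reweighted least-squares (IRLS) loop for a robust linear problem is sketched below; the paper's decomposition energy with nonparametric priors is considerably richer, so this shows only the optimization pattern, not the model.

import numpy as np

def irls(A: np.ndarray, b: np.ndarray, iters: int = 30, eps: float = 1e-6) -> np.ndarray:
    x = np.linalg.lstsq(A, b, rcond=None)[0]           # ordinary least-squares initialization
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)           # reweight by residual magnitude (L1-like)
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)  # weighted normal equations
    return x

A = np.random.default_rng(1).normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[::10] += 5.0                                         # add sparse outliers
print(irls(A, b))                                      # close to x_true despite the outliers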

11.
Comput Methods Programs Biomed ; 175: 205-214, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31104708

ABSTRACT

BACKGROUND AND OBJECTIVE: Panoramic images reconstructed from dental cone beam CT (CBCT) data have been used effectively in dental clinics for disease diagnosis. Panoramic images generally have low contrast because excessive non-interest tissues participate in the reconstruction, which may affect the diagnosis. In this study, we developed a fully automatic reconstruction method to improve the global and detail contrast of panoramic images. METHODS: The proposed method consists of dental arch thickness detection, image synthesis, and image enhancement. First, the dental arch thickness is detected from an axial maximum intensity projection (MIP) image generated from the axial slices containing the teeth, to reduce non-interest tissues in panoramic image reconstruction. Then, a new synthesis algorithm is applied at the image synthesis stage to reduce the effect of non-interest tissues on image contrast. Finally, an image enhancement algorithm is applied to the synthesized image to improve the detail contrast of the final panoramic image. RESULTS: A total of 129 real clinical dental CBCT data sets were used to test the proposed method. The panoramic images generated by three methods were subjectively scored by three experienced dentists who were blinded to the generation method. The evaluation of image contrast covered the maxilla, mandible, teeth, and particular regions (root canal, crown reconstruction, implants, and metal brackets). The overall image contrast scores revealed that the proposed method scored highest at 11.03 ± 2.46, followed by the ray-sum and X-ray methods with scores of 6.4 ± 1.65 and 5.35 ± 1.56, respectively. The expert subjective scoring indicated that the image contrast of the panoramic images generated by the proposed method is higher than that of existing methods. CONCLUSIONS: The proposed method provides a quick, effective, and robust solution for improving the global and detail contrast of panoramic images generated from dental CBCT data.
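A minimal sketch of the axial maximum intensity projection step mentioned above, assuming a CBCT volume stored as a (Z, Y, X) array; the slice range covering the teeth is a hypothetical value.

import numpy as np

def axial_mip(volume: np.ndarray, z_start: int, z_end: int) -> np.ndarray:
    """volume: CBCT array of shape (Z, Y, X); returns a (Y, X) MIP image over the slice range."""
    return volume[z_start:z_end].max(axis=0)

cbct = np.random.default_rng(2).random((200, 256, 256)).astype(np.float32)
mip = axial_mip(cbct, z_start=80, z_end=140)   # slices assumed to contain the teeth
print(mip.shape)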


Subject(s)
Cone-Beam Computed Tomography, Dental Arch/diagnostic imaging, Computer-Assisted Image Processing/methods, Dental Radiography, Panoramic Radiography, Adolescent, Adult, Aged, Aged 80 and over, Algorithms, Child, Female, Humans, Male, Mandible/diagnostic imaging, Maxilla/diagnostic imaging, Middle Aged, Automated Pattern Recognition, Reproducibility of Results, Tooth/diagnostic imaging, Young Adult
12.
Comput Math Methods Med ; 2019: 6490161, 2019.
Article in English | MEDLINE | ID: mdl-30838049

ABSTRACT

Automatic segmentation of the ulna and radius (UR) in forearm radiographs is a necessary step for single X-ray absorptiometry bone mineral density measurement and the diagnosis of osteoporosis. Accurate and robust segmentation of the UR is difficult, given the variation in forearms between patients and the nonuniform intensity of forearm radiographs. In this work, we propose a practical automatic UR segmentation method that traces UR contours with a dynamic programming (DP) algorithm. Four seed points along the four UR diaphysis edges are automatically located in the preprocessed radiographs. Then, minimum-cost paths in a cost map are traced from the seed points by the DP algorithm as UR edges and merged into the UR contours. The proposed method is quantitatively evaluated on 37 forearm radiographs with manual segmentation results, including 22 normal-exposure and 15 low-exposure radiographs. The average Dice similarity coefficient of our method reached 0.945, and the average mean absolute distance between the contours extracted by our method and those of a radiologist is only 5.04 pixels. The segmentation performance of our method did not differ significantly between normal- and low-exposure radiographs. Our method was also validated on 105 forearm radiographs acquired under various imaging conditions from several hospitals. The results demonstrate that our method is fairly robust for forearm radiographs of various qualities.
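A minimal sketch of dynamic programming contour tracing in a cost map, in the spirit of the method above: from a seed column, a minimum-cost path is propagated row by row with a ±1-pixel column shift and then backtracked. The cost design and connectivity are assumptions, not the paper's exact formulation.

import numpy as np

def dp_trace(cost: np.ndarray, seed_col: int) -> np.ndarray:
    """cost: (H, W) map; returns the column index of the minimum-cost path for each row."""
    H, W = cost.shape
    acc = np.full((H, W), np.inf)
    back = np.zeros((H, W), dtype=np.int64)
    acc[0, seed_col] = cost[0, seed_col]            # path is anchored at the seed point
    for y in range(1, H):
        for x in range(W):
            lo, hi = max(0, x - 1), min(W, x + 2)   # allow a column shift of at most one pixel
            prev = acc[y - 1, lo:hi]
            j = int(np.argmin(prev))
            acc[y, x] = cost[y, x] + prev[j]
            back[y, x] = lo + j
    path = np.empty(H, dtype=np.int64)
    path[-1] = int(np.argmin(acc[-1]))
    for y in range(H - 1, 0, -1):                   # backtrack from the last row
        path[y - 1] = back[y, path[y]]
    return path

edge_cost = np.random.default_rng(3).random((64, 48))
print(dp_trace(edge_cost, seed_col=24)[:5])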


Subject(s)
Forearm/diagnostic imaging, Computer-Assisted Image Processing/methods, Radiography, Radius (Anatomy)/diagnostic imaging, Ulna/diagnostic imaging, Photon Absorptiometry, Algorithms, Bone Density, Humans, Osteoporosis/diagnostic imaging, Programming Languages, Wrist Joint/diagnostic imaging
13.
IEEE J Biomed Health Inform ; 22(3): 842-851, 2018 05.
Article in English | MEDLINE | ID: mdl-28368835

ABSTRACT

Lung field segmentation in chest radiographs (CXRs) is an essential preprocessing step in automatically analyzing such images. We present a method for lung field segmentation that is built on a high-quality boundary map detected by an efficient modern boundary detector, namely a structured edge detector (SED). A SED is trained beforehand to detect lung boundaries in CXRs with manually outlined lung fields. Then, an ultrametric contour map (UCM) is transformed from the masked and marked boundary map. Finally, the contours with the highest confidence level in the UCM are extracted as lung contours. Our method is evaluated using the public Japanese Society of Radiological Technology database of scanned films. The average Jaccard index of our method is 95.2%, which is comparable with those of other state-of-the-art methods (95.4%). The computation time of our method is less than 0.1 s for a CXR when executed on an ordinary laptop. Our method is also validated on CXRs acquired with different digital radiography units. The results demonstrate the generalization of the trained SED model and the usefulness of our method.


Subject(s)
Computer-Assisted Image Processing/methods, Lung/diagnostic imaging, Thoracic Radiography/methods, Algorithms, Factual Databases, Humans
14.
J Gastroenterol Hepatol ; 32(9): 1631-1639, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28068755

ABSTRACT

BACKGROUND AND AIM: A tool for precisely evaluating cirrhotic remodeling is still lacking. The histologic distortion characteristic of cirrhosis (i.e., cirrhotic patterns) has a validated pathophysiological meaning and potential relevance to clinical complications. We aimed to establish a new tool to quantify cirrhotic patterns and test whether it reflects cirrhotic remodeling. METHODS: We designed a computerized algorithm, named qCP, dedicated to the analysis of liver images acquired by second harmonic microscopy. We evaluated its measurements using a cohort of 95 biopsies (Ishak staging F4/5/6 = 33/35/27) of chronic hepatitis B and a carbon tetrachloride-intoxicated rat model simulating bidirectional cirrhotic change. RESULTS: qCP can characterize 14 histological cirrhosis parameters involving the nodules, septa, sinusoids, and vessels. For the chronic hepatitis B biopsies, the mean overall intra-observer and inter-observer agreement was 0.94 ± 0.08 and 0.93 ± 0.09, respectively, and robustness against sample adequacy-related scoring error was demonstrated. The proportionate areas of total (collagen proportionate area), septal (septal collagen proportionate area [SPA]), sinusoidal, and vessel collagen, the nodule area, and the nodule density (ND) were associated with Ishak staging (P < 0.01 for all), but only ND and SPA were independently associated (P ≤ 0.001 for both). A qCP index composed of histological cirrhosis parameters demonstrated excellent accuracy in quantitatively diagnosing evolving cirrhosis (areas under receiver operating characteristic curves 0.95-0.92; sensitivity 0.93-0.82; specificity 0.94-0.85). In the rat model, changes in the collagen proportionate area, SPA, and ND correlated strongly with both cirrhosis progression and regression and faithfully characterized the histological evolution. CONCLUSIONS: qCP preliminarily demonstrates potential for quantitating cirrhotic remodeling with high resolution and accuracy. Further validation with in-study cohorts and multiple etiologies is warranted.


Subject(s)
Diagnostic Imaging/methods, Computer-Assisted Image Processing/methods, Liver Cirrhosis/diagnostic imaging, Liver Cirrhosis/pathology, Liver/diagnostic imaging, Liver/pathology, Algorithms, Animals, Carbon Tetrachloride, Disease Models (Animal), Disease Progression, Male, Microscopy, Sprague-Dawley Rats, Sensitivity and Specificity
16.
PLoS One ; 10(10): e0140381, 2015.
Article in English | MEDLINE | ID: mdl-26447861

ABSTRACT

Automatic classification of tissue types of region of interest (ROI) plays an important role in computer-aided diagnosis. In the current study, we focus on the classification of three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor) in T1-weighted contrast-enhanced MRI (CE-MRI) images. Spatial pyramid matching (SPM), which splits the image into increasingly fine rectangular subregions and computes histograms of local features from each subregion, exhibits excellent results for natural scene classification. However, this approach is not applicable for brain tumors, because of the great variations in tumor shape and size. In this paper, we propose a method to enhance the classification performance. First, the augmented tumor region via image dilation is used as the ROI instead of the original tumor region because tumor surrounding tissues can also offer important clues for tumor types. Second, the augmented tumor region is split into increasingly fine ring-form subregions. We evaluate the efficacy of the proposed method on a large dataset with three feature extraction methods, namely, intensity histogram, gray level co-occurrence matrix (GLCM), and bag-of-words (BoW) model. Compared with using tumor region as ROI, using augmented tumor region as ROI improves the accuracies to 82.31% from 71.39%, 84.75% from 78.18%, and 88.19% from 83.54% for intensity histogram, GLCM, and BoW model, respectively. In addition to region augmentation, ring-form partition can further improve the accuracies up to 87.54%, 89.72%, and 91.28%. These experimental results demonstrate that the proposed method is feasible and effective for the classification of brain tumors in T1-weighted CE-MRI.
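A rough sketch, under stated assumptions, of the augmented-region and ring-form partition idea: dilate the tumor mask, split the augmented region into rings by distance from its outer boundary, and compute an intensity histogram per ring. The dilation amount, ring count, and histogram bins are placeholders, and the paper's partition (increasingly fine rings plus GLCM/BoW features) is more elaborate.

import numpy as np
from scipy.ndimage import binary_dilation, distance_transform_edt

def ring_histograms(image: np.ndarray, tumor_mask: np.ndarray,
                    dilate_iters: int = 15, n_rings: int = 3, bins: int = 32):
    augmented = binary_dilation(tumor_mask, iterations=dilate_iters)   # augmented tumor region
    depth = distance_transform_edt(augmented)      # distance from the region's outer boundary
    edges = np.linspace(0, depth[augmented].max() + 1e-6, n_rings + 1)
    hists = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = augmented & (depth > lo) & (depth <= hi)
        hists.append(np.histogram(image[ring], bins=bins, range=(0, 255))[0])
    return hists

img = np.random.default_rng(5).integers(0, 256, size=(128, 128)).astype(float)
mask = np.zeros((128, 128), dtype=bool)
mask[60:70, 60:70] = True                          # toy tumor region
print([h.sum() for h in ring_histograms(img, mask)])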


Subject(s)
Brain Neoplasms/diagnosis, Glioma/diagnosis, Meningioma/diagnosis, Brain Neoplasms/classification, Glioma/classification, Humans, Computer-Assisted Image Interpretation, Magnetic Resonance Imaging, Meningioma/classification, Sensitivity and Specificity
17.
Zhonghua Fu Chan Ke Za Zhi ; 49(12): 899-902, 2014 Dec.
Article in Chinese | MEDLINE | ID: mdl-25608989

ABSTRACT

OBJECTIVE: To investigate the reconstruction of a digital three-dimensional (3D) model of the normal human placental vascular network based on MRI data in vitro. METHODS: Six full-term placentas were collected, cast with modified self-curing denture base resin, and scanned by T1 e-THRIVE high-resolution magnetic resonance imaging. The MRI images were imported into Mimics 14.0 software for 3D reconstruction, and the 3D model was compared with the placental vascular casting model. RESULTS: (1) The placental vascular network could be identified on the 2D MR images. The 3D model was reconstructed successfully and showed clear, realistic images; it could be zoomed and rotated in any direction to observe the branches of the arteries and veins. (2) The umbilical vein and the two umbilical arteries could be seen in the 3D model. At the root of the umbilical cord, the umbilical vein divided into 5-7 branches, while the two umbilical arteries anastomosed to form a blood sinus and then divided into sub-branches. All the peripheral vessels ended in the chorionic plate with abundant sub-branches. (3) The morphology, structure, angles, and course of the vessels in the 3D model were consistent with the casting of the placental arterial-venous vascular network. CONCLUSIONS: Reconstruction of a digital 3D model of the normal human placental vascular network based on in vitro MRI is a new and promising method for studying the placental vasculature. It offers better vascular exposure, free rotation, and no radiation, and provides a promising basis for studying the placental vasculature in vivo in the future.


Subject(s)
Magnetic Resonance Imaging, Anatomic Models, Placenta/blood supply, Umbilical Arteries/anatomy & histology, Umbilical Cord/blood supply, Chorion, Female, Humans, Three-Dimensional Imaging, In Vitro Techniques, Microcirculation/physiology, Placenta/anatomy & histology, Pregnancy, Umbilical Arteries/physiology, Umbilical Cord/anatomy & histology
18.
Nan Fang Yi Ke Da Xue Xue Bao ; 30(9): 2156-60, 2010 Sep.
Article in Chinese | MEDLINE | ID: mdl-20855278

ABSTRACT

For medical image volume rendering, it is very difficult to simultaneously visualize interior and exterior structures while preserving clear shape cues. Highly transparent transfer functions produce cluttered images with many overlapping structures, while clipping techniques completely remove possibly important contextual information. To address this issue, a gradient-adaptive shading-based illumination model is proposed and implemented on the CUDA architecture. The ambient, diffuse, and specular lighting coefficients are tuned adaptively according to the gradient. The experiments show that our method preserves 3D contextual information in medical image datasets while still showing clear boundaries at real-time interactive speed.
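A loose sketch of the gradient-adaptive idea: scale the ambient, diffuse, and specular coefficients by the local gradient magnitude so that strong boundaries are emphasized. The specific blending formulas are assumptions, not the paper's tuned model, and a real implementation would evaluate this per sample inside a CUDA ray-casting kernel.

import numpy as np

def adaptive_phong_coeffs(grad_mag: np.ndarray,
                          ka: float = 0.2, kd: float = 0.6, ks: float = 0.3) -> tuple:
    """grad_mag: normalized gradient magnitude in [0, 1] per voxel/sample."""
    g = np.clip(grad_mag, 0.0, 1.0)
    ka_v = ka * (1.0 - 0.5 * g)      # flat regions keep more ambient light
    kd_v = kd * (0.5 + 0.5 * g)      # boundaries receive stronger diffuse shading
    ks_v = ks * g                    # specular highlights only at strong gradients
    return ka_v, kd_v, ks_v

grad = np.random.default_rng(4).random((8, 8, 8))
ka_v, kd_v, ks_v = adaptive_phong_coeffs(grad)
print(ka_v.mean(), kd_v.mean(), ks_v.mean())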


Subject(s)
Algorithms, Artifacts, Computer Graphics, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging, Computer Simulation, Humans, Image Enhancement/methods, Theoretical Models, Automated Pattern Recognition/methods