Results 1 - 20 of 25
1.
Article in English | MEDLINE | ID: mdl-39285109

ABSTRACT

PURPOSE: The complementary medical imaging modalities of ultrasound (US) and magnetic resonance imaging (MRI) provide critical information for prostate intervention and cancer treatment. MRI-US image fusion is therefore often required during prostate examination to provide contrast-enhanced TRUS, and image registration is a key step in multimodal image fusion. METHODS: We propose a novel multi-scale feature-crossing network for the prostate MRI-US image registration task. We designed a feature-crossing module to enhance information flow in the hidden layers by integrating intermediate features between adjacent scales. Additionally, an attention block utilizing three-dimensional convolution exchanges information between channels, improving the correlation between features of different modalities. We used 100 cases randomly selected from The Cancer Imaging Archive (TCIA) for our experiments. A fivefold cross-validation scheme was applied, dividing the dataset into five subsets: four subsets were used for training and one for testing, and the process was repeated five times so that each subset served as the test set once. RESULTS: The cross-validation trials yielded a median target registration error of 2.20 mm on landmark centroids and a median Dice of 0.87 on prostate glands, both better than the baseline model. In addition, the standard deviation of the Dice similarity coefficient was 0.06, suggesting that the model is stable. CONCLUSION: We propose a novel multi-scale feature-crossing network for the prostate MRI-US image registration task, tested and evaluated with fivefold cross-validation on 100 cases randomly selected from TCIA. The experimental results show that our method improves registration accuracy.
After registration, the MRI and TRUS images were more similar in structure and morphology, and the location and morphology of the cancer were reflected more accurately in the images.
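The fivefold protocol described above is straightforward to reproduce; the following is a minimal, dependency-free sketch (the integer case identifiers and round-robin fold assignment are illustrative, not taken from the paper):

```python
def kfold_splits(cases, k=5):
    """Partition `cases` into k folds and yield (train, test) lists,
    so that each fold serves as the held-out test set exactly once."""
    folds = [cases[i::k] for i in range(k)]  # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [c for j, fold in enumerate(folds) if j != i for c in fold]
        yield train, test

# 100 hypothetical TCIA case indices, as in the experimental setup above
splits = list(kfold_splits(list(range(100))))
```

Each of the five splits contains 80 training and 20 test cases, matching the 4:1 ratio the abstract describes.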

2.
Brief Bioinform ; 25(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38483256

ABSTRACT

Numerous imaging techniques are available for observing and interrogating biological samples, and several of them can be used consecutively to enable correlative analysis of different image modalities with varying resolutions and the inclusion of structural or molecular information. Accurate registration of multimodal images is essential for the correlative analysis process, but it remains a challenging computer vision task with no widely accepted solution. Moreover, supervised registration methods require annotated data produced by experts, which is scarce. To address this challenge, we propose a general unsupervised pipeline for multimodal image registration using deep learning. We provide a comprehensive evaluation of the proposed pipeline against the current state-of-the-art image registration and style transfer methods on four types of biological problems utilizing different microscopy modalities. We found that style transfer of modality domains paired with fully unsupervised training leads to image registration accuracy comparable to supervised methods and, most importantly, does not require human intervention.


Subject(s)
Deep Learning, Humans, Microscopy
3.
BMC Med Inform Decis Mak ; 24(1): 65, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38443881

ABSTRACT

BACKGROUND: Multimodal histology image registration is a process that transforms two or more images obtained from different microscopy modalities into a common coordinate system. Combining information from multiple modalities can contribute to a comprehensive understanding of tissue specimens, aiding more accurate diagnoses and improved research insights. Multimodal image registration of histology samples presents a significant challenge due to the inherent differences in modality characteristics and the need for optimization algorithms tailored to each modality. RESULTS: We developed MMIR, a cloud-based system for multimodal histological image registration, which consists of three main modules: a project manager, an algorithm manager, and an image visualization system. CONCLUSION: Our software solution aims to simplify image registration tasks with a user-friendly approach. It provides effective algorithm management and responsive web interfaces, supports multi-resolution images, and enables batch image registration. Moreover, its adaptable architecture allows the integration of custom algorithms, ensuring that it meets the specific requirements of each modality combination. Beyond image registration, our software enables the conversion of segmented annotations from one modality to another.


Subject(s)
Algorithms, Software, Humans
4.
Eur J Radiol ; 169: 111189, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37939605

ABSTRACT

PURPOSE: The objective of this study was to analyze the effect of TMJ disc position on condylar bone remodeling after arthroscopic disc repositioning surgery. METHODS: Nine patients with anterior disc displacement without reduction (ADDWoR, 15 sides) who underwent arthroscopic disc repositioning surgery were included. Three-dimensional (3D) reconstruction of the articular disc and the condyle in the closed-mouth position was performed using cone-beam computed tomography (CBCT) and magnetic resonance imaging (MRI) data. The CBCT and MRI images were then fused and displayed together using multimodal image registration techniques. Morphological changes in the articular disc and condyle, as well as changes in their spatial relationship, were studied by comparing preoperative and 3-month postoperative CBCT-MRI fused images. RESULTS: The volume and superficial area of the articular disc, as well as the area of the articular disc surface in the subarticular cavity, were significantly increased compared with their preoperative values (P < 0.01). There was also a significant increase in the volume of the condyle (P < 0.001). All condyles showed postoperative bone remodeling that could be categorized as one of two types depending on the position of the articular disc, suggesting that the location of the articular disc was related to new bone formation. CONCLUSIONS: The morphology of the articular disc and condyle changed significantly after arthroscopic disc repositioning surgery. The 3D changes in the position of the articular disc after surgery tended to affect condylar bone remodeling and the location of new bone formation.


Subject(s)
Joint Dislocations, Temporomandibular Joint Disc, Humans, Temporomandibular Joint Disc/diagnostic imaging, Temporomandibular Joint Disc/surgery, Temporomandibular Joint Disc/pathology, Bone Remodeling, Bones, Magnetic Resonance Imaging/methods, Cone-Beam Computed Tomography, Joint Dislocations/pathology, Temporomandibular Joint, Mandibular Condyle
5.
Anal Chim Acta ; 1283: 341969, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37977791

ABSTRACT

The integration of matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) and histology plays a pivotal role in advancing our understanding of complex heterogeneous tissues, providing a comprehensive description of biological tissue with both wide molecular coverage and high lateral resolution. Herein, we propose a novel strategy for the correction and registration of MALDI MSI data with hematoxylin and eosin (H&E) staining images. To overcome the discrepancy in spatial resolution between the two imaging modalities, a deep learning-based interpolation algorithm for MALDI MSI data was constructed, which enables spatial coherence and subsequent orientation matching between images. Coupled with an affine transformation (AT) and a subsequent moving least squares algorithm, the two types of images from one rat brain tissue section were aligned automatically with high accuracy. Moreover, we demonstrated the practicality of the developed pipeline by applying it to a rat cerebral ischemia-reperfusion injury model, which could help decipher the link between molecular metabolism and pathological interpretation at the microregional level. This new approach offers the chance for other types of bioimaging to advance the field of multimodal image fusion.


Subject(s)
Algorithms, Microscopy, Rats, Animals, Matrix-Assisted Laser Desorption-Ionization Mass Spectrometry/methods, Staining and Labeling
6.
Cell Rep Methods ; 3(10): 100595, 2023 Oct 23.
Article in English | MEDLINE | ID: mdl-37741277

ABSTRACT

Imaging mass cytometry (IMC) is a powerful technique capable of detecting over 30 markers on a single slide. It has been increasingly used for single-cell-based spatial phenotyping in a wide range of samples. However, it only acquires a rectangular field of view (FOV) of relatively small size and low image resolution, which hinders downstream analysis. Here, we report a highly practical dual-modality imaging method that combines high-resolution immunofluorescence (IF) and high-dimensional IMC on the same tissue slide. Our computational pipeline uses the whole-slide image (WSI) of IF as a spatial reference and integrates the small-FOV IMC acquisitions into a WSI of IMC. The high-resolution IF images enable accurate single-cell segmentation, from which robust high-dimensional IMC features are extracted for downstream analysis. We applied this method to esophageal adenocarcinoma of different stages, identified the single-cell pathology landscape via reconstruction of WSI IMC images, and demonstrated the advantages of the dual-modality imaging strategy.


Subject(s)
Adenocarcinoma, Barrett Esophagus, Esophageal Neoplasms, Humans, Barrett Esophagus/pathology, Esophageal Neoplasms/pathology, Adenocarcinoma/diagnostic imaging, Fluorescent Antibody Technique, Image Cytometry
7.
J Appl Clin Med Phys ; 24(8): e14084, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37430473

ABSTRACT

Retrograde intrarenal surgery (RIRS) is a widely utilized diagnostic and therapeutic tool for multiple upper urinary tract pathologies. An image-guided navigation system can assist the surgeon in performing precise surgery by providing the relative position between the lesion and the instrument once the intraoperative image is registered with the preoperative model. However, due to the structural complexity and diversity of multi-branched organs such as the kidneys and bronchi, the consistency of the intensity distributions of virtual and real images is easily violated, making classical purely intensity-based registration methods prone to bias and unstable results over a wide search domain. In this paper, we propose a structural feature similarity-based method combined with a semantic style transfer network, which significantly improves registration accuracy when the initial state deviation is large. Furthermore, multi-view constraints are introduced to compensate for the loss of spatial depth information and improve the robustness of the algorithm. Experimental studies were conducted on two models generated from patient data to evaluate the performance of the method and competing algorithms. The proposed method achieves mean target registration errors (mTRE) of 0.971 ± 0.585 mm and 1.266 ± 0.416 mm, respectively, with better overall accuracy and robustness. The experimental results demonstrate that the proposed method has the potential to be applied to RIRS and extended to other organs with similar structures.
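The mean target registration error (mTRE) reported above is simply the mean Euclidean distance between corresponding landmark pairs after registration; a minimal sketch (the landmark coordinates below are hypothetical):

```python
import numpy as np

def mean_tre(registered_pts, reference_pts):
    """Mean and standard deviation of Euclidean distances (e.g., in mm)
    between corresponding landmarks after registration."""
    d = np.linalg.norm(np.asarray(registered_pts, float)
                       - np.asarray(reference_pts, float), axis=1)
    return d.mean(), d.std()
```

Reporting the standard deviation alongside the mean, as the abstract does, captures how consistent the alignment is across landmarks.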


Subject(s)
Algorithms, Three-Dimensional Imaging, Humans, Three-Dimensional Imaging/methods, Imaging Phantoms
8.
J Clin Med ; 12(6)2023 Mar 09.
Article in English | MEDLINE | ID: mdl-36983141

ABSTRACT

One of the crucial tasks in planning surgery of the iliosacral joint is placing an iliosacral screw to fix broken parts of the pelvis. Planning a proper screw trajectory is usually done in the preoperative phase by acquiring X-ray images under different angles, which guide the surgeons during surgery. This approach is complicated by the fact that 2D X-ray images do not convey spatial perspective. Therefore, in this pilot study, we propose complex software tools aimed at building a simulation model of reconstructed CT (DDR) images with a virtual iliosacral screw to guide the surgical process. This pilot study presents testing on two clinical cases to assess the initial performance and usability of this software in clinical conditions. The model is subsequently used for multiregional registration against reference intraoperative X-ray images to select the slice from the 3D dataset that best fits the reference X-ray. The proposed software solution utilizes input CT slices of the pelvic area to create a segmentation model of individual bone components, into which a model of an iliosacral screw is then inserted. In the next step, we propose the software CT2DDR, which generates DDR projections with the iliosacral screw. In the last step, we propose a multimodal registration procedure that registers a selected number of slices with the reference X-ray and, based on the Structural Similarity Index (SSIM) and an index of correlation, finds the best match between the DDR and X-ray images. In this pilot study, we also provide a comparative analysis of the computational costs of the multimodal registration for various numbers of DDR slices to characterize the overall software performance. The proposed model has versatile uses for modeling and surgery planning of the pelvic area in fractures of the iliosacral joint.
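The SSIM-driven slice-selection step can be illustrated with a simplified, single-window SSIM computed over whole images (the standard SSIM uses local sliding windows; this global variant, the constants, and the array names are illustrative only):

```python
import numpy as np

def ssim_global(a, b, c1=1e-4, c2=9e-4):
    """Single-window SSIM over the whole image: compares mean luminance,
    contrast, and structure of a and b in one shot."""
    a, b = a.astype(float), b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2))

def best_slice(drr_stack, xray):
    """Index of the projection slice most similar to the reference X-ray."""
    return int(np.argmax([ssim_global(s, xray) for s in drr_stack]))
```

A perfectly matching slice scores 1.0, so ranking the stack by this score picks the projection closest to the intraoperative X-ray.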

9.
Comput Biol Med ; 155: 106661, 2023 03.
Article in English | MEDLINE | ID: mdl-36827789

ABSTRACT

PURPOSE: Multimodal registration of 2D ultrasound (US) and 3D magnetic resonance (MR) images for fusion navigation can improve the intraoperative detection accuracy of lesions. However, multimodal registration remains a challenge because of poor US image quality. In this study, a weighted self-similarity structure vector (WSSV) is proposed to register multimodal images. METHOD: The self-similarity structure vector utilizes the normalized distance of symmetrically located patches in the neighborhood to describe local structure information. Texture weights are extracted using the local standard deviation to reduce speckle interference in the US images. The multimodal similarity metric is constructed by combining the self-similarity structure vector with a texture weight map. RESULTS: Experiments were performed on US and MR images of the liver from 88 groups of data, comprising 8 patients and 80 simulated samples. The average target registration error was reduced from 14.91 ± 3.86 mm to 4.95 ± 2.23 mm using the WSSV-based method. CONCLUSIONS: The experimental results show that the WSSV-based registration method can robustly align US and MR images of the liver. With further acceleration, the registration framework could be applied in time-sensitive clinical settings, such as US-MR image registration in image-guided surgery.
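The core idea of such a self-similarity descriptor, comparing symmetrically located patches around a point, can be sketched as follows. This reproduces only the general idea; the paper's exact neighborhood layout, normalization, and texture weighting are not reproduced here:

```python
import numpy as np

def self_similarity_vector(img, y, x, patch=3, radius=2):
    """Descriptor at (y, x): squared distances between patch pairs placed
    symmetrically about the point, normalized to [0, 1]. Depends only on
    local structure, not on absolute intensities of a modality."""
    h = patch // 2
    def grab(cy, cx):
        return img[cy - h:cy + h + 1, cx - h:cx + h + 1].astype(float)
    offsets = [(-radius, 0), (0, -radius), (-radius, -radius), (-radius, radius)]
    d = np.array([np.sum((grab(y + dy, x + dx) - grab(y - dy, x - dx)) ** 2)
                  for dy, dx in offsets])
    return d / (d.max() + 1e-12)  # scale-invariant normalization
```

Because the descriptor encodes relationships between patches within one image, descriptors from US and MR can be compared even though their raw intensities are incommensurable.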


Subject(s)
Algorithms, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Ultrasonography/methods, Liver/diagnostic imaging, Three-Dimensional Imaging/methods
10.
J Appl Stat ; 49(7): 1865-1889, 2022.
Article in English | MEDLINE | ID: mdl-35707551

ABSTRACT

We present a new statistical framework for landmark curve-based image registration and surface reconstruction. The proposed method first elastically aligns geometric features (continuous, parameterized curves) to compute local deformations, and then uses a Gaussian random field model to estimate the full deformation vector field as a spatial stochastic process on the entire surface or image domain. The statistical estimation is performed using two different methods: maximum likelihood and Bayesian inference via Markov Chain Monte Carlo sampling. The resulting deformations accurately match corresponding curve regions while also being sufficiently smooth over the entire domain. We present several qualitative and quantitative evaluations of the proposed method on both synthetic and real data. We apply our approach to two different tasks on real data: (1) multimodal medical image registration, and (2) anatomical and pottery surface reconstruction.

11.
Sensors (Basel) ; 22(6)2022 Mar 21.
Article in English | MEDLINE | ID: mdl-35336570

ABSTRACT

Brain shift is an important obstacle to the application of image guidance during neurosurgical interventions. There has been growing interest in intra-operative imaging to update image-guided surgery systems. However, due to the innate limitations of current imaging modalities, accurate brain shift compensation remains a challenging task. In this study, the application of intra-operative photoacoustic imaging and registration of the intra-operative photoacoustic images with pre-operative MR images are proposed to compensate for brain deformation. Finding a satisfactory registration method is challenging due to the unpredictable nature of brain deformation. Here, a co-sparse analysis model is proposed for photoacoustic-MR image registration, which can capture the interdependency of the two modalities. The proposed algorithm works by minimizing the mapping transform via a pair of analysis operators that are learned by the alternating direction method of multipliers. The method was evaluated using an experimental phantom and ex vivo data obtained from a mouse brain. The results on the phantom data show about a 63% improvement in target registration error compared with the commonly used normalized mutual information method. The results suggest that intra-operative photoacoustic images could become a promising tool when brain shift invalidates pre-operative MRI.
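Normalized mutual information, the baseline metric mentioned above, can be estimated from a joint intensity histogram; a minimal sketch (the bin count is an arbitrary choice, and implementations differ in binning and entropy estimators):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.
    Roughly 1 for independent images, up to 2 for identical ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginal distributions
    ent = lambda p: -(p[p > 0] * np.log(p[p > 0])).sum()  # Shannon entropy
    return (ent(px) + ent(py)) / ent(pxy)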


Subject(s)
Brain, Magnetic Resonance Imaging, Algorithms, Animals, Brain/diagnostic imaging, Brain/surgery, Magnetic Resonance Imaging/methods, Mice, Neurosurgical Procedures/methods, Imaging Phantoms
12.
Front Neuroinform ; 15: 691918, 2021.
Article in English | MEDLINE | ID: mdl-34393747

ABSTRACT

The acquisition of high-quality maps of gene expression in the rodent brain is of fundamental importance to the neuroscience community. The generation of such datasets relies on registering individual gene expression images to a reference volume, a task encumbered by the diversity of staining techniques employed and by deformations and artifacts in the soft tissue. Recently, deep learning models have garnered particular interest as a viable alternative to traditional intensity-based algorithms for image registration. In this work, we propose a supervised learning model for general multimodal 2D registration tasks, trained with a perceptual similarity loss on a dataset labeled by a human expert and augmented by synthetic local deformations. We demonstrate the results of our approach on the Allen Mouse Brain Atlas (AMBA), comprising whole-brain Nissl and gene expression stains. We show that our framework and the design of the loss function result in accurate and smooth predictions. Our model is able to generalize to unseen gene expressions and coronal sections, outperforming traditional intensity-based approaches in aligning complex brain structures.

13.
Article in English | MEDLINE | ID: mdl-34366715

ABSTRACT

Multimodal image registration (MIR) is a fundamental procedure in many image-guided therapies. Recently, unsupervised learning-based methods have demonstrated promising accuracy and efficiency in deformable image registration. However, the deformation fields estimated by existing methods rely entirely on the to-be-registered image pair. It is difficult for the networks to be aware of mismatched boundaries, resulting in unsatisfactory organ boundary alignment. In this paper, we propose a novel multimodal registration framework that leverages the deformation fields estimated from both (i) the original to-be-registered image pair and (ii) their corresponding gradient intensity maps, and adaptively fuses them with the proposed gated fusion module. With the help of auxiliary gradient-space guidance, the network can concentrate more on the spatial relationships of organ boundaries. Experimental results on two clinically acquired CT-MRI datasets demonstrate the effectiveness of the proposed approach.
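The auxiliary gradient intensity maps mentioned above can be obtained from simple finite differences; a minimal sketch (the paper's exact gradient operator is not specified here, so central differences are used as a stand-in):

```python
import numpy as np

def gradient_magnitude(img):
    """Gradient-intensity map via central differences: highlights edges
    and organ boundaries regardless of each modality's absolute intensity."""
    gy, gx = np.gradient(img.astype(float))  # per-axis derivatives
    return np.hypot(gx, gy)
```

Because CT and MR intensities are incommensurable but both show strong gradients at organ boundaries, such maps give the network a modality-agnostic boundary signal.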

14.
Article in English | MEDLINE | ID: mdl-34367471

ABSTRACT

The loss function of an unsupervised multimodal image registration framework has two terms, i.e., a metric for similarity measure and regularization. In the deep learning era, researchers proposed many approaches to automatically learn the similarity metric, which has been shown effective in improving registration performance. However, for the regularization term, most existing multimodal registration approaches still use a hand-crafted formula to impose artificial properties on the estimated deformation field. In this work, we propose a unimodal cyclic regularization training pipeline, which learns task-specific prior knowledge from simpler unimodal registration, to constrain the deformation field of multimodal registration. In the experiment of abdominal CT-MR registration, the proposed method yields better results over conventional regularization methods, especially for severely deformed local regions.

15.
Phys Med Biol ; 66(17)2021 08 23.
Article in English | MEDLINE | ID: mdl-34330122

ABSTRACT

A long-standing problem in image-guided radiotherapy is that inferior intraoperative images present a difficult problem for automatic registration algorithms. Particularly for digital radiography (DR) and digitally reconstructed radiographs (DRRs), the blurred, low-contrast, and noisy DR makes multimodal DR-DRR registration challenging. Therefore, we propose a novel CNN-based method called CrossModalNet that exploits the high-quality preoperative modality (DRR) to handle the limitations of intraoperative images (DR), thereby improving registration accuracy. The method consists of two parts: DR-DRR contour prediction and contour-based rigid registration. We designed the CrossModal Attention Module and CrossModal Refine Module to fully exploit multiscale crossmodal features and implement crossmodal interactions during the feature encoding and decoding stages. The predicted anatomical contours of the DR and DRR are then registered by the classic mutual information method. We collected 2486 patient scans to train CrossModalNet and 170 scans to test its performance. The results show that it outperforms classic and state-of-the-art methods with a 95th percentile Hausdorff distance of 5.82 pixels and a registration accuracy of 81.2%. The code is available at https://github.com/lc82111/crossModalNet.
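The 95th percentile Hausdorff distance used as the evaluation metric above can be computed between two contour point sets as follows (a brute-force sketch; the point sets are hypothetical and real implementations typically work on distance transforms for speed):

```python
import numpy as np

def hd95(contour_a, contour_b):
    """95th percentile Hausdorff distance between two point sets:
    more robust to outliers than the classic (max) Hausdorff distance."""
    a, b = np.asarray(contour_a, float), np.asarray(contour_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairs
    return max(np.percentile(d.min(axis=1), 95),   # a -> b distances
               np.percentile(d.min(axis=0), 95))   # b -> a distances
```

Taking the 95th percentile instead of the maximum keeps a few stray contour points from dominating the reported error.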


Subject(s)
Algorithms, Image-Guided Radiotherapy, Humans, Computer-Assisted Image Processing, Multimodal Imaging, Radiographic Image Enhancement
16.
Comput Biol Med ; 134: 104529, 2021 07.
Article in English | MEDLINE | ID: mdl-34126283

ABSTRACT

Optical coherence tomography angiography (OCTA) and fluorescein angiography (FA) are two different vascular imaging modalities widely used in clinical practice to diagnose and grade relevant retinal pathologies. Although each has its advantages and disadvantages, joint analysis of the images produced by both techniques over a specific area of the retina is of increasing interest, given that they provide common and complementary visual information. To facilitate this analysis, however, prior registration of the FA and OCTA image pair is desirable so that their common areas can be superimposed and attention focused on the regions of interest. Normally this task is carried out manually by the expert clinician, but it is tedious and time-consuming. Here, we present a three-stage methodology for robust multimodal registration of FA and superficial plexus OCTA images. The first stage is preprocessing devoted to reducing noise and segmenting the main vessels in both types of images. The second stage uses the vessel information to perform an approximate registration based on template matching. Lastly, the third stage uses an evolutionary algorithm based on differential evolution to refine the previous registration and obtain the optimal one. The method was evaluated on a dataset with 172 pairs of FA and OCTA images, obtaining a success rate of 98.8%. The best mean execution time of the method was under 5 s per image.
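The refinement stage can be illustrated with SciPy's differential evolution optimizing a translation-only transform (the paper's actual transform model and vessel-based similarity metric are not reproduced; mean squared error and a small search range are stand-ins):

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.ndimage import shift as nd_shift

def refine_translation(fixed, moving, max_shift=6):
    """Search for the (dy, dx) translation of `moving` that minimizes
    mean squared error against `fixed`, using differential evolution."""
    def cost(p):
        moved = nd_shift(moving, p, order=1, mode="nearest")
        return float(((fixed - moved) ** 2).mean())
    result = differential_evolution(cost, [(-max_shift, max_shift)] * 2, seed=0)
    return result.x  # (dy, dx)
```

Differential evolution needs no gradients and tolerates the non-convex cost landscapes typical of image similarity metrics, which is why it suits this refinement step; the template-matching stage keeps the search bounds small.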


Subject(s)
Retinal Vessels, Optical Coherence Tomography, Algorithms, Fluorescein Angiography, Retina, Retinal Vessels/diagnostic imaging
17.
Med Image Anal ; 68: 101878, 2021 02.
Article in English | MEDLINE | ID: mdl-33197714

ABSTRACT

Multimodal image registration is a vital initial step in several medical image applications, providing complementary information from different data modalities. Since images of different modalities do not exhibit the same characteristics, finding accurate correspondences between them remains a challenge. For convolutional multimodal registration methods, two components are particularly significant: a descriptive image feature and a suitable similarity metric. However, these two components are often custom-designed and cannot cope with the high diversity of tissue appearance across modalities. In this paper, we cast image registration as a decision-making problem, where registration is achieved by an artificial agent trained with asynchronous reinforcement learning. More specifically, a convolutional long short-term memory is incorporated after stacked convolutional layers to extract spatial-temporal image features and learn the similarity metric implicitly. A customized reward function driven by landmark error guides the agent toward the correct registration direction. A Monte Carlo rollout strategy is also leveraged as a look-ahead inference at the testing stage to further increase registration accuracy. Experiments on paired CT and MR images of patients diagnosed with nasopharyngeal carcinoma demonstrate that our method achieves state-of-the-art performance in medical image registration.


Subject(s)
Computer-Assisted Image Processing, Magnetic Resonance Imaging, Humans
18.
Med Phys ; 46(10): 4575-4587, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31420963

ABSTRACT

PURPOSE: As affordable equipment, electronic portal imaging devices (EPIDs) are widely used in radiation therapy departments to verify patient positioning for accurate radiotherapy. However, these devices tend to produce visually ambiguous, low-contrast planar digital radiographs under megavoltage x-rays (MV-DRs), which poses a tremendous challenge for clinicians performing multimodal registration between the MV-DRs and the kilovoltage digitally reconstructed radiographs (KV-DRRs) derived from the planning computed tomography. Furthermore, the existence of strong appearance variations also puts accurate registration beyond the reach of current automatic algorithms. METHODS: We propose a novel modality conversion approach to this task that first synthesizes KV images from MV-DRs and then registers the synthesized and real KV-DRRs. We focus on the synthesis technique and develop a conditional generative adversarial network with an information bottleneck extension (IB-cGAN) that takes MV-DRs and nonaligned KV-DRRs as inputs and outputs synthesized KV images. IB-cGAN is designed to address two main challenges in deep-learning-based synthesis: (a) training with a roughly aligned dataset suffering from noisy correspondences; (b) making synthesized images carry real clinical meaning that faithfully reflects the MV-DRs rather than the nonaligned KV-DRRs. Accordingly, IB-cGAN employs (a) an adversarial loss to provide training supervision at the semantic level rather than the imprecise pixel level, and (b) an information bottleneck to constrain the information from the nonaligned KV-DRRs. RESULTS: We collected 2698 patient scans to train the model and 208 scans to test its performance. The qualitative results demonstrate that realistic KV images can be synthesized, allowing clinicians to perform visual registration. The quantitative results show that it significantly outperforms current nonmodality conversion methods by 22.37% (P = 0.0401) in terms of registration accuracy.
CONCLUSIONS: The modality conversion approach facilitates downstream MV-KV registration for both clinicians and off-the-shelf registration algorithms. With this approach, it is possible to benefit developing countries where inexpensive EPIDs are widely used for image-guided radiation therapy.


Subject(s)
Computer-Assisted Image Processing/methods, Machine Learning, Radiography
19.
Plant Methods ; 15: 44, 2019.
Article in English | MEDLINE | ID: mdl-31168314

ABSTRACT

With the introduction of high-throughput multisensory imaging platforms, the automation of multimodal image analysis has become a focus of quantitative plant research. Due to a number of natural and technical factors (e.g., inhomogeneous scene illumination, shadows, and reflections), unsupervised identification of relevant plant structures (i.e., image segmentation) is a nontrivial task that often requires extensive human-machine interaction. Registration of multimodal plant images enables automated segmentation of 'difficult' image modalities, such as visible light or near-infrared images, using the segmentation results of image modalities that exhibit higher contrast between plant and background regions (such as fluorescence images). Furthermore, registration of different image modalities is essential for assessing a consistent multiparametric plant phenotype, where, for example, chlorophyll and water content as well as disease- and/or stress-related pigmentation can be studied simultaneously at a local scale. To automatically register thousands of images, efficient algorithmic solutions for the unsupervised alignment of two structurally similar but, in general, nonidentical images are required. For establishing image correspondences, different algorithmic approaches based on different image features have been proposed. The particularity of plant image analysis lies, however, in the large variability of shapes and colors of different plants measured at different developmental stages from different views. While adult plant shoots typically have a unique structure, young shoots may have a nonspecific shape that can often hardly be distinguished from background structures. Consequently, it is not clear a priori which image features and registration techniques are suitable for the alignment of various multimodal plant images.
Furthermore, dynamically measured plants may exhibit nonuniform movements that require the application of nonrigid registration techniques. Here, we investigate three common techniques for registering visible light and fluorescence images that rely on finding correspondences between (i) feature points, (ii) frequency domain features, and (iii) image intensity information. The performance of the registration methods is validated in terms of robustness and accuracy, measured by direct comparison with manually segmented images of different plants. Our experimental results show that all three techniques are sensitive to structural image distortions and require additional preprocessing steps, including structural enhancement and characteristic scale selection. To overcome the limitations of the conventional approaches, we develop an iterative algorithmic scheme that performs both rigid and slightly nonrigid registration of high-throughput plant images in a fully automated manner.
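Of the three technique families, the frequency-domain approach is the most compact to sketch: phase correlation recovers a global translation from the normalized cross-power spectrum (translation only; rotation, scaling, and the nonrigid extensions discussed above are not included):

```python
import numpy as np

def phase_correlation(fixed, moving):
    """Translation (dy, dx) that aligns `moving` onto `fixed`, estimated
    from the peak of the inverse FFT of the normalized cross-power spectrum."""
    cross = np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))
    cross /= np.abs(cross) + 1e-12            # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak locations to signed shifts
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return int(dy), int(dx)
```

Discarding spectral magnitude makes the estimate insensitive to the global intensity differences between modalities, which is precisely why frequency-domain features are attractive for visible-light-to-fluorescence alignment.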

20.
Comput Electr Eng ; 74: 130-137, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30820068

ABSTRACT

We present the concept of image registration using ultrasound (US) and electron paramagnetic resonance (EPR) imaging and discuss the benefits of this solution as well as its limitations. Both phantoms and murine tumors were used to test US and EPR image co-registration. A comparison of dental molding cast immobilization and a predesigned cradle revealed that the latter approach is more effective in stabilizing the fiducial position. In vivo imaging of mouse tumors, image registration, and comparison of fiducial systems for 3D spatial as well as 4D spatial-spectral EPR imaging supported by 3D US were demonstrated. Ultrasound may provide a convenient alternative to other anatomical imaging methods for image registration in preclinical research. Of particular interest is the fusion of US tissue structure, Doppler vascular function, and EPR oxygen or redox imaging.
