Results 1 - 20 of 22
1.
Front Med (Lausanne) ; 11: 1360143, 2024.
Article in English | MEDLINE | ID: mdl-38756944

ABSTRACT

Introduction: Deep learning-based methods can accelerate the diagnosis of pneumonia from computed tomography (CT) images of the chest, but they usually rely on large amounts of labeled data to learn good visual representations. However, medical images are difficult to obtain and must be labeled by professional radiologists. Methods: To address this issue, a novel contrastive learning model with token projection, named CoTP, is proposed for improving the diagnostic quality of few-shot chest CT images. Specifically, (1) we use only unlabeled data to fit CoTP, along with a small number of labeled samples for fine-tuning; (2) we present a new Omicron dataset and modify the data augmentation strategy, i.e., random Poisson noise perturbation, for the CT interpretation task; and (3) token projection is used to further improve the quality of the global visual representations. Results: A ResNet50 pre-trained with CoTP attained accuracy (ACC) of 92.35%, sensitivity (SEN) of 92.96%, precision (PRE) of 91.54%, and area under the receiver-operating characteristic curve (AUC) of 98.90% on the presented Omicron dataset. In contrast, a ResNet50 without pre-training achieved ACC, SEN, PRE, and AUC of 77.61%, 77.90%, 76.69%, and 85.66%, respectively. Conclusion: Extensive experiments show that a model pre-trained with CoTP greatly outperforms one without pre-training. CoTP can improve diagnostic efficacy and reduce the heavy workload of radiologists screening for Omicron pneumonia.
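
The CoTP pipeline itself is not available from this entry; as a rough illustration of the random Poisson noise perturbation used there as a data augmentation, here is a minimal NumPy sketch, assuming images scaled to [0, 1] and a hypothetical `peak` parameter that sets the noise strength.

```python
import numpy as np

def poisson_perturb(image, peak=255.0, rng=None):
    """Perturb an image with Poisson (shot) noise.

    `image` is assumed to be a float array scaled to [0, 1]; `peak`
    controls the noise level (smaller peak -> stronger noise).
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = rng.poisson(image * peak) / peak
    return np.clip(noisy, 0.0, 1.0).astype(image.dtype)

# Example: augment a batch of CT slices before feeding a contrastive model.
batch = np.random.rand(8, 224, 224).astype(np.float32)   # placeholder data
augmented = np.stack([poisson_perturb(x, peak=64.0) for x in batch])
```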

2.
SIAM J Math Data Sci ; 4(4): 1420-1446, 2022.
Article in English | MEDLINE | ID: mdl-37576699

ABSTRACT

Estimating the rank of a corrupted data matrix is an important task in data analysis, most notably for choosing the number of components in PCA. Significant progress on this task was achieved using random matrix theory by characterizing the spectral properties of large noise matrices. However, utilizing such tools is not straightforward when the data matrix consists of count random variables, e.g., Poisson, in which case the noise can be heteroskedastic with an unknown variance in each entry. In this work, we focus on a Poisson random matrix with independent entries and propose a simple procedure, termed biwhitening, for estimating the rank of the underlying signal matrix (i.e., the Poisson parameter matrix) without any prior knowledge. Our approach is based on the key observation that one can scale the rows and columns of the data matrix simultaneously so that the spectrum of the corresponding noise agrees with the standard Marchenko-Pastur (MP) law, justifying the use of the MP upper edge as a threshold for rank selection. Importantly, the required scaling factors can be estimated directly from the observations by solving a matrix scaling problem via the Sinkhorn-Knopp algorithm. Aside from the Poisson, our approach is extended to families of distributions that satisfy a quadratic relation between the mean and the variance, such as the generalized Poisson, binomial, negative binomial, gamma, and many others. This quadratic relation can also account for missing entries in the data. We conduct numerical experiments that corroborate our theoretical findings, and showcase the advantage of our approach for rank estimation in challenging regimes. Furthermore, we demonstrate the favorable performance of our approach on several real datasets of single-cell RNA sequencing (scRNA-seq), High-Throughput Chromosome Conformation Capture (Hi-C), and document topic modeling.
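
A rough sketch of the biwhitening idea from the entry above, assuming a Poisson data matrix (so the matrix itself serves as the entry-wise variance estimate) and a simplified Marchenko-Pastur normalization; the exact scaling and edge conventions in the paper may differ.

```python
import numpy as np

def sinkhorn_scaling(V, n_iter=200):
    """Find positive vectors r, c so that diag(r) @ V @ diag(c) has
    row sums equal to V.shape[1] and column sums equal to V.shape[0]."""
    m, n = V.shape
    r, c = np.ones(m), np.ones(n)
    for _ in range(n_iter):
        r = n / (V @ c)
        c = m / (V.T @ r)
    return r, c

def estimate_rank_biwhitening(Y, n_iter=200):
    """Sketch of biwhitening rank estimation for a Poisson data matrix Y.

    For Poisson entries the variance equals the mean, so Y itself is fed to
    Sinkhorn-Knopp as the variance estimate. The scaled matrix is compared
    against the Marchenko-Pastur upper edge for unit-variance noise."""
    m, n = Y.shape
    r, c = sinkhorn_scaling(Y.astype(float), n_iter)
    Y_tilde = np.sqrt(r)[:, None] * Y * np.sqrt(c)[None, :]
    s = np.linalg.svd(Y_tilde / np.sqrt(n), compute_uv=False)
    mp_edge = 1.0 + np.sqrt(m / n)        # MP upper edge, aspect ratio m/n
    return int(np.sum(s > mp_edge))

# Example: a rank-3 Poisson parameter matrix observed through Poisson sampling.
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, (300, 3)) @ rng.uniform(1, 5, (3, 1000))
Y = rng.poisson(X)
print(estimate_rank_biwhitening(Y))       # expected to print a value near 3
```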

3.
Ultramicroscopy ; 229: 113335, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34243020

ABSTRACT

We present a parameter retrieval method which incorporates prior knowledge about the object into ptychography. The proposed method is applied to two problems: (1) parameter retrieval of small particles from Fourier ptychographic dark-field measurements; (2) parameter retrieval of a rectangular structure with real-space ptychography. The influence of Poisson noise is discussed in the second part of the paper. The Cramér-Rao lower bound is computed for both applications, and Monte Carlo analysis is used to verify the calculated bound. From these computations we report the lower bound for various noise levels and analyze the correlation between particle parameters in application (1); for application (2), the correlation between the parameters of the rectangular structure is discussed.
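
For the Poisson-noise analysis above, the Cramér-Rao lower bound for independent Poisson measurements and its Monte Carlo verification can be illustrated with a toy one-parameter model (the actual ptychographic forward models are far richer); the template `f` and the amplitude parameterization below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: measured intensities are theta * f, observed with Poisson noise.
f = np.linspace(5.0, 50.0, 64)          # known, noise-free template
theta_true = 2.5                        # parameter to retrieve

# Cramér-Rao lower bound for independent Poisson data:
#   I(theta) = sum_i (d lambda_i / d theta)^2 / lambda_i = sum_i f_i / theta
crlb = theta_true / f.sum()

# Monte Carlo verification: the ML estimator for this model is sum(y) / sum(f).
n_trials = 20000
y = rng.poisson(theta_true * f, size=(n_trials, f.size))
theta_hat = y.sum(axis=1) / f.sum()
print(f"CRLB       : {crlb:.3e}")
print(f"MC variance: {theta_hat.var():.3e}")   # should be close to the CRLB
```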

4.
R Soc Open Sci ; 8(4): 201432, 2021 Apr 14.
Article in English | MEDLINE | ID: mdl-33996114

ABSTRACT

We study theoretically the transport properties of electrons in a quantum dot system with spin-orbit coupling. Using the quantum master equation approach, the shot noise and skewness of the transport electrons are calculated. By investigating the full counting statistics of the transport system we obtain super-Poisson noise behaviour, which becomes more pronounced as the spin polarization increases. More importantly, we find that spin-orbit coupling suppresses the shot noise: its value decreases gradually as the spin-orbit coupling strength increases.

5.
Biomed Eng Online ; 20(1): 36, 2021 Apr 07.
Article in English | MEDLINE | ID: mdl-33827586

ABSTRACT

BACKGROUND: Low-dose X-ray images have become increasingly popular in recent decades, owing to the need to keep patient exposure as low as reasonably achievable. Dose reduction causes a substantial increase in quantum noise, which needs to be suitably suppressed. In particular, real-time denoising is required to support common interventional fluoroscopy procedures. Knowledge of the noise statistics provides precious information that helps to improve denoising performance, making noise estimation a crucial task for effective denoising strategies. Noise statistics depend on several factors, but are mainly influenced by the X-ray tube settings, which may vary even within the same procedure. This complicates real-time denoising, because noise estimation should be repeated after any change in tube settings, which would hardly be feasible in practice. This work investigates the feasibility of an a priori characterization of noise for a single fluoroscopic device, which would obviate the need to infer noise statistics prior to each new image acquisition. The noise estimation algorithm used in this study was first tested in silico to assess its accuracy and reliability. Then, real sequences were acquired by imaging two different X-ray phantoms with a commercial fluoroscopic device at various X-ray tube settings. Finally, noise estimation was performed to assess the agreement of noise statistics inferred from two different sequences acquired independently under the same operating conditions. RESULTS: The noise estimation algorithm proved capable of retrieving noise statistics regardless of the particular imaged scene, achieving good results even with only 10 frames (mean percentage error lower than 2%). The tests performed on the real fluoroscopic sequences confirmed that the estimated noise statistics are independent of the particular informational content of the scene from which they were inferred, as they turned out to be consistent across sequences of the two different phantoms acquired independently with the same X-ray tube settings. CONCLUSIONS: These encouraging results suggest that an a priori characterization of noise for a single fluoroscopic device is feasible and could improve the practical implementation of real-time denoising strategies that exploit noise statistics to improve the trade-off between noise reduction and detail preservation.


Subject(s)
Fluoroscopy, Signal-To-Noise Ratio, Algorithms, Phantoms, Imaging, Reproducibility of Results
6.
J Imaging ; 8(1), 2021 Dec 23.
Article in English | MEDLINE | ID: mdl-35049842

ABSTRACT

The effectiveness of variational methods for restoring images corrupted by Poisson noise strongly depends on a suitable selection of the regularization parameter balancing the effect of the regularization term(s) against the generalized Kullback-Leibler divergence data term. One of the approaches still commonly used today for choosing the parameter is the discrepancy principle proposed by Zanella et al. in a seminal work. It relies on imposing a value of the data term approximately equal to its expected value and works well for mid- and high-count Poisson noise corruption. However, the series-truncation approximation used in the theoretical derivation of the expected value leads to poor performance for low-count Poisson noise. In this paper, we highlight the theoretical limits of the approach and then propose a nearly exact version of it based on Monte Carlo simulation and weighted least-squares fitting. Several numerical experiments are presented, showing that in the low-count Poisson regime the proposed modified, nearly exact discrepancy principle performs far better than the original, approximated one by Zanella et al., whereas it works similarly or slightly better in the mid- and high-count regimes.
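
A minimal sketch of the Monte Carlo side of the idea above: estimating the expectation of the generalized Kullback-Leibler data term for Poisson data instead of using the n/2 approximation. The weighted least-squares fitting step of the paper is omitted, and the pixel intensities below are placeholders.

```python
import numpy as np

def gen_kl(y, lam):
    """Generalized Kullback-Leibler data term sum_i [y_i log(y_i/lam_i) - y_i + lam_i],
    with the convention 0 * log(0) = 0."""
    y = np.asarray(y, dtype=float)
    ratio = np.divide(y, lam, out=np.ones_like(y), where=y > 0)
    return float(np.sum(y * np.log(ratio) - y + lam))

def expected_kl_mc(lam, n_samples=2000, rng=None):
    """Monte Carlo estimate of E[D_KL(y, lam)] for y ~ Poisson(lam).

    The classical discrepancy principle approximates this expectation by n/2
    (half the number of pixels); at low counts the true value deviates noticeably."""
    rng = np.random.default_rng() if rng is None else rng
    vals = [gen_kl(rng.poisson(lam), lam) for _ in range(n_samples)]
    return float(np.mean(vals))

# Example: a small "image" with very low counts (mean 0.3 photons per pixel).
lam = np.full(64 * 64, 0.3)
print("n/2 approximation :", lam.size / 2)
print("Monte Carlo value :", expected_kl_mc(lam))
```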

7.
J Imaging ; 7(6), 2021 Jun 16.
Article in English | MEDLINE | ID: mdl-39080887

ABSTRACT

We are interested in the restoration of noisy and blurry images where the texture mainly follows a single direction (i.e., directional images). Problems of this type arise, for example, in microscopy or computed tomography for carbon or glass fibres. In order to deal with these problems, the Directional Total Generalized Variation (DTGV) was developed by Kongskov et al. in 2017 and 2019, in the case of impulse and Gaussian noise. In this article we focus on images corrupted by Poisson noise, extending the DTGV regularization to image restoration models where the data fitting term is the generalized Kullback-Leibler divergence. We also propose a technique for the identification of the main texture direction, which improves upon the techniques used in the aforementioned work about DTGV. We solve the problem by an ADMM algorithm with proven convergence and subproblems that can be solved exactly at a low computational cost. Numerical results on both phantom and real images demonstrate the effectiveness of our approach.

8.
J Xray Sci Technol ; 28(3): 481-505, 2020.
Article in English | MEDLINE | ID: mdl-32390647

ABSTRACT

In this paper, we present a review of the research literature on applying X-ray imaging to baggage screening at airports. It discusses the multiple X-ray imaging inspection systems used in airports for detecting dangerous objects inside baggage, and also explains dual-energy X-ray image fusion and image enhancement factors. Different types of noise in digital images, and the corresponding noise models, are explained at length. Diagrammatic representations of different noise models are presented and illustrated to clearly show the effect of Poisson and impulse noise on intensity values. Overall, this review discusses in detail Poisson and impulse noise, their causes, and their effect on X-ray images, which create uncertainty for the X-ray inspection imaging system when discriminating objects, and for the screeners as well. The review then focuses on the image processing techniques used in different research studies for X-ray image enhancement and denoising, and on their limitations. Furthermore, the most relevant approaches for noise reduction and their drawbacks are presented, and methods that may help overcome these drawbacks are discussed in subsequent sections of the paper. In summary, this review highlights the key theories and technical methods used for X-ray image enhancement and denoising of X-ray images generated by airport baggage inspection systems.


Subject(s)
Absorptiometry, Photon/methods, Airports, Image Processing, Computer-Assisted/methods, Security Measures, Algorithms, Humans, Signal Processing, Computer-Assisted
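
To make the distinction discussed in this review concrete, here is a small sketch contrasting signal-dependent Poisson noise with impulse (salt-and-pepper) noise on a synthetic image; the `peak` and `prob` parameters are illustrative assumptions, not values from any cited study.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_poisson_noise(img, peak=30.0):
    """Signal-dependent Poisson noise: the variance grows with pixel intensity."""
    return np.clip(rng.poisson(img * peak) / peak, 0.0, 1.0)

def add_impulse_noise(img, prob=0.05):
    """Salt-and-pepper impulse noise: a fraction `prob` of pixels is replaced
    by 0 or 1, independently of the underlying intensity."""
    out = img.copy()
    mask = rng.random(img.shape) < prob
    out[mask] = rng.integers(0, 2, mask.sum()).astype(img.dtype)
    return out

# Toy "X-ray" image in [0, 1]: a bright disc on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2).astype(np.float64) * 0.8 + 0.1

noisy_poisson = add_poisson_noise(img)
noisy_impulse = add_impulse_noise(img)
```
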
9.
J Synchrotron Radiat ; 26(Pt 3): 762-773, 2019 May 01.
Article in English | MEDLINE | ID: mdl-31074441

ABSTRACT

An unbiased approach to correct X-ray response non-uniformity in microstrip detectors has been developed, based on the statistical expectation that the scattering intensity at a fixed angle from an object is constant to within the Poisson noise. Raw scattering data of SiO2 glass measured by a microstrip detector module were found to show an accuracy of 12σ_PN at an intensity of 10⁶ photons, where σ_PN is the standard deviation according to the Poisson noise. The conventional flat-field calibration failed to correct the data, whereas the alternative approach used in this article successfully improved the accuracy from 12σ_PN to 2σ_PN. This approach was applied to total-scattering data measured by a gapless 15-modular detector system. The quality of the data is evaluated in terms of the Bragg reflections of Si powder, the diffuse scattering of SiO2 glass, and the atomic pair distribution functions of TiO2 nanoparticles and Ni powder.

10.
Igaku Butsuri ; 38(4): 143-158, 2019.
Article in Japanese | MEDLINE | ID: mdl-30828046

ABSTRACT

[Purpose] The iterative CT image reconstruction (IR) method has been successfully incorporated into commercial CT scanners as a means of promoting low-dose CT with high image quality. However, the algorithms of IR methods have not been made publicly available by scanner manufacturers. Kudo reviewed the fundamentals of IR methods on the basis of articles published by the joint research groups of each manufacturer, released before and during product development (Med Imag Tech 32: 239-248, 2014). According to this review, the objective function of an IR method consists of a data fidelity term (likelihood) and a regularization term. The regularization term plays a significant role in the IR method; however, it has not been clarified whether the variance of the projection data must be included in the likelihood for the regularization term to act effectively. Our purpose in this study was to investigate, by numerical experiments, the relationship between the incident photon number and the linear attenuation coefficients reconstructed by the IR method. [Methods] We assumed that the X-ray beam was a pencil beam and that the system matrix was given by the line integral of the linear attenuation coefficients, because we focused on the accuracy of the reconstructed linear attenuation coefficients in the ideal photon detection system equations given by Kudo. Total variation (TV) and the relative difference function were used for regularization of the IR method. Three numerical phantoms with 256×256 pixels were used as test images. Poisson noise was added to the projection data, with 256 linear samples and 256 views over 180°. The accuracy of the reconstructed linear attenuation coefficients was evaluated by the mean reconstructed value within a region of interest (ROI) and the relative root mean square error (%RMSE) with respect to the object image. [Results] The linear attenuation coefficients were reconstructed more accurately by the IR method that included the variance of the projection data in the likelihood than by the IR method without it. When the incident photon number ranged from 100 to 2000, for an object with mean linear attenuation coefficients of 0.067 to 0.087 cm⁻¹, the reconstructed linear attenuation coefficients within the ROI were close to the true values. However, when the incident photon number was 50, both the accuracy and the uniformity of the reconstructed images decreased. [Discussion] From the viewpoint of visual observation, the image quality of the IR method was superior to that of the filtered back-projection (FBP) image processed with a Gaussian filter of FWHM equal to 3 pixels. For an object containing a strong absorber, FBP gives linear attenuation coefficients lower than the true values; this phenomenon was also observed for the IR method. The projection data in CT are given by the logarithm of the ratio between the incident and transmitted photon numbers. If the transmitted photon number happens to equal 0 owing to noise, it is set to 1 to avoid the logarithm of zero; this process causes an error in the linear attenuation coefficients. [Conclusion] The variance of the projection data should be included in the likelihood for the regularization term to act effectively in the IR method.


Asunto(s)
Procesamiento de Imagen Asistido por Computador , Fotones , Tomografía Computarizada por Rayos X , Algoritmos , Fantasmas de Imagen , Interpretación de Imagen Radiográfica Asistida por Computador
11.
J Med Imaging (Bellingham) ; 6(3): 031410, 2019 Jul.
Article in English | MEDLINE | ID: mdl-35834318

ABSTRACT

Digital breast tomosynthesis (DBT) is an imaging technique created to visualize 3-D mammary structures for the purpose of diagnosing breast cancer. The technique is based on the principle of computed tomography. Because it uses ionizing radiation, the "as low as reasonably achievable" (ALARA) principle should be respected, aiming to minimize the radiation dose while still obtaining an adequate examination. A noise filtering method is therefore a fundamental step toward achieving the ALARA principle, as the noise level of the image increases as the radiation dose is reduced, making the image harder to analyze. In our work, a double denoising approach for DBT is proposed, filtering in both the projection (pre-reconstruction) and image (post-reconstruction) domains. First, in the prefiltering step, methods were used for filtering the Poisson noise. The DBT projections were then reconstructed with the filtered backprojection algorithm. Finally, in the postfiltering step, methods were used for filtering Gaussian noise. Experiments were performed on simulated data generated by the open virtual clinical trials (OpenVCT) software and on a physical phantom, using several combinations of methods in each domain. Our results showed that double filtering (i.e., in both domains) is not superior to filtering in the projection domain only. In investigating the possible reason for these results, we found that the noise in the DBT image domain could be better modeled by a Burr distribution than by a Gaussian distribution. This contribution can open a research direction in the DBT denoising problem.

12.
Sensors (Basel) ; 18(7)2018 Jul 13.
Article in English | MEDLINE | ID: mdl-30011884

ABSTRACT

Parameter estimation of the Poisson-Gaussian signal-dependent random noise in complementary metal-oxide semiconductor (CMOS) and charge-coupled device (CCD) image sensors is a significant step in eliminating noise. Existing estimation algorithms, which are based on finding homogeneous regions, acquire pairs of noise variances and intensities for every homogeneous region, fit a linear or piecewise-linear curve, and determine the noise parameters accordingly. In contrast to existing algorithms, in this study the Poisson noise samples of all homogeneous regions in every block image are pieced together to constitute a larger sample following a mixed Poisson noise distribution, and the mean and variance of this mixed sample are derived. Next, a mapping function is constructed between the noise parameters to be estimated (the variance of the Poisson-Gaussian noise and that of the Gaussian noise) and the quantities corresponding to the stitched region in every block image. Finally, unbiased estimates of the noise parameters are calculated from the mapping functions of all the image blocks. The experimental results confirm that the proposed method obtains lower mean absolute errors of the estimated noise parameters than conventional methods.
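
The stitched-sample construction of the paper is not reproduced here, but the underlying Poisson-Gaussian mean-variance relation it exploits can be sketched with a simple linear fit over simulated homogeneous regions; the gain and variance values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Poisson-Gaussian model often used for CMOS/CCD sensors:
#   y = a * Poisson(x / a) + N(0, sigma^2),   so   Var(y) = a * E[y] + sigma^2
a_true, sigma2_true = 0.4, 2.0

# Simulated homogeneous regions at different intensity levels.
levels = np.linspace(10, 200, 25)
means, variances = [], []
for x in levels:
    region = a_true * rng.poisson(x / a_true, size=5000) \
             + rng.normal(0.0, np.sqrt(sigma2_true), 5000)
    means.append(region.mean())
    variances.append(region.var())

# Fit the linear mean-variance relation; the slope estimates the Poisson gain `a`
# and the intercept estimates the Gaussian variance.
slope, intercept = np.polyfit(means, variances, 1)
print(f"a      : true {a_true:.3f}, estimated {slope:.3f}")
print(f"sigma^2: true {sigma2_true:.3f}, estimated {intercept:.3f}")
```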

13.
Comput Methods Programs Biomed ; 146: 59-68, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28688490

ABSTRACT

BACKGROUND AND OBJECTIVE: For cancer detection from microscopic biopsy images, the image segmentation step used to segment cells and nuclei plays an important role, and the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also carry intrinsic Poisson noise, and when it is present the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach that also handles the noise present in the image during the segmentation process itself, i.e., noise removal and segmentation are combined in one step. METHODS: To address these issues, this paper proposes a fourth-order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise, combined with fuzzy c-means segmentation. The approach effectively handles the segmentation problem of blocky artifacts while achieving a good trade-off between Poisson noise removal and edge preservation in the microscopic biopsy images during the segmentation process for cancer detection from cells. RESULTS: The proposed approach is tested on a breast cancer microscopic biopsy data set with region-of-interest (ROI) segmented ground-truth images. The data set contains 31 benign and 27 malignant images of size 896 × 768, with ROI ground truth available for all 58 images. Finally, the results obtained with the proposed approach are compared with those of popular segmentation algorithms: fuzzy c-means, color k-means, texture-based segmentation, and total-variation fuzzy c-means. CONCLUSIONS: The experimental results show that the proposed approach provides better results than the other segmentation approaches used for cancer detection in terms of various performance measures, such as the Jaccard coefficient, Dice index, Tanimoto coefficient, area under the curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, Rand index, global consistency error, and variation of information.


Asunto(s)
Biopsia , Lógica Difusa , Procesamiento de Imagen Asistido por Computador , Neoplasias/diagnóstico por imagen , Algoritmos , Humanos
14.
J Biophotonics ; 10(9): 1124-1133, 2017 Sep.
Article in English | MEDLINE | ID: mdl-27943625

ABSTRACT

Fluorescence Lifetime Imaging (FLIM) is an attractive microscopy method in the life sciences, yielding information on the sample otherwise unavailable through intensity-based techniques. A novel Noise-Corrected Principal Component Analysis (NC-PCA) method for time-domain FLIM data is presented here. The presence and distribution of distinct microenvironments are identified at lower photon counts than previously reported, without requiring prior knowledge of their number or of the dye's decay kinetics. A noise correction based on the Poisson statistics inherent to Time-Correlated Single Photon Counting is incorporated. The approach is validated using simulated data, and further applied to experimental FLIM data of HeLa cells stained with membrane dye di-4-ANEPPDHQ. Two distinct lipid phases were resolved in the cell membranes, and the modification of the order parameters of the plasma membrane during cholesterol depletion was also detected. Noise-corrected Principal Component Analysis of FLIM data resolves distinct microenvironments in cell membranes of live HeLa cells.


Asunto(s)
Membrana Celular , Aumento de la Imagen/métodos , Microscopía Fluorescente , Imagen Óptica , Células HeLa , Humanos , Fotones , Análisis de Componente Principal
15.
IEEE Trans Nucl Sci ; 63(3): 1435-1439, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27840452

ABSTRACT

Noise-weighted FBP (filtered backprojection) and Bayesian FBP algorithms were developed recently for the unattenuated Radon transform, with applications in X-ray CT (computed tomography). This paper extends the noise-weighted FBP algorithm to the uniformly attenuated Radon transform; the extended algorithm can be applied in uniformly attenuated SPECT (single photon emission computed tomography). Computer simulations and experimental data demonstrate that the proposed FBP algorithm has noise control capability similar to that of the iterative ML-EM (maximum likelihood expectation maximization) algorithm. In practice, the attenuator is rarely uniform; a stable FBP algorithm for non-uniform attenuators must be developed before the FBP algorithm can be applied clinically when attenuation correction is required.

16.
Springerplus ; 5(1): 1272, 2016.
Article in English | MEDLINE | ID: mdl-27540505

ABSTRACT

Restoring images corrupted by Poisson noise has drawn a lot of attention in recent years. Many regularization methods exist for this problem, one of the most famous being the total variation model. In this paper, by adding a quadratic regularization to the TGV regularization part, a new image restoration model is proposed based on second-order total generalized variation regularization, and the split Bregman iteration algorithm is used to solve it. The experimental results show that the proposed model and algorithm handle the Poisson image restoration problem well, and that restoration performance improves significantly in terms of both visual quality and objective evaluation indexes.

17.
Article in English | MEDLINE | ID: mdl-28935996

ABSTRACT

The ML-EM (maximum likelihood expectation maximization) algorithm is the most popular image reconstruction method when the measurement noise is Poisson distributed. This short paper considers whether, for a given noisy projection data set, the ML-EM algorithm is able to provide an approximate solution that is close to the true solution. It is well known that the ML-EM algorithm converges towards the true solution at early iterations and then diverges away from it at later iterations, so a potentially good approximate solution can only be obtained by early termination. This paper argues that the ML-EM algorithm is not optimal in providing such an approximate solution; to show this, it suffices to provide a different algorithm that performs better. An alternative algorithm is suggested here and shown to outperform the ML-EM algorithm.
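
For reference, here is a textbook ML-EM iteration for a Poisson measurement model (this is the standard algorithm the entry critiques, not the alternative it proposes); the random system matrix below is only a toy example.

```python
import numpy as np

def ml_em(A, y, n_iter=50, x0=None, eps=1e-12):
    """Textbook ML-EM iteration for a Poisson model y ~ Poisson(A x):

        x_{k+1} = x_k / (A^T 1) * A^T ( y / (A x_k) )

    Early termination acts as implicit regularization; later iterations
    typically amplify noise, the behaviour discussed in the entry above."""
    m, n = A.shape
    x = np.ones(n) if x0 is None else x0.copy()
    sens = A.T @ np.ones(m)                 # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)
        x = x / np.maximum(sens, eps) * (A.T @ ratio)
    return x

# Tiny demonstration with a random system matrix.
rng = np.random.default_rng(3)
A = rng.uniform(0, 1, (200, 50))
x_true = rng.uniform(0, 10, 50)
y = rng.poisson(A @ x_true)
x_rec = ml_em(A, y, n_iter=30)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```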

18.
Indian J Nucl Med ; 29(4): 235-40, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25400362

ABSTRACT

PURPOSE: Acquiring higher counts improves the visual perception of positron emission tomography-computed tomography (PET-CT) images. Larger radiopharmaceutical doses (implying more radiation dose) are administered to acquire these counts in a short time period; however, diagnostic information does not increase beyond a certain threshold of counts. This study was conducted to develop a post-processing method based on the principle of "stochastic resonance" to improve the visual perception of PET-CT images having the required threshold counts. MATERIALS AND METHODS: PET-CT images (JPEG file format) with low, medium, and high counts were included in this study. Each image was corrupted by adding Poisson noise, whose amplitude was adjusted by dividing each pixel by a constant of 1, 2, 4, 8, 16, or 32. The noise amplitude giving the best image quality was selected on the basis of a high entropy of the output image and high values of the structural similarity index and feature similarity index. Visual perception of the images was evaluated by two nuclear medicine physicians. RESULTS: The variation in structural and feature similarity of the images was not appreciable visually, but statistically the images deteriorated as the noise amplitude increased, while still maintaining structural (above 70%) and feature (above 80%) similarity to the input images in all cases. We obtained the best image quality at a noise amplitude of "4", at which 88% structural and 95% feature similarity of the input images was retained. CONCLUSION: This stochastic resonance method can be used to improve the visual perception of PET-CT images and can indirectly lead to a reduction of the radiation dose.
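
One plausible reading of the noise-injection protocol described above, sketched with NumPy and scikit-image; the exact amplitude scheme and the image data are not available from the abstract, so the divisor interpretation and the placeholder image are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim   # pip install scikit-image

rng = np.random.default_rng(0)

def shannon_entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram (bits)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def add_scaled_poisson(img, divisor):
    """Assumed protocol: pixel values are divided by a constant before Poisson
    sampling and rescaled back, so larger divisors give stronger relative noise."""
    return np.clip(rng.poisson(img / divisor) * divisor, 0, 255)

img = rng.uniform(0, 255, (128, 128))            # placeholder for a PET-CT slice
for divisor in (1, 2, 4, 8, 16, 32):
    noisy = add_scaled_poisson(img, divisor)
    s = ssim(img, noisy, data_range=255)
    print(f"divisor {divisor:2d}  SSIM {s:.3f}  entropy {shannon_entropy(noisy):.2f}")
```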

19.
J Neurosci ; 34(10): 3632-45, 2014 Mar 05.
Article in English | MEDLINE | ID: mdl-24599462

ABSTRACT

Errors in short-term memory increase with the quantity of information stored, limiting the complexity of cognition and behavior. In visual memory, attempts to account for errors in terms of allocation of a limited pool of working memory resources have met with some success, but the biological basis for this cognitive architecture is unclear. An alternative perspective attributes recall errors to noise in tuned populations of neurons that encode stimulus features in spiking activity. I show that errors associated with decreasing signal strength in probabilistically spiking neurons reproduce the pattern of failures in human recall under increasing memory load. In particular, deviations from the normal distribution that are characteristic of working memory errors and have been attributed previously to guesses or variability in precision are shown to arise as a natural consequence of decoding populations of tuned neurons. Observers possess fine control over memory representations and prioritize accurate storage of behaviorally relevant information, at a cost to lower priority stimuli. I show that changing the input drive to neurons encoding a prioritized stimulus biases population activity in a manner that reproduces this empirical tradeoff in memory precision. In a task in which predictive cues indicate stimuli most probable for test, human observers use the cues in an optimal manner to maximize performance, within the constraints imposed by neural noise.


Subject(s)
Action Potentials/physiology, Memory, Short-Term/physiology, Neurons/physiology, Orientation/physiology, Adolescent, Adult, Female, Humans, Male, Photic Stimulation/methods, Young Adult
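
A small sketch of the population-coding account summarized in the entry above: maximum-likelihood decoding of a circular feature from Poisson-spiking neurons with von Mises tuning, showing how lowering the input gain broadens recall errors. Tuning widths, gains, and neuron counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

n_neurons = 64
pref = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)   # preferred feature values
grid = np.linspace(-np.pi, np.pi, 361)                          # candidate stimuli
kappa = 2.0                                                     # tuning width

def tuning(stimuli, gain):
    """Von Mises tuning curves: expected spike counts, shape (n_neurons, n_stimuli)."""
    return gain * np.exp(kappa * (np.cos(np.atleast_1d(stimuli)[None, :] - pref[:, None]) - 1.0))

def circ_err(a, b):
    return np.angle(np.exp(1j * (a - b)))

# Lower gain (weaker signal per item, e.g. higher memory load) broadens the error
# distribution and produces the heavy, non-normal tails discussed above.
for gain in (10.0, 2.0, 0.5):
    rate_map = tuning(grid, gain)                       # (n_neurons, n_grid)
    errs = []
    for _ in range(2000):
        stim = rng.uniform(-np.pi, np.pi)
        spikes = rng.poisson(tuning(stim, gain)[:, 0])  # one Poisson draw per neuron
        loglik = spikes @ np.log(rate_map + 1e-12) - rate_map.sum(axis=0)
        errs.append(circ_err(grid[np.argmax(loglik)], stim))
    print(f"gain {gain:4.1f}  circular error SD {np.std(errs):.3f} rad")
```
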
20.
Philos Trans R Soc Lond B Biol Sci ; 369(1635): 20130290, 2014 Feb 05.
Article in English | MEDLINE | ID: mdl-24366144

ABSTRACT

We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple 'modules' of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues.


Subject(s)
Action Potentials/physiology, Entorhinal Cortex/physiology, Models, Neurological, Nerve Net/physiology, Neurons/physiology, Space Perception/physiology, Animals, Computer Simulation, Entorhinal Cortex/cytology, Nerve Net/cytology, Rats
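
A toy one-dimensional version of the decoding setup in the entry above: Poisson-spiking grid cells organized into modules of different spatial scales, decoded by maximum likelihood over candidate positions. Module scales, cell counts, and peak rates are illustrative assumptions; occasional large errors correspond to the ambiguity failures discussed in the entry.

```python
import numpy as np

rng = np.random.default_rng(11)

env_size = 500.0                                 # environment length (cm)
scales = 30.0 * 1.6 ** np.arange(5)              # module spatial scales (cm)
cells_per_module = 20
grid = np.linspace(0.0, env_size, 2000)          # candidate positions

def module_rates(scale, positions, peak=5.0):
    """Periodic (cosine-bump) tuning curves for one module: (n_cells, n_positions)."""
    phases = np.linspace(0.0, scale, cells_per_module, endpoint=False)
    d = (positions[None, :] - phases[:, None]) / scale * 2.0 * np.pi
    return peak * np.exp(2.0 * (np.cos(d) - 1.0))

rate_maps = np.vstack([module_rates(s, grid) for s in scales])   # (n_cells_total, n_pos)

def decode(spikes):
    """Maximum-likelihood decoding over candidate positions for Poisson spiking."""
    loglik = spikes @ np.log(rate_maps + 1e-12) - rate_maps.sum(axis=0)
    return grid[np.argmax(loglik)]

errors = []
for _ in range(300):
    true_pos = rng.uniform(0.0, env_size)
    rates_here = np.vstack([module_rates(s, np.array([true_pos])) for s in scales])[:, 0]
    spikes = rng.poisson(rates_here)
    errors.append(abs(decode(spikes) - true_pos))
print(f"median error {np.median(errors):.1f} cm, max error {np.max(errors):.1f} cm")
```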