Results 1 - 20 of 34
1.
Med Image Anal ; 99: 103343, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39265362

ABSTRACT

In computed tomography (CT) imaging, optimizing the balance between radiation dose and image quality is crucial due to the potentially harmful effects of radiation on patients. Although subjective assessments by radiologists are considered the gold standard in medical imaging, these evaluations can be time-consuming and costly. Thus, objective methods, such as the peak signal-to-noise ratio and the structural similarity index measure, are often employed as alternatives. However, these metrics, initially developed for natural images, may not fully capture the radiologists' assessment process. Consequently, interest is growing in developing deep learning-based image quality assessment (IQA) methods that align more closely with radiologists' perceptions. A significant barrier to this development has been the absence of open-source datasets and benchmark models specific to CT IQA. Addressing these challenges, we organized the Low-dose Computed Tomography Perceptual Image Quality Assessment Challenge in conjunction with the Medical Image Computing and Computer Assisted Intervention 2023 conference. This event introduced the first open-source CT IQA dataset, consisting of 1,000 CT images of varying quality, annotated with radiologists' assessment scores. As a benchmark, the challenge offers a comprehensive analysis of six submitted methods, providing valuable insight into their performance. This paper presents a summary of these methods and insights. The challenge underscores the potential for no-reference IQA methods to exceed the capabilities of full-reference IQA methods, and its novel dataset represents a significant contribution to the research community. The dataset is accessible at https://zenodo.org/records/7833096.
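For context on the full-reference metrics this abstract names as objective alternatives, the following is a minimal numpy sketch of PSNR (illustrative only; it is not the challenge's evaluation code, and the `data_range` default of 255 assumes 8-bit images):

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    ref = np.asarray(reference, dtype=np.float64)
    tst = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - tst) ** 2)  # mean squared error over all pixels
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Note that PSNR requires the pristine reference image, which is exactly what is unavailable in clinical CT; this limitation motivates the no-reference methods the challenge targets.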

2.
Bioengineering (Basel) ; 11(6)2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38927799

ABSTRACT

Cinematic rendering (CR) is a new 3D post-processing technology widely used to produce bone computed tomography (CT) images. This study aimed to evaluate the quality of CR for bone CT images using blind quality and noise-level evaluations. Bone CT images of the face, shoulder, lumbar spine, and wrist were acquired. Volume rendering (VR), which is widely used in diagnostic medical imaging, was evaluated alongside CR. A no-reference blind/referenceless image spatial quality evaluator (BRISQUE) and the coefficient of variation (COV) were used to evaluate the overall quality of the acquired images. The average BRISQUE values derived from the four areas were 39.87 and 46.44 for CR and VR, respectively. The ratio between the two values was approximately 1.16, and the gap widened in the bone CT images where metal artifacts were observed. In addition, we confirmed that the COV value improved by 2.20 times on average when using CR compared to VR. This study demonstrated that CR is useful for reconstructing 3D bone CT images and suggests that various applications in the diagnostic medical field are possible.
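The coefficient of variation used here is a simple regional noise measure. A minimal numpy sketch (illustrative; the study's exact ROI selection is not specified in the abstract):

```python
import numpy as np

def coefficient_of_variation(roi):
    """COV of a region of interest: std / mean of pixel intensities.
    A lower COV indicates a more homogeneous, less noisy region."""
    roi = np.asarray(roi, dtype=np.float64)
    return float(roi.std() / roi.mean())
```

A "2.20 times improvement" in this convention means the CR image's COV was lower than the VR image's by that factor on average.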

3.
Comput Biol Med ; 177: 108670, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38838558

ABSTRACT

No-reference image quality assessment (IQA) is a critical step in medical image analysis, with the objective of predicting perceptual image quality without the need for a pristine reference image. The application of no-reference IQA to CT scans is valuable in providing an automated and objective approach to assessing scan quality, optimizing radiation dose, and improving overall healthcare efficiency. In this paper, we introduce DistilIQA, a novel distilled Vision Transformer network designed for no-reference CT image quality assessment. DistilIQA integrates convolutional operations and multi-head self-attention mechanisms by incorporating a powerful convolutional stem at the beginning of the traditional ViT network. Additionally, we present a two-step distillation methodology aimed at improving network performance and efficiency. In the initial step, a "teacher ensemble network" is constructed by training five Vision Transformer networks using a five-fold division schema. In the second step, a "student network", comprising a single Vision Transformer, is trained using the original labeled dataset and the predictions generated by the teacher network as new labels. DistilIQA is evaluated in the task of quality score prediction from low-dose chest CT scans obtained from the LDCT and Projection data of the Cancer Imaging Archive, along with low-dose abdominal CT images from the LDCTIQAC2023 Grand Challenge. Our results demonstrate DistilIQA's remarkable performance in both benchmarks, surpassing the capabilities of various CNNs and Transformer architectures. Moreover, our comprehensive experimental analysis demonstrates the effectiveness of incorporating convolutional operations within the ViT architecture and highlights the advantages of our distillation methodology.
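The data flow of the two-step distillation can be sketched without the networks themselves. In this hedged numpy sketch, the fold splitting and the teacher-to-student label averaging are shown; the ViT training loops are omitted, and the function names are this sketch's own, not the paper's:

```python
import numpy as np

def five_fold_indices(n_samples, n_folds=5, seed=0):
    """Disjoint index folds; the paper trains one teacher ViT per fold."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), n_folds)

def distill_labels(teacher_predictions):
    """Second step: average the teacher ensemble's predicted quality
    scores to obtain soft labels for the single student network."""
    return np.mean(np.stack(teacher_predictions, axis=0), axis=0)
```

The student then trains on both the original radiologist labels and these averaged teacher predictions, which is what compresses the five-network ensemble into one efficient model.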


Subject(s)
Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Neural Networks, Computer
4.
J Imaging ; 10(5)2024 May 09.
Article in English | MEDLINE | ID: mdl-38786569

ABSTRACT

Image quality assessment of magnetic resonance imaging (MRI) data is an important factor not only for conventional diagnosis and protocol optimization but also for the fairness, trustworthiness, and robustness of artificial intelligence (AI) applications, especially on large heterogeneous datasets. In multi-centric studies, information on image quality complements the quantity information in each data node's contribution profile, especially when large variability is expected and certain acceptance criteria apply. The main goal of this work was to present a tool that enables users to assess image quality based on subjective criteria as well as objective image quality metrics, supporting evidence-based decisions on image quality. The evaluation can be performed on both conventional and dynamic MRI acquisition protocols; the latter is also checked longitudinally across dynamic series. The assessment provides an overall image quality score, information on the types of artifacts and degrading factors, and a number of objective metrics for automated evaluation across series (BRISQUE score, Total Variation, PSNR, SSIM, FSIM, MS-SSIM). Moreover, the user can define specific regions of interest (ROIs) to calculate the regional signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), thus tailoring the quality output to specific use cases, such as tissue-specific contrast or regional noise quantification.
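The regional SNR and CNR mentioned at the end are standard ROI statistics. A minimal numpy sketch under common conventions (mean over the noise ROI's standard deviation; the tool's exact definitions may differ):

```python
import numpy as np

def roi_snr(signal_roi, noise_roi):
    """Regional SNR: mean signal intensity divided by the standard
    deviation of a background (noise) ROI."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

def roi_cnr(tissue_a, tissue_b, noise_roi):
    """Regional CNR: absolute difference of two tissue means divided by
    the standard deviation of the noise ROI."""
    return float(abs(np.mean(tissue_a) - np.mean(tissue_b)) / np.std(noise_roi))
```

Letting the user pick the ROIs, as the tool does, is what makes these measures tissue-specific rather than global.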

5.
J Microsc ; 293(2): 98-117, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38112173

ABSTRACT

Focused ion beam scanning electron microscopy (FIB-SEM) tomography is a serial sectioning technique where an FIB mills off slices from the material sample that is being analysed. After every slicing, an SEM image is taken showing the newly exposed layer of the sample. By combining all slices in a stack, a 3D image of the material is generated. However, specific artefacts caused by the imaging technique distort the images, hampering the morphological analysis of the structure. Typical quality problems in microscopy imaging are noise and lack of contrast or focus. Moreover, specific artefacts are caused by the FIB milling, namely, curtaining and charging artefacts. We propose quality indices for the evaluation of the quality of FIB-SEM data sets. The indices are validated on real and experimental data of different structures and materials.

6.
Sensors (Basel) ; 23(13)2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37448078

ABSTRACT

Recently, stereoscopic image quality assessment has attracted considerable attention. However, compared with 2D image quality assessment, it is much more difficult to assess the quality of stereoscopic images due to the lack of understanding of 3D visual perception. This paper proposes a novel no-reference quality assessment metric for stereoscopic images using natural scene statistics with consideration of both the quality of the cyclopean image and 3D visual perceptual information (binocular fusion and binocular rivalry). In the proposed method, not only is the quality of the cyclopean image considered, but binocular rivalry and other intrinsic 3D visual properties are also exploited. Specifically, in order to improve the objective quality of the cyclopean image, features of the cyclopean images in both the spatial domain and the transformed domain are extracted based on the natural scene statistics (NSS) model. Furthermore, to better comprehend the intrinsic properties of the stereoscopic image, the binocular rivalry effect and other 3D visual properties are also considered in the process of feature extraction. Following adaptive feature pruning using principal component analysis, improved metric accuracy is achieved by the proposed method. The experimental results show that the proposed metric achieves a good and consistent alignment with subjective assessment of stereoscopic images in comparison with existing methods, with the highest SROCC (0.952) and PLCC (0.962) scores being acquired on the LIVE 3D database Phase I.
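The SROCC and PLCC figures quoted here are the two standard agreement measures between predicted and subjective scores. A minimal numpy sketch of both (illustrative; this simple ranking does not apply the tie correction that full Spearman implementations use):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient between two score vectors."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(np.asarray(x)), rank(np.asarray(y)))
```

The distinction matters because SROCC rewards any monotonic relationship, while PLCC also penalizes nonlinearity between predicted and subjective scores.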


Subject(s)
Depth Perception; Imaging, Three-Dimensional; Imaging, Three-Dimensional/methods; Visual Perception; Attention; Databases, Factual
7.
Sensors (Basel) ; 23(10)2023 May 22.
Article in English | MEDLINE | ID: mdl-37430884

ABSTRACT

Blind image quality assessment (BIQA) aims to evaluate image quality in a way that closely matches human perception. To achieve this goal, the strengths of deep learning and the characteristics of the human visual system (HVS) can be combined. In this paper, inspired by the ventral pathway and the dorsal pathway of the HVS, a dual-pathway convolutional neural network is proposed for BIQA tasks. The proposed method consists of two pathways: the "what" pathway, which mimics the ventral pathway of the HVS to extract the content features of distorted images, and the "where" pathway, which mimics the dorsal pathway of the HVS to extract the global shape features of distorted images. Then, the features from the two pathways are fused and mapped to an image quality score. Additionally, gradient images weighted by contrast sensitivity are used as the input to the "where" pathway, allowing it to extract global shape features that are more sensitive to human perception. Moreover, a dual-pathway multi-scale feature fusion module is designed to fuse the multi-scale features of the two pathways, enabling the model to capture both global features and local details, thus improving the overall performance of the model. Experiments conducted on six databases show that the proposed method achieves state-of-the-art performance.


Subject(s)
Contrast Sensitivity; Household Articles; Humans; Databases, Factual; Neural Networks, Computer
8.
Comput Med Imaging Graph ; 107: 102216, 2023 07.
Article in English | MEDLINE | ID: mdl-37001307

ABSTRACT

Fluorescence imaging has demonstrated great potential for malignant tissue inspection. However, the poor imaging quality of medical fluorescence images inevitably complicates disease diagnosis. Although image quality can be improved by translating images from a low-quality domain to a high-quality domain, few studies have addressed spectrum translation, and the prevalent cycle-consistent generative adversarial network (CycleGAN) fails to capture local and semantic details, producing unsatisfactory translated images. To enhance visual quality by shifting the spectrum and to alleviate the under-constraint problem of CycleGAN, this study presents the design and construction of the perception-enhanced spectrum shift GAN (PSSGAN). By introducing the constraints of a perceptual module and a relativistic patch, the model learns effective biological structure details for image translation. Moreover, an interpolation technique is employed to show the enhancement process vividly and to handle the perception-fidelity trade-off of fluorescence images. A novel no-reference quantitative analysis strategy is presented for medical images. On open data and collected sets, PSSGAN provided a 15.32% to 35.19% improvement in structural similarity and a 21.55% to 27.29% improvement in perceptual quality over the leading method, CycleGAN. Extensive experimental results indicate that PSSGAN achieves superior performance and has vital clinical significance.


Subject(s)
Image Processing, Computer-Assisted; Optical Imaging; Image Processing, Computer-Assisted/methods
9.
Article in English | MEDLINE | ID: mdl-38274002

ABSTRACT

Stethoscopes are used ubiquitously in clinical settings to 'listen' to lung sounds. The use of these systems in a variety of healthcare environments (hospitals, urgent care rooms, private offices, community sites, mobile clinics, etc.) presents a range of challenges in terms of ambient noise and distortions that mask lung signals from being heard clearly or processed accurately using auscultation devices. With advances in technology, computerized techniques have been developed to automate analysis or access a digital rendering of lung sounds. However, most approaches are developed and tested in controlled environments and do not reflect real-world conditions where auscultation signals are typically acquired. Without a priori access to a recording of the ambient noise (for signal-to-noise estimation) or a reference signal that reflects the true undistorted lung sound, it is difficult to evaluate the quality of the lung signal and its potential clinical interpretability. The current study proposes an objective reference-free Auscultation Quality Metric (AQM) which incorporates low-level signal attributes with high-level representational embeddings mapped to a nonlinear quality space to provide an independent evaluation of the auscultation quality. This metric is carefully designed to solely judge the signal based on its integrity relative to external distortions and masking effects and not confuse an adventitious breathing pattern as low-quality auscultation. The current study explores the robustness of the proposed AQM method across multiple clinical categorizations and different distortion types. It also evaluates the temporal sensitivity of this approach and its translational impact for deployment in digital auscultation devices.

10.
Sensors (Basel) ; 22(24)2022 Dec 10.
Article in English | MEDLINE | ID: mdl-36560065

ABSTRACT

During acquisition, storage, and transmission, the quality of digital videos degrades significantly. Low-quality videos lead to the failure of many computer vision applications, such as object tracking or detection, intelligent surveillance, etc. Over the years, many different features have been developed to resolve the problem of no-reference video quality assessment (NR-VQA). In this paper, we propose a novel NR-VQA algorithm that integrates the fusion of temporal statistics of local and global image features with an ensemble learning framework in a single architecture. Namely, the temporal statistics of global features reflect all parts of the video frames, while the temporal statistics of local features reflect the details. Specifically, we apply a broad spectrum of statistics of local and global features to characterize the variety of possible video distortions. In order to study the effectiveness of the method introduced in this paper, we conducted experiments on two large benchmark databases, i.e., KoNViD-1k and LIVE VQC, which contain authentic distortions, and we compared it to 14 other well-known NR-VQA algorithms. The experimental results show that the proposed method is able to achieve greatly improved results on the considered benchmark datasets. Namely, the proposed method exhibits significant progress in performance over other recent NR-VQA approaches.
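One simple way to form the "temporal statistics" of frame-level features that this abstract describes is mean/std pooling over time. A hedged numpy sketch (the paper's exact statistics and feature extractors may differ; the feature vectors here are placeholders):

```python
import numpy as np

def temporal_pool(frame_features):
    """Pool a (T, D) array of per-frame feature vectors into one
    video-level descriptor by concatenating the per-dimension mean and
    standard deviation over time."""
    f = np.asarray(frame_features, dtype=np.float64)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])
```

The pooled descriptor can then be fed to an ensemble regressor, as the architecture described above does.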


Subject(s)
Algorithms; Video Recording/methods
11.
Front Neurosci ; 16: 1022041, 2022.
Article in English | MEDLINE | ID: mdl-36507332

ABSTRACT

Omnidirectional images (ODIs) have drawn great attention in virtual reality (VR) due to their capability of providing an immersive experience to users. However, ODIs are usually subject to various quality degradations during different processing stages. Thus, the quality assessment of ODIs is of critical importance to the VR community, and it is quite different from that of traditional 2D images. Existing IQA methods focus on extracting features from spherical scenes while ignoring the actual viewing behavior of humans continuously browsing an ODI through a head-mounted display (HMD), thereby failing to characterize the temporal dynamics of the browsing process in terms of the temporal order of viewports. In this article, we resort to the law of gravity to detect the dynamically attentive regions of humans when viewing ODIs, and we propose a novel no-reference (NR) ODI quality evaluation method built on two components: the construction of a Dynamically Attentive Viewport Sequence (DAVS) from ODIs and the extraction of Quality-Aware Features (QAFs) from the DAVS. The construction of the DAVS aims to build a sequence of viewports that are likely to be explored by viewers, based on a prediction of the visual scanpath followed when viewers freely explore the ODI via an HMD within the exploration time. A DAVS that contains only global motion can then be obtained by sampling a series of viewports from the ODI along the predicted visual scanpath; the subsequent quality evaluation of ODIs is performed merely on the DAVS. The extraction of QAFs aims to obtain effective feature representations that are highly discriminative in terms of perceived distortion and visual quality. Finally, a regression model maps the extracted QAFs to a single predicted quality score. Experimental results on two datasets demonstrate that the proposed method delivers state-of-the-art performance.

12.
Sensors (Basel) ; 22(18)2022 Sep 07.
Article in English | MEDLINE | ID: mdl-36146123

ABSTRACT

Objective quality assessment of natural images plays a key role in many fields related to imaging and sensor technology. This paper therefore introduces an innovative quality-aware feature extraction method for no-reference image quality assessment (NR-IQA). Specifically, a sequence of HVS-inspired filters is applied to the color channels of an input image to enhance those statistical regularities in the image to which the human visual system is sensitive. From the obtained feature maps, the statistics of a wide range of local feature descriptors are extracted to compile quality-aware features, since they treat images from the human visual system's point of view. To prove the efficiency of the proposed method, it was compared to 16 state-of-the-art NR-IQA techniques on five large benchmark databases, i.e., CLIVE, KonIQ-10k, SPAQ, TID2013, and KADID-10k. The proposed method is demonstrated to be superior to the state of the art in terms of three different performance indices.


Subject(s)
Algorithms; Databases, Factual; Humans
13.
J Imaging ; 8(6)2022 Jun 04.
Article in English | MEDLINE | ID: mdl-35735959

ABSTRACT

No-reference image quality assessment (NR-IQA) methods automatically and objectively predict the perceptual quality of images without access to a reference image. Therefore, due to the lack of pristine images in most medical image acquisition systems, they play a major role in supporting the examination of resulting images and may affect subsequent treatment. Their usage is particularly important in magnetic resonance imaging (MRI) characterized by long acquisition times and a variety of factors that influence the quality of images. In this work, a survey covering recently introduced NR-IQA methods for the assessment of MR images is presented. First, typical distortions are reviewed and then popular NR methods are characterized, taking into account the way in which they describe MR images and create quality models for prediction. The survey also includes protocols used to evaluate the methods and popular benchmark databases. Finally, emerging challenges are outlined along with an indication of the trends towards creating accurate image prediction models.

14.
J Imaging ; 8(6)2022 Jun 19.
Article in English | MEDLINE | ID: mdl-35735972

ABSTRACT

With the development of digital imaging techniques, image quality assessment methods are receiving more attention in the literature. Since distortion-free versions of camera images in many practical, everyday applications are not available, the need for effective no-reference image quality assessment algorithms is growing. Therefore, this paper introduces a novel no-reference image quality assessment algorithm for the objective evaluation of authentically distorted images. Specifically, we apply a broad spectrum of local and global feature vectors to characterize the variety of authentic distortions. Among the employed local features, the statistics of popular local feature descriptors, such as SURF, FAST, BRISK, or KAZE, are proposed for NR-IQA; other features are also introduced to boost the performance of the local features. The proposed method was compared to 12 other state-of-the-art algorithms on popular and accepted benchmark datasets containing RGB images with authentic distortions (CLIVE, KonIQ-10k, and SPAQ). The introduced algorithm significantly outperforms the state-of-the-art in terms of correlation with human perceptual quality ratings.

15.
Sensors (Basel) ; 22(6)2022 Mar 12.
Article in English | MEDLINE | ID: mdl-35336380

ABSTRACT

With the constantly growing popularity of video-based services and applications, no-reference video quality assessment (NR-VQA) has become a very active research topic. Over the years, many different approaches have been introduced in the literature to evaluate the perceptual quality of digital videos. Owing to the advent of large benchmark video quality assessment databases, deep learning has attracted a significant amount of attention in this field in recent years. This paper presents a novel deep learning-based approach for NR-VQA that relies on a set of pre-trained convolutional neural networks (CNNs) applied in parallel to flexibly characterize potential image and video distortions. Specifically, temporally pooled and saliency-weighted video-level deep features are extracted with the help of the pre-trained CNNs and mapped onto perceptual quality scores independently from each other. Finally, the quality scores coming from the different regressors are fused to obtain the perceptual quality of a given video sequence. Extensive experiments demonstrate that the proposed method sets a new state of the art on two large benchmark video quality assessment databases with authentic distortions. Moreover, the presented results underline that the decision fusion of multiple deep architectures can significantly benefit NR-VQA.


Subject(s)
Attention; Neural Networks, Computer; Databases, Factual
16.
J Imaging ; 7(2)2021 Feb 05.
Article in English | MEDLINE | ID: mdl-34460628

ABSTRACT

The perceptual quality of digital images is often deteriorated during storage, compression, and transmission. The most reliable way of assessing image quality is to ask people for their opinions on a number of test images. However, this is an expensive and time-consuming process that cannot be applied in real-time systems. In this study, a novel no-reference image quality assessment method is proposed. The introduced method uses a set of novel quality-aware features that globally characterize the statistics of a given test image, such as an extended local fractal dimension distribution feature, extended first-digit distribution features in different domains, Bilaplacian features, image moments, and a wide variety of perceptual features. Experimental results are demonstrated on five publicly available benchmark image quality assessment databases: CSIQ, MDID, KADID-10k, LIVE In the Wild, and KonIQ-10k.
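The first-digit distribution features mentioned here rest on the observation that natural-image statistics (particularly of transform coefficients) tend to follow Benford's law, and distortions shift the leading-digit histogram. A minimal numpy sketch of the basic feature (illustrative; the paper's "extended" variants over several domains are not reproduced):

```python
import numpy as np

def first_digit_distribution(values):
    """Normalized histogram of the leading digits (1-9) of the magnitudes
    of nonzero values, e.g. pixel intensities or transform coefficients."""
    v = np.abs(np.asarray(values, dtype=np.float64)).ravel()
    v = v[v > 0]  # leading digit is undefined for zeros
    digits = (v / 10.0 ** np.floor(np.log10(v))).astype(int)
    counts = np.bincount(digits, minlength=10)[1:10].astype(float)
    return counts / counts.sum()
```

The resulting nine-bin histogram can be compared against the Benford distribution or fed directly into a quality regressor as a feature vector.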

17.
J Imaging ; 7(3)2021 Mar 13.
Article in English | MEDLINE | ID: mdl-34460711

ABSTRACT

Methods for No-Reference Video Quality Assessment (NR-VQA) of consumer-produced video content are being widely investigated owing to the spread of databases containing videos affected by natural distortions. In this work, we design an effective and efficient method for NR-VQA. The proposed method exploits a novel sampling module capable of selecting a predetermined number of frames from the whole video sequence on which to base the quality assessment. It encodes both the quality attributes and the semantic content of video frames using two lightweight Convolutional Neural Networks (CNNs), and then estimates the quality score of the entire video using a Support Vector Regressor (SVR). We compare the proposed method against several relevant state-of-the-art methods using four benchmark databases containing user-generated videos (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC). The results show that, at a substantially lower computational cost, the proposed method predicts subjective video quality in line with state-of-the-art methods on individual databases and generalizes better than existing methods in a cross-database setup.

18.
Phys Med ; 89: 29-40, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34343764

ABSTRACT

PURPOSE: The feasibility of a no-reference image quality metric was assessed on patient-like images using a patient-specific phantom simulating a frame of a coronary angiogram. METHODS: One background and one contrast-filled frame of a coronary angiogram, acquired using a clinical imaging protocol, were selected from a Philips Integris Allura FD (Philips Healthcare, Best, The Netherlands). The background frame's pixels were extruded to a thickness proportional to their grey value. One phantom was 3D printed using composite 80% bronze filament (max. thickness of 5.1 mm); the other was a custom PMMA cast (max. thickness of 8.5 cm). A vessel mold was created from the contrast-filled frame and injected with a solution of 320 mg I/ml contrast fluid (75%), water, and gelatin. Still X-ray frames of the vessel mold + background phantom + 16 cm PMMA were acquired at manually selected exposure settings using a Philips Azurion (Philips Healthcare, Best, The Netherlands) in User Quality Control Mode and were exported as RAW images. The signal-difference-to-noise-ratio squared (SDNR2) and a spatial-domain equivalent of the noise equivalent quanta (NEQSDE) were calculated, and Spearman's correlation of these parameters with a no-reference perceptual image quality metric (NIQE) was investigated. RESULTS: The bronze phantom resembled the original patient frame more closely, with better contrast and less blur than the PMMA phantom. Both phantoms were imaged using an imaging protocol comparable to the one used to acquire the original frame. The bronze phantom was hence used together with the vessel mold for image quality measurements on the 165 still phantom frames. A strong correlation was noted between NEQSDE and NIQE (SROCC = -0.99, p < 0.0005) and between SDNR2 and NIQE (SROCC = -0.97, p < 0.0005).
CONCLUSION: Using a cost-effective and easy-to-realize patient-specific phantom, we were able to generate patient-like X-ray frames. NIQE, as a no-reference image quality model, has the potential to predict physical image quality from patient images.
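The SDNR2 figure of merit used above is a standard ROI-based physical quality measure. A minimal numpy sketch under the usual convention (squared mean signal-background difference over background variance; the study's exact ROI definitions are not given in the abstract):

```python
import numpy as np

def sdnr_squared(signal_roi, background_roi):
    """Signal-difference-to-noise ratio squared: the squared difference of
    mean signal and mean background intensities, divided by the background
    variance (noise power)."""
    diff = np.mean(signal_roi) - np.mean(background_roi)
    return float(diff ** 2 / np.var(background_roi))
```

Higher SDNR2 means the vessel signal stands further above the background noise, so a strong negative SROCC with NIQE (where lower NIQE means better quality) is the expected direction.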


Subject(s)
Image Processing, Computer-Assisted; Printing, Three-Dimensional; Humans; Phantoms, Imaging; Signal-To-Noise Ratio; X-Rays
19.
Entropy (Basel) ; 23(7)2021 Jun 26.
Article in English | MEDLINE | ID: mdl-34206721

ABSTRACT

In recent years, people's daily lives have become inseparable from a variety of electronic devices, especially mobile phones, which have undoubtedly become a necessity. In this paper, we look for a reliable way to measure the visual quality of display products in order to improve the user's experience with them. This paper makes two major contributions. The first is the establishment of a new subjective assessment database (DPQAD) of display products' screen images. Specifically, we invited 57 inexperienced observers to rate 150 screen images showing display products. At the same time, in order to improve the reliability of the screen display quality scores, we combined the single-stimulus method with the stimulus-comparison method to evaluate the newly created database effectively. The second is the development of a new no-reference image quality assessment (IQA) metric. For a given image of a display product, our method first extracts 27 features by analyzing the contrast, sharpness, brightness, etc., and then uses a regression module to obtain the visual quality score. Comprehensive experiments show that our method can evaluate natural scene images and screen content images at the same time. Moreover, compared with ten state-of-the-art IQA methods, our method shows obvious superiority on DPQAD.
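Three of the feature families named above (brightness, contrast, sharpness) can be computed globally with a few lines of numpy. This is a hedged sketch of the kind of features meant, not the paper's 27-feature set, and the gradient-based sharpness definition is this sketch's own choice:

```python
import numpy as np

def basic_quality_features(image):
    """Global brightness (mean intensity), contrast (intensity std), and
    sharpness (mean gradient magnitude via finite differences)."""
    img = np.asarray(image, dtype=np.float64)
    brightness = float(img.mean())
    contrast = float(img.std())
    gy, gx = np.gradient(img)                  # per-axis finite differences
    sharpness = float(np.mean(np.hypot(gx, gy)))
    return brightness, contrast, sharpness
```

Feature vectors like this are then mapped to a quality score by the regression module the abstract describes.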

20.
Entropy (Basel) ; 23(6)2021 Jun 18.
Article in English | MEDLINE | ID: mdl-34207229

ABSTRACT

Multiview video plus depth is one of the mainstream representations of 3D scenes in emerging free viewpoint video, which generates virtual 3D synthesized images through a depth-image-based-rendering (DIBR) technique. However, inaccurate depth maps and imperfect DIBR techniques result in various geometric distortions that seriously deteriorate users' visual perception. An effective 3D synthesized image quality assessment (IQA) metric can simulate human visual perception and determine the application feasibility of the synthesized content. In this paper, a no-reference IQA metric based on visual-entropy-guided multi-layer feature analysis for 3D synthesized images is proposed. According to the energy entropy, the geometric distortions are divided into two visual attention layers, namely, a bottom-up layer and a top-down layer. The feature of salient distortion is measured by regional proportion plus a transition threshold on the bottom-up layer. In parallel, the key distribution regions of insignificant geometric distortion are extracted by a relative total variation model, and the features of these distortions are measured by the interaction of decentralized and concentrated attention on the top-down layer. By integrating the features of both layers, a more visually perceptive quality evaluation model is built. Experimental results show that the proposed method is superior to the state of the art in assessing the quality of 3D synthesized images.
