Results 1 - 8 of 8
1.
Sensors (Basel) ; 23(15)2023 Jul 27.
Article in English | MEDLINE | ID: mdl-37571497

ABSTRACT

In the past few years, 3D Morphable Model (3DMM)-based methods have achieved remarkable results in single-image 3D face reconstruction. However, high-fidelity 3D face texture generation with these methods mostly relies on deep convolutional neural networks during the parameter-fitting process, which increases the number of network layers and the computational burden of the model and reduces computational speed. Existing methods increase computational speed by using lightweight networks for parameter fitting, but at the expense of reconstruction accuracy. To solve these problems, we improved the 3D morphable model and propose an efficient, lightweight network: Mobile-FaceRNet. First, we combine depthwise separable convolution and multi-scale representation to fit the parameters of the 3DMM; then, we introduce a residual attention module during network training to strengthen the network's attention to important features, guaranteeing high-fidelity facial texture reconstruction quality; finally, we design a new perceptual loss function to better enforce smoothness constraints and image similarity. Experimental results show that the proposed method not only achieves high-precision reconstruction while remaining lightweight but is also more robust to influences such as pose and occlusion.
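As a rough illustration of why depthwise separable convolutions reduce the computational burden mentioned in this abstract, the snippet below compares weight counts for a standard convolution and its depthwise separable counterpart. The channel and kernel sizes are hypothetical and not taken from Mobile-FaceRNet:

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Layer sizes are made up for illustration; bias terms are ignored.

def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1x1 pointwise conv mixing channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 64, 128, 3
std = conv_params(c_in, c_out, k)                  # 64*128*9 = 73728
sep = depthwise_separable_params(c_in, c_out, k)   # 64*9 + 64*128 = 8768
print(std, sep, round(std / sep, 1))               # roughly 8x fewer weights
```

The same factor-of-roughly-k² saving applies per layer, which is why such layers are a common building block of lightweight fitting networks.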

2.
Bioengineering (Basel) ; 9(11)2022 Oct 27.
Article in English | MEDLINE | ID: mdl-36354529

ABSTRACT

The 3D reconstruction of an accurate face model is essential for delivering reliable feedback for clinical decision support. Medical imaging and dedicated depth sensors are accurate but not suitable for an easy-to-use, portable tool. Recent deep learning (DL) models open new possibilities for 3D shape reconstruction from a single image. However, 3D face shape reconstruction for facial palsy patients remains a challenge and has not been investigated. The contribution of the present study is to apply these state-of-the-art methods to reconstruct 3D face shape models of facial palsy patients in natural and mimic postures from a single image. Three methods (the 3D Basel Morphable Model and two pre-trained 3D deep models) were applied to a dataset of two healthy subjects and two facial palsy patients. The reconstructed outcomes were compared to 3D shapes reconstructed from Kinect-driven and MRI-based information. The best mean error of the reconstructed face relative to the Kinect-driven shape is 1.5 ± 1.1 mm; the best error relative to the MRI-based shapes is 1.9 ± 1.4 mm. Based on these results, several ideas for increasing reconstruction accuracy can be discussed before applying the procedure to patients with facial palsy or other facial disorders. This study opens new avenues for the fast reconstruction of 3D face shapes of facial palsy patients from a single image. As a perspective, the best DL method will be implemented into our computer-aided decision support system for facial disorders.

3.
Sensors (Basel) ; 22(10)2022 May 18.
Article in English | MEDLINE | ID: mdl-35632241

ABSTRACT

In the last few years, Augmented Reality, Virtual Reality, and Artificial Intelligence (AI) have been increasingly employed in different application domains. Among them, the retail market presents the opportunity to let people check the appearance of accessories, makeup, hairstyles, hair color, and clothes on themselves through virtual try-on applications. In this paper, we propose an eyewear virtual try-on experience based on a framework that leverages advanced deep-learning-based computer vision techniques. The virtual try-on is performed on a 3D face reconstructed from a single input image. In designing our system, we started by studying the underlying architecture, its components, and their interactions. We then assessed and compared existing face reconstruction approaches, performing an extensive analysis and experiments to evaluate their design, complexity, geometry reconstruction errors, and reconstructed texture quality. These experiments allowed us to select the most suitable approach for our try-on framework. Our system considers actual glasses and face sizes to provide a realistic fit estimation using a markerless approach. The user interacts with the system through a web application optimized for desktop and mobile devices. Finally, a usability study showed that our eyewear virtual try-on application achieved an above-average score.


Subject(s)
Augmented Reality , Virtual Reality , Artificial Intelligence , Humans , Software
4.
J Imaging ; 7(9)2021 Aug 30.
Article in English | MEDLINE | ID: mdl-34460805

ABSTRACT

Being able to robustly reconstruct 3D faces from 2D images is of pivotal importance for a variety of computer vision branches, such as face analysis and face recognition, whose applications are steadily growing. Unlike 2D facial images, 3D facial data are less affected by lighting conditions and pose. Recent advances in computer vision have enabled the use of convolutional neural networks (CNNs) for producing 3D facial reconstructions from 2D facial images. This paper proposes a novel CNN-based method that targets 3D facial reconstruction from two facial images, one frontal and one from the side, as are often available to law enforcement agencies (LEAs). The proposed CNN was trained on both synthetic and real facial data. We show that the proposed network predicts 3D faces in the MICC Florence dataset with greater accuracy than the current state of the art. Moreover, a scheme for using the proposed network when only one facial image is available is also presented: an additional network generates a rotated version of the original image, which, together with the original facial image, forms the image pair used for reconstruction via the previous method.

5.
Sensors (Basel) ; 21(5)2021 Mar 06.
Article in English | MEDLINE | ID: mdl-33800750

ABSTRACT

Mainstream methods treat head pose estimation as a supervised classification/regression problem, whose performance heavily depends on the accuracy of the ground-truth labels of the training data. However, it is rather difficult to obtain accurate head pose labels in practice, due to the lack of effective equipment and reasonable approaches for head pose labeling. In this paper, we propose a method for head pose estimation that does not need to be trained with head pose labels, but instead matches keypoints between a reconstructed 3D face model and the 2D input image. The proposed method consists of two components: 3D face reconstruction and 3D-2D keypoint matching. In the 3D face reconstruction phase, a personalized 3D face model is reconstructed from the input head image using convolutional neural networks, which are jointly optimized by an asymmetric Euclidean loss and a keypoint loss. In the 3D-2D keypoint matching phase, an iterative optimization algorithm efficiently matches keypoints between the reconstructed 3D face model and the 2D input image under the constraint of perspective transformation. The proposed method is extensively evaluated on five widely used head pose estimation datasets: Pointing'04, BIWI, AFLW2000, Multi-PIE, and Pandora. The experimental results demonstrate that the proposed method achieves excellent cross-dataset performance and surpasses most existing state-of-the-art approaches, with average MAEs of 4.78° on Pointing'04, 6.83° on BIWI, 7.05° on AFLW2000, 5.47° on Multi-PIE, and 5.06° on Pandora, even though the model is not trained on any of these five datasets.
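The 3D-2D matching idea in this abstract can be sketched in miniature: score candidate poses by the reprojection error of 3D keypoints against observed 2D keypoints and keep the best. The toy example below searches only over yaw under a simple pinhole camera, whereas the paper's iterative optimization handles the full perspective pose; the keypoint coordinates and focal length are made up for illustration:

```python
import numpy as np

def rot_y(theta):
    """Rotation about the vertical (yaw) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def project(points3d, f=500.0):
    """Simple pinhole projection with focal length f, camera at the origin."""
    return f * points3d[:, :2] / points3d[:, 2:3]

# Hypothetical 3D face keypoints (e.g. nose tip, eye corners), z > 0.
model = np.array([[0.0, 0.0, 10.0], [-3.0, 1.0, 10.5],
                  [3.0, 1.0, 10.5], [0.0, -2.0, 10.2]])

true_yaw = np.deg2rad(20.0)
observed = project(model @ rot_y(true_yaw).T)   # synthetic 2D observations

# Coarse search over candidate yaws; keep the pose minimizing reprojection error.
candidates = np.deg2rad(np.arange(-45, 46, 1))
errors = [np.linalg.norm(project(model @ rot_y(a).T) - observed)
          for a in candidates]
best = np.rad2deg(candidates[int(np.argmin(errors))])
print(best)  # recovers a yaw near 20 degrees
```

A real system would optimize all six pose parameters jointly (e.g. with a PnP solver) rather than grid-searching one angle, but the objective being minimized is the same kind of 3D-to-2D reprojection error.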

6.
Sensors (Basel) ; 21(8)2021 Apr 07.
Article in English | MEDLINE | ID: mdl-33917034

ABSTRACT

Facial recognition has attracted increasing attention with the rapid growth of artificial intelligence (AI) techniques in recent years. However, most related work on facial reconstruction and recognition is based on big-data collection and image deep learning algorithms. Such data-driven AI approaches inevitably increase CPU computational complexity and usually rely heavily on GPU capacity. A typical issue of RGB-based facial recognition is its limited applicability in low-light or dark environments. To solve this problem, this paper presents an effective procedure for facial reconstruction and recognition using a depth sensor. For each test candidate, the depth camera acquires multiple views of its 3D point clouds. The point cloud sets are stitched together for 3D model reconstruction using the iterative closest point (ICP) algorithm. A segmentation procedure then separates the model into a body part and a head part. Based on the segmented 3D face point clouds, facial features are extracted for recognition scoring. Taking a single shot from the depth sensor, the point cloud data is registered against the stored 3D face models to determine the best-matching candidate. Using the proposed feature-based 3D facial similarity score, which combines normal, curvature, and registration similarities between point clouds, the person can be labeled correctly even in a dark environment. The proposed method is suitable for smart devices such as smartphones and smart pads equipped with a tiny depth camera. Experiments with real-world data show that the proposed method reconstructs denser models and achieves point-cloud-based 3D face recognition.
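A minimal sketch of point-to-point ICP, the registration step named in this abstract, is shown below. Real pipelines, including the multi-view stitching described in the paper, add k-d tree correspondence search and outlier rejection; the point clouds here are synthetic:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping rows of A onto B
    (Kabsch algorithm via SVD of the cross-covariance)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=20):
    """Alternate nearest-neighbour matching and rigid alignment."""
    src = source.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences.
        d = np.linalg.norm(src[:, None] - target[None], axis=2)
        matched = target[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src

rng = np.random.default_rng(0)
target = rng.normal(size=(40, 3))
theta = np.deg2rad(3.0)               # small initial misalignment
R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
source = target @ R0.T + np.array([0.05, -0.03, 0.02])
aligned = icp(source, target)
print(np.abs(aligned - target).max())  # residual shrinks toward zero
```

With a small initial misalignment the nearest-neighbour matches are the true correspondences, so a single Kabsch step recovers the exact transform; larger misalignments are why practical systems need good initialization or coarse-to-fine schemes.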


Subject(s)
Artificial Intelligence , Imaging, Three-Dimensional , Algorithms , Face , Humans
7.
Comput Vis ECCV ; 12354: 433-449, 2020.
Article in English | MEDLINE | ID: mdl-33135013

ABSTRACT

Fitting 3D morphable models (3DMMs) on faces is a well-studied problem, motivated by various industrial and research applications. 3DMMs express a 3D facial shape as a linear sum of basis functions. The resulting shape, however, is a plausible face only when the basis coefficients take values within limited intervals. Methods based on unconstrained optimization address this issue with a weighted ℓ2 penalty on coefficients; however, determining the weight of this penalty is difficult, and the existence of a single weight that works universally is questionable. We propose a new formulation that does not require the tuning of any weight parameter. Specifically, we formulate 3DMM fitting as an inequality-constrained optimization problem, where the primary constraint is that basis coefficients should not exceed the interval that is learned when the 3DMM is constructed. We employ additional constraints to exploit sparse landmark detectors, by forcing the facial shape to be within the error bounds of a reliable detector. To enable operation "in-the-wild", we use a robust objective function, namely Gradient Correlation. Our approach performs comparably with deep learning (DL) methods on "in-the-wild" data that have inexact ground truth, and better than DL methods on more controlled data with exact ground truth. Since our formulation does not require any learning, it enjoys a versatility that allows it to operate with multiple frames of arbitrary sizes. This study's results encourage further research on 3DMM fitting with inequality-constrained optimization methods, which have been unexplored compared to unconstrained methods.
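The core idea of keeping basis coefficients inside the intervals learned at 3DMM construction can be sketched as box-constrained least squares. The sketch below uses projected gradient descent on a tiny synthetic basis, not the paper's Gradient Correlation objective or landmark constraints; all sizes and bounds are made up for illustration:

```python
import numpy as np

# Toy box-constrained 3DMM-style fit: shape = mean_shape + basis @ coef,
# with each coefficient confined to a learned interval [lo, hi].

rng = np.random.default_rng(1)
n_vertices, n_basis = 90, 5            # hypothetical sizes, far smaller than a real 3DMM
mean_shape = rng.normal(size=n_vertices)
basis = rng.normal(size=(n_vertices, n_basis))
lo, hi = -2.0, 2.0                     # per-coefficient interval (stand-in for learned bounds)

true_coef = np.array([1.5, -0.5, 2.0, 0.0, -1.9])
observed = mean_shape + basis @ true_coef

def fit_bounded(basis, residual_target, lo, hi, iters=2000):
    """Projected gradient descent for min ||basis @ coef - residual_target||^2
    subject to lo <= coef <= hi."""
    coef = np.zeros(basis.shape[1])
    step = 1.0 / np.linalg.norm(basis, 2) ** 2   # 1 / sigma_max^2: safe step size
    for _ in range(iters):
        grad = basis.T @ (basis @ coef - residual_target)
        coef = np.clip(coef - step * grad, lo, hi)  # projection enforces the interval
    return coef

coef = fit_bounded(basis, observed - mean_shape, lo, hi)
print(np.round(coef, 3))  # converges to coefficients close to true_coef
```

The clipping step is the "inequality constraint" in its simplest form; the paper's formulation solves a richer constrained problem, but the contrast with a tuned ℓ2 penalty is the same: bounds are enforced exactly instead of being traded off against the data term.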

8.
Acta Otolaryngol ; 139(4): 340-344, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30794067

ABSTRACT

BACKGROUND: This study evaluates otitis media in prehistoric populations of northern Chile. AIMS/OBJECTIVES: To determine the prevalence of otitis media and the diagnostic usefulness of temporal-bone X-rays in skulls. MATERIALS AND METHODS: 444 skulls belonging to three groups: prehistoric-coastal (400-1000 AD), prehistoric-highland (400-1000 AD), and Pisagua-Regional Developments (1000-1450 AD). Skulls were evaluated visually and with Schuller's view X-rays. Five skulls diagnosed as having had otitis media, five diagnosed as normal, and one with a temporal bone fistula also underwent computed tomography (CT). RESULTS: Changes suggestive of otitis media were present in 53.57% of the prehistoric-coastal group, 70.73% of the Pisagua-Regional Developments group, and 47.90% of the prehistoric-highland group. The diagnostic effectiveness of Schuller's view X-rays for assessing middle ear disease was confirmed by the CT studies. The case with a temporal bone fistula showed changes suggestive of mastoiditis and a possible post-auricular abscess. CONCLUSIONS: There was a high prevalence of otitis media in prehistoric populations of Chile. The higher prevalence in one group was presumably due to racial factors. Temporal-bone X-rays are effective for the large-scale evaluation of ear disease in skulls. A case of mastoiditis with temporal bone fistula and possible post-auricular abscess is documented. SIGNIFICANCE: Documenting racial factors in otitis media. Validating X-rays for the large-scale evaluation of otitis media in skulls.


Subject(s)
Mastoiditis/diagnostic imaging , Otitis Media/diagnostic imaging , Temporal Bone/diagnostic imaging , Chile/epidemiology , Humans , Mastoiditis/ethnology , Otitis Media/ethnology , Paleopathology , Prevalence , Tomography, X-Ray Computed