Results 1 - 20 of 27
1.
Int J Comput Dent ; 0(0): 0, 2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37341385

ABSTRACT

AIM: An in-vitro study was performed to investigate the overall and regional accuracy and precision of digital three-dimensional facial scans obtained from four tablet-based applications (Bellus Dental Pro, Capture: 3D scan anything, Heges, and Scandy Pro 3D scanner) on an iPad Pro® (Apple Store, Cupertino, CA, USA), equipped with LiDAR and TrueDepth technology, compared to validated manual measurements using a digital vernier caliper (DVC). MATERIALS AND METHODS: The accuracy of the various applications was determined through multiple scans of a three-dimensional (3D) printed mannequin face using the iPad Pro®. For precision evaluation, the mannequin's face was scanned five times with each application, and these models were compared using the coefficient of variation (CV). Descriptive statistics were computed with SPSS version 23 (IBM Company, Chicago, USA). A one-sample t-test was used to analyze the difference between the control and the various scans. RESULTS: While the applications Capture, Heges, and Scandy tended to overestimate the measured values compared to the DVC, the Bellus application underestimated them. Scandy showed the highest mean difference in the Go - Ch (R) measurement, with a value of 2.19 mm. All other average differences were less than 1.60 mm. The assessment of precision showed that the coefficient of variation ranged from 0.16% to 6.34%. CONCLUSION: The iPad Pro® (2020) showed good precision and reasonable reliability, and it appears to be an interesting and favorable technology for the acquisition of surface images of facial-like structures. However, further clinical investigations should be conducted.
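The precision and accuracy analysis described above reduces to two standard statistics. A minimal sketch with hypothetical repeated-scan values (illustrative numbers, not the study's data): the coefficient of variation quantifies precision across repeats, and a one-sample t-test checks for systematic deviation from the caliper control.

```python
import numpy as np
from scipy import stats

# Five repeated scans of one inter-landmark distance (mm), one app (illustrative).
scans = np.array([118.2, 118.6, 117.9, 118.4, 118.1])
control = 118.0  # caliper (DVC) reference value, mm

# Coefficient of variation (%) quantifies precision across repeats.
cv = 100.0 * scans.std(ddof=1) / scans.mean()

# One-sample t-test: do the scans differ systematically from the control?
t_stat, p_value = stats.ttest_1samp(scans, control)

print(f"CV = {cv:.2f}%  t = {t_stat:.2f}  p = {p_value:.3f}")
```

With these sample values the CV falls well inside the 0.16%-6.34% range reported above.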

2.
J Dent ; 135: 104533, 2023 08.
Article in English | MEDLINE | ID: mdl-37149254

ABSTRACT

OBJECTIVES: This study aimed to investigate the overall and regional accuracy (trueness and precision) of digital three-dimensional (3D) facial scans obtained from four tablet-based applications: (Bellus) the Bellus Dental Pro® (Bellus3D, Inc. Campbell, CA, USA), (Capture) the Capture®: 3D Scan Anything (Standard Cyborg, Inc. San Francisco, CA, USA), (Heges) the Heges® (by Marek Simonik, Ostrava, North Moravia, Czech Republic), and (Scandy) the Scandy Pro 3D Scanner® (Scandy LLC, New Orleans, LA, USA). METHODS: A mannequin's face was marked with 63 landmarks. Subsequently, it was scanned 5 times using each scan application on an iPad Pro® (Apple Inc., Cupertino, CA, USA). The digital measurements were obtained with MeshLab® (CNR-ISTI, Pisa, Tuscany, Italy) and compared to manual measurements made with a digital vernier calliper (Truper Herramientas S.A., Colonia Granada, Mexico City, Mexico). The absolute mean difference and the standard deviation of the dimensional discrepancies were calculated. The data were analysed using one-way ANOVA, Levene's test, and Bonferroni's correction. RESULTS: The absolute mean trueness values were Bellus 0.41 ± 0.35 mm, Capture 0.38 ± 0.37 mm, Heges 0.39 ± 0.38 mm, and Scandy 0.47 ± 0.44 mm. Precision values were Bellus 0.46 mm, Capture 0.46 mm, Heges 0.54 mm, and Scandy 0.64 mm. Comparing the regions, Capture and Scandy showed the highest absolute mean differences, 0.81 mm in the Frontal and Zygomaticofacial regions, respectively. CONCLUSIONS: The trueness and precision of all four tablet-based applications were clinically acceptable for diagnosis and treatment planning. CLINICAL SIGNIFICANCE: The future of the three-dimensional facial scan is auspicious, and it has the potential to be affordable, accurate, and of great value for clinicians in their daily practice.


Subject(s)
Computer-Aided Design; Imaging, Three-Dimensional; Dental Impression Technique; Models, Dental; Research Design
3.
Multimed Tools Appl ; 82(8): 11305-11319, 2023.
Article in English | MEDLINE | ID: mdl-35991583

ABSTRACT

Facial expression recognition is a computer vision problem that has benefited substantially from research in deep learning. Recent deep neural networks have achieved superior results, demonstrating the feasibility of recognizing a user's expression from a single picture or a video recording of the face dynamics. Research studies reveal that the most discriminating portions of the face surface for the recognition of facial expressions are located on the mouth and the eyes. The restrictions imposed during the COVID pandemic have also revealed that state-of-the-art solutions for face analysis can fail severely due to occlusions caused by facial masks. This study explores to what extent expression recognition can deal with occluded faces in the presence of masks. For a fairer comparison, the analysis is performed in different occlusion scenarios to effectively assess whether facial masks really imply a decrease in recognition accuracy. The experiments performed on two public datasets show that some well-known top deep classifiers exhibit a significant reduction in accuracy in the presence of masks, down to half of the accuracy achieved in non-occluded conditions. A relevant decrease in performance is also reported in the case of occluded eyes, but the overall drop is not as severe as in the presence of facial masks, thus confirming that, as happens for biometric face recognition, faces occluded by facial masks still represent a challenging limitation for computer vision solutions.
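The occlusion protocol described above can be emulated by zeroing image regions before classification. A minimal sketch (the region boundaries are simplifying assumptions, not the paper's exact protocol): a "mask" occlusion blanks the lower half of a face array, an "eyes" occlusion blanks the eye band.

```python
import numpy as np

def occlude(face, region):
    """Return a copy of `face` with the given region zeroed out."""
    out = face.copy()
    h = face.shape[0]
    if region == "mask":        # facial mask: lower half of the face
        out[h // 2:, :] = 0
    elif region == "eyes":      # eye band: roughly the second quarter of rows
        out[h // 4: h // 2, :] = 0
    return out

face = np.ones((96, 96))        # stand-in for a grayscale face crop
masked = occlude(face, "mask")
print(f"visible pixels: {int(masked.sum())} of {face.size}")
```

Feeding both the original and the occluded arrays to the same classifier and comparing accuracies reproduces the comparison the study performs.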

4.
Indian J Dent Res ; 34(3): 247-251, 2023.
Article in English | MEDLINE | ID: mdl-38197340

ABSTRACT

Context: Anthropometric facial clinical proportions are used in orthodontics, maxillofacial and plastic surgery for aesthetic or abnormality corrections. There is a lack of literature on the facial profiles of Indians. Aim: To assess correlations between facial parameters and stature of young Maharashtrian women by using anthropometry. Settings and Design: A cross-sectional observational pilot study at Maharashtra Institute of Dental Sciences & Research, after approval from the Institutional Ethical Committee. Methods and Material: The study included 15 students of 21-23 years of age selected by simple randomisation. The facial parameters were measured with sliding vernier calipers after identifying facial landmarks with stickers. Facial height (FH) in thirds: upper FH (UFH), middle FH (MFH) and lower FH (LFH); facial width (FW); and stature or overall height (OH) were calculated to define average facial features. Statistical Analysis: Multiple pairwise statistics and simple linear regression analyses were done for various dependent variables. Results: The means of UFH, MFH, LFH and total facial height (TFH) were found to be 5.2 ± 0.54, 5.35 ± 0.34, 5.16 ± 0.44 and 15.7 ± 0.98 cm, respectively. The TFH showed a moderate correlation with stature (P ≤ 0.05, r = 0.64) and a strong correlation with lower lip length (P = 0.001, r = 0.78). Facial width showed a negative correlation with facial shape (P ≤ 0.05). Conclusions: The selected sample showed statistically insignificant differences between UFH, MFH and LFH, indicating equitable distribution among Indian women of Maharashtrian origin in the 21-23-year age group. Longer TFH is positively correlated with higher stature and longer lower lip length.
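The correlation-and-regression analysis described above can be sketched in a few lines. The values below are synthetic stand-ins for TFH and stature (not the study's data), chosen only to show the shape of the computation:

```python
from scipy import stats

# Hypothetical total facial height (cm) and stature (cm) for six subjects.
tfh = [14.8, 15.2, 15.5, 15.9, 16.3, 16.7]
stature = [152.0, 155.5, 157.0, 160.5, 163.0, 166.5]

# Simple linear regression gives the Pearson r, its p-value, and the slope.
res = stats.linregress(tfh, stature)
print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.4f}, slope = {res.slope:.2f} cm/cm")
```

The reported r = 0.64 between TFH and stature would correspond to a weaker linear trend than this deliberately clean example.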


Subject(s)
Dental Care; Face; Humans; Female; Pilot Projects; Cross-Sectional Studies; India
5.
Brain Behav ; 12(7): e2640, 2022 07.
Article in English | MEDLINE | ID: mdl-35687720

ABSTRACT

INTRODUCTION: The practicality of the idea that the laughter-involved large-scale brain networks can be stimulated to remediate affective symptoms, namely depression, has remained elusive. METHODS: In this study, 25 healthy individuals were tested with a 21-channel quantitative electroencephalography (qEEG) setup at resting state and while watching standardized funny video clips (co-rated by two behavioral neuroscientists and a verified expert comedian into neutral and mildly to highly funny). We evaluated the individuals' facial expressions against the valence and intensity of each stimulus through the Nuldos face analysis software. The study also employed an eye-tracking setup to examine fixations, gaze, and saccadic movements during each task. In addition, changes in polygraphic parameters were monitored at resting state and during exposure to clips using a 4-channel Nexus polygraphy setup. RESULTS: The happy facial expression analysis, as a function of rated funny clips, showed a significant difference against neutral videos (p < 0.001). In terms of the polygraphic changes, heart rate variability and trapezius muscle surface electromyography measures were significantly higher upon exposure to funny vs. neutral videos (p < 0.05). The average pupil size and fixation drifts were significantly higher and lower, respectively, upon exposure to funny videos (p < 0.01). The qEEG data revealed the highest current source density (CSD) for the alpha frequency band localized in the left frontotemporal network (FTN) upon exposure to funny clips. Additionally, the left FTN acquired the highest value for the theta coherence z-score, while the beta CSD predominantly fell upon the salience network (SN). CONCLUSIONS: These preliminary data support the notion that the left FTN may be targeted as a cortical hub for noninvasive neuromodulation as a single or adjunct therapy for remediating affective disorders in the clinical setting. Further studies are needed to test the hypotheses derived from the present report.


Subject(s)
Laughter; Affective Symptoms; Brain/physiology; Electroencephalography; Emotions/physiology; Facial Expression; Humans
6.
J Imaging ; 7(9)2021 Aug 30.
Article in English | MEDLINE | ID: mdl-34460805

ABSTRACT

Being able to robustly reconstruct 3D faces from 2D images is a topic of pivotal importance for a variety of computer vision branches, such as face analysis and face recognition, whose applications are steadily growing. Unlike 2D facial images, 3D facial data are less affected by lighting conditions and pose. Recent advances in the computer vision field have enabled the use of convolutional neural networks (CNNs) for the production of 3D facial reconstructions from 2D facial images. This paper proposes a novel CNN-based method which targets 3D facial reconstruction from two facial images, one in front and one from the side, as are often available to law enforcement agencies (LEAs). The proposed CNN was trained on both synthetic and real facial data. We show that the proposed network was able to predict 3D faces in the MICC Florence dataset with greater accuracy than the current state-of-the-art. Moreover, a scheme for using the proposed network in cases where only one facial image is available is also presented. This is achieved by introducing an additional network whose task is to generate a rotated version of the original image, which, in conjunction with the original facial image, makes up the image pair used for reconstruction via the previous method.

7.
J Craniomaxillofac Surg ; 49(9): 775-782, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33941437

ABSTRACT

The study aimed at developing a deep-learning (DL)-based algorithm to predict the virtual soft tissue profile after mandibular advancement surgery, and to compare its accuracy with the mass tensor model (MTM). Subjects who underwent mandibular advancement surgery were enrolled and divided into a training group and a test group. The DL model was trained using 3D photographs and CBCT data based on surgically achieved mandibular displacements (training group). Soft tissue simulations generated by DL and MTM based on the actual surgical jaw movements (test group) were compared with soft-tissue profiles on postoperative 3D photographs using distance mapping in terms of mean absolute error in the lower face, lower lip, and chin regions. 133 subjects were included - 119 in the training group and 14 in the test group. The mean absolute error for DL-based simulations of the lower face region was 1.0 ± 0.6 mm and was significantly lower (p = 0.02) compared with MTM-based simulations (1.5 ± 0.5 mm). CONCLUSION: The DL-based algorithm can predict 3D soft tissue profiles following mandibular advancement surgery. With a clinically acceptable mean absolute error. Therefore, it seems to be a relevant option for soft tissue prediction in orthognathic surgery. Therefore, it seems to be a relevant options.
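The error metric used above is a per-region mean absolute error over signed point-to-surface distances from distance mapping. A minimal sketch with hypothetical distance samples (mm, illustrative only):

```python
import numpy as np

# Signed point-to-surface distances (mm) between simulated and postoperative
# meshes, sampled at corresponding points per region (illustrative values).
distances = {
    "lower_face": np.array([0.8, -1.2, 0.5, 1.4, -0.9]),
    "lower_lip":  np.array([1.1, -0.7, 1.6, 0.4, -1.0]),
    "chin":       np.array([0.3, -0.6, 0.9, -1.1, 0.7]),
}

# Mean absolute error per region: average unsigned deviation.
mae = {region: np.abs(d).mean() for region, d in distances.items()}
for region, err in mae.items():
    print(f"{region}: MAE = {err:.2f} mm")
```

Taking the absolute value before averaging matters: signed over- and under-predictions would otherwise cancel and understate the error.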


Subject(s)
Deep Learning; Mandibular Advancement; Orthognathic Surgical Procedures; Cephalometry; Chin/anatomy & histology; Chin/diagnostic imaging; Chin/surgery; Humans; Imaging, Three-Dimensional; Lip/anatomy & histology; Mandible/diagnostic imaging; Mandible/surgery
8.
Angle Orthod ; 91(5): 641-649, 2021 09 01.
Article in English | MEDLINE | ID: mdl-33826690

ABSTRACT

OBJECTIVES: To compare the accuracy of the Face Hunter facial scanner and the Dental Pro application for facial scanning, with respect both to manual measurements and to each other. MATERIALS AND METHODS: Twenty-five patients were measured manually and scanned using each device. Six reference markers were placed on each subject's face at the cephalometric points Tr, Na', Prn, Pog', and L-R Zyg. Digital measurement software was used to calculate the distances between the cephalometric reference points on each of the scans. Geomagic X Control was used to superimpose the scans, automatically determining the best-fit alignment and calculating the percentage of overlapping surfaces within the tolerance ranges. RESULTS: Individual comparisons of the four distances measured anthropometrically and on the scans yielded an intraclass correlation coefficient greater than .9. The t-test for matched samples yielded a P value below the significance threshold. The right and left cheeks reached around 60% of the surface within a tolerance of -0.5 mm to 0.5 mm. The forehead was the only area in which most of the surface fell within the poorly reproducible range, presenting out-of-tolerance values of more than 20%. CONCLUSIONS: Three-dimensional scans of the facial surface provide an excellent analytical tool for clinical evaluation; it does not appear that either measuring tool is systematically more accurate, and the cheeks are the area with the highest average percentage of surface in the highly reproducible range.
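The surface-overlap statistic described above is the fraction of superimposed surface points whose deviation falls inside a tolerance band. A minimal sketch with illustrative deviations (mm, not the study's data):

```python
import numpy as np

# Per-point deviations (mm) after best-fit alignment of two scans (illustrative).
deviations = np.array([0.1, -0.3, 0.6, -0.2, 0.4, -0.7, 0.0, 0.5, -0.4, 0.9])
tolerance = 0.5  # ± band, mm

# Fraction of points whose absolute deviation is within tolerance.
within = np.abs(deviations) <= tolerance
pct_within = 100.0 * within.mean()
print(f"{pct_within:.0f}% of the surface within ±{tolerance} mm")
```

A boolean mask averaged directly gives the proportion, since `True` counts as 1.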


Subject(s)
Face; Imaging, Three-Dimensional; Cephalometry; Cheek; Face/diagnostic imaging; Humans; Software
9.
J Pers Med ; 11(3)2021 Mar 13.
Article in English | MEDLINE | ID: mdl-33805736

ABSTRACT

Patients with severe facial deformities present serious dysfunctionalities along with an unsatisfactory aesthetic facial appearance. Several methods have been proposed to plan interventions specifically around the patient's needs, but none of these seems to achieve a sufficient level of accuracy in predicting the resulting facial appearance. In this context, deep knowledge of what occurs in the face after bony movements in specific surgeries would make it possible to develop more reliable systems. This study proposes a novel 3D approach for the evaluation of soft tissue zygomatic modifications after zygomatic osteotomy; geometrical descriptors usually involved in face analysis tasks, i.e., face recognition and facial expression recognition, are here applied to the soft tissue malar region to detect changes in surface shape. As ground truth for zygomatic changes, a zygomatic openness angular measure is adopted. The results show a high sensitivity of the geometrical descriptors in detecting shape modifications of the facial surface, outperforming the results obtained from the angular evaluation.

10.
Psychother Res ; 31(3): 402-417, 2021 03.
Article in English | MEDLINE | ID: mdl-33148118

ABSTRACT

Objective: We explore state-of-the-art machine-learning-based tools for automatic facial and linguistic affect analysis to allow easier, faster, and more precise quantification and annotation of children's verbal and non-verbal affective expressions in psychodynamic child psychotherapy. Method: The sample included 53 Turkish children: 41 with internalizing, externalizing and comorbid problems; 12 in the non-clinical range. We collected audio and video recordings of 148 sessions, which were manually transcribed. Independent raters coded children's expressions of pleasure, anger, sadness and anxiety using the Children's Play Therapy Instrument (CPTI). Automatic facial and linguistic affect analysis modalities were adapted, developed, and combined in a system that predicts affect. Statistical regression methods (linear and polynomial regression) and machine learning techniques (deep learning, support vector regression and extreme learning machine) were used for predicting CPTI affect dimensions. Results: Experimental results show significant associations between automated affect predictions and CPTI affect dimensions with small to medium effect sizes. Fusion of facial and linguistic features works best for pleasure predictions; however, for other affect predictions, linguistic analyses outperform facial analyses. External validity analyses partially support anger and pleasure predictions. Discussion: The system enables retrieving affective expressions of children, but needs improvement for precision.


Subject(s)
Play Therapy; Psychodynamic Psychotherapy; Affect; Anxiety; Child; Humans
11.
Int J Comput Assist Radiol Surg ; 15(11): 1941-1950, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32888163

ABSTRACT

PURPOSE: Rhinoplasty is one of the most common and challenging plastic surgery procedures. The results of the operation have a significant impact on the facial appearance, and planning is critical for successful rhinoplasty surgery. In this paper, we present a web application designed for preoperative rhinoplasty surgery planning. METHODS: The application uses the three-dimensional (3D) model of a patient's face and facilitates the marking of an extensive number of facial features and the auto-calculation of facial measurements to develop a numerical plan of the surgery. The web application includes definitions, illustrations, and formulas to describe the features and measurements. In addition to the existing measurements, the user can calculate the distance between any two points, the angle between any three points, and the ratio of any two distances. We conducted a survey among experienced rhinoplasty surgeons to get feedback about the web application and to understand their attitude toward utilizing 3D models for preoperative planning. RESULTS: The web application can be accessed and used through any web browser at digitized-rhinoplasty.com. It was utilized in our tests and also by the survey participants. The users successfully marked the facial features on the 3D models and reviewed the auto-calculated measurements. The survey results show that the experienced surgeons who tried the web application found it useful for preoperative planning, and they also think that utilizing 3D models is beneficial. CONCLUSIONS: The web application introduced in this paper helps analyze the patient's face in detail using 3D models and provides numeric outputs to be used in rhinoplasty operation planning. The experienced rhinoplasty surgeons who participated in our survey agree that the web app would be a beneficial tool for rhinoplasty surgeons. We aim to further improve the web application with more functionality to help surgeons with preoperative planning of rhinoplasty.
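The three user-defined measurements described above (distance between two points, angle at a vertex defined by three points, ratio of two distances) reduce to a few lines of vector arithmetic. A minimal sketch with hypothetical 3D landmark coordinates (mm; the landmark values are illustrative, not from the application):

```python
import numpy as np

# Hypothetical landmark coordinates (mm) on a face model.
nasion = np.array([0.0, 0.0, 0.0])
pronasale = np.array([0.0, -20.0, 25.0])
subnasale = np.array([0.0, -5.0, 40.0])

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return np.linalg.norm(a - b)

def angle_deg(a, vertex, c):
    """Angle at `vertex` formed by points a and c, in degrees."""
    u, v = a - vertex, c - vertex
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

d1 = distance(nasion, pronasale)                 # a two-point distance
ang = angle_deg(nasion, pronasale, subnasale)    # a three-point angle
ratio = d1 / distance(pronasale, subnasale)      # a ratio of two distances
print(f"d = {d1:.2f} mm, angle = {ang:.1f} deg, ratio = {ratio:.2f}")
```

Clipping the cosine to [-1, 1] guards against floating-point round-off producing a NaN from `arccos`.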


Subject(s)
Imaging, Three-Dimensional; Preoperative Care; Rhinoplasty/methods; Humans; Surgeons
12.
Proteins ; : e25993, 2020 Aug 11.
Article in English | MEDLINE | ID: mdl-32779779

ABSTRACT

This article reports on the results of research aimed at translating biometric 3D face recognition concepts and algorithms into the field of protein biophysics in order to precisely and rapidly classify morphological features of protein surfaces. Both human faces and protein surfaces are free-forms, and some descriptors used in differential geometry can describe them by applying the principles of feature extraction developed for computer vision and pattern recognition. The first part of this study focused on building the protein dataset using a simulation tool and performing feature extraction using novel geometrical descriptors. The second part tested the method on two examples: the first involved a classification of tubulin isotypes, and the second compared tubulin with the FtsZ protein, its bacterial analog. An additional test involved several unrelated proteins. Different classification methodologies were used: a classic approach with a support vector machine (SVM) classifier and unsupervised learning with a k-means approach. The best result was obtained with SVM and the radial basis function kernel. The results are significant and competitive with state-of-the-art protein classification methods. This leads to a new methodological direction in protein structure analysis.
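The winning classifier described above, an SVM with a radial basis function kernel, can be sketched on synthetic descriptor vectors standing in for the surface features (the cluster parameters are illustrative assumptions, not the study's data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Two synthetic "surface descriptor" clusters, e.g. two tubulin isotypes.
X = np.vstack([rng.normal(0, 1, (50, 8)),   # class 0 descriptors
               rng.normal(3, 1, (50, 8))])  # class 1 descriptors
y = np.array([0] * 50 + [1] * 50)

# RBF kernel handles non-linear class boundaries in descriptor space.
clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy = {acc:.2f}")
```

In practice accuracy should be reported on a held-out split or via cross-validation rather than on the training set as in this compressed sketch.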

13.
Article in English | MEDLINE | ID: mdl-31842255

ABSTRACT

Face scanners promise wide applications in medicine and dentistry, including facial recognition, capturing facial emotions, facial cosmetic planning and surgery, and maxillofacial rehabilitation. Higher accuracy improves the quality of the data recorded from the face scanner, which will ultimately improve the outcome. Although various face scanners are available on the market, there is no evidence of a suitable face scanner for practical applications. The aim of this in vitro study was to analyze the face scans obtained from four scanners - EinScan Pro (EP) and EinScan Pro 2X Plus (EP+) (Shining 3D Tech. Co., Ltd. Hangzhou, China), iPhone X (IPX) (Apple Store, Cupertino, CA, USA), and Planmeca ProMax 3D Mid (PM) (Planmeca USA, Inc. IL, USA) - and to compare the scans with a control (measured with a Vernier caliper), in order to identify the appropriate scanner for face scanning. A master face model was designed in Rhinoceros 3D modeling software (Rhino, Robert McNeel and Associates for Windows, Washington DC, USA) and printed from polylactic acid at a resolution of 200 microns on the x, y, and z axes. The face models were 3D scanned with the four scanners, five times each, according to the manufacturers' recommendations: EP and EP+ using Shining software, IPX using the Bellus3D Face Application (Bellus3D, version 1.6.2, Bellus3D, Inc. Campbell, CA, USA), and PM. Scan data files were saved as stereolithography (STL) files, from which digital face models were created in Rhinoceros. Reference measurements were taken five times from the reference points along the three axes (x, y, and z) with a digital Vernier caliper (VC) (Mitutoyo 150 mm Digital Caliper, Mitutoyo Co., Kanagawa, Japan), and their mean was used as the control. The same measurements were then taken on the digital face models of EP, EP+, IPX, and PM in Rhinoceros. Descriptive statistics were computed with SPSS version 20 (IBM Company, Chicago, USA). One-way ANOVA with a post hoc Scheffe test was used to analyze the differences between the control and the scans (EP, EP+, IPX, and PM), with the significance level set at p = 0.05. EP+ showed the highest accuracy. EP showed medium accuracy (accurate up to 10 mm of length), while IPX and PM showed the least accuracy. EP+ was also accurate in measuring 2 mm of depth (diameter 6 mm), whereas all other scanners (EP, IPX, and PM) were less accurate in measuring depth. Finally, the accuracy of an optical scan depends on the technology used by each scanner. EP+ is recommended for face scanning.
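The scanner comparison described above uses a one-way ANOVA across groups of repeated measurements. A minimal sketch with hypothetical length readings (mm) for the four scanners (illustrative values, not the study's data):

```python
from scipy import stats

# Five hypothetical repeats of one length measurement (mm) per scanner.
ep  = [99.8, 100.1, 99.9, 100.0, 100.2]    # EinScan Pro
epp = [100.0, 100.1, 99.9, 100.0, 100.1]   # EinScan Pro 2X Plus
ipx = [101.2, 101.5, 100.9, 101.4, 101.1]  # iPhone X
pm  = [98.6, 98.9, 98.4, 98.8, 98.5]       # Planmeca ProMax 3D Mid

# One-way ANOVA: do the scanner group means differ?
f_stat, p_value = stats.f_oneway(ep, epp, ipx, pm)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```

A significant F-test would then be followed by a post hoc test (such as Scheffe, as in the study) to locate which scanners differ.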


Subject(s)
Face; Image Processing, Computer-Assisted; Software; Humans
14.
Sensors (Basel) ; 19(19)2019 Sep 24.
Article in English | MEDLINE | ID: mdl-31554260

ABSTRACT

We present a system that utilizes a range of image processing algorithms to allow fully automated thermal face analysis under both laboratory and real-world conditions. We implement methods for face detection, facial landmark detection, face frontalization and analysis, combining all of these into a fully automated workflow. The system is fully modular and allows implementing one's own additional algorithms for improved performance or specialized tasks. Our suggested pipeline contains a histogram of oriented gradients support vector machine (HOG-SVM) based face detector and different landmark detection methods implemented using feature-based active appearance models, deep alignment networks, and a deep shape regression network. Face frontalization is achieved by utilizing piecewise affine transformations. For the final analysis, we present an emotion recognition system that utilizes HOG features and a random forest classifier, and a respiratory rate analysis module that computes average temperatures from an automatically detected region of interest. Results show that our combined system achieves a performance comparable to current stand-alone state-of-the-art methods for thermal face and landmark detection, and a classification accuracy of 65.75% for four basic emotions.
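The emotion-recognition stage described above pairs HOG-style feature vectors with a random forest. A sketch on synthetic features standing in for real HOG descriptors of thermal faces (class distributions and feature count are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n_per_class, n_features = 40, 36   # e.g. 36 HOG orientation-bin features
emotions = ["neutral", "happy", "sad", "angry"]

# Each emotion gets its own synthetic feature distribution.
X = np.vstack([rng.normal(i, 0.5, (n_per_class, n_features))
               for i in range(len(emotions))])
y = np.repeat(emotions, n_per_class)

# Random forest classifier over the HOG-style feature vectors.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = clf.predict(rng.normal(2, 0.5, (1, n_features)))
print(f"predicted emotion: {pred[0]}")
```

With real thermal imagery, the HOG features would be extracted from the frontalized face crop before classification.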


Subject(s)
Face; Pattern Recognition, Automated/methods; Algorithms; Facial Recognition/physiology; Support Vector Machine
15.
Sensors (Basel) ; 19(17)2019 Aug 25.
Article in English | MEDLINE | ID: mdl-31450687

ABSTRACT

We present the first study in the literature aiming to determine Depression Anxiety Stress Scale (DASS) levels by analyzing facial expressions using the Facial Action Coding System (FACS), by means of a unique noninvasive three-layer architecture designed to offer high accuracy and fast convergence: in the first layer, Active Appearance Models (AAM) and a set of multiclass Support Vector Machines (SVM) are used for Action Unit (AU) classification; in the second layer, a matrix is built containing the AUs' intensity levels; and in the third layer, an optimal feedforward neural network (FFNN) analyzes the matrix from the second layer in a pattern recognition task, predicting the DASS levels. We obtained 87.2% accuracy for depression, 77.9% for anxiety, and 90.2% for stress. The average prediction time was 64 s, and the architecture could be used in real time, allowing health practitioners to evaluate the evolution of DASS levels over time. The architecture could discriminate with 93% accuracy between healthy subjects and those affected by Major Depressive Disorder (MDD) or Post-traumatic Stress Disorder (PTSD), and with 85% accuracy for Generalized Anxiety Disorder (GAD). For the first time in the literature, we determined a set of correlations between DASS, induced emotions, and FACS, which led to an increase in accuracy of 5%. When tested on AVEC 2014 and ANUStressDB, the method offered 5% higher accuracy, sensitivity, and specificity compared to other state-of-the-art methods.
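The third layer described above, a feedforward network regressing DASS-style scores from a matrix of AU intensity levels, can be sketched with synthetic data. The AU count, the score construction, and the use of scikit-learn's `MLPRegressor` in place of the paper's optimized FFNN are all assumptions of this illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_samples, n_aus = 200, 17                   # e.g. 17 tracked Action Units
au = rng.uniform(0, 5, (n_samples, n_aus))   # AU intensity levels, 0-5

# Synthetic DASS-style score driven by two AUs plus noise (illustrative).
dass = au[:, [4, 9]].sum(axis=1) + rng.normal(0, 0.3, n_samples)

# Small feedforward network mapping the AU-intensity matrix to the score.
ffnn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(au, dass)
print(f"R^2 on training data = {ffnn.score(au, dass):.2f}")
```

A real pipeline would evaluate on held-out subjects, since training-set R^2 overstates generalization.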


Subject(s)
Depressive Disorder, Major/psychology; Face/physiology; Facial Expression; Stress Disorders, Post-Traumatic/psychology; Anxiety Disorders/psychology; Depression/psychology; Humans; Prognosis; Psychological Distress
16.
J Biomech ; 93: 86-93, 2019 Aug 27.
Article in English | MEDLINE | ID: mdl-31327523

ABSTRACT

Nowadays, facial mimicry studies have acquired great importance in the clinical domain, and 3D motion capture systems are becoming valid tools for analysing facial muscle movements, thanks to the remarkable developments achieved in the 1990s. However, the face analysis domain suffers from a lack of validated motion capture protocols, due to the complexity of the human face. Indeed, a framework for defining the optimal marker-set layout does not exist yet and, to date, researchers still use their traditional facial point sets with manually allocated markers. Therefore, this study proposes an automatic approach to compute a minimal optimized marker layout for facial motion capture, able to simplify marker allocation without decreasing the significance level. Specifically, the algorithm identifies the optimal facial marker layouts by selecting the subsets of linear distances among markers that allow the acted facial movements to be recognized automatically, through a k-nearest-neighbours classification technique, with the highest performance; the marker layouts are then extracted from these subsets. Various validation and testing phases have demonstrated the accuracy, robustness, and usefulness of the proposed approach.
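The selection idea described above, scoring candidate subsets of inter-marker distances by how well a k-nearest-neighbours classifier recognizes the acted movement from them, can be sketched on synthetic data (which distances carry signal, and the subset size, are assumptions of this illustration):

```python
import numpy as np
from itertools import combinations
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n_trials, n_dist = 60, 5
y = rng.integers(0, 2, n_trials)            # two acted facial movements
X = rng.normal(0, 1, (n_trials, n_dist))    # inter-marker linear distances
X[:, 0] += 2.0 * y                          # distances 0 and 2 carry
X[:, 2] += 2.0 * y                          # the movement signal

def subset_score(idx):
    """Cross-validated k-NN accuracy using only the chosen distances."""
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, list(idx)], y, cv=3).mean()

# Pick the pair of distances that best discriminates the movements.
best = max(combinations(range(n_dist), 2), key=subset_score)
print(f"best distance pair: {best}")
```

The markers spanning the winning distances would then form the reduced layout.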


Subject(s)
Biomimetics; Face/physiology; Motion; Movement; Optical Phenomena; Algorithms; Humans
17.
Int J Psychophysiol ; 145: 57-64, 2019 11.
Article in English | MEDLINE | ID: mdl-31173768

ABSTRACT

BACKGROUND: Face processing is impaired in long-term schizophrenia, as indexed by a reduced face-related N170 event-related potential (ERP) that corresponds with volumetric decreases in the right fusiform gyrus. Impairment in face processing may constitute an object-specific deficit in schizophrenia that relates to social impairment and misattribution of social signs in the disease, or the face deficit may be part of a more general deficit in complex visual processing. Further, the degree to which face and complex object processing deficits are present early in the disease course is not clear. To that end, the current study investigated face- and object-elicited N170 in long-term schizophrenia patients and first-hospitalization schizophrenia-spectrum patients. METHODS: ERPs were collected from 32 long-term schizophrenia patients and 32 matched controls, and from 31 first-hospitalization patients and 31 matched controls. Subjects detected rarely presented butterflies among non-target neutral faces and automobiles. RESULTS: For both patient groups, the N170s to all stimuli were significantly attenuated. Despite this overall reduction, the increase in N170 amplitude to faces was intact in both patient samples. Symptoms were not correlated with N170 amplitude or latency to faces. CONCLUSIONS: Information processing of complex stimuli is fundamentally impaired in schizophrenia, as reflected in attenuated N170 ERPs in both first-hospitalization and long-term patients. This suggests the presence of low-level complex visual object processing deficits near disease onset that persist with disease course.


Subject(s)
Brain/physiopathology; Evoked Potentials/physiology; Facial Recognition/physiology; Schizophrenia/physiopathology; Adult; Electroencephalography; Female; Humans; Male; Middle Aged; Pattern Recognition, Visual/physiology; Photic Stimulation
18.
Sensors (Basel) ; 19(12)2019 Jun 14.
Article in English | MEDLINE | ID: mdl-31207911

ABSTRACT

The term "plenoptic" comes from the Latin plenus ("full") + optic. The plenoptic function is the 7-dimensional function representing the intensity of the light observed from every position and direction in 3-dimensional space. Thanks to the plenoptic function, it is possible to define the direction of every ray in the light-field vector function. Imaging systems are rapidly evolving with the emergence of light-field-capturing devices. Consequently, existing image-processing techniques need to be revisited to match the richer information provided. This article explores the use of light fields for face analysis. This field of research is very recent but already includes several works reporting promising results. Such works deal with the main steps of face analysis and include, but are not limited to: face recognition; face presentation attack detection; facial soft-biometrics classification; and facial landmark detection. This article aims to review the state of the art on light fields for face analysis, identifying future challenges and possible applications.


Subject(s)
Face/anatomy & histology; Facial Recognition; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Biometry/methods; Humans
19.
Sensors (Basel) ; 19(9)2019 May 09.
Article in English | MEDLINE | ID: mdl-31075816

ABSTRACT

This paper focuses on the analysis of reactions captured by a face analysis system. The experiment was conducted on a sample of 50 university students. Each student was shown 100 random images, and the student's reaction to every image was recorded. The recorded reactions were subsequently compared to the expected reaction to each image. The results of the experiment have shown several imperfections of the face analysis system. The system has difficulties classifying expressions and cannot detect and identify the inner emotions that a person may experience when shown an image. Face analysis systems can only detect emotions that are expressed externally on the face through physiological changes in certain parts of the face.


Subject(s)
Facial Expression; Adolescent; Adult; Emotions/physiology; Female; Humans; Male; Pattern Recognition, Visual/physiology; Software; Young Adult
20.
Entropy (Basel) ; 21(7)2019 Jun 30.
Article in English | MEDLINE | ID: mdl-33267361

ABSTRACT

Accurate face segmentation strongly benefits human face image analysis. In this paper we propose a unified framework for face image analysis through end-to-end semantic face segmentation. The proposed framework contains a set of stacked components for face understanding, which includes head pose estimation, age classification, and gender recognition. A manually labeled face dataset is used for training the Conditional Random Fields (CRFs) based segmentation model. A multi-class face segmentation framework developed through CRFs segments a facial image into six parts. A probabilistic classification strategy is used, and probability maps are generated for each class. The probability maps are used as feature descriptors, and a Random Decision Forest (RDF) classifier is modeled for each task (head pose, age, and gender). We assess the performance of the proposed framework on several datasets and report better results than those previously reported.
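The second stage described above feeds per-class probability maps from the segmentation step into a Random Decision Forest for each attribute task. A minimal sketch on synthetic data (the pooling of maps into mean-probability vectors, the label construction, and all numbers are assumptions of this illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n_images, n_classes = 80, 6   # six face parts, as in the framework

# Mean per-class probability over each image, as a compact descriptor
# (each row sums to 1, like averaged softmax/CRF probability maps).
prob_features = rng.dirichlet(np.ones(n_classes), size=n_images)

# Hypothetical binary attribute labels tied to one class's share.
labels = (prob_features[:, 0] > np.median(prob_features[:, 0])).astype(int)

# One Random Decision Forest per task (here: a single attribute task).
rdf = RandomForestClassifier(n_estimators=50, random_state=0)
rdf.fit(prob_features, labels)
print(f"training accuracy = {rdf.score(prob_features, labels):.2f}")
```

In the full framework, one such forest would be trained per task (head pose, age, gender) on the same probability-map descriptors.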
