Results 1 - 20 of 21
1.
Sensors (Basel) ; 24(12)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38931614

ABSTRACT

Traditional switching operations require on-site work, and the high voltage generated by arc discharges can pose a risk of injury to the operator. Therefore, a combination of visual servoing and robot control is used to localize the switching operation, for which hand-eye calibration equations must be constructed. The solution of the hand-eye calibration equations couples the rotation matrix with the translation vector and depends on the choice of initial values. This article presents a convex relaxation global optimization hand-eye calibration algorithm based on dual quaternions. First, the problem model is simplified using dual quaternions as a mathematical tool; then a linear matrix inequality convex optimization method is used to obtain a rotation matrix with higher accuracy. Afterwards, the calibration equations for the translation vector are rewritten and a new objective function is established to eliminate the coupling between rotation and translation, maintaining positioning precision at approximately 2.9 mm. To account for the impact of noise on the calibration process, Gaussian noise is added to the solutions of the rotation matrix and translation vector so that the data more closely resemble a real scene, and the performance of different hand-eye calibration algorithms is evaluated under these conditions. Finally, an experiment comparing different hand-eye calibration methods shows that the proposed algorithm outperforms the alternatives in calibration accuracy, robustness to noise, and stability, satisfying the accuracy requirements of switching operations.
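The rotation/translation coupling described above appears already in the classical linear formulation of hand-eye calibration, AX = XB. The sketch below shows the standard quaternion null-space solution followed by a least-squares translation solve; it is a minimal illustration, not the paper's dual-quaternion convex relaxation, and all helper names and the synthetic setup are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def to_wxyz(R):
    # scipy returns scalar-last [x, y, z, w]; reorder to [w, x, y, z] and fix
    # the sign (w >= 0) so quaternions of conjugate rotations pair consistently.
    q = Rotation.from_matrix(R).as_quat()[[3, 0, 1, 2]]
    return q if q[0] >= 0 else -q

def mult_matrices(q):
    # L(q) p = q * p and R(q) p = p * q for the Hamilton quaternion product.
    w, x, y, z = q
    L = np.array([[w, -x, -y, -z],
                  [x,  w, -z,  y],
                  [y,  z,  w, -x],
                  [z, -y,  x,  w]])
    Rm = np.array([[w, -x, -y, -z],
                   [x,  w,  z, -y],
                   [y, -z,  w,  x],
                   [z,  y, -x,  w]])
    return L, Rm

def hand_eye(Ras, tas, Rbs, tbs):
    # Rotation: Ra Rx = Rx Rb  <=>  (L(qa) - R(qb)) qx = 0; stack all motion
    # pairs and take the right singular vector of the smallest singular value.
    M = []
    for Ra, Rb in zip(Ras, Rbs):
        La, _ = mult_matrices(to_wxyz(Ra))
        _, Rb_ = mult_matrices(to_wxyz(Rb))
        M.append(La - Rb_)
    qx = np.linalg.svd(np.vstack(M))[2][-1]
    Rx = Rotation.from_quat(qx[[1, 2, 3, 0]]).as_matrix()
    # Translation: (Ra - I) tx = Rx tb - ta, solved jointly by least squares.
    A = np.vstack([Ra - np.eye(3) for Ra in Ras])
    b = np.concatenate([Rx @ tb - ta for ta, tb in zip(tas, tbs)])
    tx = np.linalg.lstsq(A, b, rcond=None)[0]
    return Rx, tx
```

With at least two motions whose rotation axes are not parallel, the null space is one-dimensional and the calibration is determined up to the usual quaternion sign ambiguity.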

2.
J Am Stat Assoc ; 118(542): 858-868, 2023.
Article in English | MEDLINE | ID: mdl-37313368

ABSTRACT

We investigate the effectiveness of convex relaxation and nonconvex optimization for solving bilinear systems of equations under two different designs (a randomized Fourier design and a Gaussian design). Despite their wide applicability, theoretical understanding of these two paradigms remains largely inadequate in the presence of random noise. The current paper makes two contributions by demonstrating that: (1) a two-stage nonconvex algorithm attains minimax-optimal accuracy within a logarithmic number of iterations, and (2) convex relaxation also achieves minimax-optimal statistical accuracy vis-à-vis random noise. Both results significantly improve upon the state-of-the-art theoretical guarantees.
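The convex-relaxation paradigm for bilinear systems rests on lifting: each bilinear measurement y_i = ⟨a_i, h⟩⟨b_i, x⟩ is linear in the rank-one matrix M = h xᵀ. A minimal noiseless sketch (my own illustration, with enough measurements that the lifted system is overdetermined, so plain least squares stands in for the nuclear-norm minimization the paper studies):

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, m = 4, 5, 40          # m >= d1*d2, so the lifted system is determined
h, x = rng.normal(size=d1), rng.normal(size=d2)
A = rng.normal(size=(m, d1))
B = rng.normal(size=(m, d2))
y = (A @ h) * (B @ x)         # bilinear measurements y_i = <a_i, h><b_i, x>

# Lift: y_i = <a_i b_i^T, M> with M = h x^T; solve the linear system for M.
G = np.stack([np.outer(A[i], B[i]).ravel() for i in range(m)])
M, *_ = np.linalg.lstsq(G, y, rcond=None)
M = M.reshape(d1, d2)

# Recover (h, x) up to the inherent scale ambiguity via a rank-one SVD.
U, s, Vt = np.linalg.svd(M)
h_hat = U[:, 0] * np.sqrt(s[0])
x_hat = Vt[0] * np.sqrt(s[0])
```

The outer product h_hat x_hatᵀ is scale-invariant, which is the natural quantity to compare against the truth.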

3.
J Am Stat Assoc ; 118(544): 2383-2393, 2023.
Article in English | MEDLINE | ID: mdl-38283734

ABSTRACT

We propose a sparse reduced rank Huber regression for analyzing large and complex high-dimensional data with heavy-tailed random noise. The proposed method is based on a convex relaxation of a rank- and sparsity-constrained nonconvex optimization problem, which is then solved using a block coordinate descent and an alternating direction method of multipliers algorithm. We establish nonasymptotic estimation error bounds under both the Frobenius and nuclear norms in the high-dimensional setting. This is a major contribution over existing results in reduced rank regression, which mainly focus on rank selection and prediction consistency. Our theoretical results quantify the tradeoff between the heavy-tailedness of the random noise and the statistical bias. For random noise with bounded (1+δ)-th moment with δ∈(0,1), the rate of convergence is a function of δ and is slower than the sub-Gaussian-type deviation bounds; for random noise with bounded second moment, we obtain a rate of convergence as if sub-Gaussian noise were assumed. We illustrate the performance of the proposed method via extensive numerical studies and a data application. Supplementary materials for this article are available online.
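A sketch of the algorithmic template behind such estimators: proximal gradient steps that combine the entrywise Huber-loss gradient with singular-value soft-thresholding, the proximal operator of the nuclear norm. This is a simplified stand-in for the paper's block coordinate descent/ADMM solver, and every parameter choice below is an illustrative assumption.

```python
import numpy as np

def huber_grad(R, delta):
    # Derivative of the Huber loss, applied entrywise to the residual matrix:
    # linear inside [-delta, delta], constant outside (outliers are clipped).
    return np.clip(R, -delta, delta)

def svt(C, tau):
    # Proximal operator of the nuclear norm: singular-value soft-thresholding.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def huber_rrr(X, Y, lam=1.0, delta=1.0, iters=2000):
    # Proximal gradient for: min_C  sum Huber_delta(Y - XC) + lam * ||C||_*
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # Lipschitz step for the smooth part
    C = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        R = Y - X @ C
        C = svt(C + step * (X.T @ huber_grad(R, delta)), step * lam)
    return C
```

Clipping the residual bounds the influence of heavy-tailed outliers, while the nuclear-norm prox enforces the (relaxed) low-rank structure.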

4.
SIAM J Optim ; 32(3): 2180-2207, 2022.
Article in English | MEDLINE | ID: mdl-37200831

ABSTRACT

This paper studies an optimization problem on the sum of traces of matrix quadratic forms in m semiorthogonal matrices, which can be considered a generalization of the synchronization of rotations. While the problem is nonconvex, this paper shows that its semidefinite programming relaxation solves the original nonconvex problem exactly with high probability under an additive noise model with small noise on the order of O(m^{1/4}). In addition, it shows that, with high probability, the sufficient condition for global optimality considered in Won, Zhou, and Lange [SIAM J. Matrix Anal. Appl., 2 (2021), pp. 859-882] is also necessary under a similar small-noise condition. These results can be considered a generalization of existing results on phase synchronization.
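For intuition on the synchronization problems this generalizes, here is the special case of phase synchronization: observe noisy pairwise relative phases z_i z̄_j and recover z from the leading eigenvector of the measurement matrix, the spectral counterpart of the semidefinite relaxation. A minimal sketch under assumed noise levels, not the paper's semiorthogonal setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 50, 0.2
z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))     # ground-truth unit phases

# Pairwise relative-phase measurements corrupted by Hermitian Gaussian noise.
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
W = (W + W.conj().T) / 2
C = np.outer(z, z.conj()) + sigma * W

# Spectral relaxation: leading eigenvector of C, rounded to unit modulus.
vals, vecs = np.linalg.eigh(C)
v = vecs[:, -1]
z_hat = v / np.abs(v)

# Alignment with the truth, invariant to the global phase ambiguity (1 = perfect).
corr = np.abs(z_hat.conj() @ z) / n
```

The rank-one signal dominates the noise spectrally, which is why both the eigenvector method and the SDP recover accurate phases at this noise level.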

5.
Sensors (Basel) ; 21(21)2021 Nov 05.
Article in English | MEDLINE | ID: mdl-34770665

ABSTRACT

We study the Perspective-n-Point (PnP) problem, which is fundamental in 3D vision, for the recovery of camera translation and rotation. A common solution applies polynomial sum-of-squares (SOS) relaxation techniques via semidefinite programming. Our main result is that the polynomials to be optimized can be nonnegative but not SOS, hence the resulting convex relaxation is not tight; specifically, we present an example of real-life configurations for which the convex relaxation in the Lasserre hierarchy fails, in both the second and third levels. In addition to the theoretical contribution, the conclusion for practitioners is that this commonly used approach can fail; our experiments suggest that using higher levels of the Lasserre hierarchy reduces the probability of failure. The methods we use are mostly drawn from the areas of polynomial optimization and convex relaxation; we also use some results from real algebraic geometry, as well as MATLAB optimization packages for PnP.
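The gap between nonnegativity and SOS that drives this result is classically illustrated by the Motzkin polynomial (a textbook example, not one from the paper): it is nonnegative on ℝ² by the AM-GM inequality, yet admits no SOS decomposition. A quick numerical check:

```python
import numpy as np

def motzkin(x, y):
    # Motzkin polynomial: nonnegative everywhere (AM-GM on x^4 y^2, x^2 y^4, 1)
    # but provably not a sum of squares of polynomials.
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

xs = np.linspace(-3, 3, 601)
X, Y = np.meshgrid(xs, xs)
vals = motzkin(X, Y)          # grid evaluation; minimum 0, attained at (+-1, +-1)
```

A first-level SOS/Lasserre relaxation of "minimize the Motzkin polynomial" is therefore not tight, which is the same failure mode the abstract reports for PnP configurations.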


Asunto(s)
Algoritmos , Modelos Estadísticos
6.
Phys Med ; 82: 122-133, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33611049

ABSTRACT

PURPOSE: The purpose of this work was to present a new single-arc mixed-photon (6 and 18 MV) VMAT (SAMP) optimization framework that concurrently optimizes two photon energies with corresponding partial arc lengths. METHODS AND MATERIALS: Owing to the simultaneous optimization of energy-dependent intensity maps and corresponding arc locations, the proposed model is nonlinear. Unique relaxation constraints based on McCormick approximations were introduced for linearization. Energy-dependent intensity maps were then decomposed to generate apertures. The feasibility of the proposed framework was tested on a sample of ten prostate cancer cases with lateral separations ranging from 34 cm (case no. 1) to 52 cm (case no. 6). The SAMP plans were compared against single-energy (6 MV) VMAT (SE) plans through dose-volume histograms (DVHs) and radiobiological parameters, including normal tissue complication probability (NTCP) and equivalent uniform dose (EUD). RESULTS: The contribution of the higher-energy photon beam optimized by the algorithm increased for cases with a lateral separation >40 cm. SAMP-VMAT notably improved bladder and rectum sparing in large-size cases. Compared to single energy, SAMP-VMAT plans reduced bladder and rectum NTCP in cases with large lateral separation. With the exception of one case, SAMP-VMAT either improved or maintained femoral head sparing compared to SE-VMAT. SAMP-VMAT reduced the nontarget tissue integral dose in all ten cases. CONCLUSIONS: A single-arc VMAT optimization framework comprising mixed-photon-energy partial arcs was presented. Overall, the results underline the feasibility and potential of the proposed approach for improving OAR sparing in large-size patients without compromising target homogeneity and coverage.
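McCormick approximations replace a bilinear term w = xy with four linear inequalities that are valid on a box and tight at its corners, which is how a nonlinear model like the one above can be linearized. A minimal sketch of the envelope itself (illustrative only, not the paper's VMAT formulation):

```python
import numpy as np

def mccormick_bounds(x, y, xL, xU, yL, yU):
    # McCormick envelope of w = x*y on the box [xL, xU] x [yL, yU]:
    # two linear underestimators and two linear overestimators of the product.
    lower = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    upper = min(xU * y + x * yL - xU * yL,
                xL * y + x * yU - xL * yU)
    return lower, upper

rng = np.random.default_rng(0)
xL, xU, yL, yU = -1.0, 2.0, 0.5, 3.0
for _ in range(1000):
    x = rng.uniform(xL, xU)
    y = rng.uniform(yL, yU)
    lo, up = mccormick_bounds(x, y, xL, xU, yL, yU)
    assert lo - 1e-12 <= x * y <= up + 1e-12   # envelope contains the product
```

Each inequality comes from expanding a nonnegative (or nonpositive) product such as (x − xL)(y − yL) ≥ 0; in an LP or MILP, w is a new variable constrained between the two envelopes.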


Subject(s)
Prostatic Neoplasms, Radiotherapy, Intensity-Modulated, Humans, Male, Photons, Prostatic Neoplasms/radiotherapy, Radiotherapy Dosage, Radiotherapy Planning, Computer-Assisted
7.
Ann Stat ; 49(5): 2948-2971, 2021 Oct.
Article in English | MEDLINE | ID: mdl-36148268

ABSTRACT

This paper delivers improved theoretical guarantees for the convex programming approach to low-rank matrix estimation in the presence of (1) random noise, (2) gross sparse outliers, and (3) missing data. This problem, often dubbed robust principal component analysis (robust PCA), finds applications in various domains. Despite the wide applicability of convex relaxation, the available statistical support (particularly the stability analysis vis-à-vis random noise) remains highly suboptimal, which we strengthen in this paper. When the unknown matrix is well-conditioned, incoherent, and of constant rank, we demonstrate that a principled convex program achieves near-optimal statistical accuracy, in terms of both the Euclidean loss and the ℓ∞ loss. All of this holds even when a nearly constant fraction of observations are corrupted by outliers with arbitrary magnitudes. The key analysis idea lies in bridging the convex program in use and an auxiliary nonconvex optimization algorithm, hence the title of this paper.
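The convex program in question is typically principal component pursuit: minimize ‖L‖* + λ‖S‖₁ subject to L + S = M. A compact ADMM sketch with standard parameter heuristics (a generic illustration, not the paper's estimator or analysis):

```python
import numpy as np

def svt(X, tau):
    # Singular-value soft-thresholding: prox of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Entrywise soft-thresholding: prox of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_pca(M, lam=None, mu=None, iters=500):
    # ADMM for principal component pursuit:
    #   min ||L||_* + lam * ||S||_1   s.t.  L + S = M
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)   # dual ascent on the constraint L + S = M
    return L, S
```

The λ = 1/√max(m, n) choice and the μ heuristic follow common practice for this program; both are tuning assumptions rather than requirements.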

8.
J Math Imaging Vis ; 62(3): 417-444, 2020.
Article in English | MEDLINE | ID: mdl-32300265

ABSTRACT

A variational model for learning convolutional image atoms from corrupted and/or incomplete data is introduced and analyzed both in function space and numerically. Building on lifting and relaxation strategies, the proposed approach is convex and allows for simultaneous image reconstruction and atom learning in a general inverse problems context. Further, motivated by improved numerical performance, a semi-convex variant is also included in the analysis and experiments of the paper. For both settings, fundamental analytical properties are proven in a continuous setting, in particular ensuring well-posedness and stability results for inverse problems. Exploiting convexity, globally optimal solutions are further computed numerically for applications with incomplete, noisy and blurry data, and numerical results are shown.

9.
SIAM J Optim ; 30(4): 3098-3121, 2020.
Article in English | MEDLINE | ID: mdl-34305368

ABSTRACT

This paper studies noisy low-rank matrix completion: given partial and noisy entries of a large low-rank matrix, the goal is to estimate the underlying matrix faithfully and efficiently. Arguably one of the most popular paradigms to tackle this problem is convex relaxation, which achieves remarkable efficacy in practice. However, the theoretical support of this approach is still far from optimal in the noisy setting, falling short of explaining its empirical success. We make progress towards demystifying the practical efficacy of convex relaxation vis-à-vis random noise. When the rank and the condition number of the unknown matrix are bounded by a constant, we demonstrate that the convex programming approach achieves near-optimal estimation errors - in terms of the Euclidean loss, the entrywise loss, and the spectral norm loss - for a wide range of noise levels. All of this is enabled by bridging convex relaxation with the nonconvex Burer-Monteiro approach, a seemingly distinct algorithmic paradigm that is provably robust against noise. More specifically, we show that an approximate critical point of the nonconvex formulation serves as an extremely tight approximation of the convex solution, thus allowing us to transfer the desired statistical guarantees of the nonconvex approach to its convex counterpart.
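The nonconvex Burer-Monteiro idea bridged to above is to parameterize the low-rank matrix by its factors and run gradient descent from a spectral initialization. A minimal symmetric, noiseless sketch (step size, iteration count, and sampling rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, p = 60, 2, 0.5
Xstar = rng.normal(size=(n, r))
M = Xstar @ Xstar.T                      # ground-truth rank-r PSD matrix
obs = rng.random((n, n)) < p
mask = np.triu(obs) | np.triu(obs).T     # symmetric observation pattern

# Spectral initialization: top-r eigenpairs of the zero-filled,
# inverse-probability-weighted observation matrix.
vals, vecs = np.linalg.eigh(mask * M / p)
X = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))

# Gradient descent on f(X) = 0.25 * sum over observed (i,j) of ((XX^T - M)_ij)^2
eta = 0.2 / (p * np.linalg.norm(M, 2))
for _ in range(500):
    X = X - eta * (mask * (X @ X.T - M)) @ X
```

The factorization removes the rank constraint entirely; the price is nonconvexity, which the spectral initialization and the benign landscape make harmless in this regime.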

10.
Proc Natl Acad Sci U S A ; 116(46): 22931-22937, 2019 11 12.
Article in English | MEDLINE | ID: mdl-31666329

ABSTRACT

Noisy matrix completion aims at estimating a low-rank matrix given only partial and corrupted entries. Despite remarkable progress in designing efficient estimation algorithms, it remains largely unclear how to assess the uncertainty of the obtained estimates and how to perform efficient statistical inference on the unknown matrix (e.g., constructing a valid and short confidence interval for an unseen entry). This paper takes a substantial step toward addressing such tasks. We develop a simple procedure to compensate for the bias of the widely used convex and nonconvex estimators. The resulting debiased estimators admit nearly precise nonasymptotic distributional characterizations, which in turn enable optimal construction of confidence intervals/regions for, say, the missing entries and the low-rank factors. Our inferential procedures do not require sample splitting, thus avoiding unnecessary loss of data efficiency. As a byproduct, we obtain a sharp characterization of the estimation accuracy of our debiased estimators in both rate and constant. Our debiased estimators are tractable algorithms that provably achieve full statistical efficiency.


Subject(s)
Statistics as Topic/standards, Algorithms, Bias, Confidence Intervals, Models, Statistical, Uncertainty
11.
Sensors (Basel) ; 19(5)2019 Mar 07.
Article in English | MEDLINE | ID: mdl-30866560

ABSTRACT

The accuracy of cooperative localization can be severely degraded in non-line-of-sight (NLOS) environments. Although most existing approaches modify their models to alleviate the NLOS impact, their computational speed does not meet the demands of practical applications. In this paper, we propose a distributed cooperative localization method for wireless sensor networks (WSNs) in NLOS environments. The convex model in the proposed method is based on projection relaxation and was designed for situations where prior information on NLOS connections is unavailable. We developed an efficient decomposed formulation for the convex counterpart and designed a parallel distributed algorithm based on the alternating direction method of multipliers (ADMM), which significantly improves computational speed. To accelerate the convergence rate of the local updates, we approached the subproblems via the proximal algorithm and analyzed its computational complexity. Numerical simulation results demonstrate that our approach is superior in processing speed and accuracy to other methods in NLOS scenarios.
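ADMM, the workhorse behind the proposed distributed method, alternates cheap subproblem solves with a dual update. As a self-contained illustration of that template on a generic lasso problem (not the paper's localization model; all names and parameters are assumptions):

```python
import numpy as np

def soft(v, t):
    # Entrywise soft-thresholding: prox of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=1.0, rho=1.0, iters=500):
    # ADMM (scaled dual form) for: min 0.5*||Ax - b||^2 + lam*||x||_1
    n = A.shape[1]
    Atb = A.T @ b
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached x-update solve
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))   # quadratic subproblem (closed form)
        z = soft(x + u, lam / rho)      # l1 subproblem (closed form)
        u = u + x - z                   # dual update on the consensus x = z
    return z
```

The same split-solve-then-dual-update structure is what lets each sensor in a network handle its own subproblem in parallel.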

12.
Phys Med ; 57: 123-136, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30738516

ABSTRACT

PURPOSE: The segmentation of organs and lesions from medical images is a challenging task due to the presence of noise, intensity inhomogeneity, and blurry or weak boundaries. In this paper, a point distance shape constraint is proposed and incorporated into the level set framework for the segmentation of objects with various shapes. METHODS: The proposed shape constraint is a linear combination of the Euclidean distances of user-selected points. By selecting different numbers of points, it can generate different shape constraints and is therefore more flexible in dealing with different shapes. This shape constraint is then incorporated into the variational level set framework. A convex relaxation is applied to obtain a convex model which can be efficiently solved by a primal-dual hybrid gradient algorithm. RESULTS: The proposed algorithm is tested on 60 CT images with the nodular type of hepatocellular carcinoma (HCC), 100 ultrasound kidney images, 20 prostate MR images, 20 lumbar CT images and 100 transrectal ultrasound prostate images. The algorithm's performance is evaluated using a number of metrics by comparison with expert delineations. The validation results show that, for the five datasets mentioned previously, the average DSCs of the proposed algorithm are 95.6% ± 1.4%, 94.3% ± 3.1%, 91.3% ± 3.8%, 92.7% ± 1.5% and 94.4% ± 2.2%, respectively. Both quantitative and qualitative evaluation confirm that the proposed method provides more accurate segmentation than four state-of-the-art methods. CONCLUSION: The proposed point distance shape constraint segmentation model can accurately segment organs and lesions with a variety of shapes in medical images.


Subject(s)
Algorithms, Image Processing, Computer-Assisted/methods, Kidney/diagnostic imaging, Tomography, X-Ray Computed, Ultrasonography
13.
Comput Methods Programs Biomed ; 156: 85-95, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29428079

ABSTRACT

BACKGROUND AND OBJECTIVES: The segmentation of muscle and bone structures in CT is of interest to physicians and surgeons for surgical planning, disease diagnosis and/or the analysis of fractures or bone/muscle densities. Recently, the issue has been addressed in many research works. However, most studies have focused on only one of the two tissues and on the segmentation of one particular bone or muscle. This work addresses the segmentation of muscle and bone structures in 3D CT volumes. METHODS: The proposed bone and muscle segmentation algorithm is based on a three-label convex relaxation approach. The main novelty is that the proposed energy function to be minimized includes distances to histogram models of bone and muscle structures combined with gray-level information. RESULTS: 27 CT volumes corresponding to different sections from 20 different patients were manually segmented and used as ground truth for training and evaluation purposes. Different metrics (Dice index, Jaccard index, sensitivity, specificity, positive predictive value, accuracy and computational cost) were computed and compared with those of some state-of-the-art algorithms. The proposed algorithm outperformed the other methods, obtaining a Dice coefficient of 0.88 ± 0.14, a Jaccard index of 0.80 ± 0.19, a sensitivity of 0.94 ± 0.15 and a specificity of 0.95 ± 0.04 for bone segmentation, and 0.78 ± 0.12, 0.65 ± 0.16, 0.94 ± 0.04 and 0.95 ± 0.04 for muscle tissue. CONCLUSIONS: A fast, generalized method has been presented for segmenting muscle and bone structures in 3D CT volumes using a multilabel continuous convex relaxation approach. The results obtained show that the proposed algorithm outperforms some state-of-the-art methods. The algorithm will help physicians and surgeons in surgical planning, disease diagnosis and/or the analysis of fractures or bone/muscle densities.


Subject(s)
Bones/diagnostic imaging, Fractures, Bone/diagnostic imaging, Muscles/diagnostic imaging, Adolescent, Adult, Aged, Aged, 80 and over, Algorithms, Female, Humans, Image Processing, Computer-Assisted, Imaging, Three-Dimensional, Male, Middle Aged, Pattern Recognition, Automated, Reproducibility of Results, Sensitivity and Specificity, Tomography, X-Ray Computed, Treatment Outcome, Young Adult
14.
Int J Comput Assist Radiol Surg ; 12(12): 2055-2067, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28188486

ABSTRACT

PURPOSE: In 2005, an application for surgical planning called AYRA was designed and validated by different surgeons and engineers at the Virgen del Rocío University Hospital, Seville (Spain). However, the segmentation methods included in AYRA and in other surgical planning applications are not able to accurately segment tumors that appear in soft tissue. The aims of this paper are to offer an exhaustive validation of an accurate semiautomatic segmentation tool for delineating retroperitoneal tumors in CT images and to aid physicians in planning both radiotherapy doses and surgery. METHODS: A panel of 6 experts manually segmented 11 cases of tumors, and the segmentation results were compared exhaustively with: the results provided by a surgical planning tool (AYRA), the segmentations obtained using a radiotherapy treatment planning system (Pinnacle), the segmentation results obtained by a group of experts in the delineation of retroperitoneal tumors, and the segmentation results of the algorithm under validation. RESULTS: 11 cases of retroperitoneal tumors were tested. The proposed algorithm provided accurate segmentations of the tumor. Moreover, the algorithm requires minimal computational time, on average 90.5% less than that required to manually contour the same tumor. CONCLUSION: A method for the semiautomatic delineation of retroperitoneal tumors has been validated in depth. AYRA, as well as other surgical and radiotherapy planning tools, could be greatly improved by including this algorithm.


Subject(s)
Algorithms, Imaging, Three-Dimensional/methods, Radiotherapy Planning, Computer-Assisted/methods, Retroperitoneal Neoplasms/diagnosis, Adolescent, Adult, Humans, Male, Retroperitoneal Neoplasms/therapy, Tomography, X-Ray Computed, Young Adult
15.
Med Biol Eng Comput ; 55(1): 1-15, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27099157

ABSTRACT

An innovative algorithm has been developed for the segmentation of retroperitoneal tumors in 3D radiological images. The algorithm makes it possible for radiation oncologists and surgeons to semiautomatically select tumors for possible future radiation treatment and surgery. It is based on continuous convex relaxation methodology, the main novelty being the introduction of the accumulated gradient distance, through which both intensity and gradient information are incorporated into the segmentation process. The algorithm was used to segment 26 CT image volumes, and the results were compared with manual contourings of the same tumors. The proposed algorithm achieved 90% sensitivity, 100% specificity and 84% positive predictive value, with a mean distance to the closest point of 3.20 pixels. The algorithm's dependence on the initial manual contour was also analyzed, with results showing that the algorithm substantially reduces the variability of the manual segmentations carried out by different specialists. The algorithm was also compared with four benchmark algorithms (thresholding, edge-based level set, region-based level set and continuous max-flow with two labels). To the best of our knowledge, this is the first time the segmentation of retroperitoneal tumors for radiotherapy planning has been addressed.


Subject(s)
Imaging, Three-Dimensional, Radiotherapy Planning, Computer-Assisted, Retroperitoneal Neoplasms/diagnostic imaging, Retroperitoneal Neoplasms/radiotherapy, Adolescent, Adult, Algorithms, Female, Humans, Linear Models, Male, Observer Variation, Young Adult
16.
IEEE J Sel Top Signal Process ; 10(4): 672-687, 2016 Jun.
Article in English | MEDLINE | ID: mdl-28450978

ABSTRACT

We present a natural generalization of the recent low-rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such a decomposition is well motivated in practice, as data matrices often exhibit local correlations at multiple scales. Concretely, we propose a multi-scale low-rank model that represents a data matrix as a sum of block-wise low-rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low-rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low-rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low-rank decomposition provides a more intuitive decomposition than conventional low-rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging, and collaborative filtering exploiting age information.

17.
Proc IEEE Int Symp Biomed Imaging ; 2015: 1048-1052, 2015 Apr.
Article in English | MEDLINE | ID: mdl-26677402

ABSTRACT

In single particle reconstruction (SPR) from cryo-electron microscopy (cryo-EM), the 3D structure of a molecule needs to be determined from its 2D projection images taken at unknown viewing directions. Zvi Kam showed as early as 1980 that the autocorrelation function of the 3D molecule over the rotation group SO(3) can be estimated from 2D projection images whose viewing directions are uniformly distributed over the sphere. The autocorrelation function determines the expansion coefficients of the 3D molecule in spherical harmonics up to an orthogonal matrix of size (2l+1) × (2l+1) for each l = 0, 1, 2, …. In this paper we show how techniques for solving the phase retrieval problem in X-ray crystallography can be modified for the cryo-EM setup to retrieve the missing orthogonal matrices. Specifically, we present two new approaches, which we term Orthogonal Extension and Orthogonal Replacement, in which the main algorithmic components are the singular value decomposition and semidefinite programming. We demonstrate the utility of these approaches through numerical experiments on simulated data.
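The SVD component of such approaches reduces to the orthogonal Procrustes problem: the orthogonal O minimizing ‖A − BO‖_F is UVᵀ from the SVD of BᵀA. A minimal sketch (a generic illustration of the subroutine, not the paper's full pipeline):

```python
import numpy as np

def procrustes(A, B):
    # Orthogonal Procrustes: argmin over orthogonal O of ||A - B @ O||_F,
    # solved in closed form from the SVD of B^T A.
    U, _, Vt = np.linalg.svd(B.T @ A)
    return U @ Vt

rng = np.random.default_rng(0)
B = rng.normal(size=(20, 5))
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # ground-truth orthogonal matrix
A = B @ Q                                      # noiseless observation
O = procrustes(A, B)
```

In the noiseless full-rank case the recovery is exact; with noise, the same formula gives the best orthogonal fit in Frobenius norm.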

18.
Med Image Anal ; 26(1): 120-32, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26387053

ABSTRACT

Three-dimensional (3D) measurements of peripheral arterial disease (PAD) plaque burden extracted from fast black-blood magnetic resonance (MR) images have been shown to be more predictive of clinical outcomes than PAD stenosis measurements. To this end, accurate segmentation of the femoral artery lumen and outer wall is required for generating volumetric measurements of PAD plaque burden. Here, we propose a semi-automated algorithm to jointly segment the femoral artery lumen and outer wall surfaces from 3D black-blood MR images, which are reoriented and reconstructed along the medial axis of the femoral artery to obtain improved spatial coherence between slices of the long, thin femoral artery and to reduce computation time. The developed segmentation algorithm enforces two priors in a global optimization manner: the spatial consistency between adjacent 2D slices and the anatomical region order between the femoral artery lumen and outer wall surfaces. The formulated combinatorial optimization problem for segmentation is solved globally and exactly by means of convex relaxation using a coupled continuous max-flow (CCMF) model, which is a dual formulation of the convex relaxed optimization problem. In addition, the CCMF model directly yields an efficient duality-based algorithm based on the modern augmented-multiplier optimization scheme, which has been implemented on a GPU for fast computation. The computed segmentations from the developed algorithm were compared to manual delineations from experts using 20 black-blood MR images. The developed algorithm yielded both high accuracy (Dice similarity coefficients ≥ 87% for both the lumen and outer wall surfaces) and high reproducibility (intra-class correlation coefficient of 0.95 for generating vessel wall area), while outperforming the state-of-the-art method in terms of computational time by a factor of ≈ 20.


Subject(s)
Femoral Artery/pathology, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Magnetic Resonance Angiography/methods, Pattern Recognition, Automated/methods, Peripheral Arterial Disease/pathology, Algorithms, Humans, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity, Subtraction Technique
19.
Proc Natl Acad Sci U S A ; 112(10): 2942-7, 2015 Mar 10.
Article in English | MEDLINE | ID: mdl-25713342

ABSTRACT

We consider the problem of exact and inexact matching of weighted undirected graphs, in which a bijective correspondence is sought to minimize a quadratic weight disagreement. This computationally challenging problem is often relaxed as a convex quadratic program, in which the space of permutations is replaced by the space of doubly stochastic matrices. However, the applicability of such a relaxation is poorly understood. We define a broad class of friendly graphs characterized by an easily verifiable spectral property. We prove that for friendly graphs, the convex relaxation is guaranteed to find the exact isomorphism or certify its nonexistence. This result is further extended to approximately isomorphic graphs, for which we develop an explicit bound on the amount of weight disagreement under which the relaxation is guaranteed to find the globally optimal approximate isomorphism. We also show that in many cases, the graph matching problem can be further harmlessly relaxed to a convex quadratic program with only n separable linear equality constraints, which is substantially more efficient than the standard relaxation involving n² equality and n² inequality constraints. Finally, we show that our results remain valid for unfriendly graphs if additional information in the form of seeds or attributes is allowed, with the latter satisfying an easy-to-verify spectral characteristic.
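The friendliness condition (simple spectrum and no eigenvector orthogonal to the all-ones vector) can be exploited directly: after fixing eigenvector signs with the all-ones vector, the sign-fixed eigenbases of two isomorphic friendly graphs differ exactly by the permutation. A numerical sketch of this spectral recovery (illustrating the condition, not the convex QP itself; generic random weights are friendly almost surely):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = rng.random((n, n)); A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)                 # generic symmetric weighted graph
p = rng.permutation(n)
B = A[np.ix_(p, p)]                      # isomorphic copy: B = P A P^T, P = I[p]

def signed_eigvecs(M):
    # "Friendly" assumption: simple spectrum, so eigenvectors are unique up to
    # sign, and none is orthogonal to the all-ones vector, so the entry sum
    # fixes the sign unambiguously.
    _, U = np.linalg.eigh(M)
    return U * np.sign(U.sum(axis=0))

UA, UB = signed_eigvecs(A), signed_eigvecs(B)
P_rec = UA @ UB.T                        # exactly a permutation matrix here
perm = np.argmax(P_rec, axis=1)          # recovered node correspondence
```

Since UB equals the permuted UA entry for entry, UA UBᵀ collapses to the permutation matrix, with no rounding needed; this is the structural fact the convex relaxation's exactness proof rests on.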

20.
Image Vis Comput ; 33: 1-14, 2015 Jan 01.
Article in English | MEDLINE | ID: mdl-25558120

ABSTRACT

Accurate reconstruction of 3D geometric shape from a set of calibrated 2D multiview images is an active yet challenging task in computer vision. Existing multiview stereo methods usually perform poorly in recovering deeply concave and thinly protruding structures, and suffer from several common problems such as slow convergence, sensitivity to initial conditions, and high memory requirements. To address these issues, we propose a two-phase optimization method for generalized reprojection error minimization (TwGREM), where a generalized framework of reprojection error is proposed to integrate stereo and silhouette cues into a unified energy function. To minimize the function, we first introduce a convex relaxation on 3D volumetric grids which can be efficiently solved using variable splitting and Chambolle projection. The resulting surface is then parameterized as a triangle mesh and refined using surface evolution to obtain a high-quality 3D reconstruction. Our comparative experiments with several state-of-the-art methods show that the performance of TwGREM-based 3D reconstruction is among the highest with respect to accuracy and efficiency, especially for data with smooth texture and sparsely sampled viewpoints.
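Chambolle projection is most easily seen on the ROF model min_u ½‖u − f‖² + λ TV(u): iterate a projected fixed point on the dual vector field p and read off u = f − λ div p. A 2D sketch of that subroutine (parameters and the denoising test problem are illustrative; the paper applies the projection to a volumetric reprojection-error energy instead):

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary (zero last row/column).
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad above, so <grad u, p> = -<u, div p>.
    d = np.zeros_like(px)
    d[0, :] += px[0, :]; d[1:-1, :] += px[1:-1, :] - px[:-2, :]; d[-1, :] -= px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def tv_denoise(f, lam=0.3, tau=0.125, iters=300):
    # Chambolle's projection algorithm for min_u 0.5*||u - f||^2 + lam*TV(u);
    # tau <= 1/8 guarantees convergence of the dual iteration.
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1 + tau * norm)
        py = (py + tau * gy) / (1 + tau * norm)
    return f - lam * div(px, py)
```

The dual field p is kept inside the unit ball pointwise by the normalized update, which is exactly the "projection" the algorithm is named for.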
