Distributed semi-supervised support vector machines.
Scardapane, Simone; Fierimonte, Roberto; Di Lorenzo, Paolo; Panella, Massimo; Uncini, Aurelio.
Affiliation
  • Scardapane S; Department of Information Engineering, Electronics and Telecommunications (DIET), "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy. Electronic address: simone.scardapane@uniroma1.it.
  • Fierimonte R; Department of Information Engineering, Electronics and Telecommunications (DIET), "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy. Electronic address: robertofierimonte@gmail.com.
  • Di Lorenzo P; Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia, Italy. Electronic address: paolo.dilorenzo@unipg.it.
  • Panella M; Department of Information Engineering, Electronics and Telecommunications (DIET), "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy. Electronic address: massimo.panella@uniroma1.it.
  • Uncini A; Department of Information Engineering, Electronics and Telecommunications (DIET), "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy. Electronic address: aurelio.uncini@uniroma1.it.
Neural Netw; 80: 43-52, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27179615
The semi-supervised support vector machine (S(3)VM) is a well-known algorithm for performing semi-supervised inference under the large-margin principle. In this paper, we are interested in the problem of training an S(3)VM when the labeled and unlabeled samples are distributed over a network of interconnected agents. In particular, the aim is to design a distributed training protocol over networks where communication is restricted to neighboring agents and no coordinating authority is present. Using a standard relaxation of the original S(3)VM, we formulate the training problem as the distributed minimization of a non-convex social cost function. To find a (stationary) solution in a distributed manner, we employ two different strategies: (i) a distributed gradient descent algorithm; (ii) a recently developed framework for In-Network Nonconvex Optimization (NEXT), which is based on successive convexifications of the original problem, interleaved with state diffusion steps. Our experimental results show that the proposed distributed algorithms perform comparably to a centralized implementation, while highlighting the pros and cons of each solution. To date, this is the first work that paves the way toward the broad field of distributed semi-supervised learning over networks.
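
A minimal sketch of the first strategy (distributed gradient descent on a smooth S(3)VM relaxation) is given below. It assumes a squared-hinge loss on the labeled points, the common exponential surrogate exp(-s*(w.x)^2) on the unlabeled points, and a doubly stochastic mixing matrix W consistent with the network graph; every function name and parameter here is an illustrative assumption, not taken from the paper itself.

import numpy as np

def local_gradient(w, Xl, yl, Xu, lam=1e-2, gamma=1e-1, s=3.0):
    # Gradient of one agent's smooth S3VM relaxation (assumed form):
    # labeled term: squared hinge; unlabeled term: exp(-s * (w.x)^2).
    g = lam * w                                   # l2 regularization
    m = yl * (Xl @ w)                             # labeled margins
    v = m < 1.0                                   # margin violators
    if v.any():
        g -= 2.0 * ((yl[v] * (1.0 - m[v]))[:, None] * Xl[v]).sum(0) / len(yl)
    f = Xu @ w                                    # unlabeled decision values
    g -= gamma * 2.0 * s * ((f * np.exp(-s * f**2))[:, None] * Xu).sum(0) / len(f)
    return g

def distributed_s3vm(agents, W, steps=500, eta=0.05):
    # agents: list of (Xl, yl, Xu) triples, one per node.
    # W: doubly stochastic mixing matrix matching the communication graph.
    n, d = len(agents), agents[0][0].shape[1]
    w = np.zeros((n, d))                          # one local model per agent
    for _ in range(steps):
        w = W @ w                                 # diffusion (consensus) step
        for i, (Xl, yl, Xu) in enumerate(agents):
            w[i] = w[i] - eta * local_gradient(w[i], Xl, yl, Xu)
    return w.mean(0)                              # local models agree asymptotically

Under these assumptions, each agent exchanges only its current weight vector with its direct neighbors at every iteration, which matches the communication pattern the abstract describes; no coordinating authority or data sharing is required.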
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Support Vector Machine / Supervised Machine Learning Language: English Journal: Neural Netw Journal subject: NEUROLOGY Year: 2016 Document type: Article Country of publication: United States
