1.
IEEE Trans Neural Netw ; 17(4): 1091-7, 2006 Jul.
Article in English | MEDLINE | ID: mdl-16856672

ABSTRACT

A truly distributed (as opposed to parallelized) support vector machine (SVM) algorithm is presented. Training data are assumed to come from the same distribution and are stored locally at a number of different locations with processing capabilities (nodes). In several examples, exchanging a reasonably small amount of information among nodes yields an SVM solution that is better than one obtained by training classifiers on local data alone, and comparable (though slightly worse) to the centralized solution obtained when all training data are available in one place. We propose and analyze two distributed schemes: a "naïve" distributed chunking approach, in which raw data (support vectors) are communicated, and the more elaborate distributed semiparametric SVM, which further reduces the total amount of information passed between nodes while providing a privacy-preserving mechanism for information sharing. We demonstrate the feasibility of our proposal by evaluating the algorithms on benchmarks with both synthetic and real-world datasets.
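The "naïve" distributed chunking scheme described in the abstract can be sketched as follows: each node trains an SVM on its local data, and only the resulting support vectors (typically a small subset of the raw data) are exchanged and pooled for a final training pass. This is a minimal illustration, not the authors' implementation; the node count, data generator, and use of scikit-learn's `SVC` are assumptions for the example.

```python
# Sketch of "naive" distributed chunking: train locally, exchange only
# support vectors, then retrain on the pooled support vectors.
# All sizes and the two-Gaussian data generator are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_node_data(n=200):
    """Two Gaussian classes; every node draws i.i.d. from the same distribution."""
    X0 = rng.normal(loc=-1.0, scale=1.0, size=(n // 2, 2))
    X1 = rng.normal(loc=+1.0, scale=1.0, size=(n // 2, 2))
    return np.vstack([X0, X1]), np.array([0] * (n // 2) + [1] * (n // 2))

nodes = [make_node_data() for _ in range(4)]

# Step 1: each node trains locally and keeps only its support vectors.
sv_X, sv_y = [], []
for X, y in nodes:
    local = SVC(kernel="rbf", gamma=1.0).fit(X, y)
    sv_X.append(local.support_vectors_)
    sv_y.append(y[local.support_])

# Step 2: support vectors are communicated; one SVM is trained on the pool.
pooled_X = np.vstack(sv_X)
pooled_y = np.concatenate(sv_y)
distributed = SVC(kernel="rbf", gamma=1.0).fit(pooled_X, pooled_y)

# Reference points: centralized (all raw data) and a single node's local model.
all_X = np.vstack([X for X, _ in nodes])
all_y = np.concatenate([y for _, y in nodes])
centralized = SVC(kernel="rbf", gamma=1.0).fit(all_X, all_y)
local_only = SVC(kernel="rbf", gamma=1.0).fit(*nodes[0])

X_test, y_test = make_node_data(1000)
print("local-only accuracy:  %.3f" % local_only.score(X_test, y_test))
print("distributed accuracy: %.3f" % distributed.score(X_test, y_test))
print("centralized accuracy: %.3f" % centralized.score(X_test, y_test))
print("points exchanged: %d of %d raw points" % (len(pooled_X), len(all_X)))
```

The communication saving comes from step 1: only the support vectors cross node boundaries, which is the "reasonably small amount of information" the abstract refers to; the semiparametric variant in the paper reduces this further.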


Subject(s)
Artificial Intelligence , Algorithms , Database Management Systems/statistics & numerical data , Databases, Factual/statistics & numerical data