Results 1 - 6 of 6
1.
IEEE Trans Neural Netw Learn Syst ; 24(3): 448-59, 2013 Mar.
Article in English | MEDLINE | ID: mdl-24808317

ABSTRACT

In this paper, we consider using the profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influence of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that the degradation in portfolio performance due to inexact estimation of the N means and N(N - 1)/2 correlations is proportional to N/L; estimation of the N variances, however, does not worsen the result. To reduce harmful sample size/dimensionality effects, we cluster the N time series and split them into a small number of blocks, each composed of mutually correlated ATSs. Each block generates an expert trading agent based on the nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. At the output of the portfolio management system, a fusion agent based on the regularized mean-variance framework is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with real financial data (2003-2012) confirm the effectiveness of the suggested approach.


Subject(s)
Artificial Intelligence , Electronic Data Processing/methods , Models, Economic , Neural Networks, Computer
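The block-of-correlated-systems idea and the regularized mean-variance fusion described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block assignment, the shrinkage parameter `lam`, and the synthetic return data are all assumptions.

```python
import numpy as np

def one_over_n_experts(returns, blocks):
    """Aggregate ATS return histories into expert agents via the
    nontrainable 1/N rule: each block of mutually correlated ATSs
    is averaged with equal weights."""
    return np.column_stack([returns[:, idx].mean(axis=1) for idx in blocks])

def regularized_mean_variance_weights(expert_returns, lam=0.1):
    """Regularized mean-variance fusion: shrink the sample covariance
    toward a scaled identity before solving w proportional to
    C^{-1} mu, then normalize the weights."""
    mu = expert_returns.mean(axis=0)
    cov = np.cov(expert_returns, rowvar=False)
    k = cov.shape[0]
    cov_reg = (1 - lam) * cov + lam * (np.trace(cov) / k) * np.eye(k)
    w = np.linalg.solve(cov_reg, mu)
    return w / np.abs(w).sum()

rng = np.random.default_rng(0)
L, N = 250, 6                        # sample size L, dimensionality N
rets = rng.normal(0.001, 0.01, size=(L, N))
blocks = [[0, 1, 2], [3, 4], [5]]    # hypothetical clustering result
experts = one_over_n_experts(rets, blocks)
w = regularized_mean_variance_weights(experts)
```

Shrinking the covariance toward the identity is one standard way to keep the N/L estimation error from dominating the weight solution.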
2.
IEEE Trans Pattern Anal Mach Intell ; 32(7): 1324-8, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20489234

ABSTRACT

A novel loss function for training a net of K single-layer perceptrons (KSLPs) is suggested, into which a pairwise misclassification cost matrix can be incorporated directly. The complexity of the network remains the same, and computing the gradient of the loss function requires no additional calculations. Minimizing the loss requires a smaller number of training epochs. The efficacy of cost-sensitive methods depends on the cost matrix, the overlap of the pattern classes, and the sample sizes. Experiments with real-world pattern recognition (PR) tasks show that the novel loss function usually outperforms three benchmark methods.
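A minimal sketch of how a pairwise misclassification cost matrix can enter a loss directly, here written as the expected cost under a softmax posterior over the K perceptron outputs. The softmax form and the example costs are assumptions for illustration, not the paper's exact loss.

```python
import numpy as np

def cost_sensitive_loss(logits, y, C):
    """Expected misclassification cost under the softmax posterior:
    mean over samples of sum_j C[y, j] * p_j.  With 0/1 costs
    (C[i, i] = 0, C[i, j] = 1) this reduces to the expected error
    rate 1 - p_y, so the gradient needs no extra machinery."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float((C[y] * p).sum(axis=1).mean())

K = 3
C01 = np.ones((K, K)) - np.eye(K)          # plain 0/1 cost matrix
C_asym = C01.copy()
C_asym[2, 0] = 10.0                        # calling class 2 "class 0" is costly

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])
y = np.array([0, 2])
base = cost_sensitive_loss(logits, y, C01)
asym = cost_sensitive_loss(logits, y, C_asym)
```

Raising a single off-diagonal cost changes the loss (and hence the gradient) only through the weighting of the existing posteriors, which is why the network's complexity is unaffected.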

3.
IEEE Trans Neural Netw ; 21(5): 784-95, 2010 May.
Article in English | MEDLINE | ID: mdl-20215067

ABSTRACT

The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce the classification error, it is necessary to: 1) abandon the traditional cost function; 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping; and 3) fuse their decisions properly. To obtain better classification in unbalanced training-set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance when sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and often outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for mitigating finite-sample problems in moderate-dimensional PR tasks.


Subject(s)
Algorithms , Artificial Intelligence , Neural Networks, Computer , Databases, Factual/statistics & numerical data , Generalization, Psychological , Humans , Nonlinear Dynamics
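The colored noise injection mentioned at the end of the abstract can be sketched roughly as follows: pseudovalidation points are generated by perturbing each sample along the direction of a nearest neighbour, so the injected noise follows the local data structure rather than being spherical. The neighbour rule and the `scale` parameter are assumptions for illustration.

```python
import numpy as np

def colored_noise_injection(X, n_copies=2, scale=0.3, rng=None):
    """Generate a pseudovalidation set: each synthetic point is an
    original point perturbed along the direction of its nearest
    neighbour, so the noise is "colored" by the local data geometry
    instead of being isotropic."""
    rng = rng if rng is not None else np.random.default_rng(0)
    out = []
    for _ in range(n_copies):
        for i, x in enumerate(X):
            d = np.linalg.norm(X - x, axis=1)
            d[i] = np.inf                         # exclude the point itself
            j = int(np.argmin(d))                 # nearest neighbour
            out.append(x + scale * rng.normal() * (X[j] - x))
    return np.asarray(out)

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 2))
pseudo = colored_noise_injection(X, n_copies=2, scale=0.3, rng=rng)
```

The pseudovalidation set can then stand in for a held-out set when choosing the optimal stopping moment of SLP training, which is the role the abstract assigns to it.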
4.
Neural Netw ; 19(10): 1506-16, 2006 Dec.
Article in English | MEDLINE | ID: mdl-16580815

ABSTRACT

A wide selection of standard statistical pattern classification algorithms can be applied as trainable fusion rules when designing neural network ensembles. The focus of the present two-part paper is finite sample effects: the complexity of base classifiers and fusion rules; the type of outputs the experts provide to the fusion rule; non-linearity of the fusion rule; degradation of the experts and the fusion rule due to lack of information in the design set; the adaptation of base classifiers to training-set size, etc. In this first part, we weigh the arguments for continuous versus categorical outputs of base classifiers and conclude: if one succeeds in having a small number of expert networks working perfectly in different parts of the input feature space, then crisp outputs may be preferable to continuous outputs. We then contrast fixed fusion rules with trainable ones and demonstrate situations where weighted-average fusion can outperform simple-average fusion. We present a review of statistical classification rules, paying special attention to those linear and non-linear rules that are rarely employed but, in our opinion, could be useful in neural network ensembles. We consider ideal and sample-based oracle decision rules and illustrate characteristic features of diverse fusion rules on an artificial two-dimensional (2D) example where the base classifiers perform well in different regions of the input feature space.


Subject(s)
Fuzzy Logic , Learning/physiology , Neural Networks, Computer , Neurons/physiology , Animals , Computer Simulation , Humans , Models, Statistical , Nonlinear Dynamics , Sample Size
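The claim that weighted-average fusion can outperform simple averaging when experts differ in reliability can be illustrated with a small synthetic experiment. The expert noise levels, the least-squares weight estimation, and the 0.5 decision threshold are all assumptions of this sketch, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
y = rng.integers(0, 2, n).astype(float)

# Three "expert" networks of unequal reliability: continuous outputs
# are the true label plus noise with expert-specific standard deviation.
noise_sd = np.array([0.3, 0.6, 1.2])
scores = y[:, None] + rng.normal(0.0, noise_sd, (n, 3))

# Fixed rule: simple average of the continuous expert outputs.
simple = scores.mean(axis=1)

# Trainable rule: least-squares weights (plus bias) fit on a design set,
# evaluated on the held-out half.
tr, te = slice(0, 1000), slice(1000, None)
A = np.column_stack([scores[tr], np.ones(1000)])
w, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
weighted = np.column_stack([scores[te], np.ones(1000)]) @ w

err_simple = float(np.mean((simple[te] > 0.5) != y[te]))
err_weighted = float(np.mean((weighted > 0.5) != y[te]))
```

The trained weights down-weight the noisiest expert, which is exactly the situation where the abstract argues weighted-average fusion beats the simple average.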
5.
Neural Netw ; 19(10): 1517-27, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17243204

ABSTRACT

A thorough theoretical analysis of the small-sample properties of trainable fusion rules is performed to determine in which situations neural network ensembles can improve or degrade classification results. We consider small-sample effects, specific to multiple-classifier system design, in the two-category case for two important fusion rules: (1) the linear weighted average (weighted voting), realized either by the standard Fisher classifier or by the single-layer perceptron, and (2) the non-linear Behavior-Knowledge-Space (BKS) method. The small-sample effects include: (i) training bias, i.e. the influence of learning-set size on the generalization error of the base experts or of the fusion rule; (ii) optimistically biased outputs of the experts (the self-boasting effect); and (iii) the impact of sample size on determining the optimal complexity of the fusion rule. Correction terms developed to reduce the self-boasting effect are studied. It is shown that small learning sets increase the classification error of the expert classifiers and distort the correlation structure between their outputs. If the learning sets used to develop the expert classifiers are too small, non-trainable fusion rules can outperform more sophisticated trainable ones. A practical technique for combating sample-size problems is noise injection, which reduces the fusion rule's complexity and diminishes the experts' boasting bias.


Subject(s)
Fuzzy Logic , Learning/physiology , Neural Networks, Computer , Neurons/physiology , Computer Simulation , Humans , Models, Statistical , Sample Size
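A minimal sketch of the non-linear Behavior-Knowledge-Space (BKS) rule analyzed above: a lookup table mapping each combination of the experts' crisp decisions to the most frequent true class seen in training. The fallback class for unseen combinations is an assumption here; such unseen cells are precisely where the rule's small-sample problems show up.

```python
from collections import Counter, defaultdict

def bks_train(expert_labels, y):
    """Build the BKS table: for every observed combination of expert
    decisions, remember the most frequent true class."""
    table = defaultdict(Counter)
    for row, t in zip(expert_labels, y):
        table[tuple(row)][t] += 1
    return {k: c.most_common(1)[0][0] for k, c in table.items()}

def bks_predict(table, expert_labels, fallback=0):
    """Combinations never seen during training fall back to a default
    class -- one symptom of the rule's sensitivity to learning-set size."""
    return [table.get(tuple(r), fallback) for r in expert_labels]

# Three experts, two classes, a deliberately tiny training set.
train_dec = [(0, 0, 1), (0, 0, 1), (1, 1, 1), (1, 0, 1)]
train_y = [0, 0, 1, 1]
table = bks_train(train_dec, train_y)
preds = bks_predict(table, [(0, 0, 1), (1, 1, 1), (0, 1, 0)])
```

With K experts and c classes the table has up to c^K cells, so the number of cells that receive few or no training examples grows quickly, which is why small learning sets favor simpler, non-trainable fusion rules.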
6.
Comput Biol Med ; 35(1): 67-83, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15567353

ABSTRACT

The classification problem of respiratory sound signals is addressed by taking into account their cyclic nature, and a novel hierarchical decision fusion scheme based on the cooperation of classifiers is developed. Respiratory signals from three different classes are partitioned into segments, which are later joined to form six different phases of the respiration cycle. Multilayer perceptron classifiers classify the parameterized segments from each phase, and the decision vectors obtained from the different phases are combined by a nonlinear decision combination function to form a final decision for each subject. Furthermore, a new regularization scheme is applied to the data to stabilize training and consultation.


Asunto(s)
Auscultación/clasificación , Redes Neurales de la Computación , Ruidos Respiratorios/clasificación , Algoritmos , Técnicas de Apoyo para la Decisión , Sistemas Especialistas , Humanos , Enfermedades Pulmonares/fisiopatología , Modelos Biológicos , Dinámicas no Lineales , Enfermedad Pulmonar Obstructiva Crónica/fisiopatología , Respiración , Procesamiento de Señales Asistido por Computador
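The hierarchy described above — per-phase decision vectors combined by a nonlinear function into one subject-level decision — might be sketched as follows. The log-space product rule and the example posteriors are assumptions for illustration; the paper's actual combination function is not specified in the abstract.

```python
import numpy as np

def fuse_phases(phase_decisions):
    """Hierarchical decision fusion sketch: combine per-phase decision
    vectors (class posteriors from the six respiration phases) with a
    nonlinear product rule computed in log space, then renormalize to
    obtain a subject-level posterior."""
    log_post = np.log(np.clip(phase_decisions, 1e-12, None)).sum(axis=0)
    p = np.exp(log_post - log_post.max())   # shift for numerical stability
    return p / p.sum()

# Hypothetical posteriors from 6 phase classifiers over 3 sound classes.
phase_decisions = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
    [0.4, 0.4, 0.2],
    [0.6, 0.3, 0.1],
    [0.5, 0.3, 0.2],
])
subject_posterior = fuse_phases(phase_decisions)
subject_class = int(np.argmax(subject_posterior))
```

The product rule sharpens agreement across phases: a class that is merely plausible in every phase can still dominate a class that wins one phase but is ruled out elsewhere.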