Results 1 - 10 of 10
1.
Evol Comput ; : 1-25, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38713737

ABSTRACT

Evolutionary Computation (EC) often discards learned knowledge, as it is reset for each new problem it addresses. Humans, by contrast, can learn from small-scale problems, retain this knowledge (plus functionality), and successfully reuse it in larger-scale and/or related problems. Linking solutions to problems has been achieved through layered learning, where an experimenter sets a series of simpler related problems that build toward a more complex task. Recent work on Learning Classifier Systems (LCSs) has shown that knowledge reuse through the adoption of Code Fragments, GP-like tree-based programs, is plausible. However, random reuse is inefficient. Thus, the research question is how an LCS can adopt a layered-learning framework such that increasingly complex problems can be solved efficiently. An LCS (named XCSCF*) has been developed to include the required base axioms necessary for learning, refined methods for transfer learning, and learning recast as a decomposition into a series of subordinate problems. These subordinate problems can be set as a curriculum by a teacher, but this does not guarantee that an agent can learn from them, especially if it extracts only over-fitted knowledge of each problem rather than the underlying scalable patterns and functions. Results show that, starting from a conventional tabula rasa with only a vague notion of which subordinate problems might be relevant, XCSCF* captures the general logic behind the tested domains and can therefore solve any n-bit Multiplexer, n-bit Carry-one, n-bit Majority-on, and n-bit Even-parity problem. This work demonstrates a step towards continual learning, as learned knowledge is effectively reused in subsequent problems.
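The Boolean domains named above have simple reference definitions. As an illustration (these are oracles for the target functions, not the XCSCF* system itself), the n-bit Multiplexer, Even-parity, and Majority-on tasks can be written as:

```python
def multiplexer(bits):
    # k address bits select one of 2**k data bits; valid lengths satisfy n = k + 2**k
    k = 0
    while k + 2 ** k < len(bits):
        k += 1
    assert k + 2 ** k == len(bits), "invalid multiplexer length"
    address = int("".join(map(str, bits[:k])), 2)
    return bits[k + address]

def even_parity(bits):
    # true iff the number of 1s is even
    return sum(bits) % 2 == 0

def majority_on(bits):
    # true iff more than half the bits are 1
    return sum(bits) > len(bits) // 2
```

A general solver "solves any n-bit problem" when it matches these oracles for every input length, which is what makes the benchmarks a useful test of scalable knowledge reuse.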

2.
Psychophysiology ; 61(2): e14446, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37724831

ABSTRACT

This article describes a new database (named "EMAP") of 145 individuals' reactions to emotion-provoking film clips. It includes electroencephalographic and peripheral physiological data, as well as moment-by-moment ratings of emotional arousal in addition to overall and categorical ratings. The resulting variation in continuous ratings reflects inter-individual variability in emotional responding. To make use of the moment-by-moment data for both ratings and neurophysiological activity, we used a machine learning approach. The results show that algorithms based on temporal information improve predictions compared with algorithms without a temporal component, both within- and across-participant modeling. Although predicting moment-by-moment changes in emotional experience from neurophysiological activity was more difficult than using aggregated experience ratings, selecting a subset of predictors improved the prediction. This also showed that not single features alone, for example, skin conductance, but rather a range of neurophysiological parameters explain fluctuations in subjective experience.
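One common way to let a model exploit moment-by-moment structure is to stack each sample with its recent history before fitting. This is a generic sketch of that idea, not the EMAP paper's actual pipeline:

```python
import numpy as np

def lagged_features(signal, n_lags):
    """Stack each sample with its n_lags predecessors so that even a
    non-temporal model can see recent history. A common generic trick
    for adding temporal context; not the paper's exact method."""
    X = np.column_stack([np.roll(signal, lag) for lag in range(n_lags + 1)])
    return X[n_lags:]  # drop leading rows whose lags wrapped around
```

Each output row is `[x[t], x[t-1], ..., x[t-n_lags]]`, so a downstream regressor sees a short window of the signal rather than a single instant.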


Subject(s)
Emotions, Psychophysiology, Humans, Emotions/physiology, Arousal/physiology, Electroencephalography, Algorithms
3.
Psychophysiology ; 60(9): e14303, 2023 09.
Article in English | MEDLINE | ID: mdl-37052214

ABSTRACT

Autonomic nervous system (ANS) responses such as heart rate (HR) and galvanic skin responses (GSR) have been linked with cerebral activity in the context of emotion. Although much work has focused on the summative effect of emotions on ANS responses, their interaction in a continuously changing context is less clear. Here, we used a multimodal data set of human affective states, which includes electroencephalogram (EEG) and peripheral physiological signals of participants' moment-by-moment reactions to emotion-provoking video clips, and modeled HR and GSR changes using machine learning techniques, specifically long short-term memory (LSTM), decision tree (DT), and linear regression (LR) models. We found that the LSTM achieved a significantly lower error rate than DT and LR due to its inherent ability to handle sequential data. Importantly, the prediction error was significantly reduced for DT and LR when they were used together with particle swarm optimization to select relevant/important features for these algorithms. Unlike summative analysis, and contrary to expectations, we found a significantly lower error rate when the prediction was made across different participants than within a participant. Moreover, the selected predictive features suggest that the patterns predictive of HR and GSR differed substantially across electrode sites and frequency bands. Overall, these results indicate that specific patterns of cerebral activity track autonomic body responses. Although individual cerebral differences are important, they might not be the only factors influencing moment-by-moment changes in ANS responses.
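The sequential framing an LSTM relies on can be illustrated with a small windowing helper: a continuous recording is sliced into fixed-length input windows, each paired with the target value immediately following the window. This is a generic sketch; the paper's actual preprocessing, window length, and feature set are not reproduced here:

```python
import numpy as np

def make_sequences(signal, target, window):
    """Turn a continuous recording into (window, next-target) pairs,
    the usual input shape for a sequence model such as an LSTM.
    Illustrative only; window length and features are placeholders."""
    X = np.stack([signal[i:i + window] for i in range(len(signal) - window)])
    y = target[window:]  # target value right after each window
    return X, y
```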


Subject(s)
Emotions, Galvanic Skin Response, Humans, Heart Rate/physiology, Emotions/physiology, Arousal/physiology, Electroencephalography/methods
4.
IEEE Trans Cybern ; 53(11): 6761-6775, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35476559

ABSTRACT

Modern classifier systems can effectively classify targets that consist of simple patterns. However, they can fail to detect hierarchical patterns of features that exist in many real-world problems, such as understanding speech or recognizing object ontologies. Biological nervous systems have the ability to abstract knowledge from simple and small-scale problems in order to then apply it to resolve more complex problems in similar and related domains. It is thought that lateral asymmetry of biological brains allows modular learning to occur at different levels of abstraction, which can then be transferred between tasks. This work develops a novel evolutionary machine-learning (EML) system that incorporates lateralization and modular learning at different levels of abstraction. The results of analyzable Boolean tasks show that the lateralized system has the ability to encapsulate underlying knowledge patterns in the form of building blocks of knowledge (BBK). Lateralized abstraction transforms complex problems into simple ones by reusing general patterns (e.g., any parity problem becomes a sequence of the 2-bit parity problem). By enabling abstraction in evolutionary computation, the lateralized system is able to identify complex patterns (e.g., in hierarchical multiplexer (HMux) problems) better than existing systems.
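The parity example above is easy to make concrete: any n-bit even-parity problem collapses to a chain of 2-bit parity applications. A hand-written sketch of that decomposition (illustrating the reusable building block, not the evolved lateralized system):

```python
from functools import reduce

def parity2(a, b):
    # the 2-bit parity building block: 1 iff the inputs differ (XOR)
    return a ^ b

def n_bit_even_parity(bits):
    """Even parity for any n, expressed purely as repeated use of the
    2-bit block, i.e. the general pattern a scalable learner can reuse."""
    return reduce(parity2, bits) == 0
```

Because the same 2-bit block is applied regardless of n, a system that captures it once never needs to relearn parity at a new problem size.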


Subject(s)
Brain, Machine Learning
5.
IEEE Trans Cybern ; 52(8): 7362-7376, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33400672

ABSTRACT

Multipoint dynamic aggregation is a meaningful optimization problem due to its important real-world applications, such as post-disaster relief, medical resource scheduling, and bushfire elimination. The problem aims to design the optimal plan for a set of robots to execute geographically distributed tasks. Unlike the majority of scheduling and routing problems, the tasks in this problem can be executed by multiple robots collaboratively. Meanwhile, the demand of each task changes over time at an incremental rate and is affected by the abilities of the robots executing it. This poses extra challenges to the problem, as it has to consider complex coupled relationships among robots and tasks. To effectively solve the problem, this article develops a new metaheuristic algorithm, called adaptive coordination ant colony optimization (ACO). We develop a novel coordinated solution construction process using multiple ants and pheromone matrices (each robot/ant forages a path according to its own pheromone matrix) to effectively handle the collaborations between robots. We also propose adaptive heuristic information based on domain knowledge to promote efficiency, a pheromone-based repair mechanism to tackle the tight constraints of the problem, and an elaborate local search to enhance the exploitation ability of the algorithm. The experimental results show that the proposed adaptive coordination ACO significantly outperforms the state-of-the-art methods in terms of both effectiveness and efficiency.
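The coupled demand dynamics can be illustrated with a toy single-task model, assuming demand grows linearly at rate r while assigned robots with combined ability a consume it (an assumption for illustration; the paper's exact demand model is not reproduced here):

```python
def completion_time(initial_demand, growth_rate, ability):
    """Time for robots with combined ability a to finish a task whose
    demand grows at rate r: demand(t) = d0 + r*t - a*t reaches zero at
    t = d0 / (a - r), provided a > r. Illustrative toy model only."""
    if ability <= growth_rate:
        return float("inf")  # demand grows at least as fast as it is consumed
    return initial_demand / (ability - growth_rate)
```

The `inf` branch shows why robot-task assignment is tightly constrained: an under-powered coalition never finishes, so the optimizer must coordinate enough combined ability onto each task.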


Subject(s)
Algorithms, Pheromones
6.
IEEE Trans Cybern ; 52(12): 13521-13535, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34077383

ABSTRACT

The multipoint dynamic aggregation (MPDA) problem for multirobot systems is of great significance for real-world applications such as bushfire elimination. The problem is to design the optimal plan for a set of heterogeneous robots to complete geographically distributed tasks collaboratively. In this article, we consider the dynamic version of the problem, where new tasks keep appearing after the robots have been dispatched from the depot. The dynamic MPDA problem is a complicated optimization problem due to several characteristics, such as the collaboration of robots, the accumulating task demand, the relationships among robots and tasks, and the unpredictable task arrivals. A new model of the problem considering these characteristics is proposed. To solve the problem, we develop a new genetic programming hyper-heuristic (GPHH) method to evolve reactive coordination strategies (RCSs), which can guide the robots to make decisions in real time. The proposed GPHH method contains a newly designed, effective RCS heuristic template to generate the execution plan for the robots according to a GP tree. A new terminal set of features related to both robots and tasks, and a cluster filter that assigns robots to urgent tasks, are designed. The experimental results show that the proposed GPHH significantly outperformed the state-of-the-art methods. Through further analysis, useful insights are discovered, such as how to distribute and coordinate robots to execute different types of tasks.
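A reactive coordination strategy can be sketched as a priority function applied whenever a robot becomes free: score every open task and head to the best one. The `rule` below is hand-written in the shape a GP tree might take; the features it uses (demand, distance) are illustrative and not the paper's terminal set:

```python
def dispatch(robot, tasks, priority):
    """Reactive decision-making: pick the open task with the highest
    priority score. `priority` stands in for an evolved GP tree."""
    return max(tasks, key=lambda t: priority(robot, t))

# a hand-written rule of the general shape GPHH might evolve:
# prefer high-demand tasks, discounted by travel distance
rule = lambda robot, task: task["demand"] / (1.0 + abs(robot["pos"] - task["pos"]))
```

Because the rule is re-evaluated at every decision point, newly arriving tasks are handled naturally, which is what makes the reactive formulation suited to the dynamic problem.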


Subject(s)
Algorithms, Robotics, Robotics/methods
7.
Evol Comput ; 25(2): 173-204, 2017.
Article in English | MEDLINE | ID: mdl-26406166

ABSTRACT

A main research direction in the field of evolutionary machine learning is to develop a scalable classifier system that can solve high-dimensional problems. Recently, work has begun on autonomously reusing learned building blocks of knowledge to scale from low-dimensional problems to high-dimensional ones. An XCS-based classifier system, known as XCSCFC, has been shown to be scalable, through the addition of expression-tree-like code fragments, to a limit beyond standard learning classifier systems. XCSCFC is especially beneficial if the target problem can be divided into a hierarchy of subproblems, each of which is solvable in a bottom-up fashion. However, if the hierarchy of subproblems is too deep, XCSCFC becomes impractical because of the computational time required, and thus eventually hits a limit in problem size. A limitation of this technique is the lack of a cyclic representation, which is inherent in finite state machines (FSMs). However, the evolution of FSMs is a hard task owing to the combinatorially large number of possible states, connections, and interactions. Usually this requires supervised learning to minimize the number of inappropriate FSMs, which for high-dimensional problems necessitates subsampling or incremental testing. To avoid these constraints, this work introduces a state-machine-based encoding scheme into XCS for the first time, termed XCSSMA. The proposed system has been tested on six complex Boolean problem domains: multiplexer, majority-on, carry, even-parity, count-ones, and digital design verification problems. The proposed approach outperforms XCSCFA (an XCS that computes actions) and XCSF (an XCS that computes predictions) in three of the six problem domains, while performance in the others is similar. In addition, XCSSMA evolved, for the first time, compact and human-readable general classifiers (i.e., solving any n-bit problem) for the even-parity and carry problem domains, demonstrating its ability to produce scalable solutions using a cyclic representation.
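The appeal of a cyclic representation is easy to see for even-parity: a two-state FSM accepts inputs of any length n, whereas an acyclic tree must grow with n. A hand-written (not evolved) sketch of such a machine:

```python
def fsm_even_parity(bits):
    """A 2-state finite state machine accepting even parity for any n,
    the kind of compact cyclic solution XCSSMA is reported to evolve
    (this particular FSM is written by hand for illustration)."""
    state = "even"
    transition = {("even", 1): "odd", ("odd", 1): "even",
                  ("even", 0): "even", ("odd", 0): "odd"}
    for bit in bits:
        state = transition[(state, bit)]
    return state == "even"
```

The same two states and four transitions handle 3-bit and 300-bit inputs alike, which is exactly the scalability an acyclic code-fragment representation struggles to express.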


Subject(s)
Algorithms, Machine Learning/trends, Artificial Intelligence, Biological Evolution, Humans, Machine Learning/standards
8.
IEEE Trans Image Process ; 25(9): 4298-4313, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27392354

ABSTRACT

Salient object detection is typically accomplished by combining the outputs of multiple primitive feature detectors (that output feature maps or features). The diversity of images means that different basic features are useful in different contexts, which motivates the use of complementary feature detectors in a general setting. However, naive inclusion of features that are not useful for a particular image leads to a reduction in performance. In this paper, we introduce four novel measures of feature quality and then use those measures to dynamically select useful features for the combination process. The resulting saliency is thereby individually tailored to each image. Using benchmark data sets, we demonstrate the efficacy of our dynamic feature selection system by measuring the performance enhancement over the state-of-the-art models for complementary feature selection and saliency aggregation tasks. We show that a salient object detection technique using our approach outperforms competitive models on the PASCAL VOC 2012 dataset. We find that the most pronounced performance improvements occur in challenging images with cluttered backgrounds, or containing multiple salient objects.
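The combination step can be sketched as a quality-weighted aggregation of feature maps, with low-quality maps gated out so they cannot degrade the result. The quality function and mean-based gate below are placeholders, not the four quality measures proposed in the paper:

```python
import numpy as np

def combine_maps(feature_maps, quality):
    """Aggregate primitive feature maps weighted by a per-map quality
    score, gating out maps that score below the mean (a placeholder
    gate; the highest-scoring map is always kept)."""
    scores = np.array([quality(m) for m in feature_maps])
    keep = scores >= scores.mean()
    w = scores * keep
    return np.tensordot(w / w.sum(), np.stack(feature_maps), axes=1)
```

Because the gate is recomputed per image, the selected subset of detectors, and hence the final saliency map, adapts to each input rather than using one fixed combination.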

9.
Evol Comput ; 22(4): 629-50, 2014.
Article in English | MEDLINE | ID: mdl-24697596

ABSTRACT

Image pattern classification is a challenging task due to the large search space of pixel data. Supervised and subsymbolic approaches have proven accurate in learning a problem's classes. However, in the complex image recognition domain, there is a need to investigate learning techniques that allow humans to interpret the learned rules in order to gain insight into the problem. Learning classifier systems (LCSs) are a machine learning technique that has been minimally explored for image classification. This work has developed the feature pattern classification system (FPCS) framework by adopting Haar-like features from the image recognition domain for feature extraction. The FPCS integrates Haar-like features with XCS, which is an accuracy-based LCS. A major contribution of this work is that the developed framework is capable of producing human-interpretable rules. The FPCS achieved 91 ± 1% accuracy on the unseen test set of the MNIST dataset. In addition, the FPCS is capable of autonomously adjusting the rotation angle in unaligned images. This rotation adjustment raised the accuracy of FPCS to 95%. Although this performance is competitive with equivalent approaches, it was not as accurate as subsymbolic approaches on this dataset. However, the interpretability of the rules produced by FPCS enabled us to identify the distribution of the learned angles (a normal distribution around [Formula: see text]), which would have been very difficult with subsymbolic approaches. The analyzable nature of FPCS is anticipated to be beneficial in domains such as speed sign recognition, where the underlying reasoning and confidence of recognition need to be human interpretable.
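A Haar-like feature is a difference of rectangular region sums, computed in constant time per rectangle from an integral image. A minimal sketch of one two-rectangle template (the general technique, not the FPCS implementation):

```python
import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img[:r+1, :c+1]
    return img.cumsum(0).cumsum(1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(img, r0, c0, h, w):
    """Two-rectangle Haar-like feature: left half minus right half of
    an h-by-w window (one of several standard Haar templates)."""
    ii = integral_image(img)
    half = w // 2
    left = rect_sum(ii, r0, c0, r0 + h, c0 + half)
    right = rect_sum(ii, r0, c0 + half, r0 + h, c0 + w)
    return left - right
```

Rules conditioned on such features are interpretable because each feature corresponds to a visible contrast (an edge or bar) at a known position and scale in the image.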


Subject(s)
Artificial Intelligence, Classification/methods, Computing Methodologies, Image Processing, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Computer Simulation, Humans
10.
IEEE Trans Cybern ; 43(6): 1656-71, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24273143

ABSTRACT

Classification problems often have a large number of features in the data sets, but not all of them are useful for classification. Irrelevant and redundant features may even reduce the performance. Feature selection aims to choose a small number of relevant features to achieve similar or even better classification performance than using all features. It has two main conflicting objectives of maximizing the classification performance and minimizing the number of features. However, most existing feature selection algorithms treat the task as a single objective problem. This paper presents the first study on multi-objective particle swarm optimization (PSO) for feature selection. The task is to generate a Pareto front of nondominated solutions (feature subsets). We investigate two PSO-based multi-objective feature selection algorithms. The first algorithm introduces the idea of nondominated sorting into PSO to address feature selection problems. The second algorithm applies the ideas of crowding, mutation, and dominance to PSO to search for the Pareto front solutions. The two multi-objective algorithms are compared with two conventional feature selection methods, a single objective feature selection method, a two-stage feature selection algorithm, and three well-known evolutionary multi-objective algorithms on 12 benchmark data sets. The experimental results show that the two PSO-based multi-objective algorithms can automatically evolve a set of nondominated solutions. The first algorithm outperforms the two conventional methods, the single objective method, and the two-stage algorithm. It achieves comparable results with the existing three well-known multi-objective algorithms in most cases. The second algorithm achieves better results than the first algorithm and all other methods mentioned previously.
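The output of such a run is a Pareto front of nondominated feature subsets. A minimal sketch of nondominated filtering for two minimization objectives, here (classification error, feature count); this is an illustrative filter, not either of the paper's PSO algorithms:

```python
def pareto_front(solutions):
    """Return the nondominated subset of (error, n_features) tuples,
    both minimized. A solution is dominated if another is at least as
    good in every objective and strictly better in at least one."""
    front = []
    for s in solutions:
        dominated = any(
            all(o <= p for o, p in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

The front exposes the trade-off directly: a practitioner can pick, say, a slightly higher error in exchange for far fewer features, instead of receiving one compromise solution from a single-objective run.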


Subject(s)
Algorithms, Artificial Intelligence, Decision Support Techniques, Information Storage and Retrieval/methods, Pattern Recognition, Automated/methods