Results 1 - 20 of 20
1.
Neural Netw ; 180: 106600, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39208463

ABSTRACT

Few-shot learning often suffers from low generalization performance because the model is trained mostly on the base classes. To mitigate this issue, a few-shot learning method with a representative global prototype is proposed in this paper. Specifically, to enhance generalization to novel classes, we propose a strategy for jointly training base and novel classes. This process produces prototypes characterizing the class information, called representative global prototypes. Additionally, to avoid the data imbalance and prototype bias caused by newly added categories with sparse samples, a novel sample synthesis method is proposed for augmenting more representative samples of the novel classes. Finally, representative samples and non-representative samples with high uncertainty are selected to enhance the representational and discriminative abilities of the global prototype. Extensive experiments were conducted on two popular benchmark datasets, and the results show that this method significantly improves the classification ability of few-shot learning tasks and achieves state-of-the-art performance.
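The prototype idea at the core of this abstract can be sketched in a few lines. The snippet below shows only the standard prototypical-network baseline (each class represented by the mean of its support embeddings, queries assigned to the nearest prototype); the paper's representative global prototype refines such means with joint base/novel training and sample synthesis, which this sketch does not reproduce. All names are illustrative, not from the paper.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """One prototype per class: the mean of that class's support embeddings."""
    protos = {}
    for c in np.unique(labels):
        protos[c] = embeddings[labels == c].mean(axis=0)
    return protos

def classify(query, protos):
    """Assign a query embedding to the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda c: np.linalg.norm(query - protos[c]))
```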

2.
Entropy (Basel) ; 24(6)2022 May 30.
Article in English | MEDLINE | ID: mdl-35741495

ABSTRACT

Poker has been considered a challenging problem in both artificial intelligence and game theory because it is characterized by imperfect information and uncertainty, features shared by many realistic problems such as auctioning, pricing, cyber security, and operations. However, it remains unclear whether playing an equilibrium policy in multi-player games is wise, and it is infeasible to theoretically validate whether a policy is optimal. Therefore, designing an effective optimal policy learning method has greater practical significance. This paper proposes an optimal policy learning method for multi-player poker games based on Actor-Critic reinforcement learning. Firstly, this paper builds an Actor network to make decisions with imperfect information and a Critic network to evaluate policies with perfect information. Secondly, this paper proposes two novel multi-player poker policy update methods: the asynchronous policy update algorithm (APU) and the dual-network asynchronous policy update algorithm (Dual-APU), for multi-player multi-policy scenarios and multi-player shared-policy scenarios, respectively. Finally, this paper uses the most popular variant, six-player Texas hold 'em poker, to validate the performance of the proposed optimal policy learning method. The experiments demonstrate that the policies learned by the proposed methods perform well and gain steadily compared with existing approaches. In sum, policy learning methods for imperfect-information games based on Actor-Critic reinforcement learning perform well on poker and can be transferred to other imperfect-information games. Such training with perfect information and testing with imperfect information show an effective and explainable approach to learning an approximately optimal policy.
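The Actor-Critic interplay the abstract refers to can be illustrated on a toy one-step game. This is a generic tabular actor-critic sketch, not the APU/Dual-APU algorithms or their networks; the "game", learning rates, and variable names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(action):
    # Toy one-step "game": action 1 always pays +1, action 0 pays 0.
    return float(action == 1)

logits = np.zeros(2)               # Actor parameters: one preference per action
value = 0.0                        # Critic: estimate of the expected reward
alpha_actor, alpha_critic = 0.5, 0.5

for _ in range(200):
    probs = softmax(logits)
    a = rng.choice(2, p=probs)     # Actor decides (imperfect knowledge of payoffs)
    r = reward(a)
    td_error = r - value           # Critic's advantage signal
    value += alpha_critic * td_error
    grad = -probs                  # gradient of log pi(a) w.r.t. logits ...
    grad[a] += 1.0                 # ... for the sampled action a
    logits = logits + alpha_actor * td_error * grad
```

After training, the policy concentrates on the rewarding action, mirroring how the Critic's evaluation steers the Actor's decisions.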

3.
Entropy (Basel) ; 24(4)2022 Mar 28.
Article in English | MEDLINE | ID: mdl-35455134

ABSTRACT

With the development and application of multi-agent systems, multi-agent cooperation is becoming an important problem in artificial intelligence. Multi-agent reinforcement learning (MARL) is one of the most effective methods for solving multi-agent cooperative tasks. However, the huge sample complexity of traditional reinforcement learning methods results in two kinds of training waste in MARL for cooperative tasks: all homogeneous agents are trained independently and repetitively, and multi-agent systems need training from scratch when a new teammate is added. To tackle these two problems, we propose knowledge reuse methods for MARL. On the one hand, this paper proposes sharing experience and policies among agents to mitigate training waste. On the other hand, this paper proposes reusing the policies learned by the original team to avoid knowledge waste when adding a new agent. Experimentally, the Pursuit task demonstrates that sharing experience and policies accelerates training and enhances performance simultaneously. Additionally, transferring the policies learned by an N-agent team enables the (N+1)-agent team to immediately perform cooperative tasks successfully, and only minor training resources are needed for the multi-agent team to reach the same optimal performance as training from scratch.
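Experience sharing among homogeneous agents, as proposed above, is commonly implemented as a single replay buffer pooled across agents, so any agent can learn from transitions collected by any other. The class below is a minimal illustrative sketch under that assumption, not the paper's implementation; names are hypothetical.

```python
import random
from collections import deque

class SharedReplayBuffer:
    """One replay buffer pooled across interchangeable (homogeneous) agents."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, agent_id, state, action, reward, next_state):
        # agent_id is kept only for bookkeeping; sampling ignores it, which
        # is what lets every agent train on every agent's experience.
        self.buffer.append((agent_id, state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```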

4.
Appl Opt ; 61(2): 546-553, 2022 Jan 10.
Article in English | MEDLINE | ID: mdl-35200896

ABSTRACT

The ability to identify virus particles is important for research and clinical applications. Because of the optical diffraction limit, conventional optical microscopes are generally not suitable for virus particle detection, and higher-resolution instruments such as transmission electron microscopy (TEM) and scanning electron microscopy (SEM) are required. In this paper, we propose a new method for identifying virus particles based on polarization parametric indirect microscopic imaging (PIMI) and deep learning techniques. By introducing an abrupt change of refractivity at the virus particle using antibody-conjugated gold nanoparticles (AuNPs), the strength of the photon scattering signal can be magnified. After acquiring the PIMI images, a deep learning method was applied to identify discriminating features and classify the virus particles, using electron microscopy (EM) images as the ground truth. Experimental results confirm that gold-labeled virus particles can be identified in PIMI images with a high level of confidence.


Subject(s)
Deep Learning, Metal Nanoparticles, Gold, Transmission Electron Microscopy, Virion
5.
Appl Opt ; 60(8): 2141-2149, 2021 Mar 10.
Article in English | MEDLINE | ID: mdl-33690308

ABSTRACT

Vibrations cause many problems such as displacement, distortion, and defocusing in microscopic imaging systems. Because vibration errors are random in direction, amplitude, and frequency, it is not known which aspect of the image quality will be affected by these problems and to what extent. Polarization parametric indirect microscopic imaging (PIMI) is a technique that records polarization parameters in a conventional wide-field reflection microscope using polarization modulation of the illumination beam and additional data analysis of the raw images. This indirect imaging technique allows the spatial resolution of the system to be improved. Here, the influence of vibration on the image sharpness and spatial resolution of a PIMI system is analyzed theoretically and experimentally. Degradation in the sharpness of PIMI images is quantified by means of the modulation transfer function and deterioration in the effective spatial resolution by the Fourier ring correlation. These results show that the quality of PIMI images can be improved significantly using vibration isolation.

6.
Opt Express ; 29(2): 1221-1231, 2021 Jan 18.
Article in English | MEDLINE | ID: mdl-33726341

ABSTRACT

Optical-matter interactions and photon scattering in a sub-wavelength space are of great interest in many applications, such as nanopore-based gene sequencing and molecule characterization. Previous studies show that spatial distribution features of the scattering photon states are highly sensitive to the dielectric and structural properties of the nanopore array and the matter contained on or within it, as a result of the complex optical-matter interaction in a confined system. In this paper, we report a method for shape characterization of subwavelength nanowells using photon state spatial distribution spectra in the scattering near field. Far-field parametric images of the near-field optical scattering from sub-wavelength nanowell arrays on a SiN substrate were obtained experimentally. Finite-difference time-domain simulations were used to interpret the experimental results. The rich features of the parametric images originating from the interaction of the photons and the nanowells were analyzed to recover the size of the nanowells. Experiments on nanoholes modified with Shp2 proteins were also performed. Results show that the scattering distribution of modified nanoholes exhibits significant differences compared to empty nanoholes. This work highlights the potential of utilizing the scattering photon states of nanowells for molecular characterization or other virus detection applications.


Subject(s)
Polarization Microscopy/instrumentation, Nanostructures/chemistry, Radiation Scattering, Silicon Compounds/chemistry, Equipment Design, Light, Photons
7.
Neural Netw ; 137: 54-62, 2021 May.
Article in English | MEDLINE | ID: mdl-33545611

ABSTRACT

Few-shot learning addresses problems in which only a limited number of samples is available. In this paper, we present a novel conditional Triplet loss for solving few-shot problems using deep metric learning. While the conventional Triplet loss suffers from the random sampling of triplets, which leads to slow convergence during training, our proposed network distinguishes between samples so as to improve the training speed. Our main contributions are two-fold. (i) We propose a conditional Triplet loss to train a deep Triplet network for deep metric embedding. The proposed Triplet loss employs a penalty-reward technique to enhance the convergence of the standard Triplet loss. (ii) We improve the performance of an existing image co-segmentation model by replacing its conventional loss function with our proposed conditional Triplet loss. To demonstrate the performance of the proposed network, experiments are carried out on MNIST and CIFAR. Results are evaluated by AUC and Recall (sensitivity) and indicate that the proposed conditional Triplet network achieves higher accuracy than state-of-the-art methods.
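For reference, the conventional Triplet loss that the conditional variant builds on can be written in a few lines. The sketch below is the standard hinge form only; the paper's penalty-reward term is not specified in the abstract and is not reproduced here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard Triplet loss: require the anchor-positive distance to be
    smaller than the anchor-negative distance by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

The loss is zero exactly when the triplet is already well separated, which is why random sampling often yields many uninformative (zero-gradient) triplets and slow convergence.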


Subject(s)
Deep Learning, Image Processing, Computer-Assisted/methods, Pattern Recognition, Automated/methods
8.
Neural Netw ; 118: 127-139, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31254767

ABSTRACT

Facial landmark detection localizes multiple facial key-points in a given facial image. While many methods have achieved remarkable performance in recent years, accuracy remains unsatisfactory under uncontrolled conditions such as occlusion, head pose variation, and illumination, under which the L2 loss function is dominated by errors from the facial components whose landmarks are hard to predict. In this paper, a novel branched convolutional neural network incorporated into a Jacobian deep regression framework, hereafter referred to as BCNN-JDR, is proposed to solve the facial landmark detection problem. Our proposed framework consists of two parts: an initialization stage and cascaded refinement stages. We first exploit a branched convolutional neural network as a robust initializer to estimate the initial shape, incorporating the knowledge of component-aware branches. By virtue of the component-aware branch mechanism, the BCNN can effectively alleviate the imbalance of errors among facial components and provide a robust initial face shape. Following the BCNN, a sequence of refinement stages is cascaded to fine-tune the initial shape within a narrow range. In each refinement stage, local texture information is adopted to fit the local nonlinear variation of the face. Moreover, our entire framework is jointly optimized via the Jacobian deep regression optimization strategy in an end-to-end manner. This strategy backward-propagates the training error of the last stage to all previous stages, implementing a global optimization of our proposed framework. Experimental results on benchmark datasets demonstrate that the proposed BCNN-JDR is robust against uncontrolled conditions and outperforms state-of-the-art approaches.


Subject(s)
Facial Recognition, Neural Networks, Computer, Photic Stimulation/methods, Algorithms, Humans
9.
Biomed Res Int ; 2015: 259157, 2015.
Article in English | MEDLINE | ID: mdl-25722972

ABSTRACT

A method for predicting protein-protein interactions based on detected protein complexes is proposed to repair deficient interactions derived from high-throughput biological experiments. Protein complexes are pruned and decomposed into small parts based on the adaptive k-cores method to predict protein-protein interactions associated with the complexes. The proposed method adapts to protein complexes with different structures and different numbers and sizes of nodes in a protein-protein interaction network. Based on different complex sets detected by various algorithms, we can obtain different prediction sets of protein-protein interactions. The reliability of the predicted interaction sets is demonstrated by statistical tests and by direct confirmation against the biological data. In comparison with approaches that predict interactions based on cliques, the overlap of the predictions is small. Similarly, the overlaps among the predicted interaction sets derived from various complex sets are also small. Thus, every predicted set of interactions may complement and improve the quality of the original network data. Meanwhile, the predictions from the proposed method replenish protein-protein interactions associated with protein complexes using only the network topology.
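The k-core pruning step mentioned above can be sketched generically: iteratively remove nodes of degree below k until every remaining node has at least k neighbours, keeping only the densely connected part of each complex. This sketch covers only the plain k-core computation; the paper's adaptive choice of k is not reproduced.

```python
def k_core(adj, k):
    """Return the k-core of an undirected graph.

    `adj` maps each node to the set of its neighbours. Nodes of degree < k
    are removed repeatedly until none remain. The input dict is not modified.
    """
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for u in [u for u in adj if len(adj[u]) < k]:
            for v in adj[u]:
                adj[v].discard(u)                 # detach u from its neighbours
            del adj[u]
            changed = True
    return adj
```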


Subject(s)
Protein Interaction Maps/physiology, Proteins/chemistry, Proteins/metabolism, Algorithms, Computational Biology/methods, Reproducibility of Results
10.
IEEE Trans Cybern ; 44(10): 1795-807, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25222723

ABSTRACT

Dimensionality reduction (DR) has been considered one of the most significant tools for data analysis. One family of DR algorithms is based on latent variable models (LVMs). LVM-based models handle the preimage problem easily. In this paper, we propose a new LVM-based DR model, named the thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), the proposed TPSLVM is more powerful, especially when the dimensionality of the latent space is low. TPSLVM is also robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM), as well as their combination, BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction than PCA, GPLVM, ISOMAP, etc.
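The thin plate spline that gives TPSLVM its name rests on the radial basis φ(r) = r² log r. The sketch below computes only that kernel matrix; the full TPSLVM additionally includes affine terms and a latent-variable prior, which are not reproduced here, and the function name is illustrative.

```python
import numpy as np

def tps_kernel(X, Y):
    """Thin plate spline kernel matrix K[i, j] = r^2 * log(r), r = ||x_i - y_j||."""
    r2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = 0.5 * r2 * np.log(r2)   # r^2 log r == 0.5 * r^2 * log(r^2)
    K[r2 == 0] = 0.0                # limit of r^2 log r as r -> 0 is 0
    return K
```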

11.
IEEE Trans Image Process ; 23(2): 952-5, 2014 Feb.
Article in English | MEDLINE | ID: mdl-26270930

ABSTRACT

A novel application of the Hough transform (HT) neighborhood approach to collinear segment detection was proposed in [1]. It suffered, however, from one major weakness: it could not provide an effective solution to the case of segment intersection. This paper analyzes a vital prerequisite step, disturbance elimination in the Hough space, and shows why this method alone is incapable of distinguishing the true segment endpoints. To address the problem, a unique HT butterfly separation method is proposed in this correspondence as an essential complement to the above publication.

12.
Neural Netw ; 48: 173-9, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24055959

ABSTRACT

Tipping's relevance vector machine (RVM) applies kernel methods to construct basis function networks using the smallest possible number of relevant basis functions. Compared to the well-known support vector machine (SVM), the RVM provides better sparsity and an automatic estimation of hyperparameters. However, the performance of the original RVM depends purely on the smoothness of the presumed prior on the connection weights and parameters. Consequently, the sparsity is in fact still controlled by the choice of kernel functions and/or kernel parameters. This may lead to severe underfitting or overfitting in some cases. In this research, we explicitly incorporate the number of basis functions into the objective of the optimization procedure, and construct the RVM by maximizing the harmony function between the "hypothetical" probability distribution in the forward training pathway and the "true" probability distribution in the backward testing pathway, using Xu's Bayesian Ying-Yang (BYY) harmony learning technique. The experimental results show that the proposed methodology can achieve both the least structural complexity and a good fit to the data.


Subject(s)
Artificial Intelligence, Bayes Theorem, Support Vector Machine, Algorithms, Likelihood Functions, Normal Distribution
13.
IEEE Trans Image Process ; 22(6): 2500-5, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23412620

ABSTRACT

In this paper, we extend our previously proposed line detection method to line segmentation using a so-called unite-and-divide (UND) approach. The methodology comprises two phases: the union of spectra in the frequency domain, and the division of the sinogram in Radon space. In the union phase, given an image, its sinogram is obtained by parallel 2D multilayer Fourier transforms, Cartesian-to-polar mapping, and a 1D inverse Fourier transform. In the division phase, the edges of the butterfly wings in the neighborhood of every sinogram peak are first identified, with each neighborhood area corresponding to a window in image space; line segments are then extracted by applying the separated sinogram of each windowed image. Our experiments were conducted on benchmark images, and the results reveal that the UND method yields higher accuracy, has lower computational cost, and is more robust to noise compared with existing state-of-the-art methods.

14.
IEEE Trans Image Process ; 19(6): 1558-66, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20144919

ABSTRACT

The Hough transform (HT) is a commonly used technique for identifying straight lines in an image. The Hough transform can be computed equivalently via the Radon transform (RT), performing line detection in the frequency domain through the central-slice theorem. In this research, an advanced Radon transform is developed using a multilayer fractional Fourier transform, a Cartesian-to-polar mapping, and 1-D inverse Fourier transforms, followed by peak detection in the sinogram. The multilayer fractional Fourier transform achieves more accurate sampling in the frequency domain and requires no zero padding at the Cartesian-to-polar coordinate mapping stage. Our experiments were conducted on mixed-shape images, noisy images, mixed-thickness lines, and a large dataset consisting of 751,000 handwritten Chinese characters. The experimental results show that our proposed method outperforms representative line detection methods based on the standard Hough transform or the Fourier transform.
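The voting scheme of the standard Hough transform, which this paper improves upon, can be sketched directly: each point (x, y) votes for every (θ, ρ) satisfying ρ = x·cosθ + y·sinθ, so collinear points pile up in a single accumulator cell. This sketch is the baseline spatial-domain HT only; the paper's frequency-domain Radon computation via the multilayer fractional Fourier transform is not reproduced, and the parameter choices below are illustrative.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=101, rho_max=50.0):
    """Minimal Hough accumulator for straight lines over a point set."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        # Each point votes once per theta, at its corresponding rho bin.
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas, rhos
```

For the horizontal line y = 5, the peak lands at θ = π/2, ρ = 5, with one vote per point.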


Subject(s)
Algorithms, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Fourier Analysis, Numerical Analysis, Computer-Assisted, Reproducibility of Results, Sensitivity and Specificity, Signal Processing, Computer-Assisted
15.
Neural Netw ; 23(2): 257-64, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19604671

ABSTRACT

Kernelized LASSO (Least Absolute Selection and Shrinkage Operator) has been investigated in two separate recent papers [Gao, J., Antolovich, M., & Kwan, P. H. (2008). L1 LASSO and its Bayesian inference. In W. Wobcke, & M. Zhang (Eds.), Lecture notes in computer science: Vol. 5360 (pp. 318-324); Wang, G., Yeung, D. Y., & Lochovsky, F. (2007). The kernel path in kernelized LASSO. In International conference on artificial intelligence and statistics (pp. 580-587). San Juan, Puerto Rico: MIT Press]. This paper is concerned with learning kernels under the LASSO formulation by adopting a generative Bayesian learning and inference approach. A new robust learning algorithm is proposed that produces a sparse kernel model with the capability of learning regularization parameters and kernel hyperparameters. A comparison with state-of-the-art methods for constructing sparse regression models, such as the relevance vector machine (RVM) and the local regularization assisted orthogonal least squares regression (LROLS), is given. The new algorithm is also demonstrated to possess considerable computational advantages.
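The kernelized LASSO objective itself can be sketched with a plain proximal-gradient (ISTA) solver over the dual coefficients: minimize 0.5·||y − Ka||² + λ·||a||₁, where soft-thresholding drives most coefficients to exactly zero, yielding the sparse kernel model discussed above. This fixed-λ sketch is an assumption for illustration; the paper's Bayesian treatment instead learns λ and the kernel hyperparameters, which is not reproduced here.

```python
import numpy as np

def kernel_lasso(K, y, lam=0.1, lr=None, n_iter=2000):
    """Solve the kernelized LASSO by ISTA (proximal gradient descent)."""
    n = K.shape[0]
    if lr is None:
        lr = 1.0 / np.linalg.norm(K, 2) ** 2     # 1 / Lipschitz constant of the gradient
    a = np.zeros(n)
    for _ in range(n_iter):
        grad = K.T @ (K @ a - y)                 # gradient of the squared-error term
        z = a - lr * grad
        a = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # soft-threshold
    return a
```

With K = I the solver reduces to element-wise soft-thresholding of y, a handy sanity check.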


Subject(s)
Algorithms, Artificial Intelligence, Bayes Theorem, Computer Simulation, Databases, Factual, Least-Squares Analysis, Normal Distribution, Regression Analysis
16.
Neural Netw ; 20(7): 791-8, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17604953

ABSTRACT

A novel significant vector (SV) regression algorithm is proposed in this paper based on an analysis of Chen's orthogonal least squares (OLS) regression algorithm. The proposed regularized SV algorithm finds the significant vectors in a successive greedy process in which, compared to the classical OLS algorithm, the orthogonalization has been removed. The performance of the proposed algorithm is comparable to that of the OLS algorithm, while it avoids the considerable computational cost of the orthogonalization required by the OLS algorithm.


Subject(s)
Algorithms, Artificial Intelligence, Neural Networks, Computer, Regression Analysis, Nonlinear Dynamics
17.
IEEE Trans Syst Man Cybern B Cybern ; 36(5): 1180-90, 2006 Oct.
Article in English | MEDLINE | ID: mdl-17036822

ABSTRACT

As an associative memory neural network model, the cerebellar model articulation controller (CMAC) has the attractive properties of fast learning and simple computation, but its rigid structure makes it difficult to approximate certain functions. This research attempts to construct a novel neural fuzzy CMAC, in which Bayesian Ying-Yang (BYY) learning is introduced to determine the optimal fuzzy sets, and a truth-value restriction inference scheme is subsequently employed to derive the truth values of the rule weights of implication rules. BYY learning is motivated by the famous ancient Chinese Ying-Yang philosophy: everything in the universe can be viewed as a product of a constant conflict between the opposites Ying and Yang, and a perfect state is reached when Ying and Yang achieve harmony. The proposed fuzzy CMAC (FCMAC-BYY) enjoys the following advantages. First, it has higher generalization ability because the fuzzy rule sets are systematically optimized by BYY; second, it reduces the memory requirement of the network significantly compared to the original CMAC; and third, it provides intuitive fuzzy logic reasoning with clear semantic meaning. The experimental results on benchmark datasets show that the proposed FCMAC-BYY outperforms existing representative techniques in the research literature.


Subject(s)
Algorithms, Artificial Intelligence, Biomimetics/methods, Cerebellum/physiology, Fuzzy Logic, Memory/physiology, Pattern Recognition, Automated/methods, Bayes Theorem, Computer Simulation, Humans, Logistic Models, Models, Neurological, Models, Statistical
18.
Chem Pharm Bull (Tokyo) ; 53(10): 1227-31, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16204974

ABSTRACT

Pre-formulation studies constitute the first step of any pharmaceutical product development and manufacture. Establishing a comprehensive library of critical physical, chemical, biological, and mechanical properties of all materials used in a formulation can be costly, tedious, and time consuming, despite its importance in quality manufacturing management. This study seeks to demonstrate the pharmaceutical application of multidimensional scaling (MDS) by incorporating it as a pre-formulation tool for grouping an expanded range of microcrystalline celluloses (MCC). MDS places the various MCC grades in two-dimensional space based on their torque rheological properties, conferring an extra dimension on the pre-formulation tool and facilitating visualization of the relative positions of the MCC grades. Through this work, the utility of MDS for expediting pre-formulation studies, in particular the grouping of excipients available in different brands and grades, is amply exemplified.
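The two-dimensional placement described above can be illustrated with classical (Torgerson) MDS: double-center the squared distance matrix and take the top eigenvectors as coordinates. This is a generic sketch of the computation only; the paper's actual inputs (torque-rheology distances between MCC grades) and its specific MDS variant are not reproduced.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed items in `dim` dimensions so pairwise Euclidean distances
    approximate the input distance matrix D (classical/Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigh returns ascending eigenvalues
    order = np.argsort(w)[::-1][:dim]        # keep the largest `dim` of them
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

Three collinear items with distances 1, 1, 2 are recovered exactly by a one-dimensional embedding, a simple way to sanity-check the routine.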


Subject(s)
Cellulose/chemistry, Cellulose/classification, Chemical Phenomena, Chemistry, Pharmaceutical, Chemistry, Physical, Neural Networks, Computer, Particle Size, Quality Control, Rheology
19.
Pharm Res ; 21(12): 2360-8, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15648270

ABSTRACT

PURPOSE: To group microcrystalline celluloses (MCCs) using a combination of an artificial neural network (ANN) and data clustering. METHODS: A radial basis function (RBF) network was used to model the torque measurements of the various MCCs. Output from the RBF network was used to group the MCCs using a data clustering technique known as discrete incremental clustering (DIC). Rheological or torque profiles of the various MCCs at different combinations of mixing time and water:MCC ratio were obtained using mixer torque rheometry (MTR). Correlation analysis was performed on the derived torque parameter Torque(max) and the physical properties of the MCCs. RESULTS: Depending on the leniency of the predefined threshold parameters, the 11 MCCs could be assigned to 2 or 3 groups. The grouping results also identified bulk and tapped densities as major factors governing the water-MCC interaction. The MCCs differed in their water-retentive capacities, whereby the denser Avicel PH 301 and PH 302 were more sensitive to the added water. CONCLUSIONS: An objective grouping of MCCs can be achieved with a combination of ANN and DIC. This aids in the preliminary assessment of new or unknown MCCs, and key properties that control the performance of MCCs in their interactions with water can be discovered.


Subject(s)
Cellulose/analysis, Cellulose/chemistry, Neural Networks, Computer, Chemistry, Pharmaceutical, Cluster Analysis
20.
Int J Neural Syst ; 13(5): 291-305, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14652871

ABSTRACT

In this paper, entropy is introduced into the learning phase of a neural network. As learning progresses, more hidden nodes become saturated. The early creation of such hidden nodes may impair generalisation. Hence, an entropy approach is proposed to dampen the early creation of such nodes by using a new computation called the entropy cycle. Entropy learning also helps to increase the importance of relevant nodes while dampening the less important ones. At the end of learning, the less important nodes can then be pruned to reduce the memory requirements of the neural network.
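The link between saturation and entropy sketched above can be made concrete: a saturated node emits nearly constant activations, so the Shannon entropy of its activation histogram collapses toward zero, marking it as a pruning candidate. The snippet below is a generic illustration of that idea; the paper's "entropy cycle" schedule for dampening nodes during training is not reproduced, and the bin count is an assumption.

```python
import numpy as np

def activation_entropy(activations, bins=10):
    """Shannon entropy (bits) of a hidden node's activation distribution,
    estimated from a histogram over the unit interval."""
    hist, _ = np.histogram(activations, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) contributes nothing
    return float(-(p * np.log2(p)).sum())
```

A node stuck near 0.99 scores zero entropy, while a node whose activations spread over [0, 1] scores close to log2(bins).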


Subject(s)
Entropy, Learning, Neural Networks, Computer, Teaching, Algorithms, Artificial Intelligence, Computational Biology, Computer Simulation, Feedback, Generalization, Psychological, Humans, Memory, Models, Neurological, Sensation