Results 1 - 8 of 8
1.
Cuad. Hosp. Clín ; 61(1): [17], jul. 2020. ilus.
Article in Spanish | LILACS, LIBOCS | ID: biblio-1118869

ABSTRACT

OBJECTIVE: To test a teaching-learning methodology, measurement instruments and an implementation system for the OSCE in relation to breastfeeding, complementary feeding, growth and counseling. MATERIAL AND METHODS: The acquisition of competences in feeding of children under two years of age was studied in pediatric interns by applying an objective structured clinical examination (OSCE) before and after a structured teaching-learning process. Four assessment stations covering the central aspects of feeding and growth were organized for a group of randomly selected interns. RESULTS: The four OSCE stations were applied without difficulty before and after the teaching-learning process. The results showed an improvement in the interns' performance, both individually and as a group; the pre/post group means were: complementary feeding 2.5 (SD 0.93) and 5 (SD 2.39); counseling 5.75 (SD 1.49) and 8.13 (SD 1.25); breastfeeding 12.63 (SD 2.5) and 16.38 (SD 2); and growth velocity 3.13 (SD 1.36) and 3.38 (SD 0.92). The differences were statistically significant for the first three items. CONCLUSIONS: Based on these results, improvements to the teaching program are suggested, and the applicability of the OSCE in the rotating internship at the Hospital del Niño Dr. Ovidio Aliaga Uría is verified.


Subject(s)
Humans, Infant, Breast Feeding, Minors, Internship and Residency, Teaching, Learning, Methods
2.
IEEE Trans Neural Netw ; 14(2): 296-303, 2003.
Article in English | MEDLINE | ID: mdl-18238013

ABSTRACT

In this paper, we propose a general technique for solving support vector classifiers (SVCs) with an arbitrary loss function, relying on the application of an iterative reweighted least squares (IRWLS) procedure. We further show that three properties of the SVC solution can be written as conditions on the loss function. This technique makes it possible to apply the empirical risk minimization (ERM) inductive principle to large-margin classifiers while, at the same time, obtaining very compact solutions (in terms of the number of support vectors). The improvements obtained by changing the SVC loss function are illustrated with synthetic and real-data examples.
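The abstract does not reproduce the derivation, so the following is only a rough, generic illustration of the IRWLS idea applied to a linear large-margin classifier with the ordinary hinge loss; the function and variable names are hypothetical, and this is not the authors' exact formulation.

import numpy as np

def irwls_linear_svc(X, y, C=1.0, n_iter=50, eps=1e-8):
    """Sketch of iterative reweighted least squares (IRWLS) for a linear
    large-margin classifier with hinge loss (illustration only).
    X: (n, d) features, y: (n,) labels in {-1, +1}."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])        # absorb the bias term
    theta = np.zeros(d + 1)
    for _ in range(n_iter):
        e = 1.0 - y * (Xb @ theta)              # margin errors e_i = 1 - y_i f(x_i)
        # weights chosen so that 0.5*a_i*e_i^2 matches C*e_i at the current point
        a = np.where(e > 0, C / np.maximum(e, eps), 0.0)
        A = Xb.T * a                            # columns scaled by the weights a_i
        R = np.eye(d + 1)
        R[-1, -1] = 1e-8                        # keep the system well posed for the bias
        # regularized weighted least squares: (sum_i a_i x_i x_i^T + R) theta = sum_i a_i y_i x_i
        theta = np.linalg.solve(A @ Xb + R, A @ y)
    return theta[:-1], theta[-1]                # weights w and bias b

Each pass replaces the loss by a per-sample quadratic approximation and solves the resulting weighted least-squares problem in closed form, which is the basic mechanism the abstract refers to.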

3.
IEEE Trans Neural Netw ; 12(3): 445-55, 2001.
Article in English | MEDLINE | ID: mdl-18249879

ABSTRACT

In the context of classification problems, the paper analyzes the general structure of the strict-sense Bayesian (SSB) cost functions, those having a unique minimum when the soft decisions are equal to the posterior class probabilities. We show that any SSB cost is essentially the sum of a generalized measure of entropy, which does not depend on the targets, and an error component. Symmetric cost functions are analyzed in detail. Our results provide further insight into the behavior of this family of objective functions and are the starting point for the exploration of novel algorithms. Two applications are proposed. First, the use of asymmetric SSB cost functions for posterior probability estimation in non-maximum a posteriori (MAP) decision problems. Second, a novel entropy minimization principle for hybrid learning: use labeled data to minimize the cost function, and unlabeled data to minimize the corresponding entropy measure.
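As a concrete, standard instance of this decomposition (shown only to make the statement tangible, not taken verbatim from the paper), consider the cross-entropy cost with soft decisions $y_k$ and targets $d_k$:

\[
C(\mathbf{y},\mathbf{d}) = -\sum_k d_k \log y_k
  = \underbrace{-\sum_k y_k \log y_k}_{\text{entropy of the soft decisions (target-independent)}}
  \; - \; \underbrace{\sum_k (d_k - y_k)\log y_k}_{\text{error component}} .
\]

Averaging the cost over the targets given $\mathbf{x}$, with posterior class probabilities $p_k = P(C_k \mid \mathbf{x})$, gives $-\sum_k p_k \log y_k = H(\mathbf{p}) + D_{\mathrm{KL}}(\mathbf{p}\,\|\,\mathbf{y})$, whose unique minimum over the soft decisions is at $\mathbf{y} = \mathbf{p}$, which is exactly the defining property of an SSB cost.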

4.
IEEE Trans Neural Netw ; 12(5): 1047-59, 2001.
Article in English | MEDLINE | ID: mdl-18249932

ABSTRACT

An iterative block training method for support vector classifiers (SVCs) based on weighted least squares (WLS) optimization is presented. The algorithm, which minimizes structural risk in the primal space, is applicable to both linear and nonlinear machines. In some nonlinear cases, it is necessary to first project the data onto an intermediate-dimensional space by means of either principal component analysis or clustering techniques. The proposed approach yields very compact machines; the complexity reduction with respect to the SVC solution is especially notable in problems with highly overlapped classes. Furthermore, the formulation in terms of WLS minimization makes the development of adaptive SVCs straightforward, opening up new fields of application for this type of model: mainly online processing of large amounts of (static/stationary) data, as well as online updating in nonstationary scenarios (adaptive solutions). The performance of this new type of algorithm is analyzed by means of several simulations.
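The adaptive operation mentioned at the end of the abstract is easiest to appreciate in a generic recursive (weighted) least-squares recursion with a forgetting factor. The sketch below is that standard recursion, not the authors' algorithm, and all names are illustrative.

import numpy as np

class RecursiveWLS:
    """Standard recursive least-squares (RLS) estimator with a forgetting
    factor, illustrating how a least-squares formulation supports
    sample-by-sample (online) updates."""
    def __init__(self, dim, forgetting=0.99, delta=100.0):
        self.w = np.zeros(dim)            # current weight estimate
        self.P = delta * np.eye(dim)      # inverse correlation matrix estimate
        self.lam = forgetting             # lambda < 1 discounts old samples

    def update(self, x, d):
        """Incorporate one sample: features x (shape (dim,)) and desired output d."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)      # gain vector
        err = d - self.w @ x              # a priori error
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

With a forgetting factor below one, old samples are geometrically discounted, which is what allows tracking in the nonstationary scenarios the abstract mentions.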

5.
Neural Comput ; 12(6): 1429-47, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10935721

ABSTRACT

The attractive possibility of applying layerwise block training algorithms to multilayer perceptrons (MLPs), which offers initial advantages in computational effort, is refined in this article by introducing a sensitivity correction factor into the formulation. This results in a clear performance advantage, which we verify in several applications. The reasons for this advantage are discussed and related to implicit connections with second-order techniques, natural-gradient formulations through Fisher's information matrix, and sample selection. Extensions to recurrent networks and other research lines are suggested at the close of the article.
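For orientation only, here is a very rough Python sketch of the generic layerwise block idea for a one-hidden-layer network: each sweep solves a regularized linear least-squares problem per layer instead of running gradient descent. It deliberately omits the sensitivity correction factor that is the paper's contribution, and every name is hypothetical.

import numpy as np

def layerwise_lsq_mlp(X, D, n_hidden=20, n_sweeps=10, reg=1e-3, seed=0):
    """Rough sketch of layer-by-layer block training of a 1-hidden-layer MLP
    with tanh units (no sensitivity correction).  X: (n, d) inputs,
    D: (n, m) targets in (-1, 1)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])                  # inputs with bias column
    W1 = rng.normal(scale=0.5, size=(d + 1, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden + 1, D.shape[1]))

    def ridge(A, B):                                      # min ||A W - B||^2 + reg ||W||^2
        return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ B)

    Dz = np.arctanh(np.clip(D, -0.999, 0.999))            # targets before the output tanh
    for _ in range(n_sweeps):
        H = np.tanh(Xb @ W1)
        Hb = np.hstack([H, np.ones((n, 1))])
        W2 = ridge(Hb, Dz)                                # output layer by least squares
        # back-assign desired hidden activations through the (pseudo-inverted) output weights
        Hdes = (Dz - W2[-1:, :]) @ np.linalg.pinv(W2[:-1, :])
        W1 = ridge(Xb, np.arctanh(np.clip(Hdes, -0.999, 0.999)))
    return W1, W2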


Subject(s)
Algorithms, Learning, Neural Networks (Computer), Expert Systems, Humans, Neoplasms/pathology
6.
IEEE Trans Neural Netw ; 10(3): 645-56, 1999.
Article in English | MEDLINE | ID: mdl-18252565

ABSTRACT

The problem of designing cost functions to estimate a posteriori probabilities in multiclass problems is addressed in this paper. We establish necessary and sufficient conditions that these costs must satisfy in one-class, one-output networks whose outputs are consistent with probability laws. We focus our attention on a particular subset of the corresponding cost functions: those satisfying two usually desirable properties, symmetry and separability (well-known cost functions, such as the quadratic cost or the cross entropy, are particular cases in this subset). Finally, we present a universal stochastic gradient learning rule for single-layer networks, in the sense of minimizing a general version of these cost functions for a wide family of nonlinear activation functions.
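A familiar special case, included here only to make the family concrete (standard material, not a result specific to this paper): for a single-layer network with softmax outputs trained with the cross-entropy cost on one-hot (or probability-valued) targets,

\[
y_k = \frac{e^{\mathbf{w}_k^{T}\mathbf{x}}}{\sum_j e^{\mathbf{w}_j^{T}\mathbf{x}}},\qquad
C(\mathbf{y},\mathbf{d}) = -\sum_k d_k \log y_k,
\]

the stochastic gradient descent update reduces to the delta-rule form

\[
\mathbf{w}_k \leftarrow \mathbf{w}_k + \mu\,(d_k - y_k)\,\mathbf{x},
\]

and the quadratic cost with linear outputs yields the same $(d_k - y_k)\,\mathbf{x}$ update, illustrating how different members of the family can share a very simple learning rule.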

7.
IEEE Trans Neural Netw ; 10(6): 1474-81, 1999.
Article in English | MEDLINE | ID: mdl-18252648

ABSTRACT

This paper explores the possibility of constructing RBF classifiers which, somewhat like support vector machines, use a reduced number of samples as centroids, selected in a direct way. Because direct sample selection is a computationally hard problem, the selection is performed after a preliminary vector quantization; in this way, similar machines are also obtained whose centroids are chosen from those learned in a supervised manner. Several ways of designing these machines are considered, in particular with respect to sample selection, as well as different criteria for training them. Simulation results for well-known classification problems show very good performance of the corresponding designs, improving on that of support vector machines while substantially reducing the number of units. This shows that our interest in selecting samples (or centroids) in an efficient manner is justified. Many new research avenues emerge from these experiments and discussions, as suggested in our conclusions.
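As a point of reference only, the generic pipeline "vector quantization first, then fit an RBF output layer on the resulting centroids" can be sketched as follows; the selection criteria and training variants studied in the paper are not reproduced, and every name is illustrative.

import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means, used here as the vector-quantization step."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C

def rbf_classifier(X, y, n_centroids=20, sigma=1.0, reg=1e-3, seed=0):
    """Sketch of an RBF classifier whose centroids come from a prior vector
    quantization of the data; output weights are fit by regularized least
    squares.  y: labels in {-1, +1}."""
    C = kmeans(X, n_centroids, seed=seed)
    def features(Z):
        d2 = ((Z[:, None, :] - C[None]) ** 2).sum(-1)
        Phi = np.exp(-d2 / (2.0 * sigma ** 2))
        return np.hstack([Phi, np.ones((len(Z), 1))])     # bias column
    Phi = features(X)
    w = np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ y)
    return C, w, lambda Z: np.sign(features(Z) @ w)       # centroids, weights, predictor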

8.
IEEE Trans Neural Netw ; 9(6): 1509-14, 1998.
Article in English | MEDLINE | ID: mdl-18255828

ABSTRACT

The cerebellar model articulation controller (CMAC) is a simple and fast neural network based on local approximations. However, its rigid structure reduces its approximation accuracy and convergence speed with heterogeneous inputs. In this paper, we propose a generalized CMAC (GCMAC) network that allows a different degree of generalization for each input. Its representation abilities are analyzed, and a set of local relationships that the output function must satisfy is derived. An adaptive growing method for the network is also presented. The validity of our approach and methods is shown through simulated examples.
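To make the idea of a per-input degree of generalization concrete, the following is a minimal, generic CMAC-style tile-coding approximator in which each input dimension gets its own tile width; this is an illustrative sketch under those assumptions, not the GCMAC formulation of the paper.

import numpy as np

class TileCodedApproximator:
    """Minimal CMAC-style function approximator: several randomly offset
    tilings of the input space, one weight table per tiling, and LMS updates
    on the active cells.  The per-dimension tile width plays the role of a
    per-input degree of generalization."""
    def __init__(self, widths, n_tilings=8, table_size=4096, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.widths = np.asarray(widths, dtype=float)     # one tile width per input
        self.n_tilings = n_tilings
        self.size = table_size
        self.lr = lr / n_tilings                          # spread the step over tilings
        self.w = np.zeros((n_tilings, table_size))
        self.offsets = rng.uniform(0.0, 1.0, (n_tilings, len(self.widths)))
        self.hash_mult = rng.integers(1, 2**31 - 1, size=len(self.widths))

    def _active_cells(self, x):
        coords = np.asarray(x, dtype=float) / self.widths + self.offsets
        cells = np.floor(coords).astype(np.int64)
        return (cells * self.hash_mult).sum(axis=1) % self.size   # one cell per tiling

    def predict(self, x):
        idx = self._active_cells(x)
        return self.w[np.arange(self.n_tilings), idx].sum()

    def update(self, x, target):
        idx = self._active_cells(x)
        err = target - self.w[np.arange(self.n_tilings), idx].sum()
        self.w[np.arange(self.n_tilings), idx] += self.lr * err
        return err

A wide tile width on one input makes the approximator generalize broadly along that dimension, while a narrow width gives fine resolution, which is the kind of heterogeneous-input behavior the abstract addresses.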
