Results 1 - 19 of 19

1.
J Appl Crystallogr ; 57(Pt 4): 931-944, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39108821

ABSTRACT

Serial crystallography (SX) involves combining observations from a very large number of diffraction patterns coming from crystals in random orientations. To compile a complete data set, these patterns must be indexed (i.e. their orientation determined), integrated and merged. Introduced here is TORO (Torch-powered robust optimization) Indexer, a robust and adaptable indexing algorithm developed using the PyTorch framework. TORO is capable of operating on graphics processing units (GPUs), central processing units (CPUs) and other hardware accelerators supported by PyTorch, ensuring compatibility with a wide variety of computational setups. In tests, TORO outpaces existing solutions, indexing thousands of frames per second when running on GPUs, which positions it as an attractive candidate to produce real-time indexing and user feedback. The algorithm streamlines some of the ideas introduced by previous indexers like DIALS real-space grid search [Gildea, Waterman, Parkhurst, Axford, Sutton, Stuart, Sauter, Evans & Winter (2014). Acta Cryst. D70, 2652-2666] and XGandalf [Gevorkov, Yefanov, Barty, White, Mariani, Brehm, Tolstikova, Grigat & Chapman (2019). Acta Cryst. A75, 694-704] and refines them using faster and principled robust optimization techniques which result in a concise code base consisting of less than 500 lines. On the basis of evaluations across four proteins, TORO consistently matches, and in certain instances outperforms, established algorithms such as XGandalf and MOSFLM [Powell (1999). Acta Cryst. D55, 1690-1695], occasionally amplifying the quality of the consolidated data while achieving superior indexing speed. The inherent modularity of TORO and the versatility of PyTorch code bases facilitate its deployment into a wide array of architectures, software platforms and bespoke applications, highlighting its prospective significance in SX.
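The indexing code itself is not reproduced in the abstract; the following is a minimal, hypothetical PyTorch sketch of the general idea of robust, gradient-based refinement of a reciprocal-lattice basis against noisy spot positions. The synthetic data, Cauchy-style loss, and optimizer settings are assumptions for illustration, not TORO's actual algorithm.

    import torch

    torch.manual_seed(0)

    # Synthetic "reflections": a known reciprocal basis plus noise and some outliers.
    B_true = torch.diag(torch.tensor([0.11, 0.13, 0.17]))
    hkl = torch.randint(-5, 6, (200, 3)).float()
    spots = hkl @ B_true.T + 0.002 * torch.randn(200, 3)
    spots[:20] += 0.3 * torch.randn(20, 3)          # outliers

    # Refine a perturbed basis estimate with a robust (Cauchy-style) loss on the residuals.
    B = (B_true + 0.01 * torch.randn(3, 3)).clone().requires_grad_(True)
    opt = torch.optim.Adam([B], lr=1e-3)
    for step in range(500):
        opt.zero_grad()
        hkl_hat = torch.round(spots @ torch.linalg.inv(B).T)   # nearest Miller indices
        resid = (spots - hkl_hat @ B.T).pow(2).sum(dim=1)
        loss = torch.log1p(resid / 0.001**2).mean()            # down-weights outliers
        loss.backward()
        opt.step()
    print("refined basis:\n", B.detach())

Because every step is a plain PyTorch tensor operation, the same loop can run on a GPU simply by placing the tensors on a CUDA device.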

2.
J Cheminform ; 16(1): 8, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38238779

ABSTRACT

The majority of tandem mass spectrometry (MS/MS) spectra in untargeted metabolomics and exposomics studies lack any annotation. Our deep learning framework, Integrated Data Science Laboratory for Metabolomics and Exposomics-Mass INTerpreter (IDSL_MINT), can translate MS/MS spectra into molecular fingerprint descriptors. IDSL_MINT allows users to leverage the power of the transformer model for mass spectrometry data, similar to large language models. Models are trained on user-provided reference MS/MS libraries using any customizable molecular fingerprint descriptors. IDSL_MINT was benchmarked using the LipidMaps database and improved the annotation rate of a test study for MS/MS spectra that were not originally annotated using existing mass spectral libraries. IDSL_MINT may improve the overall annotation rates in untargeted metabolomics and exposomics studies. The IDSL_MINT framework and tutorials are available in the GitHub repository at https://github.com/idslme/IDSL_MINT. Scientific contribution statement: Structural annotation of MS/MS spectra from untargeted metabolomics and exposomics datasets is a major bottleneck in gaining new biological insights. Machine learning models that convert spectra into molecular fingerprints can help in the annotation process. Here, we present IDSL_MINT, a new, easy-to-use and customizable deep learning framework to train and utilize new models to predict molecular fingerprints from spectra for compound annotation workflows.
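As an illustrative aside, a toy sketch of a transformer encoder that maps tokenized (binned) MS/MS peaks to multi-label molecular-fingerprint logits; the vocabulary size, pooling, and dimensions are invented and do not reflect IDSL_MINT's published architecture.

    import torch
    import torch.nn as nn

    class SpectrumToFingerprint(nn.Module):
        """Toy encoder: embedded peak tokens -> transformer -> multi-label fingerprint bits."""
        def __init__(self, vocab_size=10000, d_model=128, n_bits=2048):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, n_bits)

        def forward(self, peak_tokens):              # (batch, n_peaks) integer-binned m/z
            h = self.encoder(self.embed(peak_tokens))
            return self.head(h.mean(dim=1))          # one logit per fingerprint bit

    model = SpectrumToFingerprint()
    tokens = torch.randint(0, 10000, (8, 64))        # 8 spectra, 64 binned peaks each
    target = torch.randint(0, 2, (8, 2048)).float()  # reference fingerprints
    loss = nn.BCEWithLogitsLoss()(model(tokens), target)
    loss.backward()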

3.
Expert Opin Drug Metab Toxicol ; 19(6): 367-380, 2023.
Article in English | MEDLINE | ID: mdl-37395108

ABSTRACT

INTRODUCTION: Acute poisoning is a significant global health burden, and the causative agent is often unclear. The primary aim of this pilot study was to develop a deep learning algorithm that predicts the most probable agent a poisoned patient was exposed to from a pre-specified list of drugs. RESEARCH DESIGN & METHODS: Data were queried from the National Poison Data System (NPDS) from 2014 through 2018 for eight single-agent poisonings (acetaminophen, diphenhydramine, aspirin, calcium channel blockers, sulfonylureas, benzodiazepines, bupropion, and lithium). Two deep neural networks (implemented in PyTorch and Keras) designed for multi-class classification tasks were applied. RESULTS: There were 201,031 single-agent poisonings included in the analysis. For distinguishing among the selected poisonings, the PyTorch model had a specificity of 97%, accuracy of 83%, precision of 83%, recall of 83%, and an F1-score of 82%. The Keras model had a specificity of 98%, accuracy of 83%, precision of 84%, recall of 83%, and an F1-score of 83%. The best performance was achieved in diagnosing single-agent poisoning by lithium, sulfonylureas, diphenhydramine, calcium channel blockers, and acetaminophen, in both PyTorch (F1-score = 99%, 94%, 85%, 83%, and 82%, respectively) and Keras (F1-score = 99%, 94%, 86%, 82%, and 82%, respectively). CONCLUSION: Deep neural networks can potentially help in distinguishing the causative agent of acute poisoning. This study used a small list of drugs and excluded polysubstance ingestions. Reproducible source code and results can be obtained at https://github.com/ashiskb/npds-workspace.git.
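The abstract does not include model code; a minimal sketch of a PyTorch multi-class classifier over the eight agents, with hypothetical input features standing in for the encoded NPDS variables, might look like this.

    import torch
    import torch.nn as nn

    AGENTS = ["acetaminophen", "diphenhydramine", "aspirin", "calcium channel blockers",
              "sulfonylureas", "benzodiazepines", "bupropion", "lithium"]

    n_features = 40                                  # hypothetical encoded clinical features
    model = nn.Sequential(
        nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, len(AGENTS)),
    )
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.randn(32, n_features)                  # placeholder batch of cases
    y = torch.randint(0, len(AGENTS), (32,))         # placeholder agent labels
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
    pred_agent = AGENTS[model(x)[0].argmax().item()] # most probable agent for one case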


Subject(s)
Deep Learning , Humans , Calcium Channel Blockers , Pilot Projects , Acetaminophen , Lithium , Neural Networks (Computer) , Diphenhydramine
4.
bioRxiv ; 2023 Apr 06.
Article in English | MEDLINE | ID: mdl-37066284

ABSTRACT

One area of medical imaging that has recently experienced innovative deep learning advances is diffusion MRI (dMRI) streamline tractography with recurrent neural networks (RNNs). Unlike traditional imaging studies, which utilize voxel-based learning, these studies model dMRI features at points in continuous space off the voxel grid in order to propagate streamlines, or virtual estimates of axons. However, implementing such models is non-trivial, and an open-source implementation is not yet widely available. Here, we describe a series of considerations for implementing tractography with RNNs and demonstrate that they allow one to approximate a deterministic streamline propagator with performance comparable to existing algorithms. We release this trained model and the associated implementations leveraging popular deep learning libraries. We hope the availability of these resources will lower the barrier to entry into this field, spurring further innovation.
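A hypothetical sketch of the general pattern described, a recurrent unit that consumes dMRI features sampled at off-grid points and emits unit step directions to propagate streamlines; the feature dimensions, step size, and interpolation details are assumptions rather than the authors' released implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StreamlinePropagator(nn.Module):
        """GRU cell that maps per-point dMRI features to a unit step direction."""
        def __init__(self, n_features=45, hidden=128):
            super().__init__()
            self.gru = nn.GRUCell(n_features, hidden)
            self.out = nn.Linear(hidden, 3)

        def step(self, feats, h):
            h = self.gru(feats, h)
            return F.normalize(self.out(h), dim=-1), h   # unit direction, new state

    def sample_features(volume, points):
        """Trilinear interpolation of a (C, D, H, W) feature volume at normalized points."""
        grid = points.view(1, -1, 1, 1, 3)               # coords expected in [-1, 1]
        out = F.grid_sample(volume.unsqueeze(0), grid, align_corners=True)
        return out.view(volume.shape[0], -1).t()         # (n_points, C)

    model = StreamlinePropagator()
    volume = torch.randn(45, 64, 64, 64)                 # placeholder dMRI feature volume
    pos = torch.zeros(10, 3)                             # 10 seeds at the volume center
    h = torch.zeros(10, 128)
    for _ in range(50):                                  # propagate 50 steps
        d, h = model.step(sample_features(volume, pos), h)
        pos = pos + 0.02 * d                             # step size in normalized coords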

5.
Quant Imaging Med Surg ; 13(4): 2314-2327, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37064348

ABSTRACT

Background: Intrauterine adhesion (IUA) affects a very large number of patients. Improving the classification of three-dimensional transvaginal ultrasound (3D-TVUS) images as IUA or non-IUA remains a clinical challenge and is needed to avoid inappropriate surgery. Our study aimed to evaluate deep learning as a method to classify 3D-TVUS images, taken with panoramic technology, as IUA or non-IUA. Methods: After applying inclusion/exclusion criteria, a total of 4,401 patients were selected for this study, comprising 2,803 IUA patients and 1,598 non-IUA patients. IUA was confirmed by hysteroscopy, and each patient underwent one 3D-TVUS examination. Four well-known convolutional neural network (CNN) architectures were selected to classify the IUA images: Visual Geometry Group 16 (VGG16), InceptionV3, ResNet50, and ResNet101. We used these CNNs pretrained on ImageNet, implemented in both TensorFlow and PyTorch. All 3D-TVUS images were normalized and pooled, and the data set was split into training, validation, and test sets. The performance of our classification model was evaluated according to sensitivity, precision, F1-score, and accuracy, computed from true-positive (TP), false-positive (FP), true-negative (TN), and false-negative (FN) counts. Results: The overall performances of VGG16, InceptionV3, ResNet50, and ResNet101 were better in PyTorch than in TensorFlow. In PyTorch, the best CNN model was InceptionV3, with 94.2% sensitivity, 99.4% precision, 96.8% F1-score, and 97.3% accuracy. The area under the curve (AUC) results of VGG16, InceptionV3, ResNet50, and ResNet101 were 0.959, 0.999, 0.997, and 0.999, respectively. The PyTorch models also transferred successfully from the source to the target domain, allowing us to use another center's data as an external test set. No overfitting that could have adversely affected the classification accuracy occurred. Finally, we established a webpage to diagnose IUA based on 3D-TVUS images. Conclusions: Deep learning can assist in the binary classification of 3D-TVUS images to diagnose IUA. This study lays the foundation for future research into the integration of deep learning and blockchain technology.
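The metrics named above follow the standard confusion-matrix definitions; a small helper (with made-up counts, not the study's data) makes the equations explicit.

    def classification_metrics(tp, fp, tn, fn):
        """Standard binary-classification metrics from confusion-matrix counts."""
        sensitivity = tp / (tp + fn)                 # recall for the positive (IUA) class
        precision = tp / (tp + fp)
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        return {"sensitivity": sensitivity, "precision": precision,
                "accuracy": accuracy, "f1": f1}

    # Example with invented counts (not the study's data):
    print(classification_metrics(tp=520, fp=3, tn=460, fn=32))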

6.
Front Physiol ; 14: 1098893, 2023.
Article in English | MEDLINE | ID: mdl-37008008

ABSTRACT

Objective: To analyze the cranial computed tomography (CT) imaging features of patients with primary ciliary dyskinesia (PCD) who have exudative otitis media (OME) and sinusitis, using a deep learning model, to support early intervention in PCD. Methods: Thirty-two children with PCD diagnosed at the Children's Hospital of Fudan University, Shanghai, China, between January 2010 and January 2021 who had undergone cranial CT were retrospectively analyzed. Thirty-two children with OME and sinusitis diagnosed using cranial CT formed the control group. Multiple deep learning neural network training models based on PyTorch were built, and the optimal model was trained and selected to observe the differences between the cranial CT images of patients with PCD and those of general patients and to screen for PCD. Results: The Swin Transformer, ConvNeXt, and GoogLeNet training models had the best results, with an accuracy of approximately 0.94; VGG11, VGG16, VGG19, ResNet 34, and ResNet 50, neural network models with fewer layers, achieved relatively strong results; and Transformer and other neural networks with more layers, or models with larger receptive fields, exhibited relatively weak performance. A heat map revealed differences in the sinuses, middle ear mastoid, and fourth ventricle between the patients with PCD and the control group. Transfer learning can improve the modeling performance of neural networks. Conclusion: Deep learning-based CT imaging models can accurately screen for PCD and identify differences between the cranial CT images of patients with PCD and those of controls.
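As an illustrative aside, the transfer-learning step mentioned above typically amounts to loading ImageNet weights and replacing the classification head; a minimal, assumed PyTorch/torchvision sketch (not the authors' code) follows.

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet34(pretrained=True)         # ImageNet weights as the starting point
    for p in model.parameters():                     # optionally freeze the backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)    # new head: PCD vs. control

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    images = torch.randn(4, 3, 224, 224)             # placeholder CT slices as 3-channel input
    labels = torch.tensor([0, 1, 0, 1])
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()

Newer torchvision releases prefer the weights= argument over pretrained=True, but the head-replacement pattern is the same.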

7.
Sensors (Basel) ; 22(22)2022 Nov 16.
Article in English | MEDLINE | ID: mdl-36433470

ABSTRACT

In this paper, we present an analysis of important aspects that arise during the development of neural network applications. Our aim is to determine whether the choice of library can impact the system's overall performance, either during training or design, and to extract a set of criteria that could be used to highlight the advantages and disadvantages of each library under consideration. To do so, we first extracted the previously mentioned aspects by comparing two of the most popular neural network libraries, PyTorch and TensorFlow, and then we performed an analysis of the obtained results, with the intent of determining whether our initial hypothesis was correct. In the end, the results of the analysis are gathered, and an overall picture of which tasks are better suited to which library is presented.


Subject(s)
Neural Networks (Computer)
8.
Am J Transl Res ; 14(7): 4728-4735, 2022.
Article in English | MEDLINE | ID: mdl-35958478

ABSTRACT

OBJECTIVE: To investigate the diagnostic value of deep learning (DL) in differentiating otitis media (OM) caused by otitis media with effusion (OME) from that caused by primary ciliary dyskinesia (PCD), so as to provide a reference for early intervention. METHODS: From January 2010 to January 2021, 31 patients with PCD who had temporal bone computed tomography (TBCT) at the Children's Hospital of Fudan University were retrospectively analyzed. Another 30 age-matched cases of OME with TBCT were collected as the control group. The CT imaging signatures of the children were observed. In addition, a variety of DL neural network training models were established based on PyTorch, and the optimal models were trained and selected for PCD screening. RESULTS: The GoogLeNet-trained model worked best, with an accuracy of 0.99. Vgg16_bn, vgg19_bn, resnet18, and resnet34, neural networks with fewer layers, also produced good models, with accuracy rates of 0.86, 0.9, 0.86, and 0.86, respectively. ResNet50 and other neural networks with more layers had relatively poor results. CONCLUSION: DL-based CT radiomics can accurately distinguish OM caused by OME from that induced by PCD and can be used for PCD screening.

9.
Front Pediatr ; 10: 809523, 2022.
Article in English | MEDLINE | ID: mdl-36016875

ABSTRACT

Objective: This study aimed to conduct an in-depth investigation of a deep learning framework for deriving diagnostic results for temporal bone diseases, including cholesteatoma and Langerhans cell histiocytosis (LCH), as well as middle ear inflammation (MEI), diagnosed by CT scanning of the temporal bone in pediatric patients. Design: A total of 119 patients were included in this retrospective study; among them, 40 patients had MEI, 38 patients had histology-proven cholesteatoma, and 41 patients had histology-proven LCH of the temporal bone. Each patient carried one of the three disease labels. The study involved otologists and radiologists, and the reference criteria were histopathology results (70% of cases for training and 30% for validation). An artificial neural network (VGG16_BN) was employed for classification based on radiomic features, and this framework was compared against clinical experts reading the same CT images. Results: The deep learning framework vs. the physicians' diagnoses in the multiclassification task were as follows: area under the receiver operating characteristic (ROC) curve, cholesteatoma 0.98 vs. 0.91, LCH 0.99 vs. 0.98, MEI 0.99 vs. 0.85; accuracy, cholesteatoma 0.99 vs. 0.89, LCH 0.99 vs. 0.97, MEI 0.99 vs. 0.89; sensitivity, cholesteatoma 0.96 vs. 0.97, LCH 0.99 vs. 0.98, MEI 1 vs. 0.69; specificity, cholesteatoma 1 vs. 0.89, LCH 0.99 vs. 0.97, MEI 0.99 vs. 0.89. Conclusion: This article presents a deep learning framework for the diagnosis of cholesteatoma, MEI, and temporal bone LCH in children based on CT scans. The framework performed better than the clinical experts.

10.
Water Res ; 220: 118685, 2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35671685

ABSTRACT

Clarification basins are ubiquitous water treatment units applied across urban water systems. Diverse applications include stormwater systems, stabilization lagoons, equalization, storage, and green infrastructure. Residence time (RT), surface overflow rate (SOR), and the Storm Water Management Model (SWMM) are readily implemented but are not formulated to optimize basin geometrics because transport dynamics remain unresolved. As a result, basin design yields high costs, from hundreds of thousands to tens of millions of USD. Basin optimization and retrofits can benefit from more robust and efficient tools. More advanced methods such as computational fluid dynamics (CFD), while demonstrating benefits for resolving transport, can be complex and computationally expensive for routine applications. To provide stakeholders with an efficient and robust tool, this study develops a novel optimization framework for basin geometrics with machine learning (ML). This framework (1) leverages high-performance computing (HPC) and the predictive capability of CFD for artificial neural network (ANN) development and (2) integrates a trained ANN model with a hybrid evolutionary-gradient-based optimization algorithm through the ANN automatic differentiation (AD) functionality. ANN model results for particulate matter (PM) clarification demonstrate high predictive capability, with a coefficient of determination (R2) of 0.998 on the test dataset. The ANN model for total PM clarification of three heterodisperse particle size distributions (PSDs) also shows good performance (R2 > 0.986). When the proposed framework was applied to a basin and watershed loading conditions in Florida (USA), the ML basin designs yielded substantially improved cost-effectiveness compared with common designs (square and circular basins) and with an RT-based design, for all PSDs tested. To meet a presumptive regulatory criterion of 80% PM separation (widely adopted in the USA), the ML framework yields 4.7X to 8X lower cost than the common basin designs tested. Compared to the RT-based design, the ML design yields a 5.6X to 83.5X cost reduction as a function of the finer, medium, and coarser PSDs. Furthermore, the proposed framework benefits from the ANN's high computational efficiency: optimization of basin geometrics is performed in minutes on a laptop. The framework is a promising adjuvant tool for cost-effective and sustainable basin implementation across urban water systems.
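Step (2), optimizing design variables through a trained ANN surrogate via automatic differentiation, can be illustrated with a minimal PyTorch sketch; the geometry parameterization, cost model, and penalty term are invented, and the paper's hybrid evolutionary-gradient algorithm goes beyond this pure gradient loop.

    import torch
    import torch.nn as nn

    # Assume `surrogate` was already trained to map basin geometry -> predicted PM separation (%).
    surrogate = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))

    def construction_cost(geom):
        """Invented placeholder cost model (length, width, depth in meters)."""
        length, width, depth = geom
        return length * width * depth * 150.0        # e.g., USD per excavated m^3

    geom = torch.tensor([30.0, 15.0, 2.0], requires_grad=True)
    opt = torch.optim.Adam([geom], lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        separation = surrogate(geom)
        # Penalize falling below an 80% PM separation criterion, then minimize cost.
        loss = construction_cost(geom) + 1e6 * torch.relu(80.0 - separation).sum()
        loss.backward()                              # gradients flow through the surrogate
        opt.step()
    print(geom.detach(), surrogate(geom).item())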


Subject(s)
Hydrodynamics , Particulate Matter , Algorithms , Cost-Benefit Analysis , Machine Learning
11.
Mater Today Proc ; 66: 1201-1210, 2022.
Article in English | MEDLINE | ID: mdl-35572043

ABSTRACT

Automatic recognition of the lungs is used to identify normal and COVID-infected lungs from chest X-ray images. In 2020, the coronavirus forcefully pushed the entire world into an unprecedented situation, and the foremost challenge was diagnosing the coronavirus. The standard diagnostic test, the PCR test, is complex and costly for checking a patient's sample at the initial stage. Keeping this in mind, we developed a system to recognize chest X-ray images automatically and label them as COVID or normal lungs. For this work, we collected the dataset from an open-source data repository and then pre-processed each X-ray image from each category (COVID and non-COVID) using various techniques such as filtering, edge detection, and segmentation. The pre-processed X-ray images were then used to train a CNN ResNet-18 network. Using the PyTorch Python package, the ResNet-18 network was created, which gave higher accuracy than the other algorithms tested. From the acquired knowledge, the model correctly classifies the test X-ray images. The performance of the model was then calculated and compared with various algorithms, showing that the ResNet-18 network improves our model's performance, with specificity and sensitivity above 90%.

12.
Sensors (Basel) ; 22(4)2022 Feb 09.
Article in English | MEDLINE | ID: mdl-35214213

ABSTRACT

A suitable framework for the development of artificial neural networks is important because it determines the level of accuracy that can be reached for a certain dataset and increases confidence in the resulting classification results. In this paper, we conduct a comparative study of the performance of four frameworks, Keras with TensorFlow, PyTorch, TensorFlow, and Cognitive Toolkit (CNTK), for building neural networks. The number of neurons in the hidden layer of the neural networks is varied from 8 to 64 to understand its effect on the performance metrics of the frameworks. A test dataset is synthesized using an analytical model and real impedance spectra measured by an eddy current sensor coil on EUR 2 and TRY 1 coins. The dataset was extended using a novel interpolation-based method to create datasets with different difficulty levels, replicating the scenario of a good imitation of EUR 2 coins and probing the limits of prediction accuracy. It was observed that the compared frameworks have high accuracy for a lower level of difficulty in the dataset. As the difficulty of the dataset was raised, there was a drop in the accuracy of CNTK and Keras with TensorFlow, depending upon the number of neurons in the hidden layers. CNTK had the overall worst accuracy as the difficulty level of the datasets increased. Therefore, the major comparison was confined to PyTorch and TensorFlow. For PyTorch and TensorFlow with 32 and 64 neurons in the hidden layer, there was only a minor drop in accuracy with increasing dataset difficulty, and accuracy remained above 90% until the two coins were 80% closer to each other in terms of electrical and magnetic properties. However, PyTorch with 32 neurons in the hidden layer reduced the model size by 70% and 16.3%, and predicted the class 73.6% and 15.6% faster, in comparison to TensorFlow and to PyTorch with 64 neurons, respectively.
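A minimal sketch of the kind of comparison described, varying the hidden-layer width and measuring parameter count and per-prediction latency in PyTorch; the input dimensionality and architecture are assumptions.

    import time
    import torch
    import torch.nn as nn

    def make_classifier(n_hidden, n_inputs=16, n_classes=2):
        return nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU(),
                             nn.Linear(n_hidden, n_classes))

    x = torch.randn(1, 16)                            # one impedance-spectrum feature vector
    for n_hidden in (8, 16, 32, 64):
        model = make_classifier(n_hidden).eval()
        n_params = sum(p.numel() for p in model.parameters())
        with torch.no_grad():
            t0 = time.perf_counter()
            for _ in range(1000):
                model(x)
            dt = (time.perf_counter() - t0) / 1000
        print(f"{n_hidden:>2} hidden neurons: {n_params} parameters, {dt*1e6:.1f} us/prediction")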


Subject(s)
Neural Networks (Computer) , Numismatics , Data Collection , Machine Learning
13.
Curr Med Imaging ; 18(4): 409-416, 2022.
Article in English | MEDLINE | ID: mdl-33602102

ABSTRACT

AIMS: Early detection of breast cancer has reduced many deaths. Earlier CAD systems used to be the second opinion for radiologists and clinicians. Machine learning and deep learning have brought tremendous changes to medical diagnosis and imaging. BACKGROUND: Breast cancer is the most commonly occurring cancer in women and the second most common cancer overall. According to 2018 statistics, there were over 2 million cases all over the world, with Belgium and Luxembourg having the highest rates. OBJECTIVE: A method for breast cancer detection is proposed using ensemble learning, with both 2-class and 8-class classification performed. METHODS: To deal with imbalanced classification, the authors propose an ensemble of pretrained models. RESULTS: 98.5% training accuracy and 89% test accuracy are achieved on 8-class classification. Moreover, 99.1% training and 98% test accuracy are achieved on 2-class classification. CONCLUSION: There are high misclassification rates for class DC compared to the other classes, due to the imbalance in the dataset. In the future, one can increase the size of the datasets or use different methods. To implement this research work, the authors used two NVIDIA Tesla V100 GPUs on the Google Cloud Platform.
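One common form of an ensemble of pretrained models is soft voting over averaged softmax probabilities; a hedged sketch of that pattern follows (the backbones and class count here are placeholders, not necessarily the authors' exact members).

    import torch
    import torch.nn as nn
    from torchvision import models

    def make_member(backbone_fn, n_classes=8):
        m = backbone_fn(pretrained=True)
        m.fc = nn.Linear(m.fc.in_features, n_classes)   # assumes a ResNet-style .fc head
        return m.eval()

    members = [make_member(models.resnet18), make_member(models.resnet34)]

    @torch.no_grad()
    def ensemble_predict(images):
        # Average softmax probabilities across members (soft voting).
        probs = torch.stack([torch.softmax(m(images), dim=1) for m in members]).mean(dim=0)
        return probs.argmax(dim=1)

    print(ensemble_predict(torch.randn(2, 3, 224, 224)))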


Subject(s)
Breast Neoplasms , Breast , Breast Neoplasms/diagnostic imaging , Female , Humans , Machine Learning , Neural Networks (Computer)
14.
Softw Impacts ; 10: 100185, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34870242

ABSTRACT

The COVID-19 pandemic has accelerated the need for automatic triaging and summarization of ultrasound videos for fast access to pathologically relevant information in the Emergency Department and for lowering resource requirements in telemedicine. In this work, a PyTorch-based unsupervised reinforcement learning methodology which incorporates multi-feature fusion to output classification labels, segmentation maps, and summary videos for lung ultrasound is presented. The use of unsupervised training eliminates tedious manual labeling of key frames by clinicians, opening new frontiers in training scalability using unlabeled or weakly labeled data. Our approach was benchmarked against expert clinicians from different geographies, displaying superior Precision and F1 scores (over 80% and 44%, respectively).

15.
Toxicol Pathol ; 49(4): 843-850, 2021 06.
Article in English | MEDLINE | ID: mdl-33287654

ABSTRACT

In order to automate the counting of ovarian follicles required in multigeneration reproductive studies performed in the rat according to Organization for Economic Co-operation and Development guidelines 443 and 416, the application of deep neural networks was tested. The manual evaluation of the differential ovarian follicle count is a tedious and time-consuming task that requires highly trained personnel. In this regard, deep learning outputs provide overlay pictures for more detailed documentation, together with increased reproducibility of the counts. To facilitate the planned good laboratory practice (GLP) validation, a workflow was set up using MLFlow so that all steps, from generation of scans, training of the neural network, and uploading of study images to the neural network, to generation and storage of the results, are controllable and reproducible in a compliant manner. PyTorch was used as the main framework to build the Faster region-based convolutional neural network for training. We compared the performance of ResNet models of different depths with specific regard to the sensitivity, specificity, and accuracy of the models. In this paper, we describe all steps from data labeling to training of networks, and the performance metrics chosen to evaluate different network architectures. We also make recommendations on steps that should be taken into consideration when GLP validation is the goal.
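torchvision ships a Faster R-CNN implementation matching the setup described; a minimal, assumed sketch of counting detections above a score threshold follows (the class count and threshold are placeholders, and in practice the model would first be trained on labeled follicle images).

    import torch
    import torchvision

    # Two classes assumed: background + ovarian follicle.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
    model.eval()                                      # untrained skeleton for illustration

    image = torch.rand(3, 1024, 1024)                 # placeholder histology tile in [0, 1]
    with torch.no_grad():
        detections = model([image])[0]                # dict with boxes, labels, scores
    follicle_count = int((detections["scores"] > 0.5).sum())
    print(f"follicles counted in this tile: {follicle_count}")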


Subject(s)
Neural Networks (Computer) , Ovarian Follicle , Animals , Female , Neurons , Rats , Reproducibility of Results , Workflow
16.
Front Plant Sci ; 11: 541960, 2020.
Article in English | MEDLINE | ID: mdl-33365037

ABSTRACT

Plant counting runs through almost every stage of agricultural production, from seed breeding, germination, cultivation, fertilization, and pollination to yield estimation and harvesting. With the prevalence of digital cameras, graphics processing units, and deep learning-based computer vision technology, plant counting has gradually shifted from traditional manual observation to vision-based automated solutions. One popular solution is a state-of-the-art object detection technique called Faster R-CNN, in which plant counts can be estimated from the number of bounding boxes detected. It has become a standard configuration for many plant counting systems in plant phenotyping. Faster R-CNN, however, is expensive in computation, particularly when dealing with high-resolution images. Unfortunately, high-resolution imagery is frequently used in modern plant phenotyping platforms such as unmanned aerial vehicles, engendering inefficient image analysis. Such inefficiency largely limits the throughput of a phenotyping system. The goal of this work hence is to provide an effective and efficient tool for high-throughput plant counting from high-resolution RGB imagery. In contrast to conventional object detection, we encourage another promising paradigm termed object counting, in which plant counts are directly regressed from images, without detecting bounding boxes. In this work, by profiling the computational bottleneck, we implement a fast version of a state-of-the-art plant counting model, TasselNetV2, with several minor yet effective modifications. We also provide insights into why these modifications make sense. This fast version, TasselNetV2+, runs an order of magnitude faster than TasselNetV2, achieving around 30 fps at an image resolution of 1980 × 1080, while retaining the same level of counting accuracy. We validate its effectiveness on three plant counting tasks: wheat ear counting, maize tassel counting, and sorghum head counting. To encourage the use of this tool, our implementation has been made available online at https://tinyurl.com/TasselNetV2plus.
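A toy sketch of the counting-by-regression paradigm, a fully convolutional network that regresses a non-negative local count map and sums it into an image-level count; this is only in the spirit of TasselNetV2+, whose actual implementation is at the URL above.

    import torch
    import torch.nn as nn

    class TinyCounter(nn.Module):
        """Fully convolutional regressor: image -> non-negative local count map -> total count."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1), nn.ReLU(),          # one count value per local patch
            )

        def forward(self, x):
            count_map = self.features(x)
            return count_map.sum(dim=(1, 2, 3))          # total count per image

    model = TinyCounter()
    images = torch.randn(2, 3, 512, 512)                 # placeholder high-resolution crops
    targets = torch.tensor([37.0, 52.0])                 # placeholder ground-truth counts
    loss = nn.functional.l1_loss(model(images), targets)
    loss.backward()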

17.
Sensors (Basel) ; 20(10)2020 May 16.
Article in English | MEDLINE | ID: mdl-32429341

ABSTRACT

The estimation of human hand pose has become the basis for many vital applications where the user depends mainly on hand pose as a system input. Virtual reality (VR) headsets, the Shadow Dexterous Hand, and in-air signature verification are a few examples of applications that require tracking hand movements in real time. The state-of-the-art 3D hand pose estimation methods are based on convolutional neural networks (CNNs). These methods are implemented on graphics processing units (GPUs) mainly due to their extensive computational requirements. However, GPUs are not suitable for practical application scenarios where low power consumption is crucial. Furthermore, the difficulty of embedding a bulky GPU into a small device prevents the portability of such applications to mobile devices. The goal of this work is to provide an energy-efficient solution for an existing depth-camera-based hand pose estimation algorithm. First, we compress the deep neural network model by applying dynamic quantization techniques to different layers to achieve maximum compression without compromising accuracy. Afterwards, we design a custom hardware architecture. We selected the FPGA as a target platform because FPGAs provide high energy efficiency and can be integrated into portable devices. Our solution implemented on a Xilinx UltraScale+ MPSoC FPGA is 4.2× faster and 577.3× more energy efficient than the original implementation of the hand pose estimation algorithm on an NVIDIA GeForce GTX 1070.
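For the model-compression step, PyTorch's dynamic quantization API converts selected layer types to int8 on the fly; below is a hedged sketch with a placeholder network (not the paper's hand-pose model or its FPGA flow).

    import torch
    import torch.nn as nn

    # Placeholder stand-in for a hand-pose regression head (not the paper's network).
    model = nn.Sequential(
        nn.Linear(1024, 512), nn.ReLU(),
        nn.Linear(512, 256), nn.ReLU(),
        nn.Linear(256, 3 * 21),                          # e.g., 21 joints x 3 coordinates
    )

    # Dynamically quantize the Linear layers' weights to int8.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    x = torch.randn(1, 1024)
    print(quantized(x).shape)
    # Compare serialized sizes to see the compression effect:
    torch.save(model.state_dict(), "fp32.pt")
    torch.save(quantized.state_dict(), "int8.pt")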


Subject(s)
Algorithms , Hand , Neural Networks (Computer) , Humans , Movement , Physical Phenomena
18.
Front Neuroinform ; 12: 9, 2018.
Article in English | MEDLINE | ID: mdl-29563867

ABSTRACT

We developed Convis, a Python simulation toolbox for large-scale neural populations which offers arbitrary receptive fields via 3D convolutions executed on a graphics card. The resulting software proves to be flexible and easily extensible in Python, while building on the PyTorch library (The Pytorch Project, 2017), which was previously used successfully in deep learning applications, for just-in-time optimization and compilation of the model onto CPU or GPU architectures. An alternative implementation based on Theano (Theano Development Team, 2016) is also available, although not fully supported. Through automatic differentiation, any parameter of a specified model can be optimized to approach a desired output, which is a significant improvement over, e.g., Monte Carlo or particle optimizations without gradients. We show that a number of models, including even complex non-linearities such as contrast gain control and spiking mechanisms, can be implemented easily. We show in this paper that we can in particular recreate the simulation results of the popular retina simulation software VirtualRetina (Wohrer and Kornprobst, 2009), with the added benefit of providing (1) arbitrary linear filters instead of the product of Gaussian and exponential filters and (2) optimization routines utilizing the gradients of the model. We demonstrate the utility of 3D convolution filters with a simple direction-selective filter. We also show that it is possible to optimize the input for a certain goal, rather than the parameters, which can aid the design of experiments as well as closed-loop online stimulus generation. Yet Convis is more than a retina simulator; for instance, it can also predict the response of V1 orientation-selective cells. Convis is open source under the GPL-3.0 license and available from https://github.com/jahuth/convis/ with documentation at https://jahuth.github.io/convis/.
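The input-optimization idea mentioned above can be sketched generically in PyTorch, optimizing a stimulus by gradient descent so that a fixed model produces a target response; the toy filter model below is not Convis's API, which is documented at the links given.

    import torch

    # Stand-in "retina" model: a fixed spatiotemporal linear filter plus a nonlinearity.
    kernel = torch.randn(1, 1, 5, 7, 7)                  # (out, in, time, height, width)
    def model(stimulus):                                  # stimulus: (1, 1, T, H, W)
        return torch.relu(torch.nn.functional.conv3d(stimulus, kernel))

    target = torch.zeros(1, 1, 16, 26, 26)
    target[..., 10:14, 10:14] = 1.0                       # desired localized response

    stimulus = torch.zeros(1, 1, 20, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([stimulus], lr=0.05)
    for _ in range(300):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(stimulus), target)
        loss.backward()                                   # gradient w.r.t. the input, not weights
        opt.step()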

19.
Front Neuroinform ; 12: 89, 2018.
Article in English | MEDLINE | ID: mdl-30631269

ABSTRACT

The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples by using BindsNET in practice.
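To avoid guessing BindsNET's exact class interfaces, here is a minimal leaky integrate-and-fire simulation written directly in PyTorch tensors, illustrating the kind of spiking dynamics such frameworks wrap; consult the BindsNET documentation for its actual network, node, and connection APIs.

    import torch

    n_in, n_out, steps = 100, 10, 250
    dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0

    weights = 0.05 * torch.rand(n_in, n_out)             # static random synapses
    v = torch.zeros(n_out)                                # membrane potentials
    spike_record = []

    for t in range(steps):
        in_spikes = (torch.rand(n_in) < 0.05).float()     # Poisson-like input spikes
        v = v + dt * (-v / tau) + in_spikes @ weights     # leak + synaptic input
        out_spikes = (v >= v_thresh).float()
        v = torch.where(out_spikes.bool(), torch.full_like(v, v_reset), v)
        spike_record.append(out_spikes)

    print("output spike counts:", torch.stack(spike_record).sum(dim=0))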
