Results 1 - 7 of 7

1.
Animals (Basel) ; 14(9), 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38731328

ABSTRACT

Standing and lying are the fundamental behaviours of quadrupedal animals, and the ratio of their durations is a significant indicator of calf health. In this study, we proposed a computer vision method for non-invasive monitoring of calves' behaviours. Cameras were deployed at four viewpoints to monitor six calves on six consecutive days. YOLOv8n was trained to detect standing and lying calves. Daily behavioural budgets were then summarised and analysed based on automatic inference on data not seen during training. The results show a mean average precision of 0.995 and an average inference speed of 333 frames per second. The maximum error in the estimated daily standing and lying time for a total of 8 calf-days is less than 14 min. Calves with diarrhoea had about 2 h more daily lying time (p < 0.002), 2.65 more daily lying bouts (p < 0.049), and a 4.3 min shorter daily lying bout duration (p = 0.5) compared to healthy calves. The proposed method can help in understanding calves' health status based on automatically measured standing and lying time, thereby improving their welfare and management on the farm.
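
To make the workflow above concrete, the sketch below shows how per-frame detections from a fine-tuned YOLOv8 model could be accumulated into a daily standing/lying budget. It is a minimal illustration, not the authors' code: the weights file name, the two class labels, and the frame sampling interval are all assumptions.

```python
# Minimal sketch of the behaviour-budget idea described above (not the authors' code).
# Assumes a YOLOv8n model fine-tuned on two classes, "lying" and "standing", saved as
# "calf_posture_yolov8n.pt", and frames sampled every 5 seconds; these are illustrative
# assumptions, not values from the paper.
from collections import Counter

from ultralytics import YOLO

model = YOLO("calf_posture_yolov8n.pt")  # hypothetical fine-tuned weights
FRAME_INTERVAL_S = 5                     # assumed sampling interval between frames

def daily_budget(frame_paths):
    """Accumulate seconds spent lying vs. standing, summed over all detected calves."""
    seconds = Counter()
    for frame in frame_paths:
        result = model(frame, verbose=False)[0]
        for cls_id in result.boxes.cls.tolist():
            label = result.names[int(cls_id)]  # "lying" or "standing"
            seconds[label] += FRAME_INTERVAL_S
    return seconds

# Example: budget = daily_budget(sorted(Path("day1_frames").glob("*.jpg")))
```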

2.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 11561-11574, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37145942

ABSTRACT

Confluence is a novel alternative to Non-Maxima Suppression (NMS) for bounding box post-processing in object detection that does not rely on Intersection over Union (IoU). It overcomes the inherent limitations of IoU-based NMS variants and provides a more stable, consistent predictor of bounding box clustering by using a proximity metric inspired by the normalized Manhattan distance. Unlike Greedy and Soft NMS, it does not rely solely on classification confidence scores to select optimal bounding boxes; instead, it selects the box that is closest to every other box within a given cluster and removes highly confluent neighboring boxes. Confluence is experimentally validated on the MS COCO and CrowdHuman benchmarks, improving Average Precision by 0.2-2.7% and 1-3.8%, respectively, and Average Recall by 1.3-9.3% and 2.4-7.3% when compared against Greedy and Soft-NMS variants. Quantitative results are supported by extensive qualitative analysis, and threshold sensitivity analysis experiments support the conclusion that Confluence is more robust than NMS variants. Confluence represents a paradigm shift in bounding box processing, with the potential to replace IoU in bounding box regression processes.
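
The selection rule can be sketched roughly as follows. This is a simplified illustration of a Confluence-style step, not the paper's exact formulation: the per-pair min-max scaling, the threshold of 1.0, and the omission of the paper's confidence weighting and per-class grouping are assumptions made for brevity.

```python
# Simplified sketch of a Confluence-style selection step (not the paper's exact algorithm).
# Boxes are (x1, y1, x2, y2, score) tuples; confidence weighting and per-class grouping
# from the paper are omitted, and the threshold value is an assumption.
import numpy as np

def proximity(a, b):
    """Manhattan distance between two boxes after jointly min-max scaling their coordinates."""
    xs = np.array([a[0], a[2], b[0], b[2]], dtype=float)
    ys = np.array([a[1], a[3], b[1], b[3]], dtype=float)
    xs = (xs - xs.min()) / (xs.max() - xs.min() + 1e-9)
    ys = (ys - ys.min()) / (ys.max() - ys.min() + 1e-9)
    return (abs(xs[0] - xs[2]) + abs(ys[0] - ys[2])
            + abs(xs[1] - xs[3]) + abs(ys[1] - ys[3]))

def confluence_select(boxes, threshold=1.0):
    """Keep, per cluster, the box most confluent with its neighbours; drop the rest."""
    remaining = list(boxes)
    kept = []
    while remaining:
        # Score each box by its total proximity to every other remaining box;
        # a box sitting tightly within a cluster gets a low total.
        totals = [sum(proximity(b, other) for other in remaining if other is not b)
                  for b in remaining]
        best = remaining[int(np.argmin(totals))]
        kept.append(best)
        # Suppress boxes highly confluent (close) to the selected box.
        remaining = [b for b in remaining
                     if b is not best and proximity(b, best) > threshold]
    return kept

# Example: confluence_select([(12, 10, 110, 208, 0.91), (15, 12, 114, 210, 0.88),
#                             (300, 40, 380, 150, 0.77)]) keeps one box per cluster.
```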

3.
Ecol Evol ; 11(9): 4494-4506, 2021 May.
Article in English | MEDLINE | ID: mdl-33976825

ABSTRACT

A time-consuming challenge faced by camera trap practitioners is the extraction of meaningful data from images to inform ecological management. An increasingly popular solution is automated image classification software. However, most solutions are not sufficiently robust to be deployed on a large scale due to lack of location invariance when transferring models between sites. This prevents optimal use of ecological data, resulting in significant expenditure of time and resources to annotate and retrain deep learning models. We present a method ecologists can use to develop optimized location invariant camera trap object detectors by (a) evaluating publicly available image datasets characterized by high intradataset variability in training deep learning models for camera trap object detection and (b) using small subsets of camera trap images to optimize models for high accuracy domain-specific applications. We collected and annotated three datasets of images of striped hyena, rhinoceros, and pigs, from the image-sharing websites FlickR and iNaturalist (FiN), to train three object detection models. We compared the performance of these models to that of three models trained on the Wildlife Conservation Society and Camera CATalogue datasets, when tested on out-of-sample Snapshot Serengeti datasets. We then increased FiN model robustness by infusing small subsets of camera trap images into training. In all experiments, the mean Average Precision (mAP) of the FiN trained models was significantly higher (82.33%-88.59%) than that achieved by the models trained only on camera trap datasets (38.5%-66.74%). Infusion further improved mAP by 1.78%-32.08%. Ecologists can use FiN images for training deep learning object detection solutions for camera trap image processing to develop location invariant, robust, out-of-the-box software. Models can be further optimized by infusion of 5%-10% camera trap images into training data. This would allow AI technologies to be deployed on a large scale in ecological applications. Datasets and code related to this study are open source and available in this repository: https://doi.org/10.5061/dryad.1c59zw3tx.
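
The infusion step lends itself to a very small sketch: blend a fixed fraction of in-domain camera trap images into the FiN training list before training. The function and variable names below are hypothetical, and the default fraction simply mirrors the 5%-10% range reported above.

```python
# Illustrative sketch of the infusion idea (not the study's code): mix a small fraction
# of in-domain camera trap images into an out-of-domain (FiN) training set.
import random

def infuse(fin_images, camera_trap_images, fraction=0.05, seed=42):
    """Return a shuffled training list of FiN images plus ~fraction camera trap images."""
    rng = random.Random(seed)
    n_infuse = max(1, int(fraction * len(fin_images)))
    infused = rng.sample(camera_trap_images, min(n_infuse, len(camera_trap_images)))
    training_set = list(fin_images) + infused
    rng.shuffle(training_set)
    return training_set

# Example (hypothetical file lists):
# train_list = infuse(fin_hyena_images, serengeti_hyena_images, fraction=0.10)
```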

4.
Sensors (Basel) ; 21(8), 2021 Apr 08.
Article in English | MEDLINE | ID: mdl-33917792

ABSTRACT

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is expensive in time and resources, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and their inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratize access to deep learning technologies by providing an easy-to-use software application that allows non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images without specific technical expertise. Auto-annotation and annotation editing functionalities minimize the constraints of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single-class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the benefit of editing annotations to ensure quality datasets. Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and use of transfer learning mean that domain-specific models can be trained rapidly and updated frequently without the need for computer science expertise or data sharing, protecting intellectual property and privacy.

5.
Animals (Basel) ; 10(1), 2019 Dec 27.
Article in English | MEDLINE | ID: mdl-31892236

ABSTRACT

We present ClassifyMe, a software tool for the automated identification of animal species from camera trap images. ClassifyMe is intended to be used by ecologists both in the field and in the office. Users can download a pre-trained model specific to their location of interest and then upload the images from a camera trap to a laptop or workstation. ClassifyMe will identify animals and other objects (e.g., vehicles) in images, provide a report file with the most likely species detections, and automatically sort the images into sub-folders corresponding to these species categories. False triggers (no visible object present) will also be filtered and sorted. Importantly, the ClassifyMe software operates on the user's local machine (own laptop or workstation), not via an internet connection. This allows users access to state-of-the-art camera trap computer vision software in situ, rather than only in the office. The software also incurs minimal cost to the end-user, as there is no need for expensive data uploads to cloud services. Furthermore, processing the images locally on the user's end-device gives them control over their data and resolves privacy issues surrounding the transfer of, and third-party access to, their datasets.

6.
Ecol Evol ; 6(10): 3216-25, 2016 May.
Article in English | MEDLINE | ID: mdl-27096080

ABSTRACT

Camera trapping is widely used in ecological studies. It is often considered nonintrusive simply because animals are not captured or handled. However, the emission of light and sound from camera traps can be intrusive. We evaluated the daytime and nighttime behavioral responses of four mammalian predators to camera traps in road-based, passive (no bait) surveys, in order to determine how this might affect ecological investigations. Wild dogs, European red foxes, feral cats, and spotted-tailed quolls all exhibited behaviors indicating they noticed camera traps. Their recognition of camera traps was more likely when animals were approaching the device than if they were walking away from it. Some individuals of each species retreated from camera traps and some moved toward them, with negative behaviors slightly more common during the daytime. There was no consistent response to camera traps within species; both attraction and repulsion were observed. Camera trapping is clearly an intrusive sampling method for some individuals of some species. This may limit the utility of conclusions about animal behavior obtained from camera trapping. Similarly, it is possible that behavioral responses to camera traps could affect detection probabilities, introducing as yet unmeasured biases into camera trapping abundance surveys. These effects demand consideration when utilizing camera traps in ecological research and will ideally prompt further work to quantify associated biases in detection probabilities.

7.
PLoS One ; 9(10): e110832, 2014.
Article in English | MEDLINE | ID: mdl-25354356

ABSTRACT

Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; since this is often undesirable in research, it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.


Subject(s)
Mammals/physiology, Photography/instrumentation, Video Recording/instrumentation, Animals, Hearing, Photography/methods, Video Recording/methods, Ocular Vision