An approach to rapid processing of camera trap images with minimal human input.
Duggan, Matthew T; Groleau, Melissa F; Shealy, Ethan P; Self, Lillian S; Utter, Taylor E; Waller, Matthew M; Hall, Bryan C; Stone, Chris G; Anderson, Layne L; Mousseau, Timothy A.
Affiliations
  • Duggan MT; Department of Biological Sciences, University of South Carolina (UofSC), Columbia, South Carolina, USA.
  • Groleau MF; Department of Biological Sciences, University of South Carolina (UofSC), Columbia, South Carolina, USA.
  • Shealy EP; Department of Biological Sciences, University of South Carolina (UofSC), Columbia, South Carolina, USA.
  • Self LS; Department of Biological Sciences, University of South Carolina (UofSC), Columbia, South Carolina, USA.
  • Utter TE; Department of Biological Sciences, University of South Carolina (UofSC), Columbia, South Carolina, USA.
  • Waller MM; Department of Biological Sciences, University of South Carolina (UofSC), Columbia, South Carolina, USA.
  • Hall BC; South Carolina Army National Guard Environmental Office, Eastover, South Carolina, USA.
  • Stone CG; South Carolina Army National Guard Environmental Office, Eastover, South Carolina, USA.
  • Anderson LL; South Carolina Army National Guard Environmental Office, Eastover, South Carolina, USA.
  • Mousseau TA; Department of Biological Sciences, University of South Carolina (UofSC), Columbia, South Carolina, USA.
Ecol Evol ; 11(17): 12051-12063, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34522360
Camera traps have become an extensively utilized tool in ecological research, but the manual processing of images created by a network of camera traps rapidly becomes an overwhelming task, even for small camera trap studies.

We used transfer learning to create convolutional neural network (CNN) models for identification and classification. By utilizing a small dataset with an average of 275 labeled images per species class, the model was able to distinguish between species and remove false triggers.

We trained the model to detect 17 object classes with individual species identification, reaching an accuracy of up to 92% and an average F1 score of 85%. Previous studies have suggested that thousands of images of each object class are needed to reach results comparable to those achieved by human observers; however, we show that such accuracy can be achieved with fewer images.

With transfer learning and an ongoing camera trap study, even a small project can successfully create a deep learning model. A generalizable model produced from an unbalanced class set can be utilized to extract trap events that can later be confirmed by human processors.
Full text: 1 Collection: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Journal: Ecol Evol Year: 2021 Document type: Article Country of publication: United Kingdom