
Automation of video-based location tracking tool for dairy cows in their housing stalls using deep learning

Bibliographic Details
Published in: Smart Agricultural Technology, 2021-12, Vol. 1, p. 100015, Article 100015
Main Authors: Zambelis, A., Saadati, M., Dallago, G.M., Stecko, P., Boyer, V., Parent, J.-P., Pedersoli, M., Vasseur, E.
Format: Article
Language: English
Description
Summary:
Highlights:
• Deep learning models can accurately annotate animal coordinates.
• ResNet-18 best locates cow hips and neck within the housing stall.
• Future applications will aim to analyze cow activity patterns to optimize comfort.

Animal welfare research has raised concerns regarding the intensification of farm animal housing systems that offer limited opportunity for movement. Applying deep learning models to location tracking provides an opportunity for accurate and timely measurement of cow movement within the housing environment. The objective of this study was to develop an accurate, low-cost alternative to manual tracking of cows' spatial use of their tie-stalls by applying deep learning techniques. Twenty-four lactating Holstein cows were video recorded for a continuous 24-h period on weeks 1, 2, 3, 6, 8, and 10. Individual images showing the in-stall position of each cow were extracted from each 24-h recording at a rate of one image per minute. Three coordinates on each cow, the left hip, the right hip, and the neck, were manually annotated on the image sequences to track location. The final dataset used to validate the deep learning approach consisted of 199,100 Red-Green-Blue (RGB) images with manual coordinate annotations. Leave-one-out cross-validation was used to train 5 variants of deep residual networks. Model performance was expressed as the pixel error for each annotated coordinate on the validation image set, converted to a standard measure in cm using the average pixel/cm ratio for each cow in each week. The best model, ResNet-18, achieved an average error across all 3 coordinates equivalent to 1.44 cm in the actual physical placement of the coordinates within the stall environment. Given this high degree of accuracy, the model could be used to analyze the activity patterns of individual cows to optimize stall spaces and improve ease of movement.
ISSN: 2772-3755
DOI: 10.1016/j.atech.2021.100015