Scalable computer vision-based assessment of bait lamina sticks to quantify soil fauna activity
Authors: Adrija Roy, Lukas Thielemann, Masahiro Ryo, Juan Camilo Rivera-Palacio, Konlavach Mengsuwan, Kathrin Grahmann
Abstract
Soil fauna plays a critical role in ecosystem functions such as nutrient cycling, organic matter decomposition, and soil structure maintenance; accurately assessing its activity is therefore essential for monitoring soil health. Traditional methods such as the bait lamina test, while widely used, rely on manual visual scoring, which is subjective, time-consuming, and difficult to scale. In this study, we present an automated computer vision approach that quantifies soil fauna activity by assessing bait consumption on bait lamina sticks, using high-resolution imagery processed with a Python-based pipeline. We applied this approach to 159 bait sticks collected from field plots in Brandenburg, Germany, and compared the automated results with assessments from five independent human operators. The automated method showed strong agreement with the manual evaluations, yielding Pearson's r between 0.80 and 0.92, depending on the operator, and a Cohen's kappa of 0.48 for categorical concordance. Bland-Altman analysis revealed that over 90 % of the automated scores fell within ±0.2 of the manual measurements. The automated technique reduced analysis time compared with manual scoring and removed operator subjectivity and bias. Although the method underestimated fully consumed bait holes, the mean difference between automated and manual scores was only 0.02 (p = 0.0049), indicating a negligible effect size. The automated approach is straightforward, reproducible, and flexible, enabling efficient and impartial evaluation of soil fauna activity for large-scale soil health monitoring. Possible improvements include enhancing the image-analysis workflow, for example by improving hole-detection robustness, reducing sensitivity to coating or lighting variation, and exploring more advanced classification models.
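The abstract reports several standard agreement statistics (Pearson's r, Cohen's kappa, Bland-Altman bias and limits of agreement, paired mean difference). The following is a minimal sketch, not the authors' code, of how such a comparison could be computed in Python, assuming automated and manual per-hole consumption scores are available as parallel arrays; the array values here are illustrative placeholders, not study data.

```python
"""Hedged sketch of automated-vs-manual agreement statistics.
Assumes per-hole scores in {0, 0.5, 1}: 0 = intact bait, 1 = fully consumed.
All data below are hypothetical examples, not results from the study."""
import numpy as np
from scipy.stats import pearsonr, ttest_rel
from sklearn.metrics import cohen_kappa_score

automated = np.array([0.0, 0.5, 1.0, 0.5, 0.0, 1.0, 0.5, 0.0])
manual    = np.array([0.0, 0.5, 1.0, 1.0, 0.0, 1.0, 0.5, 0.5])

# Pearson correlation between automated and manual scores.
r, _ = pearsonr(automated, manual)

# Cohen's kappa on the categorical scores (mapped to integer labels 0/1/2).
kappa = cohen_kappa_score((automated * 2).astype(int), (manual * 2).astype(int))

# Bland-Altman statistics: mean difference (bias) and 95 % limits of agreement.
diff = automated - manual
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
within_02 = np.mean(np.abs(diff) <= 0.2)  # share of scores within ±0.2

# Paired t-test on the automated-vs-manual differences.
t_stat, p_value = ttest_rel(automated, manual)

print(f"Pearson r = {r:.2f}, Cohen's kappa = {kappa:.2f}")
print(f"bias = {bias:.3f}, limits of agreement = ±{loa:.3f}")
print(f"within ±0.2: {within_02:.0%}, paired t-test p = {p_value:.4f}")
```

Reporting both a correlation and Bland-Altman limits, as the abstract does, is a common design choice: correlation alone can mask a systematic offset, while the bias and limits of agreement quantify how far the automated scores drift from the manual reference.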