Nearly every problem in wildlife monitoring is fundamentally "fine-grained": ecologists aim to identify observations to the species level, and projects frequently require annotations that are even finer-grained than species: age, sex, behavior, morphology, etc. Nearly every problem in wildlife monitoring shares another property: tedious data review. AI offers the potential to dramatically accelerate that review, and when designing an AI system for wildlife identification, it's tempting to match the AI problem precisely to the fine-grained problem you're trying to solve. Want to identify species in acoustic recordings? Train a model to identify species in acoustic recordings. Want to identify species, sex, and count in camera trap images? Train a model to do all of those things. In taking that approach, you assume either that (a) you will be able to totally automate the review process (spoiler alert: this has yet to happen) or that (b) your model won't be perfect, but it will be imperfect in a way that still accelerates your review. The latter is sometimes true, but the goal of this talk is to caution ecologists about this assumption; we claim that when an AI system designed for full automation fails, even just a little bit, you often end up with a system that doesn't save you any time (for example, because errors don't necessarily fall along taxonomic lines). Moreover, if you set out to solve a coarse-grained problem from the beginning, you might allocate your training data collection resources differently, choose different AI tools, be able to use more off-the-shelf starting points, and - maybe most importantly - almost certainly design a different workflow to use your model's results.
We will discuss successes and failures of fine- and coarse-grained AI approaches from several sensor domains, and we hope that the audience leaves with some new ideas about how they might structure their own AI systems.