The importance of simulation assumptions when evaluating detectability in population models
Population monitoring is important for investigating a variety of ecological questions, and N-mixture models are increasingly used to model population size (N) and trends (lambda) while estimating detectability (p) from repeated counts within primary periods (when populations are closed to changes). Extending these models to dynamic processes with serial dependence across primary periods may relax the closure assumption, but simulations to evaluate models and inform effort (e.g., number of repeated counts) typically assume p is constant or random across sites and years. Thus, it is unknown how these models perform under scenarios where trends in p confound inferences on N and lambda, and conclusions regarding effort may be overoptimistic. Here, we used global positioning system (GPS) data from greater sage-grouse (Centrocercus urophasianus) to inform simulations of the detection process for lek counts of this species, and we created scenarios with and without linear annual trends in p. We then compared estimates of N and lambda from hierarchical population models fit either with single maximum counts or with detectability estimated from repeated counts (dynamic N-mixture models). We also explored using auxiliary data to correct counts for variation in detectability. Uncorrected count models consistently underestimated N by >50%, whereas N-mixture models without auxiliary data underestimated N to a lesser degree owing to unmodeled heterogeneity in p (e.g., age-related variation). Nevertheless, estimates of lambda from both types of models were unbiased and similar for scenarios without trends in p. When p declined systematically across years, uncorrected count models underestimated lambda, whereas N-mixture models estimated lambda with little bias when all sites were counted repeatedly. Auxiliary data also reduced bias in parameter estimates. Evaluating population models using scenarios with systematic variation in p may better reveal potential biases and inform effort than simulations that assume p is constant or random. Dynamic N-mixture models can distinguish between trends in p and N but also require repeated counts within primary periods for accurate estimates. Auxiliary data may be useful when researchers lack repeated counts, wish to monitor more sites less intensively, or require unbiased estimates of N.
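As a reading aid, the following is a minimal sketch (not taken from the paper) of the confounding the abstract describes: repeated counts are simulated as binomial thinning of a stable true population with a linear annual decline in p, and the trend recovered from uncorrected maximum counts comes out biased low even though abundance is constant. All design values here (50 sites, 10 years, 3 visits per primary period, a 3% per-year decline in p) are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

n_sites, n_years, n_visits = 50, 10, 3   # hypothetical survey design
lambda_true = 1.00                        # stable true population trend
p0, p_trend = 0.70, -0.03                 # detection starts at 0.70 and declines 3%/yr

# True abundance per site and year (stable on average in this illustration)
N0 = rng.poisson(40, size=n_sites)
N = np.array([np.round(N0 * lambda_true ** t).astype(int)
              for t in range(n_years)]).T          # shape (sites, years)

# Year-specific detection probability with a linear trend
p = np.clip(p0 + p_trend * np.arange(n_years), 0.05, 0.95)

# Repeated counts within each primary period: Binomial(N, p_t)
counts = rng.binomial(N[:, :, None], p[None, :, None],
                      size=(n_sites, n_years, n_visits))

# "Uncorrected count" index: single maximum count per site and year
max_counts = counts.max(axis=2)

# Naive trend estimate: log-linear slope of the summed index across years
years = np.arange(n_years)
log_index = np.log(max_counts.sum(axis=0))
naive_lambda = np.exp(np.polyfit(years, log_index, 1)[0])

print(f"true lambda = {lambda_true:.3f}, "
      f"naive lambda from max counts = {naive_lambda:.3f}")
# Because p declines over years while N stays constant, the naive index
# estimates lambda < 1, illustrating how a trend in p can masquerade as a
# population decline unless detectability is modeled (e.g., via dynamic
# N-mixture models fit to the repeated counts) or corrected with auxiliary data.
```

Under these assumptions the printed naive lambda falls below 1, mirroring the abstract's finding that uncorrected count models underestimate lambda when p declines systematically across years.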