ABSTRACT

An underappreciated facet of seroepidemiological analysis is that it sometimes allows us to estimate the time at which an individual was infected with a particular pathogen. In some cases, this estimate may be helpful to a patient or to a clinician treating a particular syndrome; in most use cases, however, these estimates are most practically applied when aggregated at the population level. In this chapter, we discuss the usefulness of estimating the time of infection τ in cross-sectional serology, showing that it can help in reconstructing a past force of infection that varied in both age and time. We introduce the minimal data set and the good practices in study design needed to obtain an antibody waning rate λ, which can in turn be used to estimate τ. Estimation of the waning rate λ and the time of infection τ can be carried out with straightforward regression techniques and likelihood methods. We use data from a Danish cohort of 138 patients infected with Salmonella enterica serovar Enteritidis to estimate the waning rate of an optical density (OD) measure from an ELISA IgG assay at 0.99 log-OD units per year (95% CI: 0.80–1.17). Using this cohort's inferred antibody waning rate, we show that the 95% confidence and 95% credible intervals for the time of infection are too wide to be practically useful at an individual scale, and we suggest methods for aggregating these estimates into more useful population-level analyses. We characterize the bias in estimates of τ, showing that they are biased upward when the time of infection is recent and that this bias decreases for times of infection further in the past.
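
As a brief illustration of the estimation approach summarized above, the sketch below fits a log-linear antibody decay model by regression and inverts it to obtain a point estimate of the time of infection τ. It assumes simple exponential decay of the ELISA OD, log OD(t) = log OD(0) − λt; the measurements, the observed log-OD value, and the baseline intercept are hypothetical stand-ins for illustration, not the Danish cohort data.

```python
import numpy as np

# Hypothetical follow-up measurements: time since infection (years) and
# log ELISA OD. These values are illustrative, not the cohort data.
t_years = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 3.0])
log_od = np.array([1.9, 1.5, 1.0, 0.6, 0.1, -0.9])

# Fit log OD(t) = log OD(0) - lambda * t by ordinary least squares.
slope, intercept = np.polyfit(t_years, log_od, deg=1)
waning_rate = -slope  # lambda, in log-OD units per year

# Inverting the decay model, a single cross-sectional measurement
# log_od_obs yields a point estimate of the time since infection:
#   tau_hat = (log OD(0) - log_od_obs) / lambda
log_od_obs = 0.4
tau_hat = (intercept - log_od_obs) / waning_rate

print(f"estimated waning rate lambda: {waning_rate:.2f} log-OD units/yr")
print(f"estimated time of infection tau: {tau_hat:.2f} years ago")
```

The point estimate alone conceals the wide confidence and credible intervals discussed above; in practice, the uncertainty in both λ and the baseline OD must be propagated into the estimate of τ before any individual-level interpretation.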