Efficient coupling of within- and between-host infectious disease dynamics

Hosts can evolve a variety of defences against parasitism, including resistance (which prevents or reduces the spread of infection) and tolerance (which protects against virulence). Some organisms have evolved different levels of tolerance at different life stages, which is likely the result of coevolution with pathogens, yet it is currently unclear how coevolution drives patterns of age-specific tolerance. Here, we use a model of tolerance-virulence coevolution to investigate how age structure influences coevolutionary dynamics. Specifically, we explore how coevolution unfolds when tolerance and virulence (disease-induced mortality) are age-specific compared to when these traits are uniform across the host lifespan. We find that coevolutionary cycling is relatively common when host tolerance is age-specific, but cycling does not occur when tolerance is the same across all ages. We also find that age-structured tolerance can lead to selection for higher virulence in shorter-lived hosts than in longer-lived hosts, whereas non-age-structured tolerance always causes virulence to increase with host lifespan. Our findings therefore suggest that age structure can have substantial qualitative impacts on host-pathogen coevolution.
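
The following is a minimal sketch of the kind of age-structured model described above, assuming two life stages (juveniles and adults) and stage-specific tolerance that offsets part of the disease-induced mortality; the structure, parameter names (e.g. tol_J, tol_A, alpha) and values are illustrative assumptions rather than those of the paper.

```python
# Minimal sketch (not the paper's model): an SI model with two host life
# stages, where stage-specific tolerance offsets part of the
# disease-induced mortality (virulence). Parameter names and values are
# illustrative assumptions only.
from scipy.integrate import solve_ivp

b = 2.0        # birth rate (all births enter the juvenile class)
g = 1.0        # maturation rate, juveniles -> adults
d = 0.5        # natural mortality rate
q = 0.01       # density-dependent competition acting on births
beta = 0.6     # transmission coefficient
alpha = 2.0    # virulence (disease-induced mortality)
tol_J = 0.8    # juvenile tolerance (fraction of virulence offset)
tol_A = 0.2    # adult tolerance

def rhs(t, y):
    SJ, IJ, SA, IA = y
    N = SJ + IJ + SA + IA
    births = b * N * max(1 - q * N, 0)   # density-dependent births
    force = beta * (IJ + IA)             # force of infection
    dSJ = births - (force + g + d) * SJ
    dIJ = force * SJ - (g + d + alpha * (1 - tol_J)) * IJ
    dSA = g * SJ - (force + d) * SA
    dIA = g * IJ + force * SA - (d + alpha * (1 - tol_A)) * IA
    return [dSJ, dIJ, dSA, dIA]

sol = solve_ivp(rhs, (0, 200), [10.0, 1.0, 10.0, 0.0])
print("final densities (SJ, IJ, SA, IA):", sol.y[:, -1].round(2))
```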

Tolerance-conferring defensive symbionts and the evolution of parasite virulence

Defensive symbionts in the host microbiome can confer protection from infection or reduce the harm caused by parasite infection. Defensive symbionts are therefore promising biocontrol agents that could be used to limit or ameliorate the impact of infectious diseases. Previous theory has shown how symbionts can evolve along the parasitism-mutualism continuum to confer greater or lesser protection to their hosts, and in turn how hosts may coevolve with their symbionts to potentially form a mutualistic relationship. However, the consequences of introducing a defensive symbiont for parasite evolution, and how the symbiont may coevolve with the parasite, have yet to be explored theoretically. Here, we investigate the ecological and evolutionary implications of introducing a tolerance-conferring defensive symbiont into an established host-parasite system. We show that while the defensive symbiont may initially have a positive impact on the host population, parasite and symbiont evolution tend to have a net negative effect on the host population in the long term. This is because the introduction of the defensive symbiont always selects for an increase in parasite virulence and may cause diversification into high- and low-virulence strains. Even if the symbiont experiences selection for greater host protection, this simply increases selection for virulence in the parasite, resulting in a net negative effect on the host population. Our results therefore suggest that tolerance-conferring defensive symbionts may be poor biocontrol agents for population-level infectious disease control.
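
To make the setup concrete, here is a minimal sketch of a host-parasite-symbiont system of the kind described above, assuming the symbiont simply offsets a fixed fraction of parasite-induced mortality in co-infected hosts; the compartments (S, M, P, B), parameter names and values are illustrative assumptions, not the paper's model.

```python
# Minimal sketch (illustrative assumptions, not the paper's model): hosts
# may carry a tolerance-conferring symbiont (M), a parasite (P), or both
# (B); the symbiont offsets a fraction tau of parasite-induced mortality
# in co-infected hosts. Parameter names and values are for demonstration.
import numpy as np
from scipy.integrate import solve_ivp

b, d, q = 2.0, 0.5, 0.01      # birth rate, death rate, competition
beta_P, beta_M = 0.5, 0.3     # parasite / symbiont transmission
alpha = 2.0                   # parasite virulence
tau = 0.7                     # tolerance conferred by the symbiont

def rhs(t, y):
    S, M, P, B = y            # uninfected, symbiont-only, parasite-only, co-infected
    N = S + M + P + B
    fP = beta_P * (P + B)     # force of infection, parasite
    fM = beta_M * (M + B)     # force of infection, symbiont
    dS = b * N * max(1 - q * N, 0) - (d + fP + fM) * S
    dM = fM * S - (d + fP) * M
    dP = fP * S - (d + alpha + fM) * P
    dB = fP * M + fM * P - (d + alpha * (1 - tau)) * B
    return [dS, dM, dP, dB]

sol = solve_ivp(rhs, (0, 300), [50.0, 5.0, 5.0, 0.0])
print("final densities (S, M, P, B):", np.round(sol.y[:, -1], 2))
```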

Non-pharmaceutical interventions and the emergence of pathogen variants

Non-pharmaceutical interventions (NPIs), such as social distancing and contact tracing, are important public health measures that can reduce pathogen transmission. In addition to playing a crucial role in suppressing transmission, NPIs influence pathogen evolution by mediating mutation supply, restricting the availability of susceptible hosts, and altering the strength of selection for novel variants. Yet it is unclear how NPIs might affect the emergence of novel variants that are able to escape pre-existing immunity (partially or fully), are more transmissible, or cause greater mortality. We analyse a stochastic two-strain epidemiological model to determine how the strength and timing of NPIs affect the emergence of variants with similar or contrasting life-history characteristics to the wildtype. We show that, while stronger and timelier NPIs generally reduce the likelihood of variant emergence, it is possible for more transmissible variants with high cross-immunity to have a greater probability of emerging at intermediate levels of NPIs. This is because intermediate levels of NPIs allow a wildtype epidemic that is large enough to provide a sufficient mutation supply, but not so large that it exhausts the pool of susceptible hosts a novel variant needs to become established in the host population. However, since one cannot predict the characteristics of a variant, the best strategy to prevent emergence is likely to be the implementation of strong, timely NPIs.
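
As an illustration of the kind of stochastic two-strain model analysed here, the sketch below simulates a Gillespie-style SIR process in which NPIs scale transmission by a constant factor, variant infections arise by mutation from the wildtype, and partial cross-immunity governs reinfection of wildtype-recovered hosts; all parameter names (beta_w, beta_v, cross, mu), values and the emergence criterion are assumptions for demonstration only, not the paper's.

```python
# Gillespie-style sketch (illustrative assumptions, not the paper's exact
# model): a two-strain SIR process in which NPIs scale transmission by
# (1 - npi), variant infections arise by mutation from wildtype infections,
# and partial cross-immunity lets the variant reinfect wildtype-recovered
# hosts. Parameter names, values and the emergence threshold are assumed.
import numpy as np

rng = np.random.default_rng(1)

def variant_emerges(npi, N=5000, beta_w=0.25, beta_v=0.35, gamma=0.1,
                    mu=1e-4, cross=0.8, t_max=1000.0, threshold=100):
    S, Iw, Iv, Rw = N - 10, 10, 0, 0
    t = 0.0
    while t < t_max and Iw + Iv > 0:
        rates = np.array([
            (1 - npi) * beta_w * S * Iw / N,                 # wildtype infects S
            (1 - npi) * beta_v * S * Iv / N,                 # variant infects S
            (1 - npi) * beta_v * (1 - cross) * Rw * Iv / N,  # variant reinfects Rw
            gamma * Iw,                                      # wildtype recovery
            gamma * Iv,                                      # variant recovery
            mu * Iw,                                         # mutation: wildtype -> variant
        ])
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)
        event = rng.choice(6, p=rates / total)
        if event == 0:   S -= 1; Iw += 1
        elif event == 1: S -= 1; Iv += 1
        elif event == 2: Rw -= 1; Iv += 1
        elif event == 3: Iw -= 1; Rw += 1
        elif event == 4: Iv -= 1        # variant-recovered hosts leave the dynamics
        else:            Iw -= 1; Iv += 1
        if Iv >= threshold:             # crude criterion for 'emergence'
            return True
    return False

for npi in (0.0, 0.3, 0.6):
    p = np.mean([variant_emerges(npi) for _ in range(20)])
    print(f"NPI strength {npi:.1f}: estimated emergence probability {p:.2f}")
```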

Antigenic evolution of SARS-CoV-2 in immunocompromised hosts

The apparent lack of antigenic evolution by the Delta variant (B.1.617.2) of SARS-CoV-2 during the COVID-19 pandemic is puzzling. The combination of increasing immune pressure due to the rollout of vaccines and a relatively high number of infections following the relaxation of non-pharmaceutical interventions should have created perfect conditions for immune escape variants to evolve from the Delta lineage. Instead, the Omicron variant (B.1.1.529), which is hypothesised to have evolved in an immunocompromised individual, is the first major variant to exhibit significant immune escape following vaccination programmes and is set to become globally dominant in 2022. Here, we use a simple mathematical model to explore possible reasons why the Delta lineage did not exhibit antigenic evolution and to understand how and when immunocompromised individuals affect the emergence of immune escape variants. We show that when the pathogen does not have to cross a fitness valley for immune escape to occur, immunocompromised individuals have no qualitative effect on antigenic evolution (although they may accelerate immune escape if within-host evolutionary dynamics are faster in immunocompromised individuals). But if immune escape requires the pathogen to cross a fitness valley at the between-host level, then persistent infections of immunocompromised individuals allow mutations to accumulate, facilitating rather than simply accelerating antigenic evolution. Our results suggest that better global health equality, including improving access to vaccines and treatments for individuals who are immunocompromised (especially in lower- and middle-income countries), may be crucial to preventing the emergence of future immune escape variants of SARS-CoV-2.
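
The fitness-valley argument can be illustrated with a toy calculation: assuming immune escape requires two sequential mutations and that infection durations are exponentially distributed with a constant per-day mutation rate, the probability that both mutations accumulate within a single infection rises sharply with infection duration. This is purely illustrative (all parameters are assumed) and is not the model analysed in the paper.

```python
# Toy calculation (illustrative assumptions, not the paper's model): if
# immune escape requires two sequential mutations and single mutants
# transmit poorly between hosts (a fitness valley), the chance that both
# mutations accumulate within a single infection depends strongly on how
# long that infection lasts. Persistent infections of immunocompromised
# hosts are represented simply as much longer infections with the same
# per-day mutation rate.
import numpy as np

rng = np.random.default_rng(2)

def p_double_mutant(mean_duration_days, mu=0.01, n_sims=100_000):
    """Probability that two sequential mutation steps (rate mu per day each)
    both occur within an exponentially distributed infection duration."""
    duration = rng.exponential(mean_duration_days, n_sims)
    step1 = rng.exponential(1 / mu, n_sims)   # waiting time to first mutation
    step2 = rng.exponential(1 / mu, n_sims)   # waiting time to second mutation
    return np.mean(step1 + step2 < duration)

print("typical host, ~7-day infection:            ", p_double_mutant(7))
print("immunocompromised host, ~200-day infection:", p_double_mutant(200))
```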

Critical weaknesses in shielding strategies for COVID-19

The COVID-19 pandemic, caused by the coronavirus SARS-CoV-2, has led to a wide range of non-pharmaceutical interventions being implemented around the world to curb transmission. However, the economic and social costs of some of these measures, especially lockdowns, have been high. An alternative and widely discussed public health strategy for the COVID-19 pandemic would have been to ‘shield’ those most vulnerable to COVID-19, while allowing infection to spread among lower-risk individuals with the aim of reaching herd immunity. Here we retrospectively explore the effectiveness of this strategy, showing that even under the unrealistic assumption of perfect shielding, hospitals would have been rapidly overwhelmed, with many avoidable deaths among lower-risk individuals. Crucially, even a small (20%) reduction in the effectiveness of shielding would have likely led to a large increase (>150%) in the number of deaths compared to perfect shielding. Our findings demonstrate that shielding the vulnerable while allowing infections to spread among the wider population would not have been a viable public health strategy for COVID-19, and is unlikely to be effective for future pandemics.
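
The qualitative mechanism can be illustrated with a rough two-group SIR sketch in which shielding removes a fraction of contacts between low-risk and high-risk groups and deaths follow group-specific infection fatality ratios; the group sizes, rates and fatality ratios below are illustrative assumptions and the model is far simpler than the one used in the study.

```python
# Rough two-group SIR sketch (assumed structure and parameter values, far
# simpler than the study's model): shielding removes a fraction of contacts
# between the low-risk (L) and high-risk (H) groups, and deaths are computed
# from group-specific infection fatality ratios.
from scipy.integrate import solve_ivp

N_L, N_H = 8e6, 2e6           # group sizes (illustrative)
beta, gamma = 0.3, 0.1        # transmission and recovery rates
ifr_L, ifr_H = 0.001, 0.05    # illustrative infection fatality ratios

def total_deaths(shield_eff):
    leak = 1 - shield_eff     # fraction of L<->H contacts remaining
    N = N_L + N_H
    def rhs(t, y):
        S_L, I_L, R_L, S_H, I_H, R_H = y
        lam_L = beta * (I_L + leak * I_H) / N
        lam_H = beta * (leak * I_L + I_H) / N
        return [-lam_L * S_L, lam_L * S_L - gamma * I_L, gamma * I_L,
                -lam_H * S_H, lam_H * S_H - gamma * I_H, gamma * I_H]
    y0 = [N_L - 100, 100, 0, N_H, 0, 0]
    sol = solve_ivp(rhs, (0, 730), y0, max_step=1.0)
    return ifr_L * sol.y[2, -1] + ifr_H * sol.y[5, -1]

d_perfect = total_deaths(1.0)
d_leaky = total_deaths(0.8)   # a 20% reduction in shielding effectiveness
print(f"relative increase in deaths: {100 * (d_leaky / d_perfect - 1):.0f}%")
```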