The benefits of lockdown

I was recently looking at mortality statistics for England and Wales in 2019 and 2020. There’s been a lot of talk about excess mortality due to Covid, but there hasn’t been much discussion of mortality reduction. Given that Covid rarely caused serious symptoms — and hardly ever death — in young people, and given the high proportion of deaths in that age group that are due to accidents — particularly vehicular accidents — you might expect the lockdown to have reduced their overall mortality. And this is indeed what we see.

Age                      0      1-14    15-44   45-64   65-74   75-84   85+
Mortality 2019           395    9       65      415     1474    4160    14090
Mortality 2020           375    8       69      470     1651    4744    16120
Relative % excess 2020   -5.2   -12.8   4.6     13.4    12.1    14.0    14.4
Force of mortality in 2019 and 2020, in deaths/100,000

Above age 45 we see a fairly consistent 13% increase in mortality from 2019 to 2020. But there was a 13% decrease in mortality among children aged 1-14. Even among infants under 1 there was a 5% decrease. In total there were 116 fewer deaths recorded in 2020 in the 1-14 age group compared with 2019, and 200 fewer deaths in the first year of life (of which about 70 may be attributed to a 3% decrease in the number of births).
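
For concreteness, here is a minimal sketch recomputing the relative excess from the rounded rates in the table above (because the rates are rounded, the printed percentages differ slightly from the quoted row):

```python
# Recompute the "Relative % excess 2020" row from the (rounded) rates per 100,000.
ages  = ["0", "1-14", "15-44", "45-64", "65-74", "75-84", "85+"]
m2019 = [395, 9, 65, 415, 1474, 4160, 14090]   # deaths per 100,000 in 2019
m2020 = [375, 8, 69, 470, 1651, 4744, 16120]   # deaths per 100,000 in 2020
for age, before, after in zip(ages, m2019, m2020):
    print(f"{age}: {100 * (after - before) / before:+.1f}%")
```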

Absence of caution: The European vaccine suspension fiasco

[Repeated from commoninfirmities blog.]

Multiple European countries have now suspended use of the Oxford/AstraZeneca vaccine, because of scattered reports of rare clotting disorders following vaccination. In all the talk of “precautionary” approaches, the urgency of the situation seems suddenly to have been forgotten. Every vaccine triggers serious — occasionally fatal — side effects in some small number of individuals, and we acknowledge that by maintaining special systems for compensating the victims. When weighing the possibility of a several-in-a-million complication, it seems worth considering how many lives may be lost because of delayed vaccinations.

I start with the age-specific case fatality rates (CFR) from this meta-analysis, and multiply them by the current overall weekly case rate, which is 1.78 cases per thousand population in the EU (according to data from the ECDC). This ignores differences between countries and differences in infection rates between age groups, and it certainly underestimates the true infection rate, for obvious reasons of selective testing.

Age group                                            0-34   35-44   45-54   55-64   65-74   75-84   85+
CFR (per thousand)                                   0.04   0.68    2.3     7.5     25      85      283
Expected fatalities per week per million population  0.07   1.2     4.1     13      45      151     504
Number of days of delay to match the VFR             1200   70      20      6.4     1.8     0.6     0.2

Let’s assume now that all of the blood clotting problems that have occurred in the EEA — 30 in total, according to this report — among the 5 million receiving the AZ vaccine were actually caused by the vaccine, and suppose (incorrectly) that all of those people had died.* That would produce a vaccine fatality rate (VFR) of 6 per million. We can double that to account for possible additional unreported cases, or other kinds of complications that have not yet been recognised. We can then calculate how many days of delay would cause as many extra deaths as the vaccine itself might cause.
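
A minimal sketch of that arithmetic, using the weekly case rate quoted above and the doubled VFR of 12 per million:

```python
# Days of vaccination delay that would cost as many lives as the assumed VFR.
weekly_case_rate = 1.78e-3   # cases per person per week (ECDC, EU average)
vfr = 12e-6                  # assumed vaccine fatality rate: 6 per million, doubled
cfr = {                      # case fatality rate, deaths per case
    "0-34": 0.04e-3, "35-44": 0.68e-3, "45-54": 2.3e-3, "55-64": 7.5e-3,
    "65-74": 25e-3, "75-84": 85e-3, "85+": 283e-3,
}
for age, fatality in cfr.items():
    weekly_deaths_per_million = fatality * weekly_case_rate * 1e6
    days_to_match = vfr * 1e6 / (weekly_deaths_per_million / 7)
    print(f"{age}: {weekly_deaths_per_million:6.2f} deaths/week/million, "
          f"{days_to_match:7.1f} days of delay to match the VFR")
```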

The result is fairly clear: even the most extreme concerns raised about the AZ vaccine could not justify even a one-week delay in vaccination, at least among the population 55 years old and over. (I am also ignoring here the compounding effect of onward transmission prevented by vaccination, which makes the delay even more costly.) As is so often the case, “abundance of caution” turns out to be fundamentally reckless.

* I’m using only European data here, to account for the contention that there may be a specific problem with European production of the vaccine. The UK has used much more of the AZ vaccine, with even fewer problems.

What’s the matter with Santa Clara?

I can’t get out of my head the instantly infamous paper by Bendavid et al. at Stanford Medical School from a few weeks back, claiming to estimate the prevailing exposure to SARS-CoV-2 in Santa Clara County, California, by antibody tests on a random sample of the population. The study has garnered a lot of attention because of its optimistic outcome: something between 1.5 and 4 percent of the population were estimated to have been already infected, far more than the official number of cases, which, if true, would imply an infection fatality rate below 0.2%.

The paper has attracted a great deal of criticism, and has already been revised. Much of the critique was targeted at the problematic sampling procedure — which I think is not such a big deal — and the weird ethical lapses. But the gaping hole in the paper is in its treatment of test specificity.

The problem, in a nutshell, is this: 1.5% of the tests they carried out (50 out of 3,330) yielded positive results. In order to estimate the true rate of seropositivity in the population, we need to know approximately how many of those positive test results were correct. They estimate the specificity from their laboratory tests of samples known to be negative (because they were produced before the origin of the virus, or, at least, its entrance into the US), of which 368 out of 371 were indeed negative. (Strangely, the first version of the paper reported 369 out of 371, which is a pretty substantial difference in this context. This figure has not been consistently updated.) This suggests the test specificity is 99.2%, which would make about half the positive results false. But it’s even worse than that, because 3 false results out of 371 might just be a randomly low count even if the true failure rate is much higher. The paper recognises this, appealing to the Clopper-Pearson method for computing confidence intervals for binomial probabilities, which yields a 95% confidence interval for the specificity of about (97.6%, 99.9%). If the specificity is anything below 98.5% and they measured 1.5% positives, then the maximum likelihood estimate for the true positive rate is 0. This raises two questions:
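
As a quick check, the Clopper-Pearson interval for 368 negatives out of 371 can be reproduced from Beta quantiles (a sketch using scipy):

```python
from scipy.stats import beta

# Clopper-Pearson 95% interval for the specificity, given 368 true negatives out of 371.
x, n, alpha = 368, 371, 0.05
lower = beta.ppf(alpha / 2, x, n - x + 1)
upper = beta.ppf(1 - alpha / 2, x + 1, n - x)
print(f"specificity 95% CI: ({lower:.4f}, {upper:.4f})")   # roughly (0.976, 0.998)

# Any specificity below 0.985 implies a false-positive rate above the observed
# 1.5% positives, at which point the MLE for the true prevalence is 0.
print("false-positive rate at the lower bound:", round(1 - lower, 4))
```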

  1. They claim to have taken the specificity into account, saying “We use a bootstrap procedure to estimate confidence bounds for the unweighted and weighted prevalence, while accounting for sampling error and propagating the uncertainty in the sensitivity and specificity.” They produced a 95% CI of (0.7%, 1.8%). What did they think they were doing?
  2. How should this be analysed?

The bootstrap is a brilliant tool, but since it resamples from the data, or from the fitted model, it tends to miss out on uncertainty from what could have been observed but wasn’t. It’s not clear, though, whether the bootstrap has anything to do with the problem here. The updated version includes an appendix with a calculation from Bayes’ Theorem that the probability of a sampled person being infected is \frac{q+s-1}{r+s-1}, where s is the specificity, r is the sensitivity (the probability that a seropositive subject receives a positive test) and q is the fraction of the population with a positive test. They claim to have generated a bootstrap estimate sampling their s from the “empirical distribution” of specificity, but they don’t say what they do with the negative estimates for the infection incidence that must inevitably arise. Instead, they give (in the appendix to the updated version of the paper) a long and (to my mind) incoherent argument against critics who say that the confidence interval should include 0 — since the false positive rate has a reasonable chance of being larger than the fraction of positive test results — mainly because we know from other evidence that the infection incidence can’t actually be 0. True enough, but the point is not that the rate is actually 0%, but that these data give us no reason to believe in any particular lower bound above 0.
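
To get a feeling for how often those negative estimates arise, here is a rough sketch of one plausible reading of their resampling — my guess at the procedure, not their actual code — using the specificity data above (368/371) and the sensitivity data discussed further below (130 positive out of 157 known seropositives):

```python
import numpy as np

# Parametric bootstrap of the validation data, plugged into
# pi_hat = (q + s - 1) / (r + s - 1), with q = 50/3330 positive tests.
rng = np.random.default_rng(0)
n_rep = 100_000
q = 50 / 3330
s = rng.binomial(371, 368 / 371, size=n_rep) / 371    # resampled specificity
r = rng.binomial(157, 130 / 157, size=n_rep) / 157    # resampled sensitivity
pi_hat = (q + s - 1) / (r + s - 1)
print("fraction of negative prevalence estimates:", np.mean(pi_hat < 0))
```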

How should the data be analysed? The problem is that Bayes’ Theorem doesn’t play well with frequentist confidence intervals, so it’s easy to end up in a logical muddle if we try to calculate a confidence interval for the true probability of being seropositive, given that we’ve observed 1.5% positive tests. What would that even mean? It would be quite reasonable to say that false-positive rates above 1.5% are perfectly plausible, in which case true positive rates all the way down to 0% are plausible, hence there is no justification for claiming evidence of any positive lower bound on the true incidence of seropositivity — and then call it a day.

The “bootstrap” approach that they seem to have made up without much justification is a kind of incoherent mixed Bayesian-frequentist approach: inverting the normal approximation to the sampling distribution of the parameters (I think that’s what they mean by the “empirical distribution”) as though it were a posterior, which they then sample from to generate their bootstrap replicates of the prevalence estimate.

Instead, it would be most natural to analyse this in a purely Bayesian framework. You could start off with a uniform prior on the true seropositive rate π and the specificity s. Then the posterior will be proportional to \bigl( 1 - s(1-\pi)\bigr)^{50}\bigl(s(1 - \pi) \bigr)^{3280} s^{368}(1-s)^3. This doesn’t provide a nice closed-form solution, but it can be calculated numerically with various software tools, such as the Wolfram Alpha double integral calculator. The posterior distribution of π has mean 0.0067, and a symmetric 95% credible interval (0.049%, 1.41%) for the seropositive incidence. This could be repeated for each of their demographic groups, which I’m loath to take the time to do, but it would presumably inflate both ends of the confidence interval by a similar factor, producing something like (0.06%,3.8%).
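
If you would rather not lean on Wolfram Alpha, the same posterior can be evaluated on a grid in a few lines. A minimal sketch, with uniform priors, sensitivity treated as perfect, and grid ranges chosen to cover essentially all of the posterior mass; it should reproduce, up to grid resolution, the numbers just quoted:

```python
import numpy as np

# Posterior over (pi, s) proportional to (1 - s(1-pi))^50 (s(1-pi))^3280 s^368 (1-s)^3,
# evaluated on a grid; then the marginal for pi, its mean, and a 95% credible interval.
pi = np.linspace(0.0, 0.05, 2001)        # true seropositive rate
s = np.linspace(0.95, 0.9999, 2001)      # specificity
P, S = np.meshgrid(pi, s, indexing="ij")
p_pos = 1 - S * (1 - P)                  # probability of a positive test (sensitivity = 1)
log_post = (50 * np.log(p_pos) + 3280 * np.log(1 - p_pos)
            + 368 * np.log(S) + 3 * np.log(1 - S))
post = np.exp(log_post - log_post.max())

marginal = post.sum(axis=1)              # unnormalised marginal posterior of pi
marginal /= marginal.sum()
mean = float((pi * marginal).sum())
cdf = np.cumsum(marginal)
lo, hi = pi[np.searchsorted(cdf, 0.025)], pi[np.searchsorted(cdf, 0.975)]
print(f"posterior mean {mean:.4f}, 95% credible interval ({lo:.4%}, {hi:.4%})")
```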

The updated version of the paper includes a large number of additional tests on pre-Covid samples, bringing the total to 3324, with just 16 false positives — so a specificity of about 99.5%. If we believe this, we get a much more favourable credible interval — though still not as favourable as reported in the paper — of (0.55%, 1.52%). (They report a 95% confidence interval of (0.7%, 1.8%) for the unweighted incidence.)

Should we believe their new results? I’m not sure. One argument against: they report on 13 different sources of negative samples, and these produce wildly variable specificities. The most extreme are a batch of 150 pre-Covid serum samples that yielded 4 false positives and, on the other side, 1102 samples that yielded no false positives at all — each of these outcomes has substantially less than 1% probability if the specificity is really 99.5%, and they pull in opposite directions.
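
Those two tail probabilities are easy to check (a quick sketch):

```python
from scipy.stats import binom

# Probability of results at least this extreme if the specificity really is 99.5%.
spec = 0.995
p_4_or_more = binom.sf(3, 150, 1 - spec)   # >= 4 false positives in 150 samples
p_zero = binom.pmf(0, 1102, 1 - spec)      # 0 false positives in 1102 samples
print(p_4_or_more, p_zero)                 # both come out well under 1%
```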

For the aficionados: there’s also the sensitivity to be considered. In tests of 157 known seropositive samples, only 130 produced positive results. This isn’t as problematic as the specificity — which has the potential to wipe out all of the positive tests — but it will increase our estimates of the true positive rate, while widening the uncertainty. Writing r for the sensitivity (the probability of a positive test given that the individual is seropositive), the posterior is now proportional to

\bigl( 1-s+ \pi(r+s-1)\bigr)^{50}\bigl(s - \pi(r+s-1) \bigr)^{3280} s^{368}(1-s)^3 r^{130}(1-r)^{27}

with the original specificity data, or

\bigl( 1-s+ \pi(r+s-1)\bigr)^{50}\bigl(s - \pi(r+s-1) \bigr)^{3280} s^{3308}(1-s)^{16} r^{130}(1-r)^{27}

with the updated specificity data.

The 95% credible intervals for π (without demographic weighting) are then (0.06%, 1.7%) with the original specificity data, and (0.67%, 1.9%) with the updated specificity data. This last is fairly close to the reported results in the revised paper. Unsurprising, since with more than 3000 negative control samples instead of about 370, the uncertainty in the specificity is no longer a major issue.
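
The grid calculation above extends directly to this three-parameter posterior. A sketch with the original specificity data — swap in s^{3308}(1-s)^{16} for the updated data — again with uniform priors and grid ranges chosen to cover essentially all of the mass; it should come out close to the first interval quoted above:

```python
import numpy as np

# Joint posterior over (pi, s, r) on a coarse grid; marginal 95% credible interval for pi.
pi_grid = np.linspace(0.0, 0.05, 301)       # true seropositive rate
s_grid = np.linspace(0.95, 0.9999, 301)     # specificity
r_grid = np.linspace(0.60, 0.95, 101)       # sensitivity
pi = pi_grid[:, None, None]
s = s_grid[None, :, None]
r = r_grid[None, None, :]

p_pos = 1 - s + pi * (r + s - 1)            # probability of a positive test
log_post = (50 * np.log(p_pos) + 3280 * np.log(1 - p_pos)
            + 368 * np.log(s) + 3 * np.log(1 - s)     # original specificity data
            + 130 * np.log(r) + 27 * np.log(1 - r))   # sensitivity data
post = np.exp(log_post - log_post.max())

marginal = post.sum(axis=(1, 2))
marginal /= marginal.sum()
cdf = np.cumsum(marginal)
lo, hi = pi_grid[np.searchsorted(cdf, 0.025)], pi_grid[np.searchsorted(cdf, 0.975)]
print(f"95% credible interval for pi: ({lo:.4%}, {hi:.4%})")
```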

ACE2 and the variability hypothesis for Covid-19

One of the big mysteries of Covid-19 has been the difference in severity between men and women. About 2/3 of deaths have been among men, despite the fact that there are far more women than men in the most vulnerable age groups. This has been attributed to general differences between men and women in immune responses — women being generally more resistant to infection, and more susceptible to auto-immune diseases.

I recently went to look up the genetics of ACE2 — the membrane protein that SARS-Cov-2 uses to enter human cells — and discovered the basic fact (presumably known to anyone familiar with biochemistry) that the ACE2 gene is on the X chromosome. And that made me think of the infamous “variability hypothesis”. This is the notion that men are generally more variable in many characteristics than women, so will be overrepresented among the extremes at both ends of the distribution. Of course, as with any scientific fact about men and women, this has been primarily used to argue that women should mind their knitting and cannot be comparable to male geniuses like (most notoriously) Larry Summers.

Such essentialist arguments are wrong-headed and demeaning when applied to characteristics as complex as intelligence, much less to something as vaguely defined as “commitment” (Summers’s word). But the phenomenon is well grounded when applied to genetic traits linked to the X chromosome. Female bodies are mosaics, with each cell randomly expressing one of the two inherited versions of the X chromosome. It is extremely well known and uncontroversial in the context of X-linked Mendelian disorders, such as haemophilia and red-green colour-blindness. A single defective copy of the gene on the X chromosome causes these conditions in males, whereas in female carriers a mixture of cells expressing defective and healthy alleles produces a phenotype essentially indistinguishable from the healthy version alone.

The general phenomenon is superlinearity of effect: an average of two scores produces a markedly different outcome from either extreme on its own. Might that be the case for Covid-19? There is some evidence from Italian populations that variants in the ACE2 gene may affect individual susceptibility to severe Covid-19. Indeed, this study suggests that rare variants may play a significant role. If a tissue made up of half highly susceptible and half less susceptible cells is much more robust than a tissue where all cells are highly susceptible — a plausible situation, though not inevitable — then we would expect to see more severe disease in males.

Looking around, I found this paper that points out that men have generally higher levels of ACE2 circulating in blood, and argues “due to the high expression of ACE2, male patients may be more likely to develop severe symptoms and die of SARS-CoV-2”. So that’s another possibility.

Fluctuations in reports of daily Covid deaths

As we are all watching the daily reports of Covid-19 cases and deaths, one conspicuous fact is that there are enormous fluctuations from day to day. It’s clear that we shouldn’t be too focused on the counts from an individual day to understand the trend. But why do they fluctuate so much?

Here are the numbers for the UK since the start of April (data from the European Centre for Disease Prevention and Control):

Date         No. cases   No. deaths
17/04/2020   4617        861
16/04/2020   4603        761
15/04/2020   5252        778
14/04/2020   4342        717
13/04/2020   5288        737
12/04/2020   8719        917
11/04/2020   5195        980
10/04/2020   4344        881
09/04/2020   5491        938
08/04/2020   3634        786
07/04/2020   3802        439
06/04/2020   5903        621
05/04/2020   3735        708
04/04/2020   4450        684
03/04/2020   4244        389
02/04/2020   4324        743
01/04/2020   3009        381

We have 381 deaths one day, then 743 deaths the next day, then 389, then 684. A natural explanation is that it has to do with delays in reporting over weekends and/or holidays. But the 381 and 389 were on a Wednesday and a Friday, and the next peak came on a Saturday and a Sunday. I don’t have an explanation, but I wanted to investigate whether this is a real issue: are the day-to-day fluctuations larger than you would expect purely by chance?

It’s not immediately obvious how many deaths we should expect on a given day, since there is clearly a trend. I decided to estimate the expected number in three different ways:

  1. Choose a range s of days — most naturally s=3 — and estimate the predicted cumulative number of events C[x] on day x as the geometric average of the cumulative number of events to day x+s and the cumulative number to day x-s. This also gives us a natural estimate of the growth rate r over the period, r = (C[x+s]/C[x-s])^(1/(2s)) - 1, and we calculate the predicted number of events on day x as rC[x]/(1+r). We call this the geometric average method (sketched in code after this list).
  2. As above, but we estimate r by the maximum likelihood estimator, assuming that the number of events on day x is Poisson distributed with parameter rC[x-1], where C[x-1] is the observed cumulative number on day x-1. We call this the cumulative MLE method.
  3. An approach that addresses more directly the fluctuations in daily events computes the maximum likelihood estimators for a two-parameter model with parameters λ and r, where the number of events on day x+k is Poisson(λr^k), taking as observations the numbers on days x-s,x-s+1,…,x-1,x+1,…,x+s. We call this the single-day MLE method.
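
Here is a minimal sketch of the geometric average method, assuming that the standardisation referred to below is division by the Poisson standard deviation of the predicted count; a faithful version would use cumulative counts from the start of the epidemic, not just from 1 April as in this toy example:

```python
import numpy as np

# Geometric average method (method 1): predicted daily count and standardised discrepancy.
def geometric_average_discrepancies(daily, s=3):
    daily = np.asarray(daily, dtype=float)   # daily counts, oldest first
    C = np.cumsum(daily)                     # cumulative counts
    out = []
    for x in range(s, len(daily) - s):
        r = (C[x + s] / C[x - s]) ** (1.0 / (2 * s)) - 1.0   # estimated daily growth rate
        C_hat = np.sqrt(C[x + s] * C[x - s])                 # predicted cumulative count on day x
        predicted = r * C_hat / (1.0 + r)                    # predicted events on day x
        out.append((daily[x] - predicted) / np.sqrt(predicted))   # Poisson-standardised
    return np.array(out)

# UK deaths from the table above, 1 April first (for a real analysis, prepend the
# cumulative total up to 31 March so that C is the true cumulative count).
uk_deaths = [381, 743, 389, 684, 708, 621, 439, 786, 938, 881,
             980, 917, 737, 717, 778, 761, 861]
print(np.round(geometric_average_discrepancies(uk_deaths), 1))
```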

Deaths

Here are the results for the UK. The discrepancy between observed and predicted has been standardised to standard deviation 1, so we should not expect to see results outside of the range (-2.5,+2.5) by chance.

There’s no obvious pattern. The increase rate estimated by the geometric-average method is always larger, hence its discrepancies are smaller. Not surprisingly, the growth rates estimated from single-day counts are more variable, but the discrepancies are, surprisingly, less extreme. With any method the discrepancies are extreme in both directions, and don’t seem to have an obvious, consistent pattern with respect to the weekends: we see extreme positive deviations on some weekends, and extreme negative deviations on some weekdays. (I have shaded Sunday and Monday as weekends, supposing that most of the deaths occurring in hospitals on Saturday and Sunday would appear in the national totals the following day. According to some estimates, the most common delay is 2 days, suggesting that Monday and Tuesday counts should be expected to be low.)

Here is Germany:

These show, perhaps, a more consistent weekly pattern, with negative deviations on Sundays and Mondays.

The US:

Canada:

Italy shows an odd alternating pattern:

Cases

The dynamics of new cases are expected to be different. Here is the equivalent plot for new cases in the UK:

The fluctuations seem to be increasing, with a huge excess of cases reported on Easter Sunday, for some reason — 8719 new cases, when the largest number reported on any other day was 5903.

The US also shows this intensification of the fluctuations:

Germany, Italy, and Canada don’t seem to show this pattern. The plots for Germany and Canada are each dominated by a single very extreme day, possibly corresponding to a shift in counting criteria.

What is the UK government trying to do with COVID-19?

[Cross-posted with Common Infirmities blog. More commentary on the COVID-19 pandemic here.] It would be a drastic understatement to say that people are confused by the official advice coming out with respect to social-distancing measures to prevent the spread of SARS-CoV-2. Some are angry. Some are appalled. And that includes some very smart people who understand the relevant science better than I do, and probably at least as well as the experts who are advising the government. Why are they not closing schools and restaurants, or banning sporting events — at least until the Football Association decided to ban themselves — while at the same time signalling that they will be taking such measures in the future? I’m inclined to start from the presumption that there’s a coherent and sensible — though possibly ultimately misguided (or well guided but to-be-proved-retrospectively wrong) — strategy, but I find it hard to piece together what they’re talking about when alluding to “herd immunity” and “nudge theory”.

Why, in particular, are they talking about holding the extreme social-distancing measures in reserve until later? Intuitively you would think that slowing the progress of the epidemic can be done at any stage, and the sooner you start the more effective it will be.

Here’s my best guess about what’s behind it, which has the advantage of also providing an explanation why the government’s communication has been so ineffective: Unlike most other countries, which are taking the general approach that the goal is to slow the spread of the virus as much as possible (though they may disagree about what is possible), the UK government wants to slow the virus, but not too much.

The simplest model for the evolution of the number of infected individuals (x) is a differential equation

\frac{dx}{dt} = k(t)\, x (A - x).

Here A is the fraction immune at which R0 (the number that each infected person infects) reaches 1, so growth enters a slower phase. The solution is

x(t) = \frac{A}{1 + C e^{-A K(t)}}, \qquad K(t) = \int_0^t k(u)\, du,

where C is fixed by the initial condition.
Basically, if you dial down the level of social interaction you change k, slowing the growth of the cumulative rate parameter K(t). There’s a path that you can run through, at varying rates, until you reach the target level A. So, assuming the government can steer k as they like, they can stretch out the process as much as they like, but they can’t change the ultimate destination. The corresponding rate of new infections — the key thing that they need to hold down, to prevent collapse of the NHS — is kx(A-x). (It’s more complicated because of the time delay between infection, symptoms, and recovery, raising the question of whether such a strategy of controlling the timing of epidemic spread is feasible in practice. A more careful analysis would use the three-variable SIR model.)
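
A toy numerical version of this argument (the parameter values here are illustrative assumptions, not estimates): reducing k lowers and delays the peak of new infections, but the final fraction still approaches A.

```python
import numpy as np

# Toy integration of dx/dt = k*x*(A - x): a smaller k lowers and delays the peak
# rate of new infections, k*x*(A - x), but the final fraction still approaches A.
A = 0.6       # assumed fraction immune at which R0 reaches 1
x0 = 1e-4     # assumed initial infected fraction
dt = 0.1
t = np.arange(0.0, 400.0, dt)

def run(k):
    x = np.empty_like(t)
    x[0] = x0
    for i in range(1, len(t)):
        x[i] = x[i - 1] + dt * k * x[i - 1] * (A - x[i - 1])   # Euler step
    return x

for k in (0.5, 0.25):
    x = run(k)
    new_infections = k * x * (A - x)
    print(f"k={k}: final fraction {x[-1]:.3f}, "
          f"peak new-infection rate {new_infections.max():.4f} "
          f"on day {t[new_infections.argmax()]:.0f}")
```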

Suppose now you think that you can reduce k by a certain amount for a certain amount of time. You want to concentrate your effort in the time period when x is around A/2. But you don’t want to push k too far down, because that slows the whole process down, and uses up your limited influence. The basic idea is, there’s nothing we can do to change the endpoint (x=A); all you can do is steer the rate so that

  1. The maximum rate of new infections (or rather, of total cases in need of hospitalisation) is as low as possible;
  2. The peak is not happening next winter, when the NHS is in its annual flu-season near-collapse;
  3. The fraction A of the population that is ultimately infected — generally taken to be about 60% in most renditions — includes as few as possible of the most at-risk members of the public. That also requires that k not be too small, because keeping the old and the infirm segregated from the young and the healthy can only be done for a limited time. (This isn’t Florida!)

Hence the messaging problem: It’s hard to say, we want to reduce the rate of spread of the infection, but not too much, without it sounding like “We want some people to die.”

There’s no politic way to say: we’re intentionally letting some people get sick, because only their immunity will stop the infection. Imagine the strategy were: rather than close the schools, we will send the children off to a fun camp where they will be encouraged to practise bad hygiene for a few weeks until they’ve all had Covid-19. A crude version of school-based vaccination. If it were presented that way, parents would pull their children out in horror.

It’s hard enough getting across the message that people need to make an effort to remain healthy in order to protect others. You can appeal to their sense of solidarity. Telling people they need to get sick so that other people can remain healthy is another order of bewildering, and people are going to rebel against being instrumentalised.

Of course, if this virus doesn’t produce long-term immunity — and there’s no reason necessarily to expect that it will — then this strategy will fail. As will every other strategy.

Putting Covid-19 mortality into context

[Cross-posted from Common Infirmities blog.]

The age-specific estimates of fatality rates for Covid-19 produced by Riou et al. in Bern have gotten a lot of attention:

Age group               0-9     10-19   20-29   30-39   40-49   50-59   60-69   70-79   80+    Total
Fatality per thousand   0.094   0.22    0.91    1.8     4.0     13      46      98      180    16
Estimated fatality in deaths per thousand cases (symptomatic and asymptomatic)

These numbers looked somewhat familiar to me, having just lectured a course on life tables and survival analysis. Recent one-year mortality rates in the UK are in the table below:

Age group                0-9    10-19   20-29   30-39   40-49   50-59   60-69   70-79   80-89
Mortality per thousand   0.12   0.17    0.43    0.80    1.8     4.2     10      28      85
One-year mortality probabilities in the UK, in deaths per thousand population. Neonatal mortality has been excluded from the 0-9 class, and the over-80 class has been cut off at 89.

Depending on how you look at it, the Covid-19 fatality rates look like ordinary mortality shifted up by about a decade of age, or about double the usual one-year mortality probability for a UK resident of the same age (two descriptions that roughly coincide, since mortality rates double about every 9 years). If you accept the estimates that around half of the population in most of the world will eventually be infected, and if these fatality rates remain unchanged, this means that effectively everyone will get a double dose of mortality risk this year. Somewhat lower (as may be seen in the plots below) for the younger folk, whereas the over-50s get more like a triple dose.
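
A quick sketch of that comparison, using the rounded numbers from the two tables above (the last pair compares the 80+ Covid-19 class with the 80-89 mortality class, as the captions note):

```python
# Ratio of estimated Covid-19 fatality per case to ordinary one-year mortality,
# both in deaths per thousand, by age band.
ages   = ["0-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", "70-79", "80+"]
covid  = [0.094, 0.22, 0.91, 1.8, 4.0, 13, 46, 98, 180]   # per thousand cases (Riou et al.)
annual = [0.12, 0.17, 0.43, 0.80, 1.8, 4.2, 10, 28, 85]   # per thousand per year (UK)
for age, c, a in zip(ages, covid, annual):
    print(f"{age}: Covid-19 fatality is {c / a:.1f} times the annual mortality")
```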