Comment by beestmode361 on 02/09/2021 at 13:26 UTC

3 upvotes, 1 direct reply (showing 1)

View submission: COVID denialism and policy clarifications


Here is a nice article explaining how to do this analysis. There are actually clear guidelines on how to analyze this data in a genuine way. The article walks through the commonly misconstrued Israeli data and explains why the vaccines are still effective despite the initial concern brought by the data.

“””
Adjusting for Vaccination Rate

It is true that nearly 60% of active serious cases are vaccinated, but such an analysis based on raw counts can be misleading since it is heavily influenced by the vaccination rates.

When vaccination rates are low, use of raw counts can exaggerate the vaccine effectiveness, and when vaccination rates are high, use of raw counts like this can attenuate the vaccine effectiveness, making it seem lower than it in fact is.

Note that a high proportion (nearly 80%) of all Israeli residents >=12yr have been vaccinated.

To adjust for vaccination rates, one should normalize the counts (of severe cases, in our setting), for example by computing the number "per 100,000" in each group.

“””
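The adjustment the article describes can be sketched in a few lines. The numbers below are hypothetical placeholders chosen only to reproduce the "60% of severe cases are vaccinated" situation, not the actual Israeli figures:

```python
# Hypothetical illustration of the base-rate effect: when ~80% of the
# population is vaccinated, raw counts of severe cases mislead, while
# rates "per 100,000" in each group recover the vaccine's effectiveness.

vaccinated_pop = 5_000_000    # hypothetical: ~80% of eligible population
unvaccinated_pop = 1_300_000  # hypothetical
severe_vaccinated = 300       # hypothetical raw counts of severe cases
severe_unvaccinated = 200

# Raw counts: 60% of severe cases are vaccinated -- looks alarming.
raw_share_vaccinated = severe_vaccinated / (severe_vaccinated + severe_unvaccinated)

# Normalized: severe cases per 100,000 people in each group.
rate_vaccinated = severe_vaccinated / vaccinated_pop * 100_000
rate_unvaccinated = severe_unvaccinated / unvaccinated_pop * 100_000

# Effectiveness against severe disease = 1 - (rate ratio).
effectiveness = 1 - rate_vaccinated / rate_unvaccinated

print(f"Share of severe cases who are vaccinated: {raw_share_vaccinated:.0%}")
print(f"Severe cases per 100k (vaccinated):   {rate_vaccinated:.1f}")
print(f"Severe cases per 100k (unvaccinated): {rate_unvaccinated:.1f}")
print(f"Implied effectiveness vs severe disease: {effectiveness:.0%}")
```

With these made-up numbers the vaccinated group supplies 60% of severe cases, yet its per-100,000 rate (6.0) is far below the unvaccinated rate (~15.4), implying roughly 61% effectiveness against severe disease. The raw share and the rate ratio answer different questions.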

https://www.covid-datascience.com/post/israeli-data-how-can-efficacy-vs-severe-disease-be-strong-when-60-of-hospitalized-are-vaccinated

Replies

Comment by Aussierotica at 02/09/2021 at 13:51 UTC

0 upvotes, 2 direct replies

I don't disagree that there are plenty of ways to go about doing the figures. One of the things I had fun doing once I discovered VAERS was trying my hardest to generate a data set that could support Dr Wakefield's claims about MMR (hint: you can't).

I also had fun looking not just at the overall reporting of COVID-19 adverse vaccination events, but at how they broke down by manufacturer. Then I took those figures and compared them against what the media and governments were saying about the safety of the vaccines (pertinent for something like AstraZeneca, which was withdrawn from use in a lot of countries due to adverse results).

I'll admit I haven't looked at your link, but the more hands data passes through, the further the errors propagate. For example, I was reading a meta-study about efficacy claims for Pfizer's vaccine. The study I was reading cited the NEJM as the source of its data.

Immediately I was a little worried that this meta-study being waved around as a gold standard for efficacy had cited only a single other study. I went and read the NEJM article and found that there was zero data within it that allowed the meta-study's conclusions to be drawn.

The meta-study actually drew its definitive statement from a hedged opinion in the NEJM conclusion, which stated that despite the study being single-site (the data was getting worse by the second) and showing much lower efficacy, the authors were confident that the sort of results quoted from Israel were appropriate and would go with those (baby, meet bathwater, and out the door).

I started banging my head on the table when the authors admitted that the "gold standard" Israeli study they were alluding to was itself a single-site study with a limited population...