Until there is a vaccine or a cure, information is the most powerful weapon to fight a pandemic infection. Effective communication is therefore recognized as a critical element of successfully managing a pandemic response – for the disease spread to be contained, the public must comply with public health recommendations. The first step in compliance is an understanding of those recommendations, so it is important to understand public knowledge about them. Assessing public pandemic knowledge in real time early in a pandemic has seldom been done – virtually all prior pandemic messaging studies are retrospective studies. The coronavirus pandemic is seeing some of the first attempts to overcome the challenges of pandemic knowledge assessments. We created an exploratory mixed-methods survey of Central Pennsylvania residents to generate actionable data during the early stages of the COVID-19 pandemic, and have since modified it to a convergent design mixed-methods survey for the global population. In this post we’ll talk about some of the challenges of these efforts and our early results.
Knowledge of what? Public health recommendations? But whose? The United States is a federated republic, so although we have a federal Surgeon General, a Public Health Service, and a federal agency dedicated to disease control and prevention (CDC), absent federal martial law, individual States make and enforce public health policy. Further, given that a pandemic is a global event, public health recommendations by international sources – like the World Health Organization or the European Commission – may be as important to understand as local recommendations. Moreover, public health recommendations may lag behind significant changes in knowledge that drive public behavior. For example, using hydroxychloroquine to treat COVID-19 became a news item leading to large public demand, changes in prescribing patterns, and depletion of the medication (preventing chronic users’ access to it) within days of its first news report – long before the CDC added commentary to their official recommendations. And in an age of “alternative facts,” we must consider the source of COVID-19 facts.
Whose facts? Early predictions and policies regarding COVID-19 were driven by information reported from other governments. Much of that data was not accurate. As the pandemic has continued, more and more data is circulated as “fact” that is not “fact” in the scientific sense of the term. This is true even in peer reviewed literature. In an effort to get scientific data available as soon as possible, many journals implemented “rapid review” of COVID-19 papers, and accepted small case series and observational retrospective studies. In the scientific community this is appropriate – we need the data and we understand its limitations. These papers were easily accessible to those outside the scientific community, who often ignored study limitations to proclaim a new “fact.” These were endlessly recycled and cross-referenced in the 24-hour news cycle, leading many in the public to think these “facts” were being corroborated by other sources.
By what measure? Measuring knowledge is also a challenge. Rigorous knowledge testing requires validated instruments, ideally ones that incorporate qualitative assessments to better interpret quantitative results. (See the process described here.) That process takes 3-5 years to complete and can cost hundreds of thousands of dollars. Not only do we not have 3-5 years – it also takes months to identify a funding source, apply for a grant, and be awarded the funds. Which elements of this process can be shortened without limiting the value of the results? Further, is it enough to measure understanding of the public health recommendations? What if people understand them, but cannot or will not follow them?
The Penn State Solution. By early March, the COVID-19 outbreak had become alarming enough that Penn State became the first US institution to join an international consortium of higher education institutions releasing internal funds for rapid COVID-19 research, channeled through the Huck Institutes of the Life Sciences. The Department of Family and Community Medicine joined the Qualitative Mixed-Methods Core at Penn State Hershey and, with funding provided by the Social Science Research Institute, developed a survey instrument to explore public and healthcare worker knowledge, perceptions, and preferred information sources regarding COVID-19. Within hours of receiving notice that we had been awarded funding, the World Health Organization declared COVID-19 a pandemic, and two days later the President declared a National Emergency. This urgency led to several innovations to decrease survey development time without losing value.
Knowledge. Multiple choice questions take a long time to validate, so we developed true/false questions. We added a confidence scale to each question to help ensure a usable score distribution even with very high raw scores. We considered knowledge across several domains – transmission, severity, treatment – and in each asked questions judged to be of easy, moderate, and hard difficulty. As media coverage of COVID-19 escalated, developing knowledge questions became increasingly difficult – a nearly impossible question on Monday was by Thursday so easy it couldn’t be used.
Facts. We considered PubMed literature recommendations to assess COVID-19 facts important to healthcare workers; however, because the literature consisted mostly of small case series or retrospective chart reviews, often in conflict, and several medical and specialty organizations had not yet made recommendations, we ultimately limited facts to those reported as accurate by the CDC. We likewise limited understanding of public health recommendations to CDC recommendations.
Measures. From the raw knowledge and confidence scores we generated a weighted knowledge score. We also measured perceived likelihood of being diagnosed with COVID-19 and other diseases, concern about diagnosis, and trust in information sources, and collected free-text responses inviting detailed descriptions of concerns and beliefs.
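The survey’s exact weighting formula isn’t described here, but the idea of combining true/false correctness with self-rated confidence can be sketched as follows. This is a hypothetical illustration, not our actual scoring code: the function name, the +confidence/−confidence scheme, and the 1-5 confidence scale are all assumptions for the example.

```python
# Hypothetical weighted knowledge score combining true/false correctness
# with a self-rated confidence scale (here assumed to run 1..5).
# A correct answer earns +confidence and an incorrect one -confidence,
# so a confidently wrong respondent scores lower than an unsure one.

def weighted_knowledge_score(answers, max_confidence=5):
    """answers: list of (correct: bool, confidence: int 1..max_confidence).

    Returns a score normalized to the range [-1.0, 1.0];
    an empty answer list scores 0.0.
    """
    if not answers:
        return 0.0
    total = sum(conf if correct else -conf for correct, conf in answers)
    return total / (len(answers) * max_confidence)
```

One design benefit of this kind of weighting is exactly the distribution issue noted above: even when nearly everyone answers true/false items correctly, differences in confidence still spread respondents across the score range.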
Results. Our survey was well received – people were eager to respond and temporarily overwhelmed the server. We collected 5,984 usable (adequately complete for analysis) surveys between March 25 and March 31 – one of the largest pandemic messaging surveys ever completed. Our results are currently under review at a scientific journal; however, we can report that healthcare workers and the public in our sample – drawn from a Penn State Health marketing list with a target catchment area of central Pennsylvania – generally show good understanding of public health recommendations and disproportionate fear of COVID-19, and most intend to follow public health recommendations. Most respondents identified reputable sources (the CDC website, television news) as their primary sources of information. There were significant knowledge differences between high- and low-education groups and between races. Our demographic distribution was limited, so the differences noted between races in our study must be taken with caution. However, differences in knowledge by race were also noted in a smaller Chicago study (n=630), suggesting that our findings may generalize beyond our sample. Knowledge differences by education were particularly illustrative of the downstream impact of misinformation: lower-education groups had less understanding of self-care options for mild COVID-19 symptoms, of personal protective equipment, and of appropriate treatments for COVID-19, all of which may be expected to increase healthcare utilization and inappropriately divert equipment and medications from high-need to low-need areas.
We are currently undertaking an even broader study of COVID-19 knowledge, perceptions, and information sources. We have modified our survey to reach an international audience, translated it into 23 languages, and are currently collecting data. We have over 8,900 completed surveys, from all 50 States and 70 countries.
The data is again showing demographic discrepancies in knowledge and in intent to comply with recommendations. We are collecting zip code prefixes, and have recently completed a preliminary analysis showing marked differences between major metro areas across the United States (for example, Atlanta and Dallas) in intent to follow public health recommendations. This has significant implications for re-opening – areas with less intent to follow public health recommendations are at higher risk for a surge in COVID-19 cases once external stay-at-home orders are lifted. Once our data is submitted for peer review we will release the information so that the highest-risk municipalities can address these behavioral issues before re-opening.
We’re also getting very interesting information-source data. We can see that most Americans are going to change how they consume information because of COVID-19 – and we have learned from them where they plan to get news they trust. We’ll be able to evaluate which news sources are correlated with higher knowledge and lower fear.
We’re also getting an amazingly rich qualitative dataset. We have several free-text answer choices encouraging people to write about their perceptions. We have these in many languages, and many people are writing hundreds of words in response. We’re pursuing grant funding to develop natural language processing applications to help conduct qualitative analysis on these responses.
However, we could still use your help to boost our response – especially in minority, underserved, and vulnerable populations. Their voices matter, and with your help we can give them a megaphone. With a robust global response, our qualitative data will help us write a story of COVID in the words and languages of people from around the world.
Take the survey here: http://covidsurvey.psu.edu
Please ask your social and professional networks to take it, and encourage others to take it. Our most popular video is here: https://youtu.be/R4U7xxa84TA
Videos in several other languages are available here for sharing: https://www.youtube.com/channel/UC1Gvryc6VrZOeoExxNt0DjQ