Stopping an Epidemic of Misinformation: Leveraging the K-12 Science Education System to Respond to Ebola (EBOLA 2015)
METHODOLOGY
The methodology for this study involved the development of a teacher questionnaire, recruitment of participants, data collection, sample reduction, and data analysis. This section describes each of these components and provides guidance for interpreting the findings presented in this report.
Instrument Development
The teacher questionnaire developed by HRI for the study is included in the Appendix. Some survey items asked teachers where they acquired information about Ebola and what factors affected their teaching of the topic. CSSS, NSELA, and NSTA reviewed these items to ensure that they accounted for likely information sources, instructional activities, and influential factors. Other survey items, reviewed by scientists with Ebola-specific expertise, were intended to measure teachers’ understanding of the Ebola virus specifically—for example, how the virus is transmitted and how to prevent transmission. HRI conducted cognitive interviews (Desimone & Le Floch, 2004) with teachers on all survey items to ensure that the items were interpreted as intended. The revised items were then programmed in an online survey platform for administration.
Participant Recruitment
In collaboration with the study partners (CSSS, NSELA, and NSTA), HRI wrote a brief overview of the study and an invitation to participate. NSTA distributed the study announcement (including a link to the registration page) to its extensive mailing list using multiple email blasts. In addition, NSTA included the announcement in various newsletters. CSSS and NSELA also sent the announcement to their members. As a result of these efforts, approximately 3,500 K–12 teachers of science registered for the study.
Data Collection & Sample Reduction
After removing ineligible registrants (e.g., teachers in other countries), HRI administered the web-based questionnaire to 3,442 K–12 teachers in May 2015. To encourage response, completers were entered into drawings for ten $100 cash prizes. Survey data collection closed at the end of June 2015 with a response rate of 70 percent. The study timeline and budget precluded drawing a nationally representative sample for the teacher survey. Instead, HRI attempted to register and survey enough teachers that a representative group could be constructed from the respondents for analysis purposes. HRI used demographic data from the 2012 National Survey of Science and Mathematics Education (Banilower et al., 2013) to specify the target sample characteristics. For example, survey respondents were removed from the sample until it closely resembled population parameters for race/ethnicity. Ultimately, roughly half of the survey respondents were excluded from the analysis in order to achieve this goal. HRI also segmented the respondent sample into elementary, middle, and high school teachers, which allowed researchers to make claims about each of these categories of teachers separately. The final analysis sample sizes are:
- Elementary school teachers, N = 244
- Middle school teachers, N = 445
- High school teachers, N = 566
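The quota-matching reduction described above (dropping respondents from over-represented groups until the sample mirrors population parameters) can be sketched as follows. This is a hypothetical illustration, not HRI's actual procedure; the function name, single-attribute matching, and data layout are assumptions for demonstration only.

```python
import random


def reduce_to_target(respondents, key, target_props, seed=0):
    """Randomly drop respondents from over-represented groups so the
    remaining sample approximates the target population proportions.

    respondents: list of dicts, e.g. [{"race_ethnicity": "..."}, ...]
    key: the demographic field to match on (a simplification; HRI's
         actual procedure is not specified at this level of detail)
    target_props: dict mapping each category to its population proportion
    """
    rng = random.Random(seed)
    groups = {c: [r for r in respondents if r[key] == c] for c in target_props}

    # The most under-represented group (relative to its target) caps the
    # final sample size: n_final = min over categories of count / target.
    n_final = int(min(len(groups[c]) / p for c, p in target_props.items() if p > 0))

    sample = []
    for c, p in target_props.items():
        quota = round(n_final * p)
        pool = groups[c][:]
        rng.shuffle(pool)          # random selection within each group
        sample.extend(pool[:quota])
    return sample
```

With, say, 800 respondents in one category and 200 in another against a 60/40 target, the binding constraint is the smaller group (200 / 0.4 = 500), so roughly half the respondents would be excluded, much as the report describes.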
Reading this Report
The results of the study, like those from any survey based on a sample of a population (rather than on the entire population), are subject to sampling variability. The sampling error (or standard error) provides a measure of the range within which a sample estimate can be expected to fall a certain proportion of the time. For example, survey findings may indicate that 36 percent of high school teacher respondents gave a lecture when they addressed Ebola with their students. If the sampling error for this estimate was 3 percent, then, according to the Central Limit Theorem, 95 percent of all possible samples of that same size selected in the same way would yield estimates between 30 percent and 42 percent (that is, 36 percent ± 2 standard error units). The standard errors for the estimates presented in this report are included in parentheses in the tables (see Figure 1).
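The arithmetic in the example above can be reproduced with a short calculation. The function below is an illustrative sketch, not part of the study's analysis code; it uses z = 2 to match the report's "±2 standard error units" convention rather than the exact 1.96.

```python
import math


def proportion_ci(p_hat, n=None, se=None, z=2.0):
    """Approximate 95% confidence interval for a sample proportion.

    Supply the standard error directly (as the report's tables do), or a
    sample size n from which it is estimated as sqrt(p_hat*(1 - p_hat)/n).
    """
    if se is None:
        se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se


# The report's example: a 36 percent estimate with a 3 percent standard error.
low, high = proportion_ci(0.36, se=0.03)  # -> roughly (0.30, 0.42)
```

The same function can estimate the standard error from a sample size when one is not tabled, e.g. `proportion_ci(0.36, n=566)` for the high school sample.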
In many tables, results for middle and high school teachers are reported separately for life science and non-life science teachers. This distinction was not appropriate for elementary teachers, who typically teach Earth, life, and physical science. A summary of each table highlighting or interpreting the results precedes the table. The summary points out only those differences that are substantial as well as statistically significant at the 0.05 level.[1]
A description of the survey sample for each grade range is in Appendix A.
[1] Given the preliminary and exploratory nature of this report, all tests of significance were conducted without controlling the Type I error rate.