METHODOLOGY Responding to a Global Pandemic: The Role of K-12 Science Teachers (COVID 2020)
Methodology
The methodology for this study involved developing a teacher questionnaire and interview protocol, recruiting participants, collecting data, reducing the sample, and analyzing data. This section provides a description of each of these components of the methodology, as well as important information on interpreting the findings of the study while reading the report.
Instrument Development
Questionnaire
The teacher questionnaire covered a broad range of topics, including instructional activities used to teach about COVID (e.g., lecture, group discussion, investigations), specific topics addressed (e.g., how COVID is diagnosed, how COVID is transmitted), and where teachers acquired information about COVID (e.g., websites, television news stations, print media). Because most school buildings closed for a period of time as a result of the pandemic, the survey collected information about teachers’ response both before and after school buildings closed.
The survey also included items aligned with the TPB, which gathered information about factors that affected teachers’ decision to teach or not teach about COVID. Consistent with guidance provided by Francis et al. (2004), we administered a set of open-ended items to a sample of teachers to determine (1) the most frequently perceived advantages and disadvantages of teaching about COVID, (2) the most important people or groups of people who would approve or disapprove of teaching about COVID, and (3) the perceived factors that could make it easier or more difficult to teach about COVID. Teachers’ responses informed the development of these questionnaire items.
Once all survey items had been drafted, cognitive interviews (Desimone & Le Floch, 2004) were conducted with a sample of teachers to ensure that (1) the items were being interpreted as intended2 and (2) the online questionnaire functioned according to design specifications. Information from the interviews was used to revise the questionnaire, which is included in Appendix A.
Interview Protocol
The teacher interview protocol focused on many of the same topics as the teacher questionnaire and was intended to elicit additional information about the varied contexts in which teachers worked, factors that influenced their teaching about COVID, and how they would be likely to respond to similar situations in the future. The interview protocol was piloted with a small number of teachers prior to broader use to ensure that the questions were clear and interpreted as intended.
2For example, the survey asked about several possible topics of COVID instruction. One topic originally read “Survival rates of coronavirus victims.” However, interviewees found the term “victims” ambiguous. Consequently, the item was revised to “Survival rates of those infected with coronavirus.”
Study Recruitment
HRI recruited teachers for the study from two sources. First, we used email lists from MCH Strategic Data. MCH maintains a database of email addresses for almost five million school and district personnel, from which we constructed a sample of teachers with science in their teaching assignment (including teachers in self-contained classrooms). MCH sent the sampled teachers a link to the study registration form. We also enlisted the help of the National Science Teaching Association (NSTA), which has a membership of over 55,000 teachers and a mailing list of over 200,000. NSTA sent a description of the study and a link to the study registration form to a substantial portion of their members. Between the two recruiting strategies, we registered just over 3,500 K–12 science teachers for the study.
Data Collection and Sample Reduction
Questionnaire
Administering the questionnaire to teachers before the end of the 2019–20 school year was important both for the validity of responses and achieving an adequate response rate. The questionnaire was launched in June 2020 and closed at the end of July 2020 with a response rate of 67 percent.3
The study timeline and budget precluded drawing a nationally representative sample for the teacher questionnaire. Instead, HRI attempted to register and survey enough teachers that a representative group could be constructed from respondents for analysis purposes. We used demographic data from the 2018 National Survey of Science and Mathematics Education (Banilower et al., 2018) to specify the target sample characteristics. For example, survey respondents were removed from the sample until it closely resembled population parameters for race/ethnicity. Ultimately, 36 percent of respondents were excluded from the analysis to achieve this goal.
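The subsampling step described above can be sketched in code. The following is a hypothetical illustration, not HRI’s actual procedure: it assumes a single matching variable (a group label such as race/ethnicity) and a simple rule of randomly dropping respondents from the most over-represented group until every group’s share is within a tolerance of its population target.

```python
import random

def reduce_to_targets(respondents, targets, tol=0.01, seed=1):
    """Randomly drop respondents from over-represented groups until each
    group's share is within `tol` of its target population proportion.

    respondents: list of group labels (one per respondent)
    targets: dict mapping group label -> target proportion
    """
    rng = random.Random(seed)
    kept = list(respondents)
    while True:
        n = len(kept)
        shares = {g: sum(1 for r in kept if r == g) / n for g in targets}
        # Find the most over-represented group relative to its target.
        over = max(targets, key=lambda g: shares[g] - targets[g])
        if shares[over] - targets[over] <= tol:
            return kept
        # Drop one randomly chosen respondent from that group.
        idx = rng.choice([i for i, r in enumerate(kept) if r == over])
        kept.pop(idx)
```

For example, starting from 80 respondents in group A and 20 in group B with targets of 60/40 percent, the routine removes group-A respondents until A’s share falls to roughly 61 percent of the reduced sample.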
HRI segmented the sample into elementary, middle, and high school teachers. In addition, middle and high school teachers were split into life science and non-life science teaching assignments. The final sample sizes, along with more detailed information about the sample, are included in Appendix B.
Interviews
Teachers who completed the questionnaire were asked if they were willing to participate in a follow-up interview. HRI drew a purposive sample from those who agreed to participate, with the goal of balancing the sample in terms of teachers’ grade range (elementary, middle, high), life science/non-life science teaching assignment, community type (rural, urban, suburban), and whether or not they addressed COVID in their instruction. The initial sample consisted of 40 teachers and 80 matched backups. When a teacher in the original sample declined or did not respond, their matched backup was contacted as a replacement. Using this approach, we were able to interview our targeted 40 teachers, 30 from the original sample and 10 backups.
3Teachers who registered for the study received an initial email with instructions for accessing and completing the questionnaire. Up to three email reminders were sent to those who had not yet completed the questionnaire.
Data Analysis
Questionnaire
To facilitate the reporting of large amounts of survey data, and because individual questionnaire items are potentially unreliable, HRI used factor analysis to identify survey items that could be combined into “composites.” Each composite represents an important construct related to COVID in science education and is reported on a scale from 0 to 100. A detailed description of the composite creation process, along with composite definitions, is included in Appendix C. Although the study was not designed primarily as an equity study, the survey also provides some data about the extent to which students across the nation had equitable opportunities to learn about COVID. Data were analyzed by four factors4 historically associated with differences in educational opportunities:
- Percentage of students in the school eligible for free/reduced-price lunch (FRL) Classes were grouped into 1 of 4 categories based on the percentage of students in the school eligible for FRL. The categories were defined as quartiles within groups of schools serving the same grades (e.g., schools with grades K–5, schools with grades 6–8). Cut points for these quartiles are included in Appendix C.
- Percentage of students in the school from historically underrepresented minority (URM) groups Classes were grouped into 1 of 4 quartiles based on the percentage of students in the school from race/ethnicity groups historically underrepresented in STEM (i.e., American Indian or Alaskan Native, Black or African American, Hispanic or Latino, Native Hawaiian or Other Pacific Islander, multi-racial). Cut points for these quartiles are included in Appendix C.
- Community type Classes were coded into 1 of 3 types of communities:
- Urban: central city;
- Suburban: area surrounding a central city, but still located within the counties constituting a Metropolitan Statistical Area (MSA); or
- Rural: area outside any MSA.
- Political leaning Classes were coded into 1 of 2 categories based on whether the majority of voters in the county voted for the Democratic presidential candidate or Republican presidential candidate in the 2020 election.
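The quartile grouping used for the two percentage-based factors (FRL and URM) can be sketched as follows. This is a minimal illustration that assumes the cut points are simply the 25th, 50th, and 75th percentiles computed within each group of schools serving the same grades; the study’s actual cut points are reported in Appendix C.

```python
from collections import defaultdict
from statistics import quantiles

def assign_quartiles(schools):
    """Assign each school a category (1-4) using quartile cut points
    computed within groups of schools serving the same grades.

    schools: list of (grade_band, percentage) tuples
    Returns a parallel list of category numbers.
    """
    by_band = defaultdict(list)
    for band, pct in schools:
        by_band[band].append(pct)
    # Cut points: 25th, 50th, and 75th percentiles within each band.
    cuts = {band: quantiles(vals, n=4) for band, vals in by_band.items()}
    cats = []
    for band, pct in schools:
        q1, q2, q3 = cuts[band]
        cats.append(1 if pct <= q1 else 2 if pct <= q2 else 3 if pct <= q3 else 4)
    return cats
```

Grouping within grade bands matters because, for example, FRL percentages tend to differ systematically between elementary and secondary schools, so pooled cut points would conflate grade level with poverty concentration.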
Equity analyses of selected survey items and composites include all teachers (grades K–12) and cover instruction both before and after school buildings closed. Although the TPB survey items were written with three specific constructs in mind (attitude toward the behavior, subjective norm, and perceived behavioral control), the factor analysis revealed four intention-related composites: attitude toward the behavior, subjective norms, self-efficacy, and control. These composites, along with several other reporting variables, were analyzed using path modeling, a form of regression analysis that estimates both direct and indirect effects (i.e., effects operating through intermediary variables), to examine relationships among teacher, classroom, and school factors and how often teachers taught about COVID. The results are discussed later in this report.
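In a linear path model, the standard decomposition of an effect is a direct path plus a product-of-coefficients indirect path through a mediator. The sketch below illustrates that decomposition for one predictor and one mediator using ordinary least squares; it is a toy illustration of the technique, not HRI’s model, which involved multiple composites and reporting variables.

```python
def slope(y, x):
    """OLS slope of y on a single predictor x (with intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

def ols2(y, x1, x2):
    """OLS coefficients of y on two predictors, solved from the 2x2
    normal equations on mean-centered data."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    a1 = [v - m1 for v in x1]
    a2 = [v - m2 for v in x2]
    cy = [v - my for v in y]
    s11 = sum(v * v for v in a1)
    s22 = sum(v * v for v in a2)
    s12 = sum(u * v for u, v in zip(a1, a2))
    s1y = sum(u * v for u, v in zip(a1, cy))
    s2y = sum(u * v for u, v in zip(a2, cy))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

def mediation(x, m, y):
    """Decompose the effect of x on y into a direct effect and an
    indirect effect through mediator m (product of coefficients)."""
    a = slope(m, x)               # path x -> m
    direct, b = ols2(y, x, m)     # x -> y controlling for m; path m -> y
    return direct, a * b          # direct effect, indirect effect
```

For OLS, the total effect (the slope of y on x alone) equals the direct effect plus the indirect effect exactly, which is the identity path models exploit.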
Interviews
Interview data were used to write brief vignettes, which provide illustrative examples of the interplay among numerous factors that influenced teachers’ response to COVID. Teacher quotes from the vignettes are also interspersed throughout the report to supplement the survey findings.
4Three factors—percentage of students eligible for FRL, percentage of students from URM groups, and community type—are school-level factors. The fourth—political leaning—is a county-level factor. For analysis purposes, all factors were assigned to individual teachers’ responses about the class for which they responded.
Organization of This Report
The results of the study, like those from any survey based on a sample of a population (rather than on the entire population), are subject to sampling variability. The sampling error (or standard error) provides a measure of the range within which a sample estimate can be expected to fall a certain proportion of the time. For example, survey findings may indicate that 15 percent of elementary teachers gave a lecture when they addressed COVID with their students. If the sampling error for this estimate was 3 percent, then, according to the Central Limit Theorem, 95 percent of all possible samples of that same size selected in the same way would yield estimates between 9 percent and 21 percent (that is, 15 percent ± 2 standard error units). The standard errors for the estimates presented in this report are included in parentheses in the tables (see Figure 2).
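The interval arithmetic in the example above can be reproduced directly. In the sketch below, the `proportion_se` helper is an added illustration of how a standard error arises for a proportion under simple random sampling; it is not a formula taken from the report.

```python
from math import sqrt

def proportion_se(p, n):
    """Standard error of a sample proportion p from a simple random
    sample of size n: sqrt(p * (1 - p) / n)."""
    return sqrt(p * (1 - p) / n)

def interval(estimate, se, z=2):
    """Approximate 95 percent interval: estimate +/- z standard-error
    units (z = 2 matches the report's rule of thumb)."""
    return estimate - z * se, estimate + z * se

# The report's example: an estimate of 15 percent with a standard
# error of 3 percent yields an interval of 9 to 21 percent.
low, high = interval(15, 3)
```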
In most tables, results for middle and high school teachers are reported separately for life science and non-life science teachers. This distinction was not appropriate for elementary teachers, who typically teach Earth, life, and physical science. When the data are similar before and after school buildings closed, they are combined into a single table. When there are notable differences between the two timepoints, those data are reported separately. A summary of each table highlighting or interpreting the results precedes the table. The summary points out only those differences that are substantial as well as statistically significant at the 0.05 level.5
Comparisons were made between groups within each equity factor. For FRL and URM, comparisons were made between the highest and lowest quartiles. For community type, comparisons were made among all three locales (urban vs. suburban, urban vs. rural, and rural vs. suburban). For political leaning, comparisons were made between Democratic- and Republican-leaning counties.
5Given the exploratory nature of this report, all tests of significance were conducted without adjustment to control the Type I error rate across multiple comparisons.