Developing the Views about Effective Science Instruction Questionnaire

Background

The importance of teacher attitudes and beliefs about science instruction is evident in the number of attempts to capture different dimensions of the construct.  Several well-documented instruments exist to measure teacher self-efficacy (e.g., Southerland, Sowell, Kahveci, Granger, & Gaede, 2006; Riggs & Enochs, 1989), teacher attitudes toward science (e.g., Fraser, 1978; Cobern & Loving, 2002), beliefs about the science teaching environment (Lumpe, Haney, & Czerniak, 2000), beliefs about the nature of science (e.g., Lederman, Abd-El-Khalick, Bell, & Schwartz, 2002; Schwartz, Lederman, & Lederman, 2008), and beliefs about science teaching and learning (e.g., Sampson & Benton, 2006; Luft & Roehrig, 2007).  Of course, teacher beliefs are of interest not just in themselves but, more importantly, in relation to science instruction.  For example, teacher beliefs and attitudes regarding science as a discipline have been shown to affect lessons on the nature of science (Brickhouse, 1990).  Epistemological beliefs influence teacher choices about instructional strategies and the implementation of curricula (Cronin-Jones, 1991).

Despite the large number of existing instruments, none explicitly reflects current cognitive science literature such as that summarized in How People Learn: Brain, Mind, Experience, and School (Bransford, Brown, & Cocking, 1999).  An interest in how teachers’ views about science teaching and learning align with contemporary research led us to develop a new instrument.

Design/Procedure

Defining the construct

Using the cognitive science research synthesized in How People Learn (Bransford, Brown, & Cocking, 1999) and How Students Learn: Science in the Classroom (Donovan & Bransford, 2005), Banilower, Cohen, Pasley, and Weiss (2008) proposed five “elements” of effective science instruction:

  • Motivating the learner;
  • Eliciting the learner’s initial ideas about the targeted content;
  • Intellectually engaging the learner with phenomena that yield data and evidence related to the targeted content;
  • Using evidence to make and critique claims about the targeted content; and
  • Making sense of ideas about the targeted content.

The survey development process closely followed an assessment development model.  The five elements listed above defined the boundaries of the “content domain,” but they needed further specification to guide item writing.  A group of science education researchers deconstructed each element into more fine-grained statements.  An example is shown in Table 1.

Writing questionnaire items

The fully specified content domain was used by researchers to generate questionnaire items.  Collaborative item editing meetings provoked spirited discussions among the research team.  These discussions revolved around two themes:  practicality and appropriateness.  Researchers frequently expressed the concern that if teachers’ instruction aligned closely with all elements of effective instruction, teachers would be unable to “cover the curriculum” (a contradiction inherent in some national standards documents as well).  Additionally, some phenomena do not lend themselves to first-hand investigation because they are inaccessible (for example, convection in Earth’s mantle).

Table 1—Deconstructing an Element of Effective Science Instruction Into Discrete Statements

Element of instruction: Intellectually engaging the learner with phenomena that yield data and evidence related to the targeted content.

Statements:

  • Students should have opportunities to engage with phenomena that provide data that are relevant to the targeted content.
  • Students should have opportunities to engage with phenomena that are appropriate in terms of the students’ life experiences.
  • Students should have opportunities to engage with data that are sufficiently precise to form the science concept.
  • Students should have opportunities to engage with phenomena for which they can collect their own data.

Concerns over appropriateness stemmed from the fact that research on children’s thinking in science (and its implications for instruction) has been shaped substantially by studies in the physical sciences, in which students tend to have deeply held misconceptions.  For instance, one popular compilation of such research, Children’s Ideas in Science (Driver, Guesne, & Tiberghien, 1985), discusses children’s ideas in eight areas, seven of which are clearly situated in the physical sciences.  In these areas, research suggests that students need to experience all elements of effective instruction in order to form a conceptual understanding that aligns with current scientific thinking.  However, it is not clear that students need to experience all of the elements in areas where they do not hold strong misconceptions.

Writing items that reflected practical and content-specific constraints proved fruitless.  Instead, the questionnaire asks respondents to set these constraints aside, focusing on their views of effective science instruction in general.  The questionnaire introduction states:

We recognize that teachers have to make trade-offs when they are responsible for teaching many concepts in one year.  Teachers may not always be able to use the instructional strategies they believe are effective and still address the entire curriculum.  When you respond to the statements, try to put those trade-offs aside.  Imagine that you are not constrained by state/district standards, available time/resources, or feasibility issues.  What do you think effective science instruction looks like, without all of the constraints that limit what you can do in the classroom? 

Questionnaire statements are preceded by the general prompt, “Practical constraints aside, do you agree that doing what is described in each statement would help most students learn science?”  Although situating the entire questionnaire in an ideal context avoided practicality concerns, we felt compelled to address a vulnerability typical of belief inventories: social response bias.  To increase the likelihood of observing a continuum of responses, a 6-point agreement scale was used, ranging from “strongly disagree” to “strongly agree.”  The scale does not include a neutral point.  Garland (1991) points out that despite a number of studies on the topic of whether to include a neutral point, the decision ultimately comes down to researcher preference; there is also some evidence that omitting a neutral point minimizes the effects of social response bias (Garland, 1991).  It seemed reasonable to assume that all respondents, given their profession and the survey topic, would hold an opinion about every statement (that is, they would not need a neutral point).  We also intentionally wrote some items with reversed polarity to increase the likelihood that respondents would read each statement carefully rather than respond by habit.
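To make the scoring of reversed-polarity items concrete, the sketch below shows the standard recoding for a 1–6 agreement scale.  It is written in Python with pandas; the data layout and item names are hypothetical illustrations, not taken from the actual instrument.

```python
import pandas as pd

# Hypothetical identifiers for reversed-polarity items; the actual
# instrument's item numbering is not reproduced here.
REVERSED_ITEMS = ["q03", "q08", "q15"]

def reverse_score(responses: pd.DataFrame, scale_max: int = 6) -> pd.DataFrame:
    """Recode reversed items so that higher values always indicate
    stronger endorsement of the underlying construct."""
    recoded = responses.copy()
    for item in REVERSED_ITEMS:
        # On a 1..k agreement scale, reverse-scoring maps a response r
        # to (k + 1) - r, e.g., 1 <-> 6, 2 <-> 5, 3 <-> 4 when k = 6.
        recoded[item] = (scale_max + 1) - recoded[item]
    return recoded
```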

After writing multiple items for each element of instruction, we conducted cognitive interviews with 17 middle grades science teachers across the country to ensure that the statements were being interpreted as intended.  With one exception, the results were reassuring.  Several science education reform documents (for example, AAAS, 1993; NRC, 1996) make frequent use of the term “phenomena” to represent naturally occurring events with which students should engage as they construct meaning.  Many of the questionnaire items used this term as well.  In the interviews, however, teachers largely interpreted the term quite differently, associating it with supernatural events.  Based on this finding, all occurrences of the term were replaced with alternatives.  The cognitive interviews suggested other edits, but none as pervasive as this one.

Piloting the instrument

The questionnaire was piloted in three stages.  An initial pool of 53 items was piloted with approximately 950 middle grades teachers nationally.  In this first pilot, the agreement scale used only four points, in contrast to the six-point scale mentioned above.  Analyses of these data convinced us of the need for a broader response scale.  The item pool was reduced from 53 to 23 by eliminating items that showed very little variation in response.
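The report does not specify the variation criterion used to trim the pool from 53 to 23 items, but the screen can be illustrated as below; the threshold value here is an assumption for illustration only.

```python
import pandas as pd

def drop_low_variance_items(responses: pd.DataFrame,
                            min_variance: float = 0.5) -> pd.DataFrame:
    """Keep only items whose response variance exceeds min_variance.

    Items on which nearly all teachers respond identically carry little
    information for distinguishing views, so they are dropped.  The 0.5
    threshold is illustrative, not the study's actual cutoff.
    """
    variances = responses.var()  # per-item (per-column) variance
    keep = variances[variances > min_variance].index
    return responses[keep]
```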

The second pilot consisted of 23 items on a 6-point agreement scale administered to approximately 300 middle grades science teachers.  An exploratory factor analysis suggested a three-factor solution; however, some factors had a small number of items and low internal reliability.  Also, some items were eliminated because they did not load on any of the three factors.
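For readers wishing to replicate this step, an exploratory factor analysis of this kind can be run as sketched below.  The study does not name its software or rotation method, so the use of Python’s factor_analyzer package and an oblimin (oblique) rotation are assumptions; an oblique rotation is at least consistent with the correlated factors reported in Table 3.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def run_efa(responses: pd.DataFrame, n_factors: int = 3) -> pd.DataFrame:
    """Fit an EFA and return the rotated factor loading matrix."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(responses)
    loadings = pd.DataFrame(
        fa.loadings_,
        index=responses.columns,
        columns=[f"Factor{i + 1}" for i in range(n_factors)],
    )
    # Items loading weakly (e.g., below ~0.3 in absolute value) on every
    # factor are candidates for elimination, as described above.
    return loadings
```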

Prior to the third pilot, additional items were written in an attempt to shore up the factors with few items and low reliabilities.  This pilot served the additional purpose of testing the equivalence of web-based and paper-based questionnaires.  Just over 600 middle grades science teachers were randomly assigned to receive one of the forms.  Because the analysis established equivalence of the forms, data from all respondents were combined for the final factor analysis, described below.
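The equivalence analysis itself is not described in detail.  One common approach, sketched below under that assumption, is to compare item-level responses between the two randomly assigned groups with two-sample tests and a multiple-comparison correction; finding no significant differences supports pooling the samples.

```python
import pandas as pd
from scipy import stats

def compare_forms(web: pd.DataFrame, paper: pd.DataFrame,
                  alpha: float = 0.05) -> pd.DataFrame:
    """Per-item two-sample t-tests between web and paper respondents,
    with a Bonferroni correction across items."""
    n_items = len(web.columns)
    rows = []
    for item in web.columns:
        t, p = stats.ttest_ind(web[item].dropna(), paper[item].dropna())
        rows.append({"item": item, "t": t, "p": p,
                     "differs": p < alpha / n_items})  # Bonferroni-adjusted
    return pd.DataFrame(rows)
```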

Findings

Exploratory factor analysis (EFA) was used to investigate constructs underlying teachers’ responses to the questionnaire.  Although items were initially written to align with the five elements of effective instruction described above, the EFA suggested a three-factor solution that mixed items across the five elements.  These factors are named and illustrated with a sample item in Table 2.  Also included are the internal reliabilities of the factors.

The internal reliabilities suggest that the items in each factor cohere relatively well.  Furthermore, the correlations among factors are quite low (see Table 3), suggesting that the factors are distinct from one another.  It is somewhat surprising that factor 1 (learning-theory-aligned science instruction) and factor 2 (confirmatory science instruction) are not more strongly negatively correlated.
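The internal reliabilities in Table 2 are Cronbach’s alpha coefficients, which can be computed directly from the item responses assigned to each factor (after reverse-scoring), as in the sketch below.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for items (columns) scored in the same direction:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```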

Table 2—Three-factor Solution for the Teacher Views about Effective Science Instruction Questionnaire

1.  Learning-theory-aligned science instruction (14 items; Cronbach’s alpha = 0.782)
    Illustrative item: Students should rely on evidence from classroom activities, labs, or observations to form conclusions about the science concept they are studying.

2.  Confirmatory science instruction (7 items; Cronbach’s alpha = 0.776)
    Illustrative item: Hands-on activities and/or laboratory activities should be used primarily to reinforce a science concept that the students have already learned.

3.  Activity for activity’s sake (3 items; Cronbach’s alpha = 0.779)
    Illustrative item: Students should do hands-on or laboratory activities, even if they do not have opportunities to reflect on what they learned by doing the activities.

Table 3—Factor Correlation Matrix

Factor        1         2         3
1           1.000
2          -0.061     1.000
3           0.137     0.227     1.000

For more information about the instrument or to request a review copy, contact Sean Smith at ssmith62@horizon-research.com.

References

Banilower, E., Cohen, K., Pasley, J., & Weiss, I. (2008). Effective science instruction: What does research tell us? Portsmouth, NH: RMC Research Corporation, Center on Instruction.

Bransford, J., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Brickhouse, N. W. (1990). Teachers’ beliefs about the nature of science and their relationship to classroom practice. Journal of Teacher Education, 41(3), 53-62.

Cobern, W. W., & Loving, C. C. (2002). Investigation of preservice elementary teachers’ thinking about science. Journal of Research in Science Teaching, 39(10), 1016-1031.

Cronin-Jones, L. L. (1991). Science teacher beliefs and their influence on curriculum implementation: Two case studies. Journal of Research in Science Teaching, 28(3), 235-250.

Donovan, M. S., & Bransford, J. D. (Eds.). (2005). How students learn: Science in the classroom. Washington, DC: The National Academies Press.

Driver, R., Guesne, E., & Tiberghien, A. (1985). Children’s ideas in science. Philadelphia: Open University Press.

Fraser, B. J. (1978). Development of a test of science-related attitudes. Science Education, 62(4), 509-515.

Garland, R. (1991). The mid-point on a rating scale: Is it desirable? Marketing Bulletin, 2, 66-70.

Lederman, N. G., Abd-El-Khalick, F., Bell, R. L., & Schwartz, R. S. (2002). Views of nature of science questionnaire: Toward valid and meaningful assessment of learners’ conceptions of nature of science. Journal of Research in Science Teaching, 39(6), 497-521.

Luft, J. A., & Roehrig, G. H. (2007). Capturing science teachers’ epistemological beliefs: The development of the teacher beliefs interview. Electronic Journal of Science Education, 11(2), 38-62. Retrieved August 13, 2009, from http://ejse.southwestern.edu/volumes/v11n2/v11n2_list.html

Lumpe, A. T., Haney, J. J., & Czerniak, C. M. (2000). Assessing teachers’ beliefs about their science teaching context. Journal of Research in Science Teaching, 37(3), 275-292.

National Research Council. (1996). National science education standards. Washington, DC: National Academy Press.

American Association for the Advancement of Science. (1993). Benchmarks for science literacy. New York: Oxford University Press.

Riggs, I. M., & Enochs, L. G. (1989). Toward the development of an elementary teacher’s science teaching efficacy belief instrument. Retrieved August 14, 2009, from http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED308068

Sampson, V., & Benton, A. (2006). Development and validation of the beliefs about reformed science teaching and learning (BARSTL) questionnaire. Paper presented at the annual conference of the Association for Science Teacher Education, Portland, OR.

Schwartz, R. S., Lederman, N. G., & Lederman, J. S. (2008, March). An instrument to assess views of scientific inquiry: The VOSI questionnaire. Paper presented at the annual meeting of the National Association for Research in Science Teaching, Baltimore, MD.

Southerland, S., Sowell, S., Kahveci, M., Granger, D. E., & Gaede, O. (2006, April). Working to measure the impact of professional development activities: Developing an instrument to quantify pedagogical discontentment. Paper presented at the annual meeting of the National Association for Research in Science Teaching, San Francisco, CA.