School Improvement in Maryland

Toolkit for a Data-Driven Culture

This section includes tools for school leaders and team members to use in preparing teachers for the most effective use of the Classroom-Focused Improvement Process. It includes a PowerPoint presentation, handouts, and needs-assessment tools for use in:

  • Determining staff beliefs about the assessment process and the value of using data as a major driver of school progress
  • Introducing to the staff the rationale for and the background of the Classroom-Focused Improvement Process (CFIP)
  • Determining your school's readiness to implement CFIP effectively
  • Analyzing your school's capacity to improve and to act on your findings
  • Identifying sources of data to use with CFIP
  • Sharing sample templates that teams might produce as a result of CFIP meetings
  • Answering frequently asked questions about CFIP
  • Assessing your school's progress in the implementation of CFIP after a year or two of its use

ASSESSMENT BELIEFS WORKSHEET

DIRECTIONS:
Read and react to each of these statements by placing marks on the continua. Consider what evidence you have to justify your views.

1. Data analysis for instructional decision making is mostly numbers crunching using sophisticated statistical formulas such as those for standard deviation, regression, and measures of variance.

2. We can't draw meaningful conclusions from different types of data, such as standardized tests on one hand and classroom tests and observations on the other, because they vary so much in validity and reliability.

3. The School Improvement Team or the Data Committee is the best group to analyze data because its members see the big picture and have the needed experience and expertise.

4. Periodic common assessment data and ongoing analyses of student work are better sources to use than Maryland School Assessment (MSA) or High School Assessment (HSA) scores to impact instruction and increase achievement.

5. Most school staffs have not had the training they need to conduct effective data analyses.

6. Left alone, teachers generally know enough about their students' progress that they don't need to analyze data.

ASSESSMENT BELIEFS WORKSHEET: DISCUSSION OF RESULTS

  1. Data analysis for instructional decision making is mostly numbers crunching using sophisticated statistical formulas such as those for standard deviation, regression, and measures of variance.

    This is a myth that may be traced back to graduate-level statistics courses, seen by some educators as too theoretical and disconnected from the real world of teaching.

    The academic world of standard deviations, F scores, and ANOVAs is far from the data analysis needed by teachers in today's classrooms. Instead, data analysis is, in the words of Boudett in Data Wise, "working together to analyze trends and solve problems."1 The most advanced math that is usually needed is an understanding of percentiles (see the short sketch at the end of this discussion).

    Integral to an effective data analysis process are:

    • Clarity and transparency of learning targets for teachers and students
    • Instruction with a laser-like focus on the targets
    • Assessments that are the "best fit" to the targets so that instructional teams can make valid inferences from the results
    • Acting on the results of the assessments by putting in place needed enrichments, interventions, and instructional changes

    Data analysis is therefore just as appropriate for the numbers-phobic as for the numbers wonk. All teachers, with the proper guidance, can participate in data analysis.
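
    To make the point concrete, here is a minimal sketch, in Python, of the kind of arithmetic a teacher team actually needs. The class scores are invented for the example; nothing beyond counting and division is involved.

    def percentile_rank(scores, score):
        """Percent of the class scoring at or below the given score."""
        at_or_below = sum(1 for s in scores if s <= score)
        return round(100 * at_or_below / len(scores))

    # Hypothetical results from one class's common assessment.
    class_scores = [48, 55, 61, 66, 70, 72, 75, 78, 81, 84, 88, 93]

    for s in (55, 75, 93):
        print(f"A score of {s} falls at the {percentile_rank(class_scores, s)}th percentile.")

    A team could produce the same figures in a spreadsheet; the point is that the mathematics is counting and division, not regression.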

  2. We can't draw meaningful conclusions from different types of data, such as standardized tests on one hand and classroom tests and observations on the other, because they vary so much in validity and reliability.

    Educators are often warned that it is not possible to mix data types, such as external assessments with high validity and reliability and classroom assessments that lack similar rigor in their design and implementation.

    However, data expert Stephen White has noted that "the problem of data in the public schools is the fact that the data available are seldom perfect. . . . The triangulation process has been employed for centuries for the very purpose of gleaning meaning from imperfect and incomplete data . . . data that are varied, unrelated, and collected at different times for different purposes."2

    White continues that "though there are certainly instances in which data should not be compared in the statistical sense, the complexity of education compels us to look for patterns and trends in the practical sense that lead us to decisions that improve student achievement, regardless of the type of data."3

    He concludes that "triangulation is a messy process. It requires teams to make assumptions, draw inferences, and come to conclusions without total certainty. When data are triangulated, each point serves as a check on the other dimensions, with the desired outcome . . . always being the realization of new insights that are not available from examining only one type of data or one perspective."4
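
    As a concrete sketch of what White's triangulation might look like in practice, consider the hypothetical Python example below. The three data sources, the student records, and the cutoff values are all invented for illustration; a real team would substitute its own assessments and temper fixed thresholds with professional judgment.

    # Each record: (student, state-test percentile, common-assessment
    # percent correct, teacher observation rating on a 1-4 rubric).
    students = [
        ("Student A", 28, 55, 2),  # all three sources point the same way
        ("Student B", 72, 58, 2),  # the sources disagree
        ("Student C", 85, 90, 4),  # all three suggest readiness for more
    ]

    for name, state_pct, common_pct, rating in students:
        # Each imperfect source casts one "vote" that a concern exists.
        concerns = sum([state_pct < 40, common_pct < 65, rating <= 2])
        if concerns == 3:
            note = "convergent evidence of need: plan an intervention"
        elif concerns == 0:
            note = "convergent evidence of strength: consider enrichment"
        else:
            note = "sources diverge: dig deeper before acting"
        print(f"{name}: {note}")

    The value of the exercise lies in the divergent cases: when one data point contradicts the others, each serves as a check on the rest, and the team looks again before acting.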

  3. The School Improvement Team or the Data Committee is the best group to analyze data because its members see the big picture and have the needed experience and expertise.

    In many schools, data are analyzed by school improvement team (SIT) members, data committee members, or one or two people who are comfortable with numbers. The results are presented at faculty or department meetings, and staff members brainstorm ideas for what to do to increase student performance.

    The SIT is usually the wrong team to do the data analysis because its membership is usually too diverse (often including parents) and it meets too infrequently (usually only once a month). Most importantly, because the SIT looks at school-wide issues, its work cannot be responsive to individual classroom needs, which can vary significantly. Initiatives placed in school improvement plans are usually too global to address the diverse needs of individual classes or students, are aimed at "average" performance increases to meet adequate yearly progress (AYP), and reflect a continual search for the "magic bullet" that will have school-wide impact.

    Data analysis is far too important to be done by a few people on a sporadic basis in school improvement teams or faculty committees. This critical work must instead be done on a regular basis by classroom teachers in grade-level, vertical, and content teams and be embedded in their ongoing instructional planning.

  4. Periodic common assessment data and ongoing analyses of student work are better sources to use than Maryland School Assessment (MSA) or High School Assessment (HSA) scores to impact instruction and increase achievement.

    Data such as MSA and HSA scores are usually too general and, according to testing expert James Popham, are "instructionally insensitive."5 That is, they are not intended by the state to provide information to drive daily instruction. Their purpose is to produce an accountability score and to provide very general guidance about schoolwide priority areas. In addition, the timelines required of a statewide school accountability system mean that the state data are usually out of date well before they are returned to schools and are often from students who have moved on to a new grade and, perhaps, to a new school.

    Improvement plans based on state test data alone do not consider the wide variations that usually exist within and between grade levels and subject areas. State test data are best used to identify very broad strategies designed to increase the overall performance of the student groups needed to meet AYP. The needs of individual students and teachers may be lost in this search for comprehensive initiatives that will make the difference at the school level.

    Ongoing assessment data collected by teachers, including the analysis of student work, are much better sources to use to impact daily instruction and increase achievement, but only if curriculum and instruction are aligned with state standards and if assessments reflect the levels of cognitive demand expected on state assessments.

  5. Most school staffs have not had the training they need to conduct effective data analyses.

    Assessment expert Richard Stiggins has noted, "Given the critically important roles of assessment, it is no surprise that Americans believe that teachers are thoroughly trained to assess accurately and use results effectively. In fact, teachers typically have not been given the opportunity to learn these things during pre-service preparation or while they are teaching. This has been the case for decades. And lest we believe that teachers can turn to their principals or other district leaders for help in learning about sound assessment practices, let it be known that relevant, helpful assessment training is rarely included in leadership-preparation programs either."6

    He continues, "Over the decades, very few educational leaders have been trained to understand what standardized tests measure, how they relate to the local curriculum, what the scores mean, how to use them, or, indeed, whether better instruction can influence scores. Beyond this, we in the measurement community have narrowed our role to maximizing the efficiency and accuracy of high-stakes testing, paying little attention to the day-to-day impact of test scores on teachers and learners in the classroom."7

    Stiggins concludes, "We must build a long-missing foundation of assessment literacy at all levels of the system, so that we all know how to assess accurately and use results productively. This will require an unprecedented investment in professional learning, both at the pre-service and in-service levels for teachers and administrators, and for policymakers as well."8

  6. Left alone, teachers generally know enough about their students' progress that they don't need to analyze data.

    "Just leave me alone and let me teach." According to keynoter Douglas Reeves, this is a guaranteed applause line at educational conferences.9 Yet, as Reeves notes, education is a collaborative profession. We need each other.

    DuFour, DuFour, and Eaker cite extensive research that has repeatedly concluded that teacher isolation has adverse consequences for teachers and for any effort to improve schools: "The research has been clear and consistent, professional organizations for teachers and administrators at all grade levels have advised us, and our direct observations in schools confirm it: Isolation is the enemy of school improvement."

  • 1 Boudett, K. P., City, E. A., & Murnane, R. J. (Eds.). (2005). Data wise: A step-by-step guide to using assessment results to improve teaching and learning. Cambridge, MA: Harvard Education Press.
  • 2 White, S. (2006). Beyond the numbers: Making data work for teachers and school leaders. Englewood, CO: Advanced Learning Press, pp. 112-113.
  • 3 Ibid.
  • 4 Ibid., p. 113.
  • 5 Popham, W. J. (2007, October). Instructional insensitivity of tests: Accountability's dire drawback. Phi Delta Kappan, 89(2), 146-150.
  • 6 Stiggins, R. (2007, October 17). Five assessment myths and their consequences. Education Week, 27(8), 28-29.
  • 7 Ibid.
  • 8 Ibid.
  • 9 Reeves, D. (2004). From the bell curve to the mountain: A new vision for leadership and achievement [Video series]. Center for Performance Assessment.