About Me

My name is Matt - studying to be a Middle School Teacher in Language Arts and Social Studies.

Thursday, February 12, 2009

Leadership in Education

School Leadership

Center for Leadership in Education

Adequate Yearly Progress

A statewide accountability system mandated by the No Child Left Behind Act of 2001 which requires each state to ensure that all schools and districts make Adequate Yearly Progress.


Adequate Yearly Progress, or AYP, is a measurement defined by the United States federal No Child Left Behind Act that allows the U.S. Department of Education to determine how every public school and school district in the country is performing academically according to results on standardized tests. AYP has been identified as one of the sources of controversy surrounding the George W. Bush administration's Elementary and Secondary Education Act.[1] Private schools do not have to make AYP.

According to the Department of Education, AYP is a diagnostic tool that determines how schools need to improve and where financial resources should be allocated. Former U.S. Secretary of Education Rod Paige wrote, "The statute gives States and local educational agencies significant flexibility in how they direct resources and tailor interventions to the needs of individual schools identified for improvement... schools are held accountable for the achievement of all students, not just average student performance."


How is value-added assessment better than AYP?

  • It encourages schools to raise the achievement of all students, not just the subset of students whose improvement will satisfy AYP goals.
  • It focuses attention on individual classrooms. Under NCLB, schools - rather than teachers and administrators - are held directly accountable for student achievement, and there are no rewards for success, only sanctions for failure. If the focus is on struggling students rather than on the teachers who are providing ineffective instruction, scarce resources will be devoted to the symptoms rather than their underlying causes. When used at the classroom level, value-added assessment gives individual teachers and administrators specific data describing two key patterns - the focus and impact - of their instruction, allowing them to target interventions where they are needed.
  • It is a better measure of school improvement. Under NCLB, school progress is an all-or-nothing affair - either the school makes AYP or it doesn't. However, value-added assessment shows any amount of progress that a school has made, even if it falls short of the AYP threshold. It does not sugarcoat low achievement, but it does acknowledge the actual steps - both small and large - that schools make.

Value-Added Assessment

Value-added assessment gives educators a powerful diagnostic tool for measuring the effect of pedagogy, curricula and professional development on academic achievement, and provides all K-12 stakeholders a fair and accurate foundation on which to build a new system of accountability.


What is Value-Added Assessment?

Value-added assessment is a way of analyzing test data that can measure teaching and learning. Based on a review of students' test score gains from previous grades, researchers can predict the amount of growth those students are likely to make in a given year. Thus, value-added assessment can show whether particular students have made the expected amount of progress, have made less progress than expected, or have been stretched beyond what they could reasonably be expected to achieve. Using the same methods, one can look back over several years to measure the long-term impact that a particular teacher or school had on student achievement.
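The growth projection described above can be sketched in a few lines of Python. This is a minimal illustration of the idea only - the scores are hypothetical, and the simple average-gain projection stands in for the far more sophisticated multi-year statistical models that actual value-added systems use:

```python
# Minimal sketch of value-added growth measurement (hypothetical data;
# real systems use multi-year statistical models, not a simple average).

def project_score(prior_scores):
    """Project this year's score as last year's score plus the
    student's average year-over-year gain."""
    gains = [b - a for a, b in zip(prior_scores, prior_scores[1:])]
    return prior_scores[-1] + sum(gains) / len(gains)

# Three years of prior test scores for one hypothetical student.
prior = [410, 430, 455]
projected = project_score(prior)   # 455 + (20 + 25) / 2 = 477.5
actual = 490

growth = actual - projected        # 12.5: stretched beyond expectation
if growth >= 0:
    print(f"Met or exceeded projection by {growth:.1f} points")
else:
    print(f"Fell short of projection by {-growth:.1f} points")
```

The same comparison, run over several years of a teacher's or school's students, is what allows the long-term impact question to be asked.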


How is value-added assessment different from traditional measures of student performance?

Student performance on assessments can be measured in two very different ways, both of which are important. Achievement describes the absolute levels attained by students in their end-of-year tests. Growth, in contrast, describes the progress in test scores made over the school year.

In the past, students and schools have been ranked solely according to achievement. The problem with this method is that achievement is highly linked to the socioeconomic status of a student's family. For example, according to the Educational Testing Service, SAT scores rise with every $10,000 of family income. This should not be surprising, since the variables that contribute to high test scores correlate strongly with family income: good jobs, years of schooling, positive attitudes about education, the capacity to expose one's children to books and travel, and the considerable social and intellectual capital that wealthy students bring with them when they enter school.

In contrast, value-added assessment measures growth and answers the question: how much value did the school staff add to the students who live in its community? How, in effect, did they do with the hand society dealt them? If schools are to be judged fairly, it is important to understand this significant difference.


How does value-added assessment sort out the teachers' contributions from the students' contributions?

Because individual students rather than cohorts are traced over time, each student serves as his or her own "baseline" or control, which removes virtually all of the influence of the unvarying characteristics of the student, such as race or socioeconomic factors.

Test scores are projected for students and then compared to the scores they actually achieve at the end of the school year. Classroom scores that equal or exceed projected values suggest that instruction was highly effective. Conversely, scores that are mostly below projections suggest that the instruction was ineffective.

At the same time, this approach does recognize student-related factors and other extenuating circumstances. For instance, imagine that a student's performance falls far below projected scores, while other students in the same class, with comparable academic records, do make the progress they were expected to make. This would be taken as evidence of an external effect, related to the student's home environment or some other variable lying outside the range of a teacher's influence.
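The attribution logic in this scenario can be sketched as a small Python function. The residual figures, the outlier cutoff, and the function name are all hypothetical illustrations of the idea, not part of any actual value-added system:

```python
# Sketch of separating instruction effects from external, student-level
# effects. A residual is a student's actual score minus the projected
# score; the data and the cutoff value are hypothetical.

def classify_class(residuals, outlier_cutoff=-15.0):
    """Judge instruction by the class-wide pattern, and flag students
    whose shortfall is an outlier against classmates' progress."""
    mean_residual = sum(residuals) / len(residuals)
    instruction = "effective" if mean_residual >= 0 else "ineffective"
    # A lone large shortfall in an otherwise on-track class suggests
    # an external effect (e.g., home environment), not the teacher.
    external = [i for i, r in enumerate(residuals)
                if r <= outlier_cutoff and mean_residual >= 0]
    return instruction, external

residuals = [8.0, 10.0, 6.0, -20.0, 9.0]   # student 3 falls far short
instruction, flagged = classify_class(residuals)
print(instruction, flagged)                # prints: effective [3]
```

Because the classmates with comparable records met their projections, the shortfall is attributed to something outside the teacher's influence rather than to ineffective instruction.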

Program Evaluation

The concept of evaluation has been in existence since 2000 B.C., when the Chinese created a system of evaluation for their civil servants. Many definitions have been developed over the years, but a comprehensive definition presented by the Joint Committee on Standards for Educational Evaluation (1994) defines it as "systematic investigation of the worth or merit of an object." Evaluations should be conducted for action-related reasons, and the information provided should be used to decide a course of action. Evaluation provides information to help improve the project, reveals information that is essential to the continuous improvement process, and may provide new insights or information that was not anticipated. The current view of evaluation stresses the interrelationships between evaluation and program implementation.


A solid evaluation plan is critical to a successful program. Evaluation is not just a useful tool; it is a requirement of No Child Left Behind. If a grant is needed to support a program, a solid evaluation plan can earn a higher competitive rating from reviewers. The Center for Leadership in Education will help you write a rigorous and effective evaluation plan for your project, ensuring that you not only meet but exceed evaluation requirements. An evaluation should be useful and user-friendly.


Evaluations are designed to measure results and provide meaningful data. The Center has experience measuring the impact and effectiveness of grants submitted to the Ohio Department of Education, the U.S. Department of Education, and foundations. Evaluation results provide far more than a thumbs up or thumbs down for a program. Evaluation identifies the multifaceted effects of a program on students and teachers, documents what works and which components work best, and can assist in improving and replicating results.


The Center staff helps design evaluations to include the following components:

  • Pre- and Post-Test Survey: Teacher knowledge, classroom practices, and appreciation and understanding of the project are measured through pre- and post-surveys.
  • Student Measures: Student knowledge, appreciation and understanding are measured together with data collection on classroom practices. In addition, data is collected from standardized tests, achievement tests, report cards and the assessment of student work.
  • Useful and User-Friendly Reports: The Center provides ongoing evaluation reports in a graphic and user-friendly format. Project administrators and teams can adapt their program based upon evaluation results.


Educators look at two kinds of evaluation - formative and summative. The purpose of the former is to assess initial and ongoing project activities, while the purpose of summative evaluation is to assess the quality and impact of a fully implemented project. Evaluation, as a process rather than an event, should provide an ongoing source of information that can aid decision making at various steps along the way.


Evaluations can be thought of as having six phases:

  • Development of a conceptual model of the program and identification of key evaluation points
  • Development of evaluation questions and definition of measurable outcomes
  • Development of an evaluation design
  • Collection of data
  • Analysis of the data
  • Dissemination of information to interested audiences


The Center for Leadership in Education uses a logic model that defines the project. The three elements of the Logic Model are:

  • Inputs
  • Outputs (activities and participants)
  • Outcomes - Impact (short term, medium term, long term)


Steps in developing the design of an evaluation include:

  • Selecting the methodological approach - qualitative (words) or quantitative (numbers)
  • Determining who and/or what will be studied
  • Selecting comparison groups
  • Determining the timing, sequencing, frequency, and cost of data collection


Steps in the evaluation process:

  1. Data Collection
  2. Analyzing the data
  3. Reporting the results
  4. Conclusions (and recommendations)
  5. Disseminating the information


The staff of the Center for Leadership in Education has worked with several school districts in Northeast Ohio.
