In September 1991, U.S. Secretary of Education Lamar Alexander, Assistant Secretary for the Office of Educational Research and Improvement Diane Ravitch, and Acting Commissioner of the National Center for Education Statistics (NCES) Emerson J. Elliott received the report of the Special Study Panel on Education Indicators.
Knowing that “In the information age we face the paradox of having more and more data and less and less certainty about what they mean,” the panel aimed to produce a relevant, “reliable means of charting progress” in education. To appreciate the report’s findings, it is important to understand that “An indicator provides evidence that a certain condition exists or certain results have or have not been achieved.”
The vision portrayed in the report was that of an information system capable of being specific enough for researchers while being easily understood by the general public. The panel felt that such an information system was necessary “to guide education policymakers' decisions” and to help avoid the mistake of valuing only what can easily be measured.
The report stated: “If the panel's framework is implemented, it will step on some powerful toes.” No one's toes got stepped on.
Today in the Senate HELP Committee, headed by Senator Lamar Alexander, “Every Child College or Career Ready” is being considered as a replacement for “No Child Left Behind.” The findings of the Panel on Education Indicators are needed to set the record straight on test-reliant, outcome-based education policies before this important decision is made final.
Two basic messages were clear:
Statistical indicators are powerful tools for identifying problems and galvanizing public support to address them, but a limited set of indicators has the potential to mislead.
The search for a limited number of key education indicators is misguided. Because no limited set of indices can do justice to the complexity of the educational enterprise, a limited set would not only reflect an educational agenda, it would define an educational agenda.
The Panel rejected two indicator models:
A model focusing on a triumvirate of "educational inputs-educational processes-educational outputs" is viewed as flawed because it encourages the view that the education system produces "products."
An indicator model developed around “general goals-specific objectives-measurement” would largely be oriented toward policies subject to change. An indicator system organized around today's goals cannot respond to tomorrow's.
BELIEFS and ASSUMPTIONS were stated:
We should assess what we think is important, not settle for what we can easily measure.
High-quality, reliable indicators can improve the public’s understanding of education.
Guiding PRINCIPLES were outlined:
An effective indicator system must monitor education outcomes and processes wherever they occur, knowing that learning takes place in a context much broader than the school day.
An indicator system must respect the complexity of the educational process and the internal operations of schools and colleges, understanding that higher education and the nation's schools can no longer be permitted to go their separate ways.
It is critical that considerations of educational equity be designed into the indicator information system from the outset; they cannot work well as afterthoughts.
Major WARNINGS were issued:
An indicator system built solely around achievement tests will mislead the American people.
Indicators cannot, by themselves, identify causes or solutions and should not be used to draw conclusions without other evidence. Diagnosis is not the function of an indicator, just as it is not the function of a temperature gauge in an automobile.
When the stakes involved with an indicator system are high, involving perhaps financial rewards or state sanctions, the local pressure to produce the desired statistical outcomes is enormous. Under these conditions, corruptibility of indicators is a concern.
CONCERNS about indicator systems were expressed:
Our people and our policymakers must never lose sight of the human ends of education and the social nature of its institutions.
The integrity of data collection and analysis must be protected from political intrusion.
And the panel believed that indicators will fail if they do not fulfill their potential to inform the general public about the quality of the educational enterprise.
Major RECOMMENDATIONS of the Panel on Education Indicators:
The panel recommended creation of an issues-oriented indicator system developed around properly chosen major issues, with information to serve the needs of educators, policymakers, and the research, analysis, and business communities.
The information system needed to develop education indicators should be organized around major issue areas of enduring educational importance.
This “education indicator information system” would provide "clusters of indicators" around major issues and concepts affecting American schools, colleges, and students.
Indicators must be comprehensive, yet disciplined enough to be manageable. And they must be presented regularly to the public in interpretive reports that place data and analyses within the context of accessible written essays.
If it is concluded that the panel's recommendations add too much to a system that is already overburdened, the panel recommends major revisions in the scope of current assessments.
The panel suggested six “issue areas” as a place to start:
(1) Learner Outcomes: Acquisition of Knowledge, Skills, and Dispositions,
(2) Quality of Educational Institutions,
(3) Readiness for School,
(4) Societal Support for Learning,
(5) Education and Economic Productivity,
(6) Equity: Resources, Demographics, and Students at Risk.
Subsets of indicators for each area were suggested. For example, under Quality of Educational Institutions the panel detailed Learning Opportunities, Teachers, Condition of Teachers’ Work, Places of Purpose and Character, and School Resources.
[Figure: Example of a subset of educational indicators.]
Suggestions for organizing the NCES around the “issue areas” were given, and Congress was urged to support further improvements.
The NCES would be charged with developing biennial interpretive reports on each of the six issue areas defined by the panel so as to produce reports on three of the issue areas each year.
Each report was to carry the message that single indicators, even with perfect measurement, cast a very narrow beam of light on a very large picture.
NCES and the Department of Education would report meaningful data, including state-by-state comparisons, for each of the six issue areas.
And that’s the short course on this 120-page report from 1991. What has followed in the way of information on the quality of our education system and the “results” of our “reform” policies is our sad history of ignoring sound advice in favor of a political agenda. The outcome-based education reform theory, basing multiple decisions on standardized achievement test scores from all variations of the same type of tests, dominated law and classroom practices for over two decades.
No one stepped on the toes of the influential, and scores of children were left behind because of a narrowed, test-based curriculum produced by the singular mistake of making a diagnosis based on one indicator in a complex “enterprise.”
Outcome-based education reform failed, predictably.
The outcome-based experiment needs to end. That’s what we need to get straight before policy-makers set the course for another decade.