Practical Assessment, Research & Evaluation. A peer-reviewed electronic journal. ISSN 1531-7714
Copyright 1991, EdResearch.org.

Permission is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. Please notify the editor if an article is to be used in a newsletter.


Shavelson, Richard J., McDonnell, L., & Oakes, J. (1991). Steps in designing an indicator system. Practical Assessment, Research & Evaluation, 2(12). Retrieved August 18, 2006 from http://edresearch.org/pare/getvn.asp?v=2&n=12

Steps in Designing an Indicator System

Richard J. Shavelson, Lorraine M. McDonnell, & Jeannie Oakes
RAND

The development of even a single indicator is an iterative process that de Neufville (1975) estimates takes about ten years to complete. The process is time-consuming because indicators are developed in a policy context; thus, their interpretation goes beyond the traditional canons of science and enters the realm of politics (cf. de Neufville, 1978-79). With this caveat, we can enumerate some steps to identify an initial set of indicators and to develop alternative indicator systems.

CONCEPTUALIZE POTENTIAL INDICATORS

A reasonable first step is to determine which components (constructs) and their indicators adequately specify a comprehensive monitoring system. In our National Science Foundation (NSF) project, based on an extensive review of the literature on social indicators and education research, we formulated a model of the education system and the potential indicators for measuring each component (a code sketch of this structure follows the list below). The model contains

  • inputs (the human and financial resources available to the education system),
  • processes (a set of nested systems that create the educational environment that children experience in school, e.g., school organization, curriculum quality), and
  • outputs (the consequences of schooling for students from different backgrounds).
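
To make the model concrete, here is a minimal sketch of how the component/construct/indicator hierarchy might be represented. The three component names come from the list above; the class names, field names, and example constructs and indicators are illustrative assumptions, not artifacts of the NSF project.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A single measurable statistic tied to a construct (hypothetical)."""
    name: str
    data_source: str  # e.g., an existing survey or a new data collection

@dataclass
class Construct:
    """An enabling condition or a factor directly linked to outcomes."""
    name: str
    indicators: list = field(default_factory=list)

@dataclass
class Component:
    """One of the model's three components: inputs, processes, or outputs."""
    name: str
    constructs: list = field(default_factory=list)

# Illustrative instantiation of the inputs/processes/outputs model; the
# constructs and indicator shown are examples, not the project's full pool.
model = [
    Component("inputs", [
        Construct("teacher quality", [
            Indicator("percent teaching in field of certification",
                      "existing staffing survey"),
        ]),
        Construct("fiscal resources"),
    ]),
    Component("processes", [Construct("school organization"),
                            Construct("curriculum quality")]),
    Component("outputs", [Construct("student achievement"),
                          Construct("participation and attainment")]),
]
```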

For each of these components, we identified a large potential pool of constructs for which indicators might be developed. Each construct appeared either to be an important enabling condition (e.g., moderating the link between an input or process indicator and an outcome indicator) or to have a direct link to the desired outcomes of mathematics and science education.

REFINE THE INDICATOR POOL

No indicator system could accommodate all of the potentially important indicators identified by such a comprehensive process and still remain manageable. The second step, then, is to develop a valid, useful, and parsimonious set of indicators. The purposes the indicator system serves (e.g., description of trends, information for accountability purposes) constitute one criterion for reducing the initial pool of potential indicators. System designers need to consult potential users to determine what those purposes should be, because the purposes will dictate the type of information that must be collected and the level to which it should be disaggregated.

We applied eight criteria derived from our working definition of indicators (a sketch of applying them as a screen follows the list below). We assumed that indicators should:

  1. reflect the central features of mathematics and science education, 
  2. provide information pertinent to current or potential problems, 
  3. measure factors that policy can influence, 
  4. measure observed behavior rather than perceptions, 
  5. be reliable and valid, 
  6. provide analytical links, 
  7. be feasible to implement, and 
  8. address a broad range of audiences.
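
As a sketch of how this screening might be mechanized, each criterion can be treated as a yes/no judgment recorded per candidate, with a candidate surviving only if all eight hold. The criterion keys mirror the list above; the candidate names and judgments are invented for illustration.

```python
# The eight screening criteria, keyed to the numbered list above.
CRITERIA = [
    "central_feature", "problem_relevant", "policy_amenable",
    "behavior_not_perception", "reliable_and_valid", "analytic_links",
    "feasible", "broad_audience",
]

# Invented judgments: True means the criterion is judged satisfied.
candidates = {
    "course enrollment counts": dict.fromkeys(CRITERIA, True),
    "teacher morale self-report": {**dict.fromkeys(CRITERIA, True),
                                   "behavior_not_perception": False},
}

def screen(pool):
    """Partition candidates into the retained core and a research agenda."""
    kept, deferred = [], []
    for name, judgments in pool.items():
        (kept if all(judgments[c] for c in CRITERIA) else deferred).append(name)
    return kept, deferred

kept, deferred = screen(candidates)
print("retain:", kept)                    # passes all eight criteria
print("developmental agenda:", deferred)  # fails at least one criterion
```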

These criteria were used to select indicators that reflect the major components of schooling, are reliable and valid (to some minimal extent), and meet basic standards of usefulness to the policy community. These measures then became the core around which different indicator system options were generated.

Applying these criteria may produce some casualties. For example, some highly desirable indicators may have to be eliminated because they cannot be measured reliably. This exercise suggests that some potential indicators, while not yet sufficiently developed to be included in an indicator system, are critical to a better understanding of mathematics and science education and should become part of a developmental research agenda. Once those indicators meet the criteria, they can be incorporated into the system.

DESIGN ALTERNATIVE INDICATOR SYSTEM OPTIONS

Once a model of the education system is defined and indicators are selected, the next step is to identify alternative data collection strategies that could be used to build the system. In the NSF project, we surveyed existing databases to determine what information was already being collected, and we identified areas where new indicator data were needed. In addition, we costed out each data point in an "ideal" indicator system to estimate costs for implementing alternative indicator systems. We were thereby able to generate alternatives, assess their likely utility, and provide cost estimates for each. We identified five generic options that range from simply relying on whatever data are available at the time a report is produced or policy issue is considered (status quo) to developing and fielding a comprehensive data collection system that spans the major components of education (independent).
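
The costing exercise can be sketched as summing per-data-point collection costs over whichever data points an option fields. The option names "status quo" and "independent" come from the text; the intermediate option, the data points, and the dollar figures are invented placeholders, not the project's estimates.

```python
# Invented per-data-point annual collection costs (placeholder figures).
cost_per_point = {
    "achievement scores": 400_000,
    "teacher survey": 250_000,
    "curriculum audit": 300_000,
    "school organization survey": 150_000,
}

# Each option is defined by the new data points it fields: "status quo"
# fields nothing new, "independent" fields everything.
options = {
    "status quo": [],
    "achievement-focused": ["achievement scores"],
    "independent": list(cost_per_point),
}

for name, points in options.items():
    total = sum(cost_per_point[p] for p in points)
    print(f"{name}: {len(points)} new data points, ${total:,} per year")
```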

EVALUATE THE OPTIONS

If indicator system alternatives are to be considered seriously by educators and policymakers, they need to be evaluated on a number of criteria. We evaluated each option according to its utility, feasibility, and cost. We asked whether each option could (a sketch of tabulating the results follows the list):

  1. describe national trends (e.g., in achievement, teacher quality, and curriculum quality), 
  2. describe those trends state by state, 
  3. identify problems emerging on the horizon, 
  4. link teacher and curriculum quality to achievement, thus enabling policymakers to target reforms, and 
  5. enable the sponsor to provide leadership by monitoring curricular and achievement areas that are currently ignored.
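
One way to summarize such an evaluation is an option-by-criterion matrix like the sketch below. The five criteria follow the list above and the two option names come from the text; the ratings themselves are placeholders showing the form of the comparison, not the project's findings.

```python
# The five evaluation criteria, abbreviated from the list above.
CRITERIA = ["natl trends", "by state", "emerging", "links", "leadership"]

# Placeholder ratings: 0 = no, 1 = partially, 2 = yes.
ratings = {
    "status quo": [1, 0, 0, 0, 0],
    "independent": [2, 2, 1, 2, 2],
}

# Print a compact comparison table; a fuller evaluation would also weigh
# feasibility and the cost estimates from the previous step.
print("option".ljust(14) + "".join(c.ljust(13) for c in CRITERIA))
for option, scores in ratings.items():
    print(option.ljust(14) + "".join(str(s).ljust(13) for s in scores))
```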

BEGIN DEVELOPING OR REFINING INDIVIDUAL INDICATORS

After one of the alternative indicator systems is selected, the process of developing or refining the individual indicators begins with an evaluation of the technical adequacy and usefulness of existing indicators.

The advantages and disadvantages of each major potential indicator in the model must be evaluated, using currently available data and analyses. Systematically synthesizing and contrasting information from a variety of databases will allow the usefulness of current indicators to be assessed and will lay the groundwork for developing and implementing new indicators.

Many data collection efforts and analyses will fall short of indicator requirements. Some of the most important potential indicators may not be measured at all, and well-known difficulties with existing datasets are likely to constrain the analyses that indicators require. In many cases, sample sizes or designs will not be adequate for disaggregating data by groups of interest; some will not permit relational analyses among various components of the system. It is important to identify the shortcomings in existing data and analyses, and where these gaps and inconsistencies exist, to specify what work is needed to obtain reliable, valid, and useful indicators.
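
A minimal sketch of one such adequacy check follows: testing whether a dataset's sample can support disaggregation by a subgroup of interest, using a minimum-cell-size rule. The threshold of 30, the field name, and the toy records are assumptions for illustration, not a reporting standard.

```python
from collections import Counter

def supports_disaggregation(records, group_key, min_cell=30):
    """Flag subgroups whose samples are too small to report reliably.

    min_cell=30 is an illustrative reporting threshold, not a standard.
    """
    counts = Counter(r[group_key] for r in records)
    return {group: n >= min_cell for group, n in counts.items()}

# Toy sample: 100 records from one subgroup, 12 from another.
records = ([{"district_type": "urban"}] * 100
           + [{"district_type": "rural"}] * 12)
print(supports_disaggregation(records, "district_type"))
# {'urban': True, 'rural': False} -> rural results cannot be disaggregated
```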

SOME IMPLICATIONS

In reviewing research that might help us identify the key components and indicators of mathematics and science education, we became acutely aware of how little we know about schooling and how primitive much current measurement technology is. For example, multiple-choice tests of verbal and quantitative ability and of achievement in specific subject matters are well understood, yet there is overwhelming evidence that these tests do not adequately reflect the erroneous "mental models" many students (and adults) have of everyday phenomena such as electricity, gravity, and force. And, to date, no technology has been developed that would enable large-scale testing of this qualitative understanding. Each component of an indicator system may suffer from similar shortcomings.

It is therefore necessary to identify a research agenda directed toward improving an indicator system. This agenda should become a research component of the indicator system itself that enables researchers to piggyback on monitoring activities and test alternatives to indicators currently in use. With increasing confidence in research findings, new indicator technologies can be incorporated into the system.

REFERENCES

de Neufville, J. I. (1975). Social indicators and public policy: Interactive processes of design and application. New York: Elsevier Scientific Publishing Company.

de Neufville, J. I. (1978-79). Validating policy indicators. Policy Sciences, 10, 171-188.

Shavelson, R. J., McDonnell, L. M., & Oakes, J. (Eds.). (1989). Indicators for monitoring mathematics and science education: A sourcebook. Santa Monica, CA: RAND Corporation. This article was adapted from material appearing in the sourcebook.

Descriptors: *Data Collection; Educational Assessment; Educational Policy; Elementary Secondary Education; *Evaluation Criteria; Evaluation Methods; Formative Evaluation; *Management Information Systems; *Mathematics Education; Research Methodology; Research Needs; *S
