
Assessing Student Performance in Science. ERIC CSMEE Digest.

Haury, David L.

With increased student achievement in science being a national goal (AMERICA 2000, 1991), do we know what students are learning? Given the emerging national standards in science education (National Committee on Science Education Standards and Assessment [NCSESA], 1993), how will we determine whether students measure up to the standards? Radical changes are underway for school science curricula (Rutherford & Ahlgren, 1990; National Science Teachers Association [NSTA], 1992), but are complementary changes in assessment in progress? States are developing student assessments based on science frameworks or guides (Blank & Engler, 1992; Davis & Armstrong, 1991), but do we know how to assess student performance in all the domains of interest and concern? Assessment of student performance is emerging as a crucial ingredient in the recipe for ongoing improvement of school science. As programmatic change occurs, student assessment practices must be aligned with curricular aims, instructional practices, and performance standards. In short, "What we teach must be valued; what we test is what must be taught" (Iris Carl as quoted in McKinney, 1993).

FUNCTIONS OF ASSESSMENT

Before considering alternative approaches to assessing student performance, it is important to consider the various functions that assessment serves. Reasons for assessing student performance have been described in both specific terms (Kober, 1993, pp. 57-58; Raizen & Kaser, 1989) and general terms (Meng & Doran, 1993), with distinctions being made between assessment for reporting purposes and assessment for diagnosis and program evaluation. Eventually, as National Science Education Standards are developed, guidelines for assessing both student achievement and program effectiveness will follow (NCSESA, 1993).

In this digest, the focus is on assessment in the service of instruction, for helping students, teachers, and parents monitor learning. Assessment in this context must be unobtrusive and tailored to measure specific learning outcomes, not necessarily norm-referenced and generalizable across schools, states, and countries (Haertel, 1991). What are the issues and methods of assessment in the context of classroom instruction?

NEW FORMS OF ASSESSMENT

Jorgensen (1993) cited the plethora of new labels for assessment strategies, in contrast to the scarcity of expertise, procedures, and guidelines, as evidence of a paradigm shift in assessment. There is an urgent need for innovative forms of assessment that provide valid indicators of student understanding and other learning outcomes, but opinion is widely divided on how to proceed. Even though teachers know that the students who are most knowledgeable in science are not necessarily the ones who get the highest grades, most continue to depend on multiple-choice test scores to determine grades (Baron, 1990). Furthermore, there is evidence that standardized testing strongly influences instructional planning among teachers (Herman & Golan, 1992). So, "one thing must be made clear from the outset: assessment encompasses more than testing, and much more than standardized testing. It includes such techniques as systematic teacher observation and so-called 'authentic' assessment, in which the tasks assessed more closely parallel the learning activities and outcomes that are desirable in the science classroom" (Kober, 1993).

Among the new labels being used today is performance-based assessment.
Though there are a variety of definitions, it is clear that performance-based assessment does not include multiple-choice testing or related paper-and-pencil approaches. According to Jorgensen (1993), "performance-based assessment requires that the student complete, demonstrate, or perform the actual behavior of interest. There is a minimal degree of inference involved." Baron (1991) has provided a list of characteristics of performance assessment tasks, with a notable blending of content with process, and of major concepts with specific problems. As Kober (1993) has mentioned, "in this type of assessment, students may work together or separately, using the equipment, materials, and procedures they would use in good, hands-on science instruction."

GUIDELINES FOR SCHOOLS AND CLASSROOMS

Today's assessment strategies must be aligned with the emerging vision of "science for all," with all students engaged in science experiences that "teach the nature and process of science as well as the subject matter" (NCSESA, 1993). A first step in considering assessment methods, then, is to become familiar with the wide range of student outcomes being endorsed by science teachers (NSTA, 1992), scientists (Rutherford & Ahlgren, 1990), and the National Research Council (NCSESA, 1993). It is also necessary to consider the diverse needs, interests, and abilities of students, particularly girls, minorities, students with disabilities, and those with limited English proficiency. "Assessment should be context dependent; reflect the nature of the subject matter; and address the unique cultural aspects of class, school, and community among culturally diverse populations" (Tippins & Dana, 1992). As White and Gunstone (1992) have stated, "a limited range of tests promotes limited forms of understanding."

As an example of how alternative assessment strategies can enable students to show what they know in a variety of knowledge domains, consider the approach taken in one urban school (Dana, Lorsbach, Hook, & Briscoe, 1991). Concept mapping and journal writing techniques are used to document conceptual change among students, and student presentation and interview techniques allow learners to communicate their understanding in ways that rely less on reading and writing skills. For additional samples of techniques being used, see the Appendix of Kulm and Malcom (1991).

Among the promising alternative assessment techniques are the use of scoring rubrics to monitor skill development and the use of portfolios to assemble evidence of skill attainment. Scoring rubrics can be used to clarify for both students and teachers how valued skills are being measured (Nott, Reeve, & Reeve, 1992). Portfolios documenting student accomplishments can take a variety of forms, with student products, collected data, or other evidence of performance serving as information for self, peer, or teacher evaluation (Collins, 1992).

It should be acknowledged that there are drawbacks to performance assessments: staff development is required, performance assessments take more time than conventional methods, standardization is difficult, and the results may not be generalizable from one context to another. These problems reinforce the importance of practitioners, assessment specialists, and assessment "consumers" being clear about the purposes of specific assessment activities. No one approach to assessment will best serve all functions, knowledge domains, and learners.

RESOURCES
Educational Leadership, 49(8). (This special issue of May 1992 includes sections on using performance assessment and using portfolios. A synthesis of what research tells us about good assessment is also included. One article, "What we've learned about assessing hands-on science," pp. 20-25, addresses specific concerns related to assessing inquiry-oriented teaching outcomes.)

Hein, G. (Ed.). (1990). The assessment of hands-on elementary science programs. Grand Forks, ND: Center for Teaching and Learning, University of North Dakota. ED 327 379 (This document examines a wide variety of issues related to assessment, including a section on new approaches to science assessment.)

Herman, J. L., Aschbacher, P. R., & Winters, L. (1992). A practical guide to alternative assessment. Alexandria, VA: Association for Supervision and Curriculum Development. (This resource addresses several key assessment issues and provides concrete guidelines for linking assessment and instruction and for assessment design.)

Kulm, G., & Malcom, S. M. (Eds.). (1991). Science assessment in the service of reform. Washington, DC: American Association for the Advancement of Science. ED 342 652 (This compilation of contributed chapters treats policy issues and the relationships between assessment and curriculum reform, and between assessment and instruction. Several practical examples from the field are also included.)

Meng, E., & Doran, R. L. (1993). Improving instruction and learning through evaluation: Elementary school science. Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education. ED 359 066 (This is a practical guide for teachers and anyone else involved in assessing student performance in elementary school science. Separate sections focus on assessing science process skills, concepts, and problem solving.)

Raizen, S., & others. (1990). Assessment in science: The middle years. Andover, MA: The NETWORK, Inc. ED 347 045 (This document is part of a set of reports that focus on science and mathematics education for young adolescents. Practical guidelines for assessment are provided for policymakers and practitioners on the basis of research findings and recommendations gleaned from the literature. New directions in assessment are discussed.)

Science Scope, 15(6). (This issue of March 1992 includes a special supplement on alternative assessment methods in science, with sections on performance-based assessment, the use of portfolios, group assessments, concept mapping, and scoring rubrics.)

Semple, B. M. (1992). Performance assessment: An international experiment. Princeton, NJ: Educational Testing Service. (This document describes an attempt to supplement the pencil-and-paper approach of the International Assessment of Educational Progress in mathematics and science with a performance component. Both the results of the experiment and full descriptions of the performance tasks are provided, including tasks that focus on problem solving, the nature of science, and physical science concepts.)

White, R., & Gunstone, R. (1992). Probing understanding. New York: Falmer Press. (A practical but theoretically sound guide to alternative approaches to assessing understanding through application of nine types of probes: concept mapping, prediction-observation-explanation, interviews about instances and events, interviews about concepts, drawings, fortune lines, relational diagrams, word associations, and question production.)

REFERENCES
AMERICA 2000: An education strategy. (1991). Washington, DC: U.S. Department of Education. ED 332 380

Baron, J. B. (1990). How science is tested and taught in elementary school science classrooms: A study of classroom observations and interviews. Paper presented at the annual meeting of the American Educational Research Association, Boston, April.

Baron, J. B. (1991). Performance assessment: Blurring the edges of assessment, curriculum, and instruction. In G. Kulm & S. M. Malcom (Eds.), Science assessment in the service of reform (pp. 247-266). Washington, DC: American Association for the Advancement of Science. ED 342 652

Blank, R. K., & Engler, P. (1992). Has science and mathematics education improved since "A nation at risk"? Washington, DC: Council of Chief State School Officers.

Collins, A. (1992). Portfolios: Questions for design. Science Scope, 15(6), 25-27.

Dana, T. M., Lorsbach, A. W., Hook, K., & Briscoe, C. (1991). Students showing what they know: A look at alternative assessments. In G. Kulm & S. M. Malcom (Eds.), Science assessment in the service of reform (pp. 331-337). Washington, DC: American Association for the Advancement of Science.

Davis, A., & Armstrong, J. (1991). State initiatives in assessing science education. In G. Kulm & S. M. Malcom (Eds.), Science assessment in the service of reform (pp. 127-147). Washington, DC: American Association for the Advancement of Science. ED 342 652

Haertel, E. H. (1991). Form and function in assessing science education. In G. Kulm & S. M. Malcom (Eds.), Science assessment in the service of reform (pp. 233-245). Washington, DC: American Association for the Advancement of Science. ED 342 652

Herman, J. L., & Golan, S. (1992). Effects of standardized testing on teachers and learning--Another look (CSE Technical Report 334). Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing, University of California. ED 341 738

Jorgensen, M. (1993). Assessing habits of mind: Performance-based assessment in science and mathematics. Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education.

Kober, N. (1993). What we know about science teaching and learning. Washington, DC: Council for Educational Development and Research.

McKinney, K. (1993). Improving math and science teaching. Washington, DC: Office of Educational Research and Improvement, U.S. Department of Education. SE 053 492

Meng, E., & Doran, R. L. (1993). Improving instruction and learning through evaluation: Elementary school science. Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education. ED 359 066

National Committee on Science Education Standards and Assessment. (1993). National science education standards: An enhanced sampler. Washington, DC: National Research Council. SE 053 554

National Science Teachers Association. (1992). The content core. Volume 1 in Scope, sequence and coordination of secondary school science. Washington, DC: Author.

Nott, L., Reeve, C., & Reeve, R. (1992). Scoring rubrics: An assessment option. Science Scope, 15(6), 44-45.

Raizen, S., & Kaser, J. (1989). Assessing science learning in elementary school: Why? What? and How? Phi Delta Kappan, 70(9).

Rutherford, F. J., & Ahlgren, A. (1990). Science for all Americans. New York: Oxford University Press.

Tippins, D. J., & Dana, N. F. (1992). Culturally relevant alternative assessment. Science Scope, 15(6), 50-53.

-----

David Haury is an Associate Professor of Science Education at The Ohio State University.


Title: Assessing Student Performance in Science. ERIC CSMEE Digest.
Author: Haury, David L.
Publication Year: Jul 1993
Document Identifier: ERIC Document Reproduction Service No. ED 359 068
Document Type: ERIC Product (071); ERIC Digests (Selected) (073)
Target Audience: Teachers; Administrators; Practitioners

Descriptors: Competency Based Education; Educational Change; Elementary School Science; Elementary Secondary Education; Evaluation Criteria; High Schools; Portfolios (Background Materials); Science Curriculum; *Science Education; Secondary School Science; *Student Evaluation

Identifiers: Alternative Assessment; Concept Mapping; *Performance Based Evaluation; Performance Based Objectives; Science Process Skills


