Brem, Sarah K. & Andrea J. Boyes (2000). Using critical thinking to conduct effective searches of online resources. Practical Assessment, Research & Evaluation, 7(7). Available online: http://edresearch.org/pare/getvn.asp?v=7&n=7

Using critical thinking to conduct effective searches of online resources

Sarah K. Brem, Arizona State University
Andrea J. Boyes, Jasper Creek Education, Inc.

While the number of online databases and other resources continues to rise, the quality and effectiveness of database searches do not. Over 80% of academic, public, and school libraries offer some form of Internet access (American Library Association, 2000); thousands of full-text electronic journals and serials are available online. However, Hertzberg & Rudner (1999) found that most searches are cursory and ineffective, and they provide extensive recommendations regarding the mechanics of searching. A firm grounding in the mechanics of searching is vital, but an effective search is also an exercise in inquiry and critical thinking. We begin searching a topic with certain questions; as we collect information, we form hypotheses about the topic. These hypotheses in turn guide further searching, and are elaborated, discarded, or modified as we learn more.

This document complements guidelines addressing the mechanics of online searching by considering how treating a search as an exercise in critical thinking can improve our use of online resources. We address the use of metacognition, hypothesis testing, and argumentation, providing illustrative examples and links to tools that can facilitate the process.

METACOGNITION

Metacognition is thinking about thinking (Butler & Winne, 1995): What do I know? What do I not know? Will I ever find an answer? Knowing what we don't know helps us focus our questions, and how long and hard we look for an answer depends on how likely it seems that we'll find one. In the context of online inquiry, it is important to assess how well we're equipped to conduct an inquiry, as well as what's out there to find.

Suppose we want to assess the wisdom of high stakes testing, but are unfamiliar with the issue. We simply enter the phrase "high stakes testing" into ERIC. Doing so retrieves 56 articles. If we quit there, we miss items that would be retrieved by combining terms such as "Accountability-" with "Test-Validity," or "Educational-Testing." These searches would produce an additional 178 articles, enriching our inquiry. At the other end of the spectrum, we may waste time looking for information that no one has, such as how a small subset of the population performs on a particular test. In short, we need to be able to assess the quality of our search.
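To make the point concrete, here is a minimal sketch in Python of why a single phrase search undercounts: the union of several descriptor combinations retrieves items the phrase alone would miss. The records and descriptor sets below are invented for illustration, not actual ERIC data or syntax.

# Illustrative records only -- each carries a set of assigned descriptors.
records = [
    {"id": 1, "descriptors": {"High Stakes Tests"}},
    {"id": 2, "descriptors": {"Accountability", "Test Validity"}},
    {"id": 3, "descriptors": {"Accountability", "Educational Testing"}},
]

def search(required):
    # Return records whose descriptors include every required term.
    return [r for r in records if required <= r["descriptors"]]

phrase_only = search({"High Stakes Tests"})
combined = (search({"Accountability", "Test Validity"})
            + search({"Accountability", "Educational Testing"}))

# The union of all three queries covers records the phrase search missed.
all_ids = {r["id"] for r in phrase_only + combined}
print(len(phrase_only), len(all_ids))   # 1 3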

Once they locate information, people often overlook inconsistencies or conflicts. Searches typically produce a loosely-connected cluster of articles of varying relevance and contrasting opinions. Inquiries are often weakened because disconnected knowledge allows conflicts between articles to go undetected; positions are not explicitly compared. In addition, inconsistencies within a text may be overlooked because readers tend to form a framework early on--we think we know what the article is about, and miss anything that doesn't fit our framework (Otero & Kintsch, 1992).

How can we improve metacognition in online searching?

Improving metacognition means improving our ability to monitor what we know and how we know it. Here are some ways to accomplish this:

Put the project aside for a brief time. Taking a break helps in several ways. When immersed in the process, people often feel they've learned more than they really have. Nelson and Dunlosky (1991) found that a short break improves the ability to accurately assess what's been learned. Also, returning to a problem repeatedly over time improves memory and comprehension, and allows us to take a slightly different perspective each time.

Talk it out. Chi, deLeeuw, Chiu & LaVancher (1994) found that keeping up a running dialogue with oneself is effective in highlighting inconsistencies and gaps in knowledge. Suppose we read a paper on testing and come across the claim that "passing cutoffs are set arbitrarily." As we attempt to tell ourselves what "arbitrary cutoffs" means, we realize we don't really know. We can then reread looking for this information, or ferret out additional sources.

Once we've collected a substantial body of knowledge, we can lay out the pros and cons to ourselves or a live audience. Concept mapping can also improve metacognition, and its use is discussed below.

Develop content knowledge. Brem & Rips (in press) found that people who are capable of critical thinking nevertheless fall for weaker arguments when they lack relevant information. Thus, to a certain extent, metacognition and an effective inquiry depend upon building expertise. Nevertheless, we can compensate in the early stages by taking advantage of the content support afforded by online resources.

Many databases provide thesauri--lists of alternative ways of accessing a content area. For example, the ERIC Wizard (http://ericae.net/scripts/ewiz/) uses a thesaurus for widening and narrowing searches. We can construct our own thesauri as well. Examining our initial 56 hits on "high stakes testing," we find other descriptors and keywords associated with these articles--some relevant (Accountability-), some not (Copyrights-); the most relevant become our thesaurus and guide additional searches.
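A home-made thesaurus can be kept as simply as a frequency count. The sketch below assumes, hypothetically, that each retrieved record carries a list of descriptors; the descriptors that recur across our initial hits become candidates for follow-up searches, while one-off terms drop out.

from collections import Counter

# Illustrative records only -- not actual ERIC entries or field names.
hits = [
    {"title": "Record A", "descriptors": ["Accountability", "Test Validity"]},
    {"title": "Record B", "descriptors": ["Accountability", "Educational Testing"]},
    {"title": "Record C", "descriptors": ["Copyrights"]},
]

counts = Counter(d for record in hits for d in record["descriptors"])
# Keep descriptors that recur; singletons such as "Copyrights" are discarded.
thesaurus = [term for term, n in counts.most_common() if n >= 2]
print(thesaurus)   # ['Accountability']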

When you don't know, find someone who does. We're often reluctant to admit ignorance, but if we've already tried the strategies above, it's likely that the remaining questions are good, hard questions. Reference librarians, instructors and colleagues can help in locating additional sources and perspectives. Expertise is also available on demand through ERIC Digests and ERIC FAQs (http://ericae.net/nav-lib.htm), which consolidate and synthesize existing information. These documents also help in developing a sense of the overall quality and quantity of evidence available about a topic. For the testing example, ERIC has ten FAQs related to assessment, and nine digests are retrieved by the phrase "high stakes testing." The syntheses of others cannot substitute for working through the issue; in fact, our preparation will help us read these documents with a critical eye and extract relevant information.

HYPOTHESIS TESTING

Searching the literature should be an exercise in hypothesis testing. We hold a certain position on an issue, or construct a position along the way. As we proceed, we need to test and modify this position. The problem is that hypothesis testing is often self-fulfilling. Once we form an opinion, we tend to focus on sources that support our position, and distort data to make the strongest case (Koehler, 1991). Fortunately, we can combat this tendency:

How can we improve hypothesis testing in online searching?

Actively pursue alternative hypotheses. We need to fight the tendency to consider only one side of a debate. One of the easiest ways to do this is simply to consider the opposite. Suppose we uncover evidence supporting high-stakes testing. Formulate the opposite opinion--high-stakes testing is a bad idea--and actively work to support this claim. Once we've made an earnest attempt to explore both claims, we can weigh the positions side-by-side.

Develop an evaluativist stance. People frequently fall into an absolutist or multiplist perspective. They see the world in black-and-white, with clear right and wrong answers (Absolutist), or as filled with myriad possibilities, all of which are more or less equally valid (Multiplist). In contrast, adopting an evaluative viewpoint involves recognizing that while there are no right answers, there are better and worse answers, and we can identify them by weighing the evidence. Evaluative approaches are associated with more effective reasoning (Kuhn, 1991), and the strategies described in the next section can aid in the process.

ARGUMENTATION

As we encounter different perspectives, we need a way to decide among them. Which position does the evidence best support? Which sources of evidence and opinions are most reliable? Once we adopt an evaluativist stance, argumentation strategies help us carry out our evaluation.

How can we improve argumentation in online searching?

Consider the structure and reliability of a source. For example, ERIC is a self-contained resource; all information accessed within ERIC meets ERIC standards. In contrast, Web sites often link multiple sources--some more reliable, some less reliable than the site we came from. We need to assess the reliability of every source before we include it in our analysis. Critical thinking guidelines (e.g., Harris, 1997; Kirk, 2000) provide criteria for assessing reliability.

Remember that even reputable sources are fallible. Even the most trusted resource is the work of many people who have different ideas regarding what an article is about and how to describe it. They can make typographical errors. These inconsistencies and mistakes can compromise an inquiry, so it's important to ask whether the results of a search are accurate and complete. The initial goal should be to collect as much relevant information as possible, as it is always possible to narrow the search later.

First, don't initially limit the terms of a search; a broad range of keywords and descriptors increases the likelihood of hitting on the terms chosen by the person entering the data. Second, don't limit which fields are searched. For example, ERIC has "major descriptors" and "minor descriptors;" searching on both maximizes the number of hits. Another example is limiting searches on an author's name to the author field. This seems reasonable, but it misses items with the author's name in the abstract or text; these often present the arguments of opponents and supporters, key pieces of the puzzle. Finally, consider searching on common misspellings, or truncating a term using wildcards to include variations.
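As a rough illustration of the last point, the sketch below shows one way truncation can behave; it is an assumption for illustration, not a description of any particular database's syntax, and the record fields are invented. A wildcarded term is expanded into a pattern and checked against every field of a record, not just one.

import re

def matches(term, text):
    # Translate a trailing wildcard ("assess*") into a regular expression
    # so one query covers "assess", "assessment", "assessing", and so on.
    pattern = re.escape(term).replace(r"\*", r"\w*")
    return re.search(r"\b" + pattern, text, flags=re.IGNORECASE) is not None

# Illustrative record; the field names are not actual ERIC fields.
record = {
    "title": "Accountability and school reform",
    "abstract": "Reviews evidence on high stakes assessment in public schools.",
}

# Search every field so a hit in the abstract is not missed.
print(any(matches("assess*", value) for value in record.values()))   # True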

Use systematic analysis for a comprehensive (though time-consuming) evaluation. Systematically analyzing an issue takes some time and effort, but generally provides the most complete and accurate evaluation. Systematic analysis involves identifying each claim and asking whether each piece of evidence really supports or refutes it. One popular aid to systematic analysis is using concept maps to visualize the relationship between claims and evidence.

For example, suppose we are searching to see whether we should accept the claim that testing improves student outcomes. We place this claim on the map (Figure 1). When our searches produce a piece of information that supports or attacks this claim, we place a brief description of the evidence on the map and draw lines connecting evidence to claims, choosing lines of different colors or styles to distinguish between supporting and refuting evidence. We also connect pieces of evidence when they attack one another or back each other up. Font size is one way to indicate source reliability (e.g., bigger means more reliable). A map can be made for each alternative viewpoint.

Figure 1. Beginnings of a concept map.
Larger fonts indicate stronger evidence; line color indicates the nature of the relationship.

In the resulting visual representation of the debate, a dense web of supporting evidence gives us a solid basis for accepting a claim, and a dense web of refutations provides us with reason to reject it. If the evidence seems evenly mixed, or if two alternatives produce equally strong maps, we can continue looking, or we may simply decide that there is no consensus on this issue. In addition, maps support metacognition; sparse regions and small text signal gaps and weaknesses in the argument, telling us where more information is needed.

Mapping software can facilitate the process (commercial and shareware packages are reviewed at http://www.ozemail.com.au/~caveman/Creative/Software/swindex.htm), but paper and pencil will do. If mapping proves too time-consuming, even a simple list of points for and against a claim is useful. For important decisions, though, mapping is preferred because it includes how claims and evidence are interconnected.
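If neither mapping software nor paper is at hand, even a few lines of code can hold the same structure. The sketch below is a minimal, hypothetical representation: claims and evidence are nodes, each link records support or attack, and a reliability weight stands in for the font-size convention; the entries themselves are invented for illustration, not findings from an actual search.

claims = {"testing-helps": "Testing improves student outcomes"}

evidence = {
    "e1": {"text": "Scores rose after testing was introduced", "reliability": 0.8},
    "e2": {"text": "Gains reflect a narrowed curriculum, not learning", "reliability": 0.6},
}

# (evidence_id, claim_id, relation) -- relation is "supports" or "attacks".
links = [
    ("e1", "testing-helps", "supports"),
    ("e2", "testing-helps", "attacks"),
]

def weigh(claim_id):
    # Reliability-weighted support minus attack for one claim.
    score = 0.0
    for src, dst, relation in links:
        if dst == claim_id:
            w = evidence[src]["reliability"]
            score += w if relation == "supports" else -w
    return score

print(round(weigh("testing-helps"), 2))   # 0.2: weakly positive, so keep looking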

Heuristics are useful when we need to make quick decisions, when there is not enough information for systematic analysis, or to complement systematic approaches. Heuristic evaluation involves making a calculated guess about the quality of an argument. It's usually easy, but not always accurate. For example, deciding to trust someone's argument because they hold a position at a prestigious university is a heuristic--we haven't actually taken the argument apart. It's often a good guess, but even Nobel prize winners have been known to hold a crackpot theory or two. The critical thinking guides mentioned above discuss signs of reliability, and incorporating these into concept maps can enrich our evaluation.

Perhaps the biggest challenge in using heuristics is remembering that a guess is only a guess. This is a metacognitive issue of remembering how we know what we know. Talking out inquiries will help highlight the assumptions underlying heuristics, and using a special color for heuristic contributions to concept maps keeps their status clear.
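One way to keep that status clear is to record heuristic judgments explicitly. The sketch below is an illustrative assumption, not a validated scoring scheme: each quick cue adds to a guess about a source's reliability, and the result is tagged as a heuristic so it is never mistaken for a systematic analysis.

def heuristic_score(source):
    # Quick cues stand in for taking the argument apart; the weights are invented.
    cues = {"peer_reviewed": 2, "cites_evidence": 1, "prestigious_affiliation": 1}
    score = sum(weight for cue, weight in cues.items() if source.get(cue))
    return {"score": score, "heuristic": True}   # flag the guess as a guess

print(heuristic_score({"peer_reviewed": True, "prestigious_affiliation": True}))
# {'score': 3, 'heuristic': True}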

CONCLUSION

Searching for information online is an exercise in critical thinking, and becoming an expert in critical inquiry takes practice. The guidelines provided above can help direct and channel this practice, and provide scaffolding while we gain expertise.

References

American Library Association (2000). LARC Fact Sheet No. 26: How many libraries are on the Internet? [Online] Available: http://www.ala.org/library/fact26.html

Brem, S. K., & Rips, L. J. (in press). Explanation and evidence in informal argument. Cognitive Science.

Butler, D., & Winne, P. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65, 245-281.

Chi, M. T. H., deLeeuw, N., Chiu, M., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.

Harris, R. (1997). Evaluating Internet research sources. [Online] Available: http://www.sccu.edu/faculty/R_Harris/evalu8it.htm

Hertzberg, S. & Rudner, L. (1999). The Quality of Researchers’ Searches of the ERIC Database. Education Policy Analysis Archives. [Online] Available: http://olam.ed.asu.edu/epaa/v7n25.html

Kirk, E. E. (2000). Evaluating information found on the Internet. [Online] Available: http://milton.mse.jhu.edu:8001/research/education/net.html

Koehler, D. (1991). Explanation, imagination, and confidence in judgment. Psychological Bulletin, 110, 499-519.

Kuhn, D. (1991). The skills of argument. Cambridge: Cambridge University Press.

Nelson, T. O. & Dunlosky, J. (1991). When people's judgments of learning (JOLs) are extremely accurate at predicting subsequent recall: The 'delayed-JOL effect.' Psychological Science, 2, 267-270.

Otero, J. & Kintsch, W. (1992). Failures to detect contradictions in a text: What readers believe versus what they read. Psychological Science, 3, 229-235.

Descriptors: *Critical Thinking; higher order; searching
