

Shannon, David M., Johnson, Todd E., Searcy, Shelby, & Lott, Alan (2002). Using electronic surveys: Advice from survey professionals. Practical Assessment, Research & Evaluation, 8(1). Available online: http://edresearch.org/pare/getvn.asp?v=8&n=1

Using Electronic Surveys: Advice from Survey Professionals

David M. Shannon, Auburn University
Todd E. Johnson, Auburn University
Shelby Searcy, Huntington College
Alan Lott, Auburn University

Abstract

The study reports the perceptions and recommendations of sixty-two experienced survey researchers from the American Educational Research Association regarding the use of electronic surveys. The most positive aspects cited were the reduction of costs (i.e., postage, phone charges), the use of electronic mail for pre-notification or follow-up purposes, and the compatibility of the data with existing software programs. These professionals noted limitations of electronic surveys pertaining to the restricted sampling frame as well as issues of confidentiality, privacy, and the credibility of the sample. They advised that electronic surveys be designed with the varied technological backgrounds and capabilities of respondents in mind, follow sound principles of survey construction, and be administered to pre-notified, targeted populations with published e-mail addresses.

There has been an extensive amount of research on principles of survey design and on factors influencing response to mail and telephone surveys (Babbie, 1990; Baruch, 1999; Dillman, 1978; Fowler, 1993; Heberlein & Baumgartner, 1978; Lavrakas, 1993; Linsky, 1975; Sudman & Bradburn, 1982; Yu & Cooper, 1983). From these efforts, we have learned important considerations in designing survey instruments, including the importance of the first question, the grouping and sequencing of questions, establishing a respondent-pleasing vertical flow of items, and providing clear, specific directions. We have also learned the importance of implementation components such as pre-notification of respondents, personalized cover letters, incentives, return postage, and multiple contacts to reach respondents and generate higher response rates.

The Internet has greatly affected the field of survey research as the number of electronically administered surveys continues to grow. Unlike for traditional mail and telephone surveys, it is not yet certain which principles should guide the construction and implementation of electronic surveys. Preliminary efforts suggest that many of the same principles apply (Cook, Heath & Thompson, 2000; Dillman, 2000; Dillman & Bowker, 2000; Dillman, Tortora, & Bowker, 1998; Schaeffer & Dillman, 1998; Shannon & Bradshaw, 2002). Additional research is needed to refine these principles and to apply them most effectively to the design and implementation of electronic surveys, especially given the wide variety of formats in use. We discuss three common forms of electronic surveys below.

Electronic surveys have taken a variety of forms, from simple e-mail surveys to sophisticated web survey systems. An early form was the disk-by-mail survey (Couper & Nichols, 1998), in which a disk containing the survey is mailed to respondents, who are instructed to open the file, complete the survey, and mail the disk back to the researcher. Bowers (1999) describes these surveys as capable of guiding the respondent interactively through the survey and of including very complex skip patterns or rotation logic. This approach can offer many innovative features beyond traditional mail and telephone surveys, but it carries programming and distribution costs and is restricted by the technological capacity of the respondent's computer. In addition, Bowers (1999) warns that respondents may be reluctant to download files for fear that they contain viruses.

A second type of electronic survey is the e-mail survey. These surveys are typically contained within an e-mail message or sent as an attached file (Bradley, 1999; Ramos, Sedivi, & Sweet, 1998; Sproull, 1986). They are fast and require little technological skill to develop, as they are displayed in a basic text format. Respondents are asked to reply to the e-mail and indicate their responses in the reply message or in the attached file. These surveys require little technological skill on the part of the respondent, but researchers (Couper, Blair, & Triplett, 1997; Schaeffer & Dillman, 1998; Tse et al., 1995) have found that respondents experience some difficulties, such as forgetting that they must reply to the message before answering the survey questions or having trouble converting an attachment. Additionally, these surveys raise concerns regarding privacy and anonymity, as the respondent's e-mail address is generally included with his or her responses.

A third type of electronic survey is posted on the World Wide Web (WWW). Respondents are usually sent an e-mail message with a link to the URL for the survey. Web-based surveys can be designed to include a wide variety of response options (e.g., check boxes, Likert scales, pull-down menus) as well as skip patterns, graphics, and sound (Bowers, 1999; Bradley, 1999; Dillman, 2000; Watt, 1997). These surveys also offer great advantages for data analysis, as responses can easily be downloaded into a spreadsheet or statistical analysis program, though respondents may be concerned about privacy as their responses are transmitted over the WWW. Of the three types of electronic surveys discussed here, web-based surveys require the greatest technological knowledge and skill from both researchers and respondents.
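
To make the data-analysis advantage concrete, here is a minimal sketch, assuming the web survey system exports responses as a CSV file with one numeric column per item (the filename and the 1-4 coding are hypothetical, not a feature of any particular system):

```python
import pandas as pd

# Hypothetical export from a web survey system: one row per respondent,
# one column per item, responses coded numerically on a 1-4 scale.
responses = pd.read_csv("survey_responses.csv")

# Item-level summary: mean, standard deviation, and the share of
# respondents agreeing (i.e., answering 3 or 4) for each item.
summary = pd.DataFrame({
    "mean": responses.mean(),
    "sd": responses.std(),
    "pct_agree": (responses >= 3).mean() * 100,
})
print(summary.round(2))
```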

Because of the technological knowledge and skill required to develop electronic surveys, especially web-based surveys, leadership in their development has come largely from technology specialists or individuals with a background in technology; survey methodology professionals have not been the driving force. The challenge for survey methodologists is to tailor sound principles of survey design and implementation to electronic surveys (Dillman, 2000; Dillman & Bowker, 2000). To harness the potential of the Internet for designing and implementing surveys, professionals knowledgeable about survey methodology must provide a more visible presence. Are survey professionals ready to accept and use electronic surveys as part of their methodological repertoire? Before electronic surveys are widely accepted and used on a regular basis, input must be gathered from survey professionals.

Purpose

The purpose of this study was to gather the perceptions and recommendations of survey researchers regarding the use of electronic surveys. These researchers were asked to respond to specific issues pertaining to the use of electronic surveys. In addition, they were asked to describe conditions under which e-mail or web-based surveys would be most appropriate, define appropriate samples, identify major weaknesses, and offer recommendations for other researchers who plan to use e-mail or the Internet in their survey research.

Methods

Instrumentation

The survey instrument consisted of three sections. First, a four-point Likert-scale instrument was developed to address issues regarding the use of electronic mail or the Internet in survey research. These items were written to reflect issues raised in the literature discussed earlier, such as sampling frame, privacy, technology, and response rate. The second section consisted of four open-ended questions soliciting feedback regarding the uses of electronic surveys in survey research, the limitations of such surveys, the types of samples for which such surveys would be appropriate, and suggestions for those interested in using electronic mail or the Internet for survey research. Finally, the third section gathered information about the participants. Items in this section addressed participants' backgrounds and confidence in using technology (i.e., electronic mail and the Internet), their current professional positions, and their involvement in their profession.

Procedures

Participants were identified from a published membership list of the Survey Research SIG of the American Educational Research Association (AERA). This list was obtained from, and used with the permission of, the director of the Survey Research Special Interest Group. It included 163 members for whom complete mailing information was available. Each subject received a packet containing the survey instrument and a postcard. To preserve anonymity, subjects were asked to return the postcard separately, indicating whether they had responded to the survey. A total of 63 responses were received. An additional 35 surveys were returned as undeliverable, as members may have changed their place of employment or retired. After subtracting these 35 from the overall sample, a response rate of 49% was obtained (63 out of 128). A total of 64 postcards were received; of these, 56 indicated that the survey had been returned and 8 that it had not. These eight non-respondents gave three reasons: three indicated that they were too busy, three that they were no longer active in survey research, and two that they were retired.
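
The response-rate arithmetic above can be checked directly (a trivial sketch using only the figures reported in this section):

```python
mailed = 163         # members with complete mailing information
undeliverable = 35   # packets returned as undeliverable
returned = 63        # completed surveys received

eligible = mailed - undeliverable     # 128 deliverable packets
response_rate = returned / eligible   # 63 / 128
print(f"{response_rate:.0%}")         # prints 49%
```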

Sample

The majority of respondents (53%) were employed at a college or university. An additional 13% were working as consultants, while 10% worked for testing organizations, 8% for school systems, and 8% for research and development organizations. The remaining 8% were employed by state or federal agencies or private industry. Respondents reported a wide range of years in their current position, from 1 to 30, with an average of 13.23 years. Years in their profession ranged from 1 to 45, with an average of 17.7. Membership in AERA ranged from 1 to 35 years, with an average of 12.1. Forty-three percent of the respondents identified AERA as their primary professional organization and had been AERA members for an average of 15.2 years.

Results

Use of Electronic Mail and the Internet

Overall, participants reported frequent use of and a high level of confidence in using electronic mail and the Internet. Ninety percent reported using e-mail every day, and 57% described themselves as everyday Internet users, with 78% reporting use of the Internet at least five days per week. Participants were also asked to describe their confidence in using electronic mail and the Internet. In general, they reported being very confident in their ability to use e-mail (e.g., composing and responding to messages, sending messages to more than one person, and sending attachments). They were also confident in their ability to use the Internet to find a web address, use a search engine, and download information. The only area in which participants expressed concern was creating and maintaining a web page.

General Perceptions of Electronic Surveys

Each participant was asked to respond to 33 Likert-scale items pertaining to the use of email or web-based surveys.  Six of these items were reverse-coded so that a higher score would consistently reflect a more favorable attitude toward the use of email or web-based surveys.  Internal consistency reliability (Cronbach’s alpha) was estimated at .83. Overall, participants responded favorably to statements regarding the use of email or web-based surveys.  Table 1 provides a summary of means, standard deviations, and frequencies for the survey items.  These items are displayed in descending order by mean response.

Table 1
Summary of Perceptions of Electronic Surveys

Survey Item | N | Mean (SD) | SD/D n (%) | SA/A n (%)
Electronic surveys reduce research costs (e.g., postage, phone). | 60 | 3.42 (.56) | 2 (3.3%) | 58 (96.7%)
Respondents to electronic surveys would be more comfortable with technology than non-respondents. | 62 | 3.32 (.59) | 4 (6.5%) | 58 (93.5%)
Electronic mail messages would be an effective way to pre-notify individuals regarding a survey they are about to receive. | 61 | 3.28 (.61) | 3 (4.9%) | 58 (95.1%)
Researchers would use electronic surveys if they yielded data ready to be imported into a statistical analysis program such as SAS or SPSS. | 59 | 3.12 (.70) | 9 (15.3%) | 50 (84.7%)
Electronic mail messages would be effective as a follow-up technique to encourage response to a mail survey. | 61 | 3.12 (.61) | 6 (9.8%) | 55 (90.2%)
I have considered the use of electronic mail or Internet in my research. | 61 | 3.03 (.60) | 8 (13.1%) | 53 (86.9%)
I would respond to a web-based survey if I simply had to click on the URL address the researcher placed in an e-mail message. | 61 | 3.02 (.62) | 9 (14.7%) | 52 (85.3%)
Electronic surveys will be returned more rapidly than traditional pencil-and-paper surveys. | 61 | 2.98 (.76) | 12 (19.7%) | 49 (80.3%)
Individuals would respond to a web-based survey if they simply had to click on the URL address the researcher placed in an e-mail message. | 59 | 2.98 (.51) | 8 (13.6%) | 51 (86.4%)
Electronic surveys reduce the time and labor required to prepare data for analysis. | 59 | 2.97 (.69) | 13 (22.0%) | 46 (78.0%)
Electronic surveys eliminate the need to transcribe responses to open-ended questions. | 60 | 2.95 (.77) | 15 (25.0%) | 45 (75.0%)
Electronic surveys should allow for text editing capabilities. | 57 | 2.95 (.72) | 12 (21.1%) | 45 (78.9%)
Electronic surveys would be useful for alumni surveys. | 57 | 2.95 (.66) | 12 (21.1%) | 45 (78.9%)
E-mail surveys would require too much time and effort for respondents. | 61 | 2.90 (.37) | 53 (86.9%) | 8 (13.1%)
I would access a web page to respond to a survey that interested me. | 61 | 2.89 (.71) | 15 (24.6%) | 46 (75.4%)
In general, people would access a web page to respond to a survey if the topic was of interest. | 58 | 2.85 (.56) | 14 (24.1%) | 44 (75.9%)
I would use electronic surveys if responses could be directly imported into a file for data analysis. | 56 | 2.79 (.62) | 16 (28.6%) | 40 (71.4%)
Electronic surveys and pencil-and-paper surveys yield comparable information. | 52 | 2.72 (.57) | 14 (26.9%) | 38 (73.1%)
The use of electronic surveys would make it more difficult to obtain Institutional Review Board (IRB) approval. | 48 | 2.60 (.75) | 33 (68.8%) | 15 (31.2%)
Potential respondents would find electronic surveys more interesting than pencil-and-paper surveys. | 60 | 2.50 (.57) | 32 (53.3%) | 28 (46.7%)
People would not respond to electronic surveys because they would get lost along with junk mail received from listservs and newsgroups. | 56 | 2.50 (.57) | 28 (50.0%) | 28 (50.0%)
Electronic surveys are better suited for an Internet web page compared to being included as part of an e-mail message. | 58 | 2.48 (.57) | 32 (55.2%) | 26 (44.8%)
Electronic surveys would be useful for political polls. | 59 | 2.48 (.94) | 27 (45.8%) | 32 (54.2%)
In general, people prefer hard copies of surveys. | 53 | 2.45 (.67) | 27 (50.9%) | 26 (49.1%)
The reliability of electronic surveys is equal to or stronger than that estimated for paper-and-pencil surveys. | 51 | 2.45 (.67) | 25 (49.0%) | 26 (51.0%)
In general, I would expect a greater response to electronic surveys. | 60 | 2.43 (.75) | 31 (51.7%) | 29 (48.3%)
Using an electronic survey would communicate more urgency than traditional mail surveys. | 61 | 2.41 (.59) | 37 (60.7%) | 24 (39.3%)
I would be more likely to respond to an electronic survey than a pencil-and-paper survey. | 60 | 2.40 (.81) | 38 (63.3%) | 22 (36.7%)
Individuals would not respond to electronic surveys because of issues related to anonymity. | 57 | 2.39 (.68) | 24 (42.1%) | 33 (57.9%)
In general, individuals would be more likely to respond to an electronic survey. | 56 | 2.36 (.62) | 34 (60.7%) | 22 (39.3%)
Electronic surveys do not allow for anonymity. | 60 | 2.30 (.83) | 26 (43.3%) | 34 (56.7%)
Respondents would complete more items on an electronic survey compared to a pencil-and-paper survey. | 61 | 2.23 (.62) | 43 (70.5%) | 18 (29.5%)
Responses to electronic surveys would be less likely to be influenced by social desirability compared to traditional paper surveys. | 59 | 2.22 (.59) | 45 (76.3%) | 14 (23.7%)
People would make fewer mistakes when responding to questions in electronic surveys. | 59 | 2.17 (.46) | 47 (79.7%) | 12 (20.3%)
Receiving a survey through e-mail would be more personalized than through traditional mail. | 62 | 2.11 (.55) | 49 (79.0%) | 13 (21.0%)

NOTE: Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Agree, 4 = Strongly Agree. SD/D = Strongly Disagree or Disagree; SA/A = Strongly Agree or Agree.
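
As an illustration of the scoring described above, here is a minimal sketch of the reverse-coding and internal-consistency (Cronbach's alpha) computation; the response matrix and the indices of the reverse-coded items are hypothetical stand-ins, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses on the 1-4 scale used in this study; real
# item data would be loaded here instead.
rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(62, 33)).astype(float)

# Reverse-code the negatively worded items so a higher score always
# reflects a more favorable attitude (on a 1-4 scale, 5 - x maps
# 1 to 4 and 4 to 1). The study reverse-coded six such items.
reverse_items = [4, 13, 20, 23, 28, 31]  # hypothetical column indices
responses[:, reverse_items] = 5 - responses[:, reverse_items]

print(round(cronbach_alpha(responses), 2))
```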

These survey professionals were most positive about the reduction of costs (i.e., postage, phone charges) associated with electronic surveys, the use of electronic mail for pre-notification or follow-up as a complement to other survey delivery methods, and the compatibility of the data with existing software programs. They also indicated that the lack of a tangible reward would not prevent individuals from responding and that they would respond to a web-based survey if all they had to do was click on the URL in an e-mail message.

The bulk of the less favorable responses pertained to respondents' knowledge of and experience with technology. Participants believed that individuals who were not comfortable with technology would not respond. In addition, they indicated that electronic surveys are less personalized than traditional mail surveys, that people would make more mistakes when responding, that responses would be influenced by social desirability, and that respondents would not complete as many items as they might in a pencil-and-paper survey. Finally, these survey researchers expressed a need for passwords to access web-based surveys and a concern that respondents would be less likely to respond to sensitive items, or might not respond at all, out of concern for their anonymity.

There were also a few areas in which these survey professionals were very uncertain; that is, agreement and disagreement were nearly balanced across several items. These items concerned the comparability of response rates and reliability estimates for electronic and mail surveys, the extent to which people prefer hard copies of surveys or find electronic surveys more interesting, and the appropriateness of listservs as a sampling source for electronic surveys.

Advice from Survey Professionals

In addition to general perceptions, specific advice was solicited regarding the most effective use of electronic surveys, appropriate samples, limitations, and recommendations for researchers considering the use of electronic surveys.  This advice was gathered using four open-ended questions.

Effective use of electronic surveys. Thirty-seven respondents provided guidance regarding the effective use of electronic surveys. Nearly half (48%, n=18) indicated that such surveys could be used most effectively with targeted populations, such as professional or business groups with published e-mail addresses, or as "in-house" surveys. Twenty-seven percent (n=10) simply indicated that e-mail or web-based surveys would be more efficient, obtaining responses faster and producing data that could be stored directly in a format suitable for analysis, and 16% (n=6) described specific uses of e-mail or web-based surveys, including pre-notification of subjects, follow-up of mail surveys, marketing research, needs assessments, and longitudinal studies. The remaining three respondents indicated that such surveys must be carried out under specific conditions: keeping the surveys short and simple to complete and including some mechanism, such as a password, to maintain anonymity.

Appropriate samples for electronic surveys. A total of 35 respondents offered recommendations regarding samples appropriate for electronic surveys. These suggestions primarily focused on samples that have access to, and the ability to use, technology. The majority of responses (n=32, 91.5%) described specific types of groups with access to technology, including listservs, professional memberships, alumni groups, "in-house" employee groups, and university professors. The remaining three respondents simply indicated that samples had to be small and clearly defined.

Limitations of electronic surveys. Forty-eight participants offered comments regarding the limitations of e-mail or web-based surveys. The majority (n=25, 52%) of these responses described sampling limitations. Specifically, these concerns pertained to the restricted nature of such samples, in that respondents must have access to and be comfortable using technology, so the samples would not accurately represent the general population.

A second concern, expressed by 15 respondents (31.3%), regarded confidentiality and a lack of privacy. Respondents worried that the invitation to respond to an e-mail or web-based survey might be perceived as junk mail and that mass mailings to published e-mail lists might be perceived as "spam." Furthermore, there were concerns regarding the security of information posted and submitted through e-mail or web-based surveys, raising questions about the invasion of respondents' privacy and the security of information on the Internet. Several researchers used the phrase "Big Brother" to describe their concern with the privacy of information.

A third group of concerns (n=12, 25%) pertained to the credibility and authenticity of results from electronic surveys. Many of these surveys are open to responses from individuals outside the targeted sample. Specific recommendations were made to put safeguards in place to verify the authenticity of respondents, such as passwords that allow only those who were invited to complete the survey. Without such safeguards, the credibility of the data received is questionable.

A final group of limitations (n=6, 12.5%) was methodological in nature. Such surveys require a great deal of time and technological skill to develop and implement. Several respondents raised questions about comparability with traditional pencil-and-paper surveys, commenting on the difficulty of formatting surveys to fit web pages and the limited incentives that could be offered to potential respondents.

Suggestions for others interested in using electronic surveys. Finally, 23 respondents made suggestions for others, primarily regarding sampling, survey format, and procedures. Ten suggestions (43.5%) referred to sampling issues. Specifically, five recommended pre-sampling the population to determine interest in participating. The remaining sample-related comments cautioned the survey researcher to be aware that the sample will be limited and that technology will not be uniform among its members.

Eight respondents (34.8%) made recommendations regarding design and format. Three recommended a simple, short survey; three advised close attention to sound survey design principles; and the remaining two specifically indicated a preference for graphically pleasing web-based surveys.

The remaining five suggestions (21.7%) were procedural. Two respondents recommended using electronic surveys now, before such surveys become too common. Another suggested that respondents be given the option of responding with a hard copy, while one recommended the use of e-mail as a follow-up technique. The final comment simply stated, "be skeptical."

Discussion and Recommendations

Consistent with prior literature (Bowers, 1999; Crawford, Couper & Lamias, 2001; Eaton, 1997; Kaye & Johnson, 1999; Kiesler & Sproull, 1986; Weissbach, 1997), we found that the primary concerns expressed by the survey professionals in this study regarded sampling issues: samples' access to and ability to use the required technology, their authenticity, and their privacy. Advice from this group focused on recognizing the limitations of electronic survey samples and on precautions that should be taken to establish credible samples and protect respondents' privacy.

First, it is clear that the sampling frame is still somewhat limited for electronic surveys, and survey professionals must acknowledge these limitations when conducting their research. Samples with access to the Internet have not typically represented the general population (GVU, 1998; Sheehan & Hoy, 1999). For this reason, professional or business groups with published e-mail addresses have often been targeted as samples. However, the Internet is becoming increasingly accessible to the general population: approximately 41.5% of US households now have access, an increase of 58% in less than two years (Department of Commerce, 2000). Access is still more frequent among those who live in urban areas and those with higher incomes and higher levels of education, but the most rapid increases are occurring in rural areas, among individuals with some college experience, and among individuals over 50 (Department of Commerce, 2000). Such increases will continue, and the gap between Internet users and the general population will continue to close. Growing Internet access and more reliable e-mail addresses will allow a greater range of samples for future electronic surveys.

Researchers must also recognize that samples vary a great deal in technological capability, both in equipment and in respondent knowledge and skill, and this variation must be kept in mind when designing electronic surveys. Although web-based surveys allow far more innovative features than plain-text e-mail surveys, some respondents may have difficulty accessing the survey and thus be unable to respond. Furthermore, most people are not accustomed to the process of responding to an electronic survey (e.g., selecting from a pull-down menu, clicking a radio button, scrolling from screen to screen) and will need specific instructions that guide them through each question and the manner in which they should respond.

Based on the advice of survey professionals, we recommend that samples be pre-notified with an e-mail message to determine the technological capacity of the sample and their willingness to participate in the study. This will help ensure that the survey is accessible to members of the sample and help prevent the perception of "spamming" that can arise from repeated unsolicited e-mail messages (Mehta & Sivadas, 1995; Sheehan & Hoy, 1999). This communication should be personalized and provide the essential elements of a mailed cover letter, including a clear overview of the study's purpose, motivation to respond, assurances of confidentiality and privacy, and whom to contact with questions. This advice is reinforced by a recent meta-analysis of electronic survey studies, which found that personalized pre-notification and number of contacts influence response rates (Cook, Heath, & Thompson, 2000).
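
As a minimal sketch of such a pre-notification mailing (the addresses, server, and message text are hypothetical; any real mailing would use the researcher's IRB-approved wording):

```python
import smtplib
from email.message import EmailMessage

# Hypothetical sample list; in practice this would come from the
# published membership list that defines the sample.
sample = [
    {"name": "Dr. Jane Doe", "email": "jdoe@example.edu"},
]

with smtplib.SMTP("smtp.example.edu") as server:  # hypothetical server
    for member in sample:
        msg = EmailMessage()
        msg["From"] = "researcher@example.edu"
        msg["To"] = member["email"]
        msg["Subject"] = "Upcoming survey on electronic survey methods"
        # A personalized body carrying the cover-letter elements named
        # above: purpose, motivation, confidentiality, and a contact.
        msg.set_content(
            f"Dear {member['name']},\n\n"
            "In a few days you will receive a brief survey about X.\n"
            "Your responses will be kept confidential. If you have any\n"
            "questions, please contact researcher@example.edu.\n"
        )
        server.send_message(msg)
```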

Once samples are identified and pre-notified, they need to be protected in terms of their authenticity, confidentiality, and privacy. Measures should be taken to reduce sampling error. Access to web-based surveys must be limited to the targeted sample; unrestricted surveys that allow anyone access are unacceptable. Although many unscientific online polls boast large samples, there is often little or no attempt to ensure the quality and validity of those samples.

Samples must be clearly defined and authenticated. Researchers should consider using passwords or PINs to control for sampling error and establish credible samples (Bowers, 1999; Bradley, 1999; Dillman, Tortora, & Bowker, 1998). If passwords or PINs are not used, responding samples should be carefully examined and ineligible respondents eliminated to maintain consistency with the sampling plan and yield credible results.
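
One way to implement this safeguard is to issue each invited respondent an unpredictable access token; a minimal sketch (the file names are hypothetical):

```python
import csv
import secrets

# Hypothetical input file: one invited e-mail address per line.
with open("sample_emails.txt") as f:
    emails = [line.strip() for line in f if line.strip()]

# One hard-to-guess token per invitee; the survey then accepts a
# response only if it arrives with a token from this list, and each
# token is retired after its first use.
tokens = {email: secrets.token_urlsafe(8) for email in emails}

with open("access_tokens.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["email", "token"])
    writer.writerows(tokens.items())
```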

Additional precautions must be taken to protect respondents' privacy and ensure the confidentiality of their responses. Several researchers have received negative feedback from respondents regarding privacy issues (Couper, Blair, & Triplett, 1997; Mehta & Sivadas, 1995; Sheehan & Hoy, 1999). In analyzing server logs from electronic surveys, Jeavons (1998) found that individuals stopped completing surveys when their e-mail address was requested. Respondents must feel comfortable when responding to electronic surveys and must trust that researchers have taken precautions to guard their privacy. Minimally, researchers should provide assurances of confidentiality in the pre-notification e-mail (Couper, Blair, & Triplett, 1997; Kiesler & Sproull, 1986; Schaeffer & Dillman, 1998). Respondents' privacy can be further protected by separating e-mail addresses from completed surveys upon receipt (Sheehan & Hoy, 1999) or by programming the return to include the researcher's e-mail address rather than the respondent's (Shannon & Bradshaw, 2002). Secure servers and encryption should also be employed as additional protection of respondents' privacy.
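
The separation of e-mail addresses from responses can be as simple as keying stored answers by access token only, with the e-mail-to-token mapping kept in a separate, restricted file; a minimal sketch (the file layout is hypothetical):

```python
import json

def record_response(token: str, answers: dict,
                    path: str = "responses.jsonl") -> None:
    """Append one response keyed only by its access token.

    The e-mail-to-token mapping lives elsewhere under restricted
    access, so the response file itself never carries an address.
    """
    with open(path, "a") as f:
        f.write(json.dumps({"token": token, "answers": answers}) + "\n")

record_response("k3J9x2Qw", {"q1": 3, "q2": 4})
```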

In conclusion, electronic surveys, including web-based surveys, must utilize principles of sound survey design. Research must also focus on adapting such principles to electronic survey formats so that survey professionals can take full advantage of the benefits of these surveys without sacrificing the integrity of their data or placing respondents at risk in terms of confidentiality and privacy. As methods for designing and implementing electronic surveys are refined, they will be used more frequently in scholarly research. Institutional Review Boards (IRBs) will therefore encounter increasing numbers of such proposals; issues of confidentiality and privacy will become increasingly important, and policies for protecting human subjects who participate in electronic surveys and other Internet-based research will need to be developed.

References

Babbie, E. (1990). Survey Research Methods (2nd ed.). Belmont, CA: Wadsworth.

Baruch, Y. (1999). Response rates in academic studies: A comparative analysis.  Human Relations, 52, 421-434.  

Bowers, D. K. (1999). FAQs on online research.  Marketing Research, 10(1), 45-48.

Bradley, N. (1999).  Sampling for Internet surveys: An examination of respondent selection for Internet research. Journal of the Market Research Society, 41(4), 387-395.

Cook, C., Heath, F., & Thompson, R. (2000). A meta-analysis of response rates in web- or Internet-based surveys. Educational & Psychological Measurement, 60(6), 821-826.

Couper, M. P., Blair, J., & Triplett, T. (1997). A comparison of mail versus email for surveys of employees in federal statistical agencies. Paper presented at the annual meeting of the American Association for Public Opinion Research, Norfolk, VA.

Couper, M. P., & Nichols, W. L. (1998). The history and development of computer assisted survey information collection methods. In M. P. Couper, R. P. Baker, J. Bethlehem, C. Z. E. Clark, J. Martin, W. L. Nichols, & J. M. O'Reilly (Eds.), Computer assisted survey information collection (pp. 1-22). New York: John Wiley & Sons.

Crawford, S. D., Couper, M. P. & Lamias, M. J. (2001). Web surveys: Perception of burden.  Social Science Computer Review, 19, 146-162.

Department of Commerce (2000, October). Falling through the net: Toward digital inclusion. Washington, DC: Author.

Dillman, D. A. (2000).  Mail and Internet surveys: The tailored design method. New York: John Wiley and Sons, Inc.

Dillman, D. A. (1978). Mail and telephone surveys: The total design method. New York: John Wiley and Sons, Inc.

Dillman, D. A. & Bowker, D. K. (2000).  The web questionnaire challenge to survey methodologists.  [Online]. Available: http://sesrc.wsu.edu/dillman/papers.htm.

Dillman, D. A., Tortora, R. D., & Bowker, D. (1998). Principles for constructing web surveys: An initial statement (Technical Report No. 98-50). Pullman, WA: Washington State University Social and Economic Sciences Research Center.

Eaton, B. (1997). Internet surveys: Does WWW stand for "why waste the work?" Marketing Research Review, June/July, Article 0244. Available: http://www.Quirks.com

Fowler, F. J. (1993). Survey Research Methods (2nd ed.) Newbury Park: Sage Publications.

GVU's 10th WWW User Survey (1998, October). General demographic summary [On-line]. Available: http://www.gvu.gatech.edu/user_surveys/survey-1998-10/reports/

Heberlein, T. A., & Baumgartner, R. (1978). Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. American Sociological Review, 43, 447-462.

Jeavons, A. (1998). Ethology and the Web: Observing respondent behavior in Web surveys. In Proceedings of the Worldwide Internet Conference. Amsterdam: ESOMAR. Available: http://w3.one.net/~andrewje/ethology.html

Kaye, B. K. & Johnson, T. J. (1999). Research methodology: Taming the cyber frontier. Social Science Computer Review, 17, 323-337.

Kiesler, S., & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402-413.

Lavrakas, P. J. (1993). Telephone Survey Methods: Sampling, Selection, and Supervision (2nd ed.)  Newbury Park: Sage Publications.

Linsky, A. S. (1975). Stimulating responses to mailed questionnaires: A review.  Public Opinion Quarterly, 39, 82-101.

Mehta, R., & Sivadas, E. (1995). Comparing response rates and response content in mail versus electronic mail surveys. Journal of the Market Research Society, 37(4), 429-439.

Ramos, M., Sedivi, B. M., & Sweet, E. M. (1998). Computerized self-administered questionnaires. In M. P. Couper, R. P. Baker, J. Bethlehem, C. Z. E. Clark, J. Martin, W. L. Nichols, & J. M. O'Reilly (Eds.), Computer assisted survey information collection (pp. 389-408). New York: John Wiley & Sons.

Schaeffer, D. R., & Dillman, D. A. (1998). Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62(3), 378-397.

Shannon, D. M., & Bradshaw, C. C. (2002). A comparison of response rate, speed and costs of mail and electronic surveys. Journal of Experimental Education, 70(2).

Sheehan, K. B. & Hoy, M. G. (1999).  Using e-mail to survey Internet users in the United States: Methodology and Assessment.  Journal of Computer Mediated Communication, 4(3). Available: http://www.ascusc.org/jcmc/vol4/issue3/sheehan.html.

Solomon, D. J. (2001). Conducting web-based surveys. Practical Assessment, Research & Evaluation, 7(19). Available online: http://ericae.net/pare/getvn.asp?v=7&n=19

Sproull, L. S. (1986). Using electronic mail for data collection in organizational research.  Academy of Management Journal, 29(1), 156-169.

Sudman, S. & Bradburn, N. M. (1982). Asking Questions: A practical guide to questionnaire design. San Francisco: Jossey-Bass Publishers.

Tse, A. C. B., Tse, K. C., Yin, C. H., Ting, C. B., Yi, K. W., Yee, K. P., & Hong, W. C. (1995).  Comparing two methods of sending out questionnaires: E-mail versus snail mail. Journal of the Market Research Society, 37(4), 441-446.

Watt, J. H. (1997). Using the Internet for quantitative survey research. Marketing Research Review, June, Article 0248. Available: http://www.Quirks.com

Weissbach, S. (1997). Internet research: Still a few hurdles to clear. Marketing Research Review, June/July, Article 0249. Available: http://www.Quirks.com

Yu, J., & Cooper, H. (1983). A quantitative review of research design effects on response rates to questionnaires. Journal of Marketing Research, 20(1), 36-44.

Contact

Please direct all correspondence to the first author at:

David Shannon
4036 Haley Center – EFLT
Auburn University
Auburn, Alabama 36839-5221

(334) 844-3071, FAX: (334) 844-3072
shanndm@auburn.edu

 

Descriptors: *World Wide Web; *Survey Methods; Response Rates [Questionnaires]; *Surveys; Electronic Mail
