Self Disclosure on Computer Forms:
Meta-Analysis and Implications
Suzanne Weisband
Department of Management Information Systems
University of Arizona
Tucson, Arizona 85721 USA
Tel: +1-520-621-8303
E-mail: sweisband@bpa.arizona.edu
Sara Kiesler
HCI Institute & Department of Social and Decision Sciences
Carnegie Mellon University
Pittsburgh, PA 15213 USA
Tel: +1-412-268-2848
E-mail: kiesler@andrew.cmu.edu
ABSTRACT
Do people disclose more on a computer form than they do in an
interview or on a paper form? We report a statistical meta-analysis
of the literature from 1969 to 1994. Across 39 studies using 100
measures, computer administration increased self-disclosure. Effect
sizes were larger comparing computer administration with face-to-face
interviews, when forms solicited sensitive information, and when
medical or psychiatric patients were the subjects. Effect sizes were
smaller but had not disappeared in recent studies, which we attribute
in part to changes in computer interfaces. We discuss research,
ethical, policy, and design implications.
Keywords
computer forms, computer interviews, electronic surveys,
measurement, disclosure, response bias, electronic communication.
INTRODUCTION
As computer and computer-based telecommunications
technologies improve, assessment is increasingly being accomplished
using them. Previously existing mental health
questionnaires, personality scales, job attitude scales, cognitive
selection tests such as the Graduate Record Examination, and training
inventories are among the many kinds of forms that have been converted
to computerized administration [24]. Computer-administered
employment, medical intake, and blood donor forms are being developed
to replace face-to-face interviews, and electronic surveys
administered from remote sites already are being used to gather
personnel, medical, and consumer information, as well as in social
science research [20, 29].
The possibility that people would tell an impartial machine
personal or embarrassing things about themselves, without fear of
negative evaluation, has been raised since the first uses of computers
for communication [27]. One of the first applications of the computer
for assessment was the computerized psychiatric interview [32].
Researchers reported that patients not only responded positively to
computer interviews but also gave honest answers [13]. Subsequently,
medical, marketing, personnel, and social science researchers have
explored computer administration as a means for reducing social
desirability biases [9] and obtaining more sensitive information from
respondents than could be obtained using more traditional formats. A
belief that computer administration encourages self-disclosure has led
to the development of important applications, such as computer
interviews to detect risk conditions and behaviors of blood donors [2,
23].
If existing interviews and forms are converted to computer forms, or
if forms are administered by computer and also in other formats, then
increased self-disclosure by those using the computer version raises
psychometric and ethical questions. Increased self-disclosure might
lead to nonequivalence of scores on measures in which respondents are
asked to reveal personal or sensitive information. The computerized
form might even be measuring a different underlying construct.
Among the reasons offered as to why people disclose when using a
computer are that computer interfaces as compared with traditional
formats create in respondents an inattention to audience, immersion in
the immediate task along with a sense of invulnerability to criticism,
an illusion of privacy, the impression that responses "disappear" into
the computer, or other misattributions that cause respondents to be
careless about their responses. For example, people can be induced to
behave as if computers are human, suggesting that human-computer
interaction is fundamentally social [25]. If so, perhaps informed
consent statements should warn respondents about these
misattributions. Perhaps computer forms that simulate a face-to-face
interview should include a representation of the interviewer. These
speculations are moot if respondents do not, in fact, respond
differently to computers than to other formats, as some researchers
have concluded [6].
META-ANALYSIS
Researchers' mixed conclusions about self-disclosure in computer forms
led to our examination of the literature using statistical
meta-analysis. The main hypothesis we tested was that responses to a
computer form, as compared with its face-to-face or paper-and-pencil
counterpart, would be more self-disclosing. We did not use the
meta-analytic procedure to examine the reasons for this difference.
However, we derived plausible predictions based on two arguments: (1)
that computer interfaces lack social context cues, which in turn
causes reduced evaluation anxiety, feelings of safety or
invulnerability, and less concern with looking good, and (2) that
people lack experience with computers and therefore are not aware of
the risks of self-disclosure of personal information to a computer.
We explored these hypotheses indirectly by comparing characteristics
across and within studies that should predict self-disclosure if these
arguments were valid. For example, with respect to the impact of
computer experience, since the general public has become more computer
literate in each decade, effect sizes should decline with year of
study. Predictors such as the year of publication were assessed in
models of effect sizes [15].
Sample
Our sample consisted of 39 published experimental studies of
self-disclosure in standardized and unstandardized interviews,
questionnaires, tests, and scales that solicited socially undesirable,
personally relevant, or sensitive information.1 We included studies
in the social science, computer science, and medical literature. We
adopted a broad definition of self-disclosure in forms in order to
consider all relevant studies. For example, we not only included
studies of questionnaires that solicited highly sensitive information
such as a person's criminal record but also studies of forms on which
consumers were asked to disclose their complaints. However, for a
study to be included it had to investigate a form; we did not include
studies of free group discussion by computer [19], studies of
computerized skill or cognitive ability or achievement, such as the
GRE [24], or any studies in which some kind of self-disclosure was not
measured. Our sample does not include several published studies of
disclosure that employed no comparison or control group or otherwise
seriously violated normal standards of experimental design.
Variables Coded
The following variables were coded from each study: (a) experimental
design (between subjects, within subjects); (b) comparison form
(face-to-face interview, paper-and-pencil form, test, or
questionnaire); (c) sensitivity of information presented to subjects;
(d) subject population (student, psychiatric or medical patient, other
adult); (e) sex of sample (male, female, both) based on the percentage
of males; (f) presence of others (e.g., other subjects) during
computer administration (alone, not alone); (g) whether subjects
responded to a standardized form; and (h) date of publication.
Operationalization of these variables was straightforward except in
the case of categorizing measures as sensitive. We coded a measure as
sensitive if it seemed to ask for information not normally discussed
among casual acquaintances or colleagues, such as about one's medical
status, mental health, criminal record, use of drugs, and so forth.
Two raters discussed disagreements and resolved them. Many of the
measures coded as sensitive were psychiatric tests. We assume such
measures are more sensitive than measures such as those assessing
product or job satisfaction, for instance.
Calculation of Effect Size
The effect size (g) for each study was calculated as the mean of
disclosure in the computer-administered form condition minus the mean
of disclosure in the "traditional" or comparison condition
(face-to-face interview or paper-and-pencil form), divided by the
pooled standard deviation. Effect sizes reported are positive when
there was more disclosure with the computer form and negative when
there was more disclosure with the traditional format. We used
Johnson's DSTAT upgrade computer program to analyze the results [17].
The gs were converted to ds by correcting them for sample-size bias
in small studies. Each d was weighted by the reciprocal of its
variance to put greater weight on effect sizes estimated from large
sample sizes. We performed two sets of meta-analyses. One analysis
was based on effect sizes that were collapsed across repeated measures
so that each study could be represented by one effect size. We did
this because meta-analytic techniques assume independent effect sizes.
The second analysis was based on multiple effect sizes when needed to
test particular hypotheses. For example, Koson et al. [21]
administered two comparison instruments: a face-to-face
interview and a paper-and-pencil questionnaire. We
hypothesized that disclosure in computer instruments would differ
depending on whether the comparison was a face-to-face interview or a
paper survey. In cases like this, a separate effect size was computed
for each relevant instrument.
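The computation just described can be sketched as follows. This is an illustration of the standard Hedges and Olkin formulas [15], not the DSTAT program itself; the function names and example numbers are ours.

```python
import math

def hedges_g(mean_comp, mean_trad, sd_comp, sd_trad, n_comp, n_trad):
    # Mean disclosure in the computer condition minus the traditional
    # condition, divided by the pooled standard deviation.
    pooled_sd = math.sqrt(((n_comp - 1) * sd_comp**2 +
                           (n_trad - 1) * sd_trad**2) /
                          (n_comp + n_trad - 2))
    return (mean_comp - mean_trad) / pooled_sd

def g_to_d(g, n_comp, n_trad):
    # Correct g for sample-size bias in small studies.
    df = n_comp + n_trad - 2
    return g * (1 - 3 / (4 * df - 1))

def d_variance(d, n_comp, n_trad):
    # Large-sample variance of d; its reciprocal is the study's weight,
    # so effect sizes from large samples count more.
    return ((n_comp + n_trad) / (n_comp * n_trad) +
            d**2 / (2 * (n_comp + n_trad)))

# Hypothetical study: more disclosure on the computer form (positive g).
g = hedges_g(14.0, 12.0, 4.0, 4.0, 30, 30)
d = g_to_d(g, 30, 30)          # shrunk slightly toward zero
w = 1 / d_variance(d, 30, 30)  # weight used when pooling effect sizes
```

A positive d indicates more disclosure with the computer form; a negative d, more with the traditional format.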
RESULTS
With each study contributing a single effect size (n = 39), the mean
was .33, indicating greater self-disclosure in computer-administered
tests. The 95% confidence interval for this mean was CI = .28 to
.39, which differs significantly from 0, Z = 11.76, p < .001. The
hypothesis of homogeneity was rejected, Q(38) = 339.73, p < .001,
which indicates that the effect size derived in this analysis does not
describe the entire dataset. We performed an outlier analysis to
obtain homogeneity of variance. Six outliers (15%) were removed such
that Q(32) = 48.90, p = .06. The resulting mean effect size was .21
(CI = .15 to .27), Z = 6.85 (p < .001).
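Under the fixed-effects model used here, the pooled mean, its confidence interval, the Z test, and the Q homogeneity statistic can be computed as below. This is a sketch of the standard formulas [15]; the numbers are illustrative, not our data.

```python
import math

def combine_effects(ds, variances):
    # Fixed-effects summary: each d weighted by the reciprocal of its
    # variance, so large-sample studies count more.
    weights = [1.0 / v for v in variances]
    w_sum = sum(weights)
    d_bar = sum(w * d for w, d in zip(weights, ds)) / w_sum
    se = math.sqrt(1.0 / w_sum)
    z = d_bar / se                              # test of d_bar against 0
    ci = (d_bar - 1.96 * se, d_bar + 1.96 * se)
    # Q is chi-square with k-1 df under the hypothesis that a single
    # effect size describes all k studies; a significant Q (as in our
    # data before removing outliers) means it does not.
    q = sum(w * (d - d_bar) ** 2 for w, d in zip(weights, ds))
    return d_bar, ci, z, q

d_bar, ci, z, q = combine_effects([0.5, 0.3, 0.1], [0.04, 0.04, 0.04])
```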
Categorical Models
We tested a series of hypotheses by applying categorical models to the
39 studies plus any within-study effect sizes that were present after
the studies were partitioned on the basis of the attribute under
investigation. For example, 33 of the 39 studies in our dataset used
either a face-to-face or paper-and-pencil comparison condition and 6
studies used both. Hence, we computed 45 (33+(6*2)) separate effect
sizes in the categorical model comparing face-to-face and
paper-and-pencil cases. A summary of the categorical models is
presented in Table 1.
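The between-classes statistic QB reported throughout Table 1 compares class mean effect sizes against the grand mean. A sketch of the standard computation [15], with hypothetical labels and numbers:

```python
def between_class_q(classes):
    # `classes` maps a class label (e.g., "face-to-face" vs. "paper")
    # to a list of (d, variance) pairs for the studies in that class.
    means, weights = {}, {}
    for label, studies in classes.items():
        w = [1.0 / v for _, v in studies]
        weights[label] = sum(w)
        means[label] = sum(wi * d for wi, (d, _) in zip(w, studies)) / sum(w)
    grand = (sum(weights[l] * means[l] for l in classes) /
             sum(weights.values()))
    # QB is chi-square with (number of classes - 1) df under the
    # hypothesis that class means are equal; a significant QB means
    # the classes differ reliably in mean effect size.
    return sum(weights[l] * (means[l] - grand) ** 2 for l in classes)

qb = between_class_q({
    "face-to-face": [(0.6, 0.04), (0.6, 0.04)],
    "paper":        [(0.2, 0.04), (0.2, 0.04)],
})
```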
Tests of Social Context Cue Hypotheses
Interviews versus Questionnaires
An absence of social context cues can increase perceived privacy, or
reduce evaluation anxiety or perceived risk. We hypothesized that
studies comparing computer forms with face-to-face interviews (which
have many social context cues) will show a larger effect size
(increased self-disclosure in the computer condition) than studies
comparing computer forms with paper-and-pencil forms. The 15 computer
vs. face-to-face cases showed a much stronger effect size (d = .62)
than the 30 computer vs. paper-and-pencil cases (d = .20), QB(1) =
49.4, p < .0001. The mean effect size for each comparison deviated
significantly from zero, indicating that self-disclosure was higher in
the computer condition, albeit less so in comparison to
paper-and-pencil questionnaires.2
Sensitive Information
The presence or absence of social context cues will matter more when
the information being elicited from respondents is sensitive.
Therefore, we hypothesized that studies comparing computer forms with
other formats will show a larger effect size (increase in
self-disclosure in the computer condition) when the measure elicits
sensitive, personal, or otherwise risky information than when the
measure elicits more impersonal information. We divided study measures
according to whether or not they solicited sensitive information
(e.g., mental health measures, measures of illegal activity). As
seen in Table 1, the effect size in favor of computer administration
when the measure elicited sensitive information was significantly
stronger (d = .35) than when the measure did not elicit sensitive
information (d = .16), QB(1) = 14.55, p < .001.
Vulnerable Populations
Prisoners, patients, and others whose lives are heavily influenced by
others' decisions may feel particularly vulnerable to the consequences
of self-disclosure, and therefore might be more sensitive to
differences in administration. We hypothesized that studies in which
the subjects were medical or psychiatric patients would show larger
effect sizes. The comparisons shown in Table 1 are of 39 cases in
which the subjects were students, patients, or other adults.
Consistent with the significant between-classes effect, the mean for
the patient effect size differed significantly from the means of both
students and other adults, X2(1) = 25.8 and X2(1) = 42.4,
respectively, ps < .001. This result gives additional support
to the argument that computer responding increases the sense of
privacy, although many alternative explanations can be generated
(e.g., patients probably have lower socio-economic status, less
knowledge of computers, and more experience with traditional tests).
Gender
Since females are reputed to be more sensitive to social context and
tend to be more disclosing [10], we hypothesized that studies
comparing computer forms with other formats will show a larger effect
size (increased self-disclosure in the computer condition) when the
subjects are female. Table 1 shows studies
categorized by whether the investigators' sample was all male, all
female, or a mixed sample.
The effect size for the 3 studies with all-female samples (d = .47) was
much larger than the effect size for the 9 studies with all-male
samples (d = .22), though due to the small number of studies with
all-female samples, the difference did not reach significance, X2(2) =
3.3, p = .19. The effect size for mixed-sample studies
(d = .40) was not different from the effect size for all-female
sample studies (X2(2) = 0.3, p = .84), but it was significantly
different from the effect size for all-male sample studies (X2(2) =
5.8, p < .05). We also ran a continuous model using the percentage of
males in the sample to predict effect size. Seven cases were removed
because the authors did not say what percent of the subjects were
male. The effect (in the direction of more disclosure when the sample
had fewer males) was not significant (Z = -1.43, p = .15).
Presence of Others
The presence of other persons when one is completing a form provides
social context cues to the nature of the test environment. We
hypothesized that studies comparing computer forms with other formats
would show a larger effect size (increased self-disclosure in the
computer condition) when respondents in the computer condition were
alone while they completed the form. In 19 (66%) of the 29 studies
where information was available, subjects were alone at the computer
when completing the questionnaire.
The effect size and confidence intervals show that self-disclosure was
increased in the studies where subjects were alone (d = .26) as
compared to when subjects were not alone (d = .15), QB(1) = 2.67, p
= .10, though the difference is marginal.
Tests of Technological Change Hypotheses
When a technology is first introduced, people might not be aware of
its risks. Lack of computer experience and knowledge might lead
people to be careless about the information they give to a computer.
Also, the technology may have design deficiencies that are corrected
later. For example, early computer-administered questionnaires often
prevented respondents from editing or undoing their responses as they
could with paper questionnaires. This constraint might have led them
to accidentally disclose information they would have deleted if this
were possible.
Precomputer Standardized Tests
Standardized tests, such as the MMPI, were developed for traditional
forms of communication. People are used to seeing them in traditional
formats and may not pay them as much respect, or realize their
personal consequences, when these instruments are displayed on a
computer screen. Therefore, we hypothesized that studies comparing
computer administration with other forms of administration might show
larger effect sizes (increased self-disclosure in the computer
condition) in studies using standardized tests as compared with
unstandardized formats (some of which were developed specifically for
computer administration). The hypothesis was supported marginally.
The effect size for cases where the researcher used standardized tests
was somewhat higher (d = .39) than the effect size for cases in which
the measure was not a standardized test (d = .28), QB(1) =
3.24, p = .07.
Students
Since high school students and undergraduates are likely to be more
familiar with computers, and more knowledgeable about them, than other
adults, we hypothesized that studies comparing
computer forms with other formats would show larger effect sizes
(increased self-disclosure in the computer condition) in studies using
subjects other than students. There were 19 cases in which the
subjects were students, and as we showed in Table 2, the effect size
for computer administration in these cases (d = .29) was not smaller
than the effect size in studies using other adults as subjects (d =
.16), X2(2) = 3.47, p = .18. So, there is not strong
support for this hypothesis.
Years
We hypothesized that effect sizes would decrease over the years, as
people became more familiar with computers and what could be done with
them. We first ran a continuous model on the year of publication for
the 39 studies. The model showed a strong effect for year, in that
effect sizes did get smaller over the years (Z = 2.6, p
< .01). We next divided the 39 studies into quartiles. Consistent
with the continuous model, Table 1 shows that
studies published in the most
recent years (1992-1994) had a significantly lower effect size than
studies published in two of the earlier periods, 1969-1982 (d = .63,
X2(3) = 26.9, p < .001), and 1987-1991 (d = .44,
X2(3) = 15.5, p < .001).
Also, studies published in the earliest period (1969-1982) had a
significantly higher effect size (d = .63) as compared to the studies
published in the next period (1983-1986; d = .28), X2(3)
= 14.1, p < .01.
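The continuous model is, in essence, a weighted least-squares regression of effect size on the predictor. A minimal sketch under the fixed-effects assumptions [15], with hypothetical data:

```python
import math

def weighted_slope(xs, ds, variances):
    # Weighted least-squares slope of effect size on a predictor such
    # as year of publication; weights are reciprocal variances.
    w = [1.0 / v for v in variances]
    w_sum = sum(w)
    x_bar = sum(wi * x for wi, x in zip(w, xs)) / w_sum
    d_bar = sum(wi * d for wi, d in zip(w, ds)) / w_sum
    sxx = sum(wi * (x - x_bar) ** 2 for wi, x in zip(w, xs))
    b = sum(wi * (x - x_bar) * (d - d_bar)
            for wi, x, d in zip(w, xs, ds)) / sxx
    z = b / math.sqrt(1.0 / sxx)   # test of the slope against zero
    return b, z

# Illustrative data: effect sizes declining with year of publication,
# so the slope b is negative.
b, z = weighted_slope([1970, 1980, 1990],
                      [0.6, 0.4, 0.2],
                      [0.04, 0.04, 0.04])
```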
The year effect might be an artifact of changes in investigators' use
of study characteristics associated with disclosure (such as whether
sensitive information was solicited). We evaluated the impact of any
study characteristic used differently over time on the effect size
within quartile periods. One significant change in the studies over
the years was that in earlier years investigators used more measures
we had coded as soliciting sensitive information. However,
considering only studies that used sensitive measures, there remained
an effect of year (QB(3) = 18.8, p < .01). Another manner in which
studies changed over the years was in the use of patients as subjects,
another variable that strongly predicts disclosure in a computer
instrument. Within studies using patients as subjects, however, there
remained a significant effect of year of study (QB(3) = 101.8, p <
.01). These analyses indicate that the impact of computer
administration has declined over the years and that this decline is
not explained fully by changes in the use of various study
characteristics in evaluation studies. Despite this decline, the
effect of computer administration did not disappear.
DISCUSSION
Research Issues
Our meta-analysis gives support to the main hypothesis that computer
administration elicits more self-disclosure than traditional forms of
administration do. In recent years, the disclosure effect has
declined significantly due, perhaps, to increasing public knowledge of
computers, increasing public computer literacy, or even people's
reduced awe of the computer. (The credibility of the instrument is
important because it affects the degree to which respondents believe
the questions have a legitimate purpose and are not simply voyeuristic
or commercial, or have criminal intent [7].) Our indirect analysis,
however (for example, comparing students to other adults), did not
support explanations related to public knowledge of computers.
An unresolved research issue is that changes in computer technology
have made it possible for a computer instrument to have the "look and
feel" of a paper-and-pencil questionnaire, typed form, or printed
test. Forms now look more like paper questionnaires, forms, or tests
than they did in earlier years, and allow for more stereotypical
questionnaire-type responses using radio buttons and fill-in blanks
(as compared with typed commands). These changes might have increased
respondents' sense of the computer interaction as an evaluation or test
situation and consequently reduced their disclosure. Unfortunately,
we were unable to evaluate this idea in the meta-analysis because few
investigators described their computer interfaces in sufficient
detail. Possibly researchers did not realize computer interfaces would
change so much. In any case, the idea that differences in the
interface can affect disclosure has not been investigated yet.
A related issue is the belief by many that answering questions on a
computer changes respondents' perceptions of the test environment.
For example, working on a computer could create a sense of privacy or
anonymity. Some investigators have reported strong differences
between anonymous and identified computer responding [6, 8]. However,
research is needed to examine directly whether perceptual and
motivational changes mediate the linkage between the administration of
computer forms and responses to those forms.
Ethical Issues
If people have an illusion of privacy or otherwise let down their
guard when they respond to a computer, the world has discovered an
easy, cheap way to obtain sensitive information from people. Other
methods of eliciting sensitive information, such as the polygraph [4],
bogus pipeline [26], or telephone interviews [22] are difficult or
expensive. Researchers, therapists, marketers, developers of World
Wide Web sites and others will be drawn to use computers to obtain
sensitive information, and increasingly they will do this remotely, so
their instructions and consent forms (if any) also are completed by
computer. Currently, the American Psychological Association
Guidelines for Research on Human Participants, as well as most
research organizations' codes of research conduct are silent with
regard to such topics as how to obtain informed consent electronically
(and whether it is legitimate to do so), how much to reveal about
remote sites of data collection, and about electronic forms in
general.
Stronger admonishments in instructions, policy statements, and
informed consent statements might be required for computer forms to
convey the same level of perceived risk as traditional formats. However,
simply changing the wording or format of conventional warnings might not
be effective. An unpublished study by Kiesler, Sieff, and Geary3 showed
that a small picture of the interviewer on the screen did not inhibit
disclosure in a computer interview the way a real interviewer's
presence did. Moreover, that computer scientists disclose even more than
others in a computer instrument [18] suggests that responses to known
risks and warnings can habituate.
Policy Issues
To our knowledge, the use of computer forms has not proved to be a
source of social disagreement in the way computerized monitoring and
informal electronic communications like email have [30]. Our review
suggests that more cases will arise like that of the six William
Morris assistants who were fired when their candid email
correspondence about their bosses and how to avoid additional duties
was mistakenly sent to an administrator.4 Many
companies and universities are embedding forms in their World Wide Web
pages to administer surveys, take orders, and collect personal
information about potential applicants.
The legal situation is presently unstable. Many organizations
consider any information sent through, or held on, company computers
to be in the organization's domain. Few people know that they can be
held legally accountable for information they type into a computer and
send to others, or that they might be disciplined by a company for
information that they access on a computer form. Unlike telephone
calls, electronic forms (and forms sent by email) can be treated in
the courts as documents that can be used as legal evidence without a
court order. Many public and private organizations take the position
that they have a right to randomly monitor communications on "their"
networks, and to use computers to collect data about employees or
customers, on the grounds that such access is necessary to properly
administer the computer system, or to implement business goals [1].
To the degree that people believe or perceive their communications
through computers to be safe, current organizational and legal
policies may be inappropriate.
Design Issues
As the power and speed of computers continues to increase, researchers
and technologists have responded by improving the readability and ease
of response in computer forms, interviews, surveys, and tests. For
example, respondents can use radio buttons to select choices from a
table rather than type commands. Advances in computer interfaces also
have increased the variety, credibility, and salience of social
information in forms, for example, through animated characters or
icons. Will new computer interface designs induce people to act more
like they would using a paper questionnaire or in an interview with
another person? A study by Sproull et al. [28] compared the
impression management concerns of subjects who answered questions in
one of two computer interview conditions. In one condition, subjects
interacted with a computer "counselor" embodied in a talking face on
the screen; in the other condition, subjects interacted with the
counselor by typing and reading text on the screen. The subjects
revealed less to the talking face than to the text and evaluated her
less favorably. This study suggests that social interfaces will reduce
self-disclosure in computer forms, perhaps because they remind the user
of a face-to-face interview.
However, increasing the amount of social information in a computer
display or device does not guarantee that the distribution of social
information will be like that in an ordinary social setting.
Weisband, Schneider, and Connolly [31] have shown that when a narrow
set of social information about members' relative group status is
salient in electronic communication, this information overdetermines
people's responses to others. Computer interfaces that partially
mimic social situations may offer a distorted version of these
situations, and change people's responses unexpectedly.
An important point that emerges from consideration of how computer
design is evolving is that "computer interview" or "computer survey"
should no longer be treated as a single, unidimensional category of
administration by either researchers or practitioners. Computer forms
are multidimensional, increasingly so as interfaces incorporate speech
and speech recognition [16], auditory and kinesthetic feedback [12],
social intelligence [5], emotional response [11], directed animation
[3, 14], talking to people on the screen [28] or even virtual reality
[33]. To understand how design affects disclosure in new computer
instruments, we will have to investigate which features of computer
forms affect people's perceptions and responses. It will be
interesting to see how much users disclose when the computer form is
delivered by an animated cartoon character.
ACKNOWLEDGMENTS
This work was supported in part by NSF grant #IRI-9309133 to the first
author, and a NIMH scientist development award and grant from the
Carnegie Mellon University Information Networking Institute to the
second author. The authors thank Melissa Wingert, who assisted in
collecting, organizing, and coding the studies.
REFERENCES
1. Aiello, J. R. (1993). Computer-based work monitoring: Electronic
surveillance and its effects. Journal of Applied Social Psychology,
23, 499-507.
2. American Institutes for Research. (1993, July 30). Increasing the
safety of the blood supply by screening donors more effectively.
Final Report, Vol I. AIR, 3333 K St. Washington, DC 20007.
3. Ball, E., Ling, D. T., Pugh, D., Skelly, T., Stankosky, Thiel, D.
(1994). ReActor: A system for real-time, reactive
animations. Conference Companion: Demonstrations, CHI '94, Boston,
Mass., April 24-28.
4. Bashore, T. R., & Rapp, P. E. (1993). Are there alternatives to
traditional polygraph procedures? Psychological Bulletin, 113, 3-22.
5. Binik, Y. M., Servan-Schreiber, D., Freiwald, S., & Hall,
K. S. (1988). Intelligent computer-based assessment and
psychotherapy: An expert system for sexual dysfunction. Journal of
Nervous and Mental Disease, 178, 387-400.
6. Booth-Kewley, S., Edwards, J. E., & Rosenfeld, P. (1992).
Impression management, social desirability, and computer
administration of attitude questionnaires: Does the computer make a
difference? Journal of Applied Psychology, 77, 562-566.
7. Catania, J. A., Gibson, D. R., Chitwood, D. D. &
Coates. T. J. (1990). Methodological problems in AIDS behavioral
research: Influences on measurement error and participation bias in
studies of sexual behavior. Psychological Bulletin, 108, 339-362.
8. Connolly, T., Jessup, L. M., & Valacich, J. S. (1990). Effects of
anonymity and evaluative tone on idea generation in computer-mediated
groups. Management Science, 36, 689-703.
9. Crowne, D., & Marlowe, D. (1964). The approval motive. New York:
Wiley.
10. Dindia, K. & Allen, M. (1992). Sex differences in
self-disclosure: A meta-analysis. Psychological Bulletin, 112,
106-124.
11. Elliott, C. (1994). Research problems in the use of a shallow
artificial intelligence model of personality and emotion (pp. 9-15),
Proceedings of the Twelfth National Conference on Artificial
Intelligence.
12. Gaver, W. W. (1986). Auditory icons: Using sound in computer
interfaces. Human-Computer Interaction, 2, 167-177.
13. Greist, J. H., Gustafson, D. H., Stauss, F. F., Rowse, G. L.,
Laughren, T. P., & Chiles, J. A. (1973). Computer interview for
suicide-risk prediction. American Journal of Psychiatry, 130,
1327-1332.
14. Hayes-Roth, B., Sincoff, E., Brownston, L., Huard, R., & Lent, B.
(1995). Directed improvisation with animated puppets (pp. 79-80). In
Human Factors in Computing Systems: CHI '95 Conference Companion
(Proceedings document). May 7-10, Denver.
15. Hedges, L. V., & Olkin, I. (1985). Statistical methods for
meta-analysis. San Diego: Academic Press.
16. Itou, K. S., Hayamizu, S. & Tanaka, H. (1992). Continuous speech
recognition by context-dependent phonetic HMM and an efficient
algorithm for finding N-best sentence hypotheses. Proceedings of
ICASSP. IEEE Press.
17. Johnson, B. (1989). Software for the meta-analytic review of
research literatures. Hillsdale, NJ: Erlbaum. (Upgrade
documentation, 1993)
18. Kiesler, S., & Sproull, L. S. (1986). Response effects in the
electronic survey. Public Opinion Quarterly, 50, 402-413.
19. Kiesler, S. & Sproull, L. (1992). Group decision making and
communication technology. Organizational Behavior and Human Decision
Processes, 96-123.
20. Kiesler, S., Walsh, J., & Sproull, L. (1992). Computer networks in
field research. In F. B. Bryant , J. Edwards, S. Tindale, E. Posavac,
L. Heath, E. Henderson, & Y. Suarez-Balcazar, (Eds.), Methodological
Issues in Applied Social Research (pp. 239-268). New York: Plenum.
21. Koson, D., Kitchen, M., Kochen, M., & Stodolsky, D. (1970).
Psychological testing by computer: Effect on response bias.
Educational and Psychological Measurement, 30, 803-810.
22. Locander, W., Sudman, S. & Bradburn, N. (1976). An investigation
of interview method, threat and response distortion. Journal of the
American Statistical Association, 71, 267-275.
23. Locke, S. E., Kowaloff, H. B., Hoff, R. G., Safran, C., Popovsky,
M. A., Cotton, D. J., Finkelstein, D. M., Page, P. L., & Slack, W. V.
(1992). Computer-based interview for screening blood donors for risk
of HIV infection. The Journal of the American Medical Association,
268, 1301-1305.
24. Mead, A. D. & Drasgow, F. (1993). Equivalence of computerized and
paper-and-pencil cognitive ability tests: A meta-analysis.
Psychological Bulletin, 114, 440-458.
25. Nass, C., Moon, Y., Fogg, B. J., Reeves, B., & Dryer,
D. C. (1995). Can computer personalities be human personalities?
International Journal of Human-Computer Studies, 43(2), 223-239.
26. Roese, N. J. & Jamieson, D. W. (1993). Twenty years of bogus
pipeline research: A critical review and meta-analysis.
Psychological Bulletin, 114, 363-375.
27. Smith, R. E. (1963). Examination by computer. Behavioral Science,
8, 76-79.
28. Sproull, L., Walker, J., Subramani, R., Kiesler, S., & Waters,
K. (in press). When the interface is a face. Human-Computer
Interaction.
29. Synodinos, N. E., & Brennan, J. M. (1988). Computer interactive
interviewing in survey research. Psychology and Marketing, 5,
117-137.
30. Weisband, S. P. & Reinig, B. A. (1995). Understanding users'
perceptions of electronic mail privacy. Communications of the ACM,
38(12), 40-47.
31. Weisband, S. P., Schneider, S. K., & Connolly, T. (1995).
Computer-mediated communication and social information: Status
salience and status awareness. Academy of Management Journal, 38,
1124-1151.
32. Weizenbaum, J. (1976). Computer power and human reason. San
Francisco: Freeman.
33. Welch, R. B., Blackmon, T. T., Liu, A., Mellers, B. A., & Stark,
L. W. (in press). The effects of pictorial realism, delay of visual
feedback, and observer interactivity on the subjective sense of
presence. Presence: Teleoperators And Virtual Environments.
NOTES
1. A full
reference list of the studies included in the meta-analysis is
available on this web site.
2. Outlier analyses were performed when cases within
categories were not homogeneous. The procedure involves
removing outliers until homogeneity is obtained
(when p > .05 for the QW statistic). Due to space
limitations, these analyses are not reported but can be
obtained from the authors. None of these analyses change
the main results of this study.
3. Manuscript available from S. Kiesler, Carnegie
Mellon, Pittsburgh, PA, 15213.
4. Thompson, Anne. (1992). Forget doing lunch-
Hollywood's on E-mail. The New York Times, September 6. F-23.