The Cameron Group's Survey Studies: A Methodological Critique

See Dr. Herek's blog for updates.

Contents

Error #1: Mischaracterized Sample
Error #2: Unacceptably Low Response Rate
Error #3: Unreliable Analyses Due to Small Subsamples
Error #4: Questionable Validity
Error #5: Biased Interview Procedures?
Error #6: Researcher's Biases Publicized During Data Collection
Conclusion

Most of the Cameron group's academic publications in the past 15 years have been based on a survey study conducted in 1983 and 1984.

The main survey was completed in seven U.S. cities and towns in 1983. Data were later added from a 1984 Dallas (TX) sample. Most of the Cameron group's papers have reported data from the combined samples.

A critical review of the Cameron group's sampling techniques, survey methodology, and interpretation of results reveals at least six serious errors in their study. The presence of even one of these flaws would be sufficient to cast serious doubt on the legitimacy of any study's results. In combination, they make the data virtually meaningless.
 

Note. The following critique uses technical terms in several places. Readers who do not have a background in survey research may want to read a brief introduction to sampling terminology before going further.
 

Error #1 Despite their characterizations of it, the sample was not national in scope.

Although the Cameron group has claimed that theirs was a "national" sample and has repeatedly used the data to make generalizations about the entire population, the initial sampling frame consisted of only 7 municipalities: Bennett (NE), Denver (CO), Los Angeles (CA), Louisville (KY), Omaha (NE), Rochester (NY), and Washington (DC). Data from an eighth city (Dallas, TX) were added later.
 

By sampling only this small set of cities and towns, they systematically excluded all US adults who resided elsewhere when the study was conducted. Even if the study were otherwise flawless, therefore, valid generalizations about the entire US adult population could not be drawn from this sample. At best, the findings could be generalized only to the populations of the 8 municipalities.
 
Error #2 The response rate was unacceptably low.

Although it could not represent the entire US population, an accurate description of sexual attitudes and behaviors in 8 municipalities in the early 1980s might have been useful in its own right. However, the Cameron group's results cannot be considered representative of even those specific municipalities because the vast majority of their sample did not complete the survey.

The Cameron group's published papers never reported their response rate – that is, the proportion of the entire sample that completed the questionnaire. Instead, they reported a compliance rate, which apparently was the percentage of completed questionnaires obtained from those respondents who were successfully contacted and given a survey form. They reported compliance rates of 43.5% (which they later corrected to 47.5%) for their initial, 7-municipality sample, and 57.7% for the Dallas sample.

(Although these figures are no substitute for the true response rate, they indicate that a majority of people contacted in the eight municipalities never completed a questionnaire. Those who refused to participate differed from those who completed the questionnaire in important ways. They tended to be older and male whereas people who returned the questionnaire were disproportionately young, highly educated, and White.)

Using the compliance rate is misleading, however, because it excludes the large number of households in the original sample that were never successfully contacted – the "not-at-homes." In order to evaluate how representative a sample is, survey researchers compare the completed interviews to the entire sample, which includes all of the households initially targeted for inclusion.

Using the Cameron group's own numbers, the response rate for the original 7 municipalities can be computed as the number of completed surveys (4,340) divided by the total number of valid households initially selected for the target sample (18,418 – which is the sum of the reported number of respondents contacted, including refusals [9,129], plus the reported not-at-homes [9,289]).

This yields a response rate for the 7-municipality study of 4,340/18,418 = 23.6%. Based on their reported data, the response rate for the Dallas survey appears to have been 20.7%. Combining the Dallas data with the 1983 survey data yields an overall response rate across the 8 municipalities of approximately 23%.
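These figures are easy to verify. Below is a minimal Python sketch of the computation, using only the counts quoted above (the Dallas and combined rates are omitted because the underlying Dallas counts are not given here):

    # Compliance rate vs. response rate for the 7-municipality sample,
    # using the counts the Cameron group reported.
    completed = 4340     # completed questionnaires
    contacted = 9129     # respondents contacted, including refusals
    not_at_home = 9289   # households never successfully contacted

    # Compliance rate: completions among those actually contacted.
    compliance_rate = completed / contacted           # ~0.475

    # Response rate: completions among ALL valid households targeted.
    target_sample = contacted + not_at_home           # 18,418
    response_rate = completed / target_sample         # ~0.236

    print(f"compliance rate: {compliance_rate:.1%}")  # 47.5%
    print(f"response rate:   {response_rate:.1%}")    # 23.6%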

Thus, completed surveys were successfully obtained from fewer than one-fourth of the sample households. The residents of three out of every four households that should have been in the study never participated. They directly refused, accepted a survey form but never returned it, or were never contacted.

Although survey researchers do not have an absolute standard for what constitutes an acceptable response rate, a rate of 23% is clearly inadequate. Thus, the sample cannot be considered "random." Rather, the Cameron group's efforts ultimately resulted in what was essentially a convenience sample. Consequently, their conclusions cannot be generalized to any larger group. The extent to which they describe the entire US population – or even the populations of the eight municipalities sampled – cannot be known.

In a 1988 paper, Cameron and Cameron tried to salvage their data and to defend their practice of generalizing from a sample that, by their own admission, they had incorrectly characterized as national and random. Their defense was that they observed "usually reasonable agreement" between data reported from other studies and response patterns to some of their questions. However, the adequacy of a sample is judged first by the method through which respondents were included in it. Some response patterns in data from a badly executed sample may resemble those observed in well-designed studies with probability samples. That resemblance, however, does not make a bad sample representative.

 

Error #3 Conclusions were based on data from subsamples that were too small to permit reliable analyses.

If the Cameron group's combined 1983-84 sample had been a random national sample (which it was not, as explained above), its size (N = 5,182 people) would have been large enough to permit estimates of population characteristics with only a small margin of error. Because their extremely low response rate rules out the possibility of making any population estimates on the basis of their data, however, this point is moot.
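For comparison, a quick back-of-envelope sketch (assuming a simple random sample and the conventional 95% confidence level, neither of which the Cameron group's design satisfied) shows how small that margin would have been:

    import math

    # Worst-case (p = 0.5) margin of error for a simple random sample
    # of N = 5,182 at the 95% confidence level (z = 1.96).
    N = 5182
    moe = 1.96 * math.sqrt(0.5 * 0.5 / N)
    print(f"margin of error: ±{moe:.1%}")  # about ±1.4 percentage points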

Yet, even if other sampling problems were not present, the Cameron group could not make reliable estimates about extremely small subgroups in their sample, as they tried to do in several papers.

For example, in one of their 1996 papers, Cameron and Cameron identified 17 respondents from the combined 1983-84 samples who claimed to have a homosexual parent. The questionnaire responses of these 17 people were scrutinized for various negative experiences, such as reporting incestuous relations with a parent (5 reported such incest). From these data, Cameron and Cameron concluded that 29% (5/17) of children with a homosexual parent have incestuous relations with a parent, compared to 0.6% of the children of heterosexuals, and that "having a homosexual parent(s) appears to increase the risk of incest with a parent by a factor of about 50."

Even if findings from their sample could be generalized (which, as shown above, is not the case) and if all of their respondents gave truthful and accurate answers (an assumption that is questioned below), drawing such conclusions from a subsample of 17 people is invalid. This is because data from such a tiny sample have an unacceptably large margin of sampling error.

In a simple random sample of 17, the margin of error due to sampling (with a confidence level of 99%) would be plus-or-minus 33 percentage points. (Because the Cameron group used a cluster sample, rather than a simple random sample, the margin of error probably would have been even higher.)

Thus, even if the numbers had come from a representative sample, the only valid conclusion that the Cameron group could have drawn is that the true proportion of adults who report having a homosexual parent and being an incest victim is somewhere between -4% (effectively, zero) and +62%. This is such a wide margin of error as to be meaningless. Moreover, because this confidence interval includes zero, the Cameron group cannot legitimately conclude that the true number of children of homosexual parents (in the 8 municipalities sampled) who were victims of parental incest was actually different from zero.
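That interval can be reproduced with a standard formula. The sketch below uses a t-based interval with the standard error computed from n − 1, which closely matches the ±33-point figure cited above; because the original papers do not report the exact formula used, this is one plausible reconstruction, not necessarily their computation:

    import math

    # Confidence interval for the 17-person subsample (5 of 17 reported
    # parental incest). One reconstruction that approximates the
    # ±33-point margin cited above.
    n, successes = 17, 5
    p = successes / n                  # ~29.4%
    t_99 = 2.921                       # two-tailed 99% t critical value, df = 16
    moe = t_99 * math.sqrt(p * (1 - p) / (n - 1))
    print(f"p = {p:.1%}, margin = ±{moe:.1%}")
    print(f"99% CI: [{p - moe:+.1%}, {p + moe:+.1%}]")
    # -> roughly -4% to +63%: the interval includes zero.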

Again, the Cameron group's numbers are meaningless because the sample could not be considered representative of any larger group. But even if it had been representative, their conclusions from extremely small subgroups of their sample would not be valid.

 

Error #4 The validity of the questionnaire items is doubtful.

The worth of data from a survey study hinges on whether or not people give true and accurate answers. When participants give wrong or misleading answers, they do so for two principal reasons: (1) they are unable to give an accurate response or (2) they are unwilling to do so.

In the Cameron group's survey study, there is reason to believe that the accuracy of the data was affected by both factors.

Validity problems resulting from respondents' inability to provide accurate information. The Cameron group's self-administered questionnaire consisted of 550 items and required approximately 75 minutes to complete. It included a large number of questions that dealt with highly sensitive aspects of sexuality, many of them presented in an extremely complicated format. These features raise concerns about respondent fatigue and item difficulty.

By the time they reach the later stages of a very long task (such as filling out a questionnaire for more than an hour), respondents tire. They often become careless in their responses or skip questions entirely in their hurry to finish.

One way that researchers assess whether respondent fatigue created problems in a long questionnaire is by including consistency checks: Questions from an early section of the questionnaire are repeated in a later part (either in identical form or alternatively phrased) so that the reliability of responses can be checked. The Cameron group did not report any systematic checks for the internal consistency of questionnaire responses, although in one paper they noted discrepancies between responses to some of the survey items about early sexual experiences.
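For illustration, such a check can be as simple as computing, for each repeated item, the proportion of respondents whose two answers agree. The following sketch is purely hypothetical (the item names and data are invented); nothing like it appears in the Cameron group's reports:

    # A minimal internal-consistency check: compare answers to an item
    # asked early in a questionnaire with a repeat of it asked later.
    # Item names and data are hypothetical illustrations.

    def consistency_rate(responses, early_item, late_item):
        """Proportion of respondents whose early and late answers agree,
        among those who answered both items."""
        pairs = [(r.get(early_item), r.get(late_item))
                 for r in responses
                 if r.get(early_item) is not None and r.get(late_item) is not None]
        if not pairs:
            return float("nan")
        return sum(a == b for a, b in pairs) / len(pairs)

    respondents = [
        {"age_first_sex_early": 16, "age_first_sex_late": 16},   # consistent
        {"age_first_sex_early": 16, "age_first_sex_late": 19},   # inconsistent
        {"age_first_sex_early": 17, "age_first_sex_late": None}, # skipped repeat
    ]
    print(consistency_rate(respondents, "age_first_sex_early", "age_first_sex_late"))
    # -> 0.5: half of the usable answer pairs agree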

Another problem results from using highly complex questions. The Cameron group's questionnaire contained questions that not only addressed sensitive topics but also required respondents to read a large number of alternatives and follow intricate instructions. In one section, for example, respondents were expected to read a list of 36 categories of persons (e.g., "my female grade school teacher," "my male [camp, Y, Scout] counselor"), then to note the age at which each person "made serious sexual advances to me," then to note the age at which each person "had experienced physical sexual relations with me," and then to report the total number of people in each category with whom the respondent had sexual relations.

Another item asked respondents why they thought they had developed their sexual orientation, and gave a checklist of 44 reasons, including "I was seduced by a homosexual adult," "I had childhood homosexual experiences with an adult," and "I failed at heterosexuality."

Many respondents probably found such tasks confusing (because of their length and complexity) or alienating (because of their content). In addition, it is likely that many respondents did not read these long lists of response alternatives carefully and completely.

A related problem is that the questionnaire used language that was probably difficult for many respondents to understand. It used terms such as defecating, urinating, genitals, anus, penis, and vagina – words that may not have been understood by some respondents, especially those with poor reading skills and those who knew only slang terms for these concepts. Whether such problems led to underreporting or overreporting of various experiences cannot be known from the Cameron group's data.

Validity problems resulting from intentional misrepresentations by respondents. Even when survey respondents can understand the question, they sometimes purposely lie or hide the truth. Self-report measures are necessarily based on the assumption that respondents do their best to provide truthful answers. In some cases, however, people do not wish to divulge sensitive information about themselves. This is especially likely for questions about finances or behavior that is stigmatized, illegal, or potentially embarrassing. In other cases, they intentionally give false answers out of a mischievous or malicious motivation.

In the Cameron group's survey, most questionnaire items focused on highly personal and sensitive sexual issues. Recognizing the inherent difficulty in getting honest answers to such questions, experienced survey researchers use various techniques to overcome respondents' reluctance to reveal sensitive information or respond accurately. One of the most important of these is convincing respondents that their privacy will be preserved.

However, internal contradictions in the Cameron group's survey reports make it unclear whether respondents could reasonably believe that their answers truly were anonymous. Throughout their reports, the Cameron group described the questionnaire as anonymous and reported that it was returned in a sealed envelope. But in a 1989 paper they reported that "postquestionnaire inquiry with selected respondents indicated that many homosexuals did not count persons contacted in an orgy or restroom type setting as 'partners'" (Cameron et al., 1989, p. 1175).

For that last statement to be true, the researchers had to know which respondents to select for the post-questionnaire inquiry in order to reach "many homosexuals" who had participated in orgies or sex in restrooms (there were too few such individuals to have been detected simply through a small number of randomly targeted follow-up interviews).

How the supposedly anonymous questionnaire answers (e.g., self-reports of sexual orientation and sexual activities) were linked to specific respondents was not reported. Apparently, however, respondents' anonymity was not absolute, a factor likely to discourage some respondents from divulging sensitive information about themselves.

Whereas many members of the sample simply refused to participate, others probably completed the questionnaire but provided bogus answers. There were many reasons for potential respondents not to take the Cameron survey seriously. It was presented by a stranger who simply appeared at the door, with no affiliation to a university or prestigious research institute that would inspire confidence. As noted above, the questionnaire was excessively long and contained many questions about highly personal topics. In one city, the local newspaper quoted a police officer who advised a neighbor not to participate, describing the survey as "kind of raunchy" (Omaha World Herald, May 23, 1983, p. 1).

Given the many reasons not to take the survey seriously, at least a few people probably decided to have a bit of fun with the researchers.

Suppose that someone purposely gave untrue responses with the mischievous intention of portraying himself as an individual who routinely engages in what might be considered outrageous sexual behavior. He probably would have overstated his general level of sexual activity, reported frequent participation in multiple unconventional sexual acts, and provided an unusual sexual history (e.g., incest with multiple family members).

If as few as 3 people in each city faked their responses in this manner, then a substantial portion of the total number of reports of such activities would be invalid.
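The arithmetic behind that claim can be sketched. Assuming (hypothetically) 3 fakers per municipality and a rare behavior reported by about 1% of honest respondents, faked forms would account for roughly a third of all reports of that behavior:

    # Faked questionnaires as a share of reports of a rare behavior.
    # The 3-per-city faking rate and the 1% honest base rate are
    # hypothetical illustrations; the sample size is from the critique.
    completed = 5182                 # combined 1983-84 completions
    fakers = 8 * 3                   # 3 fakers in each of 8 municipalities
    print(f"fakers as share of sample: {fakers / completed:.2%}")   # ~0.46%

    # A behavior honestly reported by ~1% of the remaining respondents:
    honest_reports = round(0.01 * (completed - fakers))             # ~52
    share_fake = fakers / (honest_reports + fakers)
    print(f"share of reports that are fake: {share_fake:.0%}")      # ~32%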

The Cameron group's survey is highly vulnerable to faking by a small number of mischief makers for three important reasons.

  1. They based many of their conclusions on extremely small subsets of their sample – such as the 17 people who said they had a homosexual parent. If only a few of these respondents were lying or faking their answers, the findings would be altered dramatically (see the sketch following this list).
     
  2. The impact of mischief makers is maximized in samples with low response rates, like that of the Cameron group. Such samples tend to exclude respondents who provide dispassionate, honest answers that would offset the influence of individuals who purposely provide false data.
     
  3. Because they lacked systematic checks on the validity of responses to their questionnaire, and because interviewers did not directly observe respondents while they completed the questionnaire, the Cameron group could not determine how many of their respondents gave false answers.
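A sketch of the first point, using the 17-person subsample discussed under Error #3 (the number of faked responses is hypothetical):

    # Sensitivity of the parental-incest figure (5 of 17; see Error #3)
    # to a handful of faked responses. The number faked is hypothetical.
    subsample, reports = 17, 5
    for faked in range(4):
        rate = (reports - faked) / subsample
        print(f"{faked} faked -> rate {rate:.0%} (reported: 29%)")
    # 0 faked -> 29%; 2 faked -> 18%; 3 faked -> 12%. The headline
    # figure collapses if even a few of the 5 reports were bogus.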

In summary, the length, format, and content of the questionnaire – as well as the manner in which it was administered and the researchers' apparent failure to create a credible context for eliciting highly sensitive information – raise serious concerns about its validity. None of these concerns were addressed in the Cameron group's published papers.

 

Error #5 The interviewers may have been biased and may not have followed uniform procedures.

Professional survey organizations carefully train and monitor their interviewers in the field to ensure that they strictly follow standardized procedures and communicate a neutral, nonjudgmental attitude to participants.

To avoid systematic biases from interviewers' personal values or expectations, researchers typically employ field staff who don't know the study's hypotheses and who are carefully trained to communicate a nonjudgmental and respectful attitude to all respondents. These considerations are especially important in surveys that involve sensitive information.

Were adequate quality control procedures followed by the Cameron group?

It is impossible to answer this question based on their published papers. The Cameron group reported virtually no information about the characteristics, qualifications, and training of people who collected their data. It is reasonable to assume that the surveys were not conducted by a professional survey organization. Otherwise, the published reports most likely would have noted this fact. The Cameron group's reports gave no information about how interviewers were trained or supervised in the field.

For example, it is not clear if a supervisor randomly recontacted some respondents to check that an interviewer had indeed visited their home. This is a standard quality-control practice to make sure that interviewers do not falsely report having contacted all of the households assigned to them, or that they don't simply complete several questionnaires themselves.

In the absence of such information, we cannot assume that the Cameron group's interviewers met the standards maintained by professional survey researchers.

An even more serious concern is that high-level members of the research team apparently were themselves directly involved in data collection. This conclusion is suggested by a 1984 pamphlet, Murder, Violence and Homosexuality, which was distributed by Paul Cameron's Institute for the Scientific Investigation of Sexuality (ISIS). The pamphlet reported results from the Cameron group's 1983 survey and related an anecdote about an allegedly homosexual man who, in response to a question about having ever killed another person, provided his phone and social security numbers and pleaded to "keep him in mind if we wanted anyone killed"; the pamphlet added, "His metallic eyes and steel spring sneer as he assured us of his sincerity are not readily forgotten."

This anecdote is significant because, if true, it suggests that the author of the pamphlet – perhaps Cameron himself – was directly involved in data collection for the 1983 survey.

This is problematic because the authors of the study had clear expectations about the results. They also had strong biases about sexual orientation, revealed in their statements to the news media indicating antipathy toward homosexuality at the time the surveys were conducted. Even if they made an honest effort to avoid communicating these biases to respondents, it is unlikely that they could have successfully done so if they directly participated in data collection.

 

Error #6 The Cameron group's biases were publicized to potential respondents while data were being collected.

One of the principal challenges of social research is that the individuals who are being studied can become aware of the researcher's expectations or goals, which can alter their behavior. For this reason, researchers do not communicate their expectations or hypotheses in advance to research participants. Nor do they bias participants' responses by suggesting that a particular answer is more correct or desirable than others.

 

[Image: front page of the Omaha World Herald, May 23, 1983]

Contrary to this well-established norm, Paul Cameron publicly disclosed the survey's goals and his own political agenda in the local newspaper of at least one surveyed city (Omaha) while data collection was in process. In that front-page interview, he was reported to have characterized the survey as providing "ammunition for those who want laws adopted banning homosexual acts throughout the United States" and he was quoted as saying that the survey's sponsors were "betting that (the survey results will show) that the kinds of sexual patterns suggested in the Judeo-Christian philosophy are more valid than the Playboy philosophy" ("Lincoln man: Poll will help oppose gays." Omaha World Herald, May 23, 1983, p. 1).

Whether or not similar publicity directly linked to the survey appeared in other target cities during data collection is not known. While data collection was in progress, however, Cameron received national attention for his calls to quarantine gays, which included public remarks in Houston (TX) while the Dallas survey was being conducted.

Such publicity is the worst nightmare of a legitimate survey researcher. It must be assumed to have biased the sample composition and the responses of those who elected to participate, at least in Omaha (approximately 19% of the final sample). After reading or hearing about the front-page item in Omaha's only daily newspaper, many potential respondents probably decided not to participate, whereas others may have given false answers to the researchers because they perceived that the survey had political or religious – rather than scientific – aims.

Not only would legitimate researchers have avoided the press during data collection, they most likely would have halted the study if such a newspaper article appeared.

 

Conclusion As noted earlier, an empirical study manifesting even one of these six weaknesses would be considered seriously flawed. In combination, the multiple methodological problems evident in the Cameron group's surveys mean that their results cannot even be considered a valid description of the specific group of individuals who returned the survey questionnaire.

Because the data are essentially meaningless, it is not surprising that they have been virtually ignored by the scientific community.

 

Bibliography

Survey Reports by the Cameron Group (in chronological order)

Cameron, P., Proctor, K., Coburn, W., & Forde, N. (1985). Sexual orientation and sexually transmitted diseases. Nebraska Medical Journal, 70, 292-299.

Cameron, P., Proctor, K., Coburn, W., Forde, N., Larson, H., & Cameron, K. (1986). Child molestation and homosexuality. Psychological Reports, 58, 327-337.

Cameron, P., Cameron, K., & Proctor, K. (1988). Homosexuals in the armed forces. Psychological Reports, 62, 211-219.

Cameron, P., Cameron, K., & Proctor, K. (1989). Effect of homosexuality upon public health and social order. Psychological Reports, 64, 1167-1179.

Cameron, P., & Cameron, K. (1995). Does incest cause homosexuality? Psychological Reports, 76, 611-621.

Cameron, P., & Cameron, K. (1996a). Homosexual parents. Adolescence, 31, 757-776.

Cameron, P., & Cameron, K. (1996b). Do homosexual teachers pose a risk to pupils? Journal of Psychology, 130, 603-613.

 

Critiques of the Cameron Group

Boor, M. (1988). Homosexuals in the armed forces: A reply to Cameron, Cameron, and Proctor. Psychological Reports, 62, 488.

Boor, M. (1988). Homosexuals in the armed forces: A rejoinder to the reply by Cameron and Cameron. Psychological Reports, 62, 602.

Brown, R. D., & Cole, J. K. (1985). Letter to the editor. Nebraska Medical Journal, 70, 410-414.

Duncan, D. F. (1988). Homosexuals in the armed forces: A comment on generalizability. Psychological Reports, 62, 489.

Gonsiorek, J. C., & Weinrich, J. D. (1991). The definition and scope of sexual orientation. In J. C. Gonsiorek, & J. D. Weinrich (Eds.), Homosexuality: Research implications for public policy (pp. 1-12). Thousand Oaks, CA: Sage.

Herek, G. M. (1991). Myths about sexual orientation: A lawyer's guide to social science research. Law & Sexuality, 1, 133-172.

Herron, W. G., & Herron, M. J. (1996). The complexity of sexuality. Psychological Reports, 78, 129-130.

Weinrich, J. D. (1988). Re: Sex survey (Letter). Science, 242, 16.

 

Readings on Survey Methods

Bradburn, N. M., & Sudman, S. (1988). Polls and surveys: Understanding what they tell us. San Francisco: Jossey-Bass.

Fowler, F. J. Jr. (1984). Survey research methods. Thousand Oaks, CA: Sage.

Schuman, H., & Presser, S. (1981). Questions and answers in attitude surveys: Experiments on question form, wording, and context. New York: Academic Press.

Sudman, S. (1976). Applied sampling. New York: Academic Press.

Sudman, S., & Bradburn, N. M. (1985). Asking questions: A practical guide to questionnaire design. San Francisco: Jossey-Bass.

Turner, C. F., & Martin, E. (Eds.). (1984). Surveying subjective phenomena. New York: Russell Sage Foundation.

 
