Written Assignment: Survey Research

Purpose: To demonstrate and apply your knowledge regarding survey research methods.

Instructions: There are four parts to this assignment (listed below). For each section, please provide complete answers to all of the questions. Clearly label your answers to each section.

Mechanics: All written work must be typed, double-spaced, and with one-inch margins. Please keep a copy of your report for your own records.

Due Date: This assignment is due at the beginning of class on TBA. All late reports will be penalized 10% for each day they are late.

Grading Criteria: Your written work will be graded according to the following criteria:

a. Completion of all phases of the assignment.
b. Clear and organized presentation of findings.
c. Accurate application of course concepts.

Section I – Evaluation of Survey Research in the Public Domain

Instructions: You are to find two examples of survey research recently published in the public domain and evaluate and critique the use of survey methods in each case. One of the surveys chosen should rely on random-, scientific-, representative-, or probability-based sampling methods. The other survey should rely on non-representative or non-probability-based sampling techniques. To complete this section of the assignment, you should analyze each survey separately. To do this, carefully read the criteria for evaluating survey research provided below and then explicitly evaluate each survey against those criteria. After evaluating both surveys, reflect on what you have learned from this assignment. In short, this part of the assignment should contain three subsections: 1) your evaluation of the scientific survey, 2) your evaluation of the non-representative survey, and 3) your reflections on the insights you’ve gained about survey methods from this assignment.

Criteria for Evaluating Survey Research (excerpt adapted from the National Council on Public Polls)

Who did the survey?
What polling firm, research house, political campaign, corporation or other group conducted the survey?
If you don't know who did the survey, you can't get the answers to all the other questions listed here. If the person providing survey results can't or won't tell you who did it, serious questions must be raised about the reliability and truthfulness of the results being presented.
Reputable polling firms will provide you with the information you need to evaluate the survey. Because reputation is important to a quality firm, a professionally conducted survey will avoid many errors.
Who paid for the survey and why was it done?
You must know who paid for the survey, because that tells you – and your audience – who thought these topics were important enough to spend money finding out what people think. This is central to the whole issue of why the survey was done.
Surveys are not conducted for the good of the world. They are conducted for a reason – either to gain helpful information or to advance a particular cause.
It may be that a news organization wants to develop a good story. It may be that a politician wants to be re-elected. It may be that a corporation is trying to push sales of its new product. Or a special-interest group may be trying to prove that its views are the views of the entire country.
All are legitimate reasons for doing a survey.
Examples of suspect surveys are private surveys conducted for a political campaign. These surveys are conducted solely to help the candidate win – and for no other reason. The survey may have very slanted questions or a strange sampling methodology, all with a tactical campaign purpose. A campaign may be testing out new slogans, a new statement on a key issue or a new attack on an opponent. But since the goal of the candidate’s survey may not be a straightforward, unbiased reading of the public's sentiments, the results should be reported with great care.
Likewise, reporting on a survey by a special-interest group is tricky. For example, an environmental group trumpets a survey saying the American people support strong measures to protect the environment. That may be true, but the survey was conducted for a group with definite views. That may have swayed the question wording, the timing of the survey, the group interviewed and the order of the questions. You should examine the survey to be certain that it accurately reflects public opinion and does not simply push a single viewpoint.
How many people were interviewed for the survey?
Because surveys give approximate answers, the size of the sample matters: the more people interviewed in a scientific survey, the smaller the error due to sampling, all other things being equal.
A common trap to avoid is assuming that "more is automatically better." A larger sample does shrink the sampling error, but other factors may be more important in judging the quality of a survey.
How were those people chosen?
The key reason that some surveys reflect public opinion accurately and other surveys are unscientific junk is how the people were chosen to be interviewed.
In scientific surveys, the pollster uses a specific method for picking respondents. In unscientific surveys, respondents pick themselves to participate.
The method pollsters use to pick interviewees relies on the bedrock of mathematical reality: when the chance of selecting each person in the target population is known, then and only then do the results of the sample survey reflect the entire population. This is called a random sample or a probability sample. This is the reason that interviews with 1,000 American adults can accurately reflect the opinions of more than 200 million American adults.
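To make the "known chance of selection" idea concrete, here is a minimal sketch in Python; the frame size and names are made up for illustration.

    import random

    # Hypothetical sampling frame: every member of the target population is listed.
    frame = [f"person_{i}" for i in range(200_000)]
    k = 1000

    # Simple random sample without replacement: each member of the frame
    # has the same known chance of selection, k / N.
    sample = random.sample(frame, k)
    print(f"chance of selection for each person: {100 * k / len(frame):.2f}%")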
Most scientific samples use special techniques to be economically feasible. For example, some sampling methods for telephone interviewing do not just pick randomly generated telephone numbers. Only telephone exchanges that are known to contain working residential numbers are selected – to reduce the number of wasted calls. This still produces a random sample. Samples of only listed telephone numbers do not produce a random sample of all working telephone numbers.
But even a random sample cannot be purely random in practice since some people don't have phones, refuse to answer, or aren't home.
What area (nation, state, or region) or what group (teachers, lawyers, Democratic voters, etc.) were these people chosen from?
It is absolutely critical to know from which group the interviewees were chosen.
You must know if a sample was drawn from among all adults in the United States, or just from those in one state or in one city, or from another group. For example, a survey of business people can reflect the opinions of business people – but not of all adults. Only if the interviewees were chosen from among all American adults can the survey reflect the opinions of all American adults.
In the case of telephone samples, the population represented is that of people living in households with telephones. For most purposes, telephone households may be similar to the general population. But if you were reporting a survey on what it was like to be poor or homeless, a telephone sample would not be appropriate. Remember, the use of a scientific sampling technique does not mean that the correct population was interviewed.
Political surveys are especially sensitive to this issue.
In pre-primary and pre-election surveys, which people are chosen as the base for survey results is critical. A survey of all adults, for example, is not very useful on a primary race where only 25 percent of the registered voters actually turn out. So look for surveys based on registered voters, "likely voters," previous primary voters, and such. These distinctions are important and should be included in the story, for one of the most difficult challenges in polling is trying to figure out who actually is going to vote.
Who should have been interviewed and was not?
No survey ever reaches everyone who should have been interviewed. You ought to know what steps were undertaken to minimize non-response, such as the number of attempts to reach the appropriate respondent and over how many days.
There are many reasons why people who should have been interviewed were not. They may have refused attempts to interview them. Or interviews may not have been attempted if people were not home when the interviewer called. Or there may have been a language problem or a hearing problem.
When was the survey done?
Events have a dramatic impact on survey results. Your interpretation of a survey should depend on when it was conducted relative to key events. Even the freshest survey results can be overtaken by events. The President may have given a stirring speech to the nation, the stock market may have crashed or an oil tanker may have sunk, spilling millions of gallons of crude on beautiful beaches.
Survey results that are several weeks or months old may be perfectly valid, but events may have erased any newsworthy relationship to current public opinion.
How were the interviews conducted?
There are three main possibilities: in person, by telephone or by mail. Most surveys are now conducted by telephone, with the calls made by interviewers from a central location. However, some surveys are still conducted by sending interviewers into people's homes.
Some surveys are conducted by mail. In scientific surveys, the pollster picks the people to receive the mail questionnaires. The respondent fills out the questionnaire and returns it.
Mail surveys can be excellent sources of information, but it takes weeks to do a mail survey, meaning that the results cannot be as timely as a telephone survey. And mail surveys can be subject to other kinds of errors, particularly low response rates. In many mail surveys, more people fail to participate than do. This makes the results suspect.
Surveys done in shopping malls, in stores or on the sidewalk may have their uses for their sponsors, but they should never be treated as if they represent a public opinion survey.
Advances in computer technology have allowed the development of computerized interviewing systems that dial the phone, play taped questions to a respondent and then record answers the person gives by punching numbers on the telephone keypad. Such surveys have a variety of severe problems, including uncontrolled selection of respondents and poor response rates, and should be avoided.
What about surveys on the Internet or World Wide Web?
The explosive growth of the Internet and the World Wide Web has given rise to an equally explosive growth in various types of online surveys. Many online surveys may be good entertainment, but they tell you nothing about public opinion.
Most Internet surveys are simply the latest variation on the convenience surveys that have existed for many years. Whether the effort is a click-on Web survey, a dial-in survey or a mail-in survey, the results should be ignored. All these convenience surveys suffer from the same problem: the respondents are self-selected. The individuals choose themselves to take part in the survey – there is no pollster choosing the respondents to be interviewed.
Remember, the purpose of a survey is to draw conclusions about the population, not about the sample. In these convenience surveys, there is no way to project the results to any larger group. Any similarity between the results of a convenience sample and a scientific survey is pure chance.
The 900-number dial-in surveys may be fine for deciding whether or not Larry the Lobster should be cooked on Saturday Night Live or even for dedicated fans to express their opinions on who is the greatest quarterback in the National Football League. The opinions expressed may be real, but in sum the numbers are just entertainment. There is no way to tell who actually called in, how old they are, or how many times each person called.
Never be fooled by the number of responses. In some cases a few people call in thousands of times. Even if 500,000 calls are tallied, no one has any real knowledge of what the results mean. If big numbers impress you, remember that the Literary Digest's non-scientific sample of 12,000,000 people said Landon would beat Roosevelt in the 1936 Presidential election.
Mail-in coupon surveys are just as bad. In this case, the magazine or newspaper includes a coupon to be returned with the answers to the questions. Again, there is no way to know who responded and how many times each person did.
Another variation on the convenience sample comes as part of a fund-raising effort. An organization sends out a letter with a survey form attached to a large list of people, asking for opinions and for the respondent to send money to support the organization or pay for tabulating the survey. The questions are often loaded and the results of such an effort are always meaningless.
This technique is used by a wide variety of organizations from political parties and special-interest groups to charitable organizations. Again, if the survey in question is part of a fund-raising pitch, pitch it – in the wastebasket.
What is the sampling error for the survey results?
Interviews with a scientific sample of 1,000 adults can accurately reflect the opinions of nearly 200 million American adults. That means interviews attempted with all 200 million adults – if such were possible – would give approximately the same results as a well-conducted survey based on 1,000 interviews.
What happens if another carefully done survey of 1,000 adults gives slightly different results from the first survey? Neither of the surveys is "wrong." This range of possible results is called the error due to sampling, often called the margin of error.
This is not an "error" in the sense of making a mistake. Rather, it is a measure of the possible range of approximation in the results because a sample was used.
Pollsters express the degree of certainty of results based on a sample as a "confidence level." This means a sample is likely to be within so many points of the results one would have gotten if an interview had been attempted with the entire target population. Results are usually reported at the 95% confidence level.
Thus, for example, a "3 percentage point margin of error" in a national survey means that if the attempt were made to interview every adult in the nation with the same questions in the same way at about the same time as the survey was taken, the survey's answers would fall within plus or minus 3 percentage points of the complete count’s results 95% of the time.
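To see where a figure like "3 percentage points" comes from, here is a minimal sketch in Python using the standard large-sample formula for a proportion at 95% confidence (an illustrative assumption; pollsters' exact methods vary):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a proportion; p = 0.5 is the worst case."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (250, 1000, 4000):
        print(f"n = {n:>4}: plus or minus {margin_of_error(n) * 100:.1f} points")
    # n = 1000 gives roughly 3.1 points, the familiar "3 point" margin.
    # Note that quadrupling the sample only halves the error.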
This does not address the issue of whether people cooperate with the survey, or if the questions are understood, or if any other methodological issue exists. The sampling error is only the portion of the potential error in a survey introduced by using a sample rather than interviewing the entire population. Sampling error tells us nothing about the refusals or those consistently unavailable for interview; it also tells us nothing about the biasing effects of a particular question wording or the bias a particular interviewer may inject into the interview situation.
Remember that the sampling error margin applies to each figure in the results – it is plus or minus at least 3 percentage points for each figure in our example.
What other kinds of factors can skew survey results?
The margin of sampling error is just one possible source of inaccuracy in a survey. It is not necessarily the greatest source of possible error; we use it because it is the only one that can be quantified. And, other things being equal, it is useful for evaluating whether differences between survey results are meaningful in a statistical sense.
Question phrasing and question order are also likely sources of flaws. Inadequate interviewer training and supervision, data processing errors and other operational problems can also introduce errors. Professional polling operations are less subject to these problems than volunteer-conducted surveys, which are usually less trustworthy.
You should always ask if the survey results have been "weighted." Weighting is usually used to account for unequal probabilities of selection and to adjust slightly the demographics in the sample. You should be aware that a survey could be manipulated unduly by weighting the numbers to produce a desired result. While some weighting may be appropriate, other weighting is not. Weighting a scientific survey is only appropriate to reflect unequal probabilities of selection or to adjust the sample to independently known population values, such as census demographics.
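A minimal sketch of demographic weighting, with made-up numbers: each respondent is weighted by the ratio of their group's known population share to that group's share of the sample.

    # Illustrative values only: known census shares vs. shares observed in a sample.
    population_share = {"men": 0.49, "women": 0.51}
    sample_share = {"men": 0.40, "women": 0.60}

    # Weight = population share / sample share for the respondent's group.
    weights = {group: population_share[group] / sample_share[group]
               for group in population_share}
    print(weights)  # men weighted up (1.225), women weighted down (0.85)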
What questions were asked?
You must find out the exact wording of the survey questions. Why? Because the very wording of questions can make major differences in the results.
Perhaps the best test of any survey question is your reaction to it. On the face of it, does the question seem fair and unbiased? Does it present a balanced set of choices? Would most people be able to answer the question?
On sensitive questions – such as abortion – the complete wording of the question should probably be included in your story. It may well be worthwhile to compare the results of several different surveys from different organizations on sensitive questions. You should examine carefully both the results and the exact wording of the questions.
In what order were the questions asked?
Sometimes the very order of the questions can have an impact on the results. Often that impact is intentional; sometimes it is not. The impact of order can often be subtle.
During troubled economic times, for example, if people are asked what they think of the economy before they are asked their opinion of the president, the presidential popularity rating will probably be lower than if the order of the questions had been reversed. And in good economic times, the opposite is true.
What is important here is whether the questions that were asked prior to the critical question in the survey could sway the results. If the survey asks questions about abortion just before a question about an abortion ballot measure, the prior questions could sway the results.
 

Section II – Evaluation of Survey Research in the Social Sciences

Instructions: You are to read the following article on conflict in romantic relationships published in the Journal of Social and Personal Relationships. You are to evaluate the survey methods used by addressing the following issues: 1) Describe the scales used in this research. 2) Are the scales used reliable measures? Explain your reasoning. 3) Are the scales used valid measures? Again, explain your reasoning. 4) Describe some of the relationships among the scales used. 5) Do you believe that the methods used and conclusions reached are legitimate? Explain your reasoning.
 
Article: Klein, R. C. A., & Lamm, H. (1996). Legitimate interest in couple conflict. Journal of Social and Personal Relationships, 13, 619-626. A copy of the article can be found here (.pdf format). [Note: The pdf file contains more than just the article you need.]

 

Section III – Working with Statistics (Correlation)

Instructions: Using the data set provided below, answer the following questions: 1) Calculate the correlation coefficient between hours spent studying and GPA. 2) Calculate the degrees of freedom and determine if the correlation obtained is significant. 3) Describe the direction and magnitude of the correlation obtained. (An illustrative sketch for checking your hand calculation appears after the table.)

Student    Hours Spent Studying    GPA
A          40                      3.75
B          30                      3.00
C          35                      3.25
D           5                      1.75
E          10                      2.00
F          15                      2.25
G          25                      3.00
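If you want to check your hand calculation, here is a minimal sketch in Python (assuming SciPy is available); your report must still show the calculation itself.

    from scipy import stats

    hours = [40, 30, 35, 5, 10, 15, 25]
    gpa = [3.75, 3.00, 3.25, 1.75, 2.00, 2.25, 3.00]

    r, p = stats.pearsonr(hours, gpa)  # Pearson r and its two-tailed p-value
    df = len(hours) - 2                # degrees of freedom for a correlation: n - 2
    print(f"r = {r:.3f}, df = {df}, p = {p:.4f}")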

 

Section IV – Reasoning with Data

Instructions: Using the data provided below, critique the knowledge claim that is offered. That is, determine if the knowledge claim provided is warranted in light of the data collected and the analysis performed. (Hint: There are two major problems with the claim being offered based on the data and analysis provided.)
 
Data: Annual Wine Consumption and Heart Disease Deaths, by selected countries:

Country          Average Annual Consumption   Heart Disease Death Rate
                 (liters per person)          (deaths/thousand deaths)
Australia        2.5                          211
Austria          3.9                          167
Belg./Luxe       2.9                          131
Canada           2.4                          191
Denmark          2.9                          220
Finland          0.8                          297
France           9.1                           71
Germany (West)   2.7                          172
Iceland          0.8                          211
Ireland          0.7                          300
Italy            7.9                          107
Netherlands      1.8                          167
New Zealand      1.9                          266
Norway           0.8                          227
Spain            6.5                           86
Sweden           1.6                          207
Switzerland      5.8                          115
U.K.             1.3                          285
United States    1.2                          199

Analysis: Regression
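For reference, here is a minimal sketch of this kind of regression in Python (assuming SciPy; the exact analysis behind the claim is not given, so treat this as illustrative):

    from scipy import stats

    # Country-level figures from the table above.
    wine = [2.5, 3.9, 2.9, 2.4, 2.9, 0.8, 9.1, 2.7, 0.8, 0.7,
            7.9, 1.8, 1.9, 0.8, 6.5, 1.6, 5.8, 1.3, 1.2]     # liters per person
    deaths = [211, 167, 131, 191, 220, 297, 71, 172, 211, 300,
              107, 167, 266, 227, 86, 207, 115, 285, 199]    # per 1,000 deaths

    result = stats.linregress(wine, deaths)  # least-squares fit: death rate ~ wine consumption
    print(f"slope = {result.slope:.1f}, r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")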

 

Knowledge Claim: If individuals drink wine, it will reduce the risk of heart disease.

 

END OF ASSIGNMENT