Some Basic Terms Related to Experimental Method
Sample:
A strategically and systematically identified sub-group of people or events that
accurately represents the total population of persons or the universe
of events being studied. To sample means to identify subjects or events for
study in a systematic way.
Randomization:
The assignment of objects (subjects, treatments, groups) of a universe to subsets
of that universe in such a way that every member of the universe has an equal
probability of being chosen for assignment. Usually carried out by using a table of
random numbers.
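The logic of random assignment can be sketched in a few lines; software random-number
generators now commonly stand in for tables of random numbers. The subject labels and
group sizes below are invented for illustration:

```python
import random

# Hypothetical pool of eight subjects (labels are invented for illustration).
subjects = ["S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8"]

rng = random.Random(42)  # fixed seed so the example is repeatable
shuffled = rng.sample(subjects, k=len(subjects))  # every ordering equally likely

treatment = shuffled[:4]  # first half assigned to the treatment group
control = shuffled[4:]    # second half assigned to the control group
```

Because the shuffle gives every subject the same probability of landing in either group,
known and unknown subject characteristics tend to balance across groups.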
Variables:
Independent variable: A variable that is independent of the phenomenon being
studied or measured.
Dependent variable: A variable which depends for its existence on the influence of
the independent variable(s).
Extraneous variable: A variable which may influence the dependent variable but is
not, or cannot be, identified by the researcher.
Intervening variable: A variable which is observed to have an influence during the
experiment but cannot be controlled or eliminated.
Treatment and Control:
The process of introducing different variables; also called manipulation of
variables. Control of experimental conditions means minimizing the
effects of possible extraneous variables.
Observation:
In addition to simply viewing behavior or events, this term is generally used to
denote any form of assessment in addition to visual observation (e.g., pre- and
post-tests).
Validity:
How accurately cause and effect were established in a study.
Internal validity: Those factors which affect the degree to which the research
measures what it purports to measure. Threats to internal validity may include:
history of events in the research process; maturation of subjects over time; effects
of testing upon subjects; errors of measurement or observation; biased selection of
subjects; statistical regression.
External validity: Those factors which pertain to the extent to which the results of a
study are generalizable to other situations. Such factors may include: extent of
randomization of subjects; effects of pretesting on subjects; effects of the experi-
mental setting; effects of multiple treatments in the research process.
Reliability:
How consistently an instrument measures what it attempts to measure. The
statistical method for inferring reliability is statistical correlation.
1.00 = perfect reliability
0.00 = absence of reliability
Hypothesis:
A tentative, empirically testable explanation of phenomena that gives direction to
research; the "hunches" from which the researcher works. Each
hypothesis is usually related to an important variable or set of relationships within
the study. Research hypotheses are often converted into negative statements, or
"null hypotheses," which state that no significant difference exists between
experimental and control groups; rejection of the null hypothesis and acceptance
of an original or alternative hypothesis indicates that a difference does indeed
exist, using standard levels of probability called statistical significance.
Statistical significance:
If results would have occurred by chance less than 5 times in 100, we say that these
results are significant at the .05 level (also ".05 level of confidence" or ".05 alpha
level"). If the probability is less than .001 or .0001, that is usually stated, but most
researchers are satisfied with significance at the .01 level. Remember: The larger
the sample, the greater the likelihood that differences are statistically significant.
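One way to make "occurred by chance" concrete is a permutation test, shown here as a
hedged sketch with invented post-test scores: it counts how often randomly regrouping
the pooled scores produces a difference at least as large as the one observed.

```python
import random
from statistics import mean

# Invented post-test scores for two small groups.
experimental = [82, 85, 88, 90, 91]
control = [74, 76, 79, 80, 83]
observed = mean(experimental) - mean(control)

# Shuffle the pooled scores repeatedly; count chance differences >= observed.
rng = random.Random(0)
pooled = experimental + control
hits = 0
trials = 10_000
for _ in range(trials):
    rng.shuffle(pooled)
    if mean(pooled[:5]) - mean(pooled[5:]) >= observed:
        hits += 1

p_value = hits / trials  # a value below .05 is significant at the .05 level
```

With only ten scores this is a toy; real studies use larger samples and standard tests
(t-tests, analysis of variance), but the logic behind the .05 threshold is the same.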
Standard deviation:
A measure of variability. Standard deviation is based on the mean, or average, of the
differences between each score and the arithmetic mean of the distribution. In a
normal distribution (the standard bell curve), 68.3% of all results will fall within +/- 1 standard
deviation (1SD) from the mean; an additional 27.2% will fall within +/- 2SD from the
mean; and 99.7% of all results will fall within +/- 3SD from the mean.
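A brief sketch with invented scores, using Python's statistics module, of computing the
mean and population standard deviation and counting results within one SD of the mean:

```python
from statistics import mean, pstdev

scores = [61, 67, 70, 72, 74, 75, 78, 80, 83, 90]  # invented data

mu = mean(scores)
sd = pstdev(scores)  # population standard deviation, based on the mean

within_1sd = [s for s in scores if mu - sd <= s <= mu + sd]
# In a normal distribution, about 68.3% of scores fall in this band;
# a small sample like this one only approximates that figure.
```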
Measures of central tendency:
Mean, or average
Median: the point above and below which an equal number of results lie, without
regard for the size of those results
Mode: the most frequent finding; sometimes called the probability average
because it is the score most likely to be encountered in any distribution.
A distribution with two modes is called bimodal; one with more than two is multimodal.
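A short sketch of the three measures on one invented distribution:

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 4, 5, 7, 11]  # invented data

average = mean(scores)   # arithmetic average
middle = median(scores)  # equal numbers of scores above and below
frequent = mode(scores)  # most frequent score
```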
-- from Merriam and Simpson, A Guide to Research for Educators
and Trainers of Adults (1989) and others. JWK, 1/95
Models for Tutorials and Guided Readings
1. Identify readings in advance. You could do this by doing a literature review; collecting and
sifting through course syllabi; reviewing and selecting from among entries in annotated
bibliographies; or asking your PA or other assessor/tutor to assign readings. This is a good model
if the body of literature is fairly standardized, and/or the subject matter is quite new to you.
Decide where and when to stop before you start.
2. Select an area or question of interest and an initial reading. Keep careful notes about what you
liked or disliked, agreed or disagreed with, or felt was unclear or curious in the reading. When
you meet with your assessor/tutor, discuss these and identify further reading based on emerging
interest. This is a good model if you are already familiar with a topic and want to go into some
depth in areas of debate or controversy. It requires considerable expertise on the part of the
assessor/tutor to know the literature, or considerable energy on your part to keep building your
bibliography as you go. It can be hard to know when to stop.
Ways to Document Tutorials and Guided Readings
1. Write a paper based on readings.
2. Have an oral examination with assessor/tutor.
3. Create annotated bibliography of what was read.
4. Keep a journal containing reading notes, emerging questions,
reactions to material, etc.
Remember: Format for tutorial and documentation of your learning
should be negotiated in advance with whoever will assess your learning.
Case Study Method
For further reference, see:
Stake, Robert E. "Case Studies." In Handbook of Qualitative Research, edited by
Norman K. Denzin and Yvonna S. Lincoln. Thousand Oaks, CA: Sage Publications,
1994.
Merriam, Sharan B. and Simpson, Edwin L. A Guide to Research for Educators and Trainers
of Adults. Malabar, FL: Krieger Publishing Company, 1989.
Definition: A case study is a "bounded system," that is, a specific and carefully defined set of
phenomena. In writing a case study, the boundaries must be carefully defined.
A case study is an integrated, functioning system, and behavior within it has identifiable
patterns. A single case cannot be fully understood without studying others, but for the
purposes of a case study, others are usually not considered.
In contrast to surveying a few variables across a range of units, case study focuses on many variables
and the interconnections and interplay among them in a single unit. Longitudinal case studies may
be done; one variable of interest is change over time.
Three approaches to case study:
1. Intrinsic interest: that is, a particular event or system is interesting in and of itself.
Study is undertaken because the researcher wants a better understanding of a specific, carefully
defined phenomenon in all its complexity. Purpose is not theory building, though that may happen later.
2. Instrumental case study: that is, a case study which is examined to provide insight into
larger issues or refinement of theories.
While a single case or a small number of cases does not, as a general rule, result in good theory
building, one good negative case often sets limits on the generalizability of existing theory.
3. Collective or critical case study: that is, a group of cases which are studied together
because it is believed that understanding them will lead to better understanding about a still larger
collection of cases.
Example (SNL critical cases): Six students selected from the first two clusters to represent a
variety of focus areas, levels of professional experience, and performance in the program.
What you must do to write a case study:
1. Set boundaries on the case, conceptualizing the object of your study.
2. Select and articulate the phenomena, themes, or issues (that is, the research questions)
you intend to emphasize.
3. Seek patterns of data to develop the issues, altering and adapting data collection
methods as appropriate.
4. Triangulate key observations and bases for interpretation.
Triangulate: To reduce the likelihood of misinterpretation by the researcher(s), gather multiple
perceptions of a given phenomenon to clarify its meaning, identify the different ways it may be
interpreted, and verify repeatability.
5. Consider a variety of alternative interpretations for your data. You may choose to
compare with other cases, but this is not required.
6. Develop assertions or generalizations about the case through extensive written
narrative.
Limitations:
1. Case studies can be expensive and time-consuming.
2. Training in observation, interviewing techniques, and documentary analysis is required.
3. Because the written product tends to be long and involved, policy makers rarely take
time to read it.
4. Findings are difficult to generalize, and generalizations must be carefully defined and
circumscribed.
JWK, 1995.