Introduction to Research Methods and Program Evaluation by Dr. David Abrahams

Introduction to Research
"If we could first know where we are, and wither we are trending, we could better judge what to do and how to do it." Abraham Lincoln

Misuse or Misspoken use of Research
Frequently you find published research results in newspapers, magazines, and on television that fail to explain how the data were gathered or how the results were analyzed. Related to this are issues of results being misused, embellished, or taken out of context. For example, during an election, news and television organizations conduct quick polls and surveys. They report and discuss the findings as valid, unbiased indicators of a candidate's position or the public's views on an issue. Not once during the broadcast, or while you are reading the article, do they explain how the data were gathered or what analyses were performed to produce the results they are reporting. All of these individuals and groups claim to have conducted some form of research but fail to validate their findings.

It is important to qualify the point I am making: I am not saying that these activities are unimportant or lack value to the gatherers of the information. However, these activities lack rigor and do not meet the definition or the criteria of research. At best, this type of research is informative, an indicator of an assumption, or a mechanism that alerts you to group and social trends, issues, and impressions.

It is important to make the distinction because of the value people attach to findings when they read, or are told, that the findings are the result of research. The implication is that the results or findings are reliable and valid.
RELIABILITY is an indication of how sound your research is and applies to both the design and the methods of your research; it is a measure of whether the results are replicable. VALIDITY refers to the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. Validity is a measure of accuracy and of whether the instruments of measurement actually measure what they are intended to measure. Research can be affected by internal or external factors that impact its reliability and validity. Controlling all possible factors that threaten the reliability and validity of the research is a primary responsibility of every good researcher.

The function of research is to either create or test a hypothesis. Research is the instrument used to test whether a hypothesis holds. It is the method by which you identify a problem, form a hypothesis, design an experiment, test the hypothesis, analyze the results, and formulate and report conclusions. There are different ways of conducting research; however, any method you use will be based on the systematic collection and analysis of data. The emphasis here is on the word systematic.

Research can be broadly defined as a form of systematic inquiry that contributes to knowledge. It is also important to understand what is meant by the term research. Research is about finding out. It is about searching systematically for solutions to problems. It is about a structured process and rules that guide you to the results. It is also about helping you to evaluate the research of others.

What is Research?
“Research is formalized curiosity. It is poking and prying with a purpose.” Zora Neale Hurston

Research is the cornerstone of any science, including both the hard sciences such as chemistry and physics and the social (or soft) sciences such as psychology, management, and education. It refers to organized, structured, and purposeful firsthand investigation aimed at discovering, interpreting, and revising human knowledge about different aspects of the world. The setting may vary from a natural, real-world environment to a highly constrained and carefully controlled laboratory environment.

Many argue that the structured attempt at gaining knowledge dates back to Aristotle and his identification of deductive reasoning. Deductive reasoning refers to a structured approach utilizing an accepted premise, a statement that is assumed to be true and from which a conclusion can be drawn.

Deductive reasoning utilizes a major premise (a universal affirmative or negative proposition that is assumed to be true), a minor premise (a particular affirmative or negative proposition that is assumed to be true), and a conclusion. This method of gaining knowledge, going from the general to the specific and drawing a conclusion, can also be called a syllogism. For example:

All apples are fruit.
All fruits grow on trees.
Therefore all apples grow on trees.
Or
All apples are fruit.
Some apples are red.
Therefore some fruit is red.

Intuitively, one might deny the major premise and hence its conclusion; yet anyone accepting the premises accepts the conclusion. Deductive reasoning is dependent on its premises, and research is dependent on its hypothesis. Both have a similar weakness: a false premise or hypothesis can lead to a false result, and inconclusive premises or hypotheses will yield an inconclusive conclusion. For example, the premises:

Dolphins are mammals, not fish.
Dolphins swim in the ocean.
Therefore, fish do not swim in the ocean.

In this case, the premise that "dolphins are mammals, not fish" is undistributed, but ends up (improperly) as part of the conclusion. The part of the conclusion that is wrong is the predicate "do not swim in the ocean." As a result, we cannot properly conclude that fish do not swim in the ocean. Or, consider the hypothesis:

A survey indicates that the bird population in a certain area is declining. There is a statistical correlation between the increase in the human population and the decline of that bird population. Therefore, the decline in the bird population is due to the increase in the human population.

There may be a correlation, but without looking at other factors including disease, parasites, climate change and chemical threats, a valid case cannot be made that the decline in the bird population is due to increases in the human population.

Statistics is used to examine or study anything of interest: an item, a person, how a person plays baseball, whether something is safe, how well a business is doing, or how much an endangered animal's population has declined or grown in the past five years and whether that change is significant. For example, suppose we are interested in who is winning the Democratic primary elections. We cannot ask every single person who voted for their opinion!

However, we can ask a selection of people. This raises a number of questions:
• What group of people do we choose?
• Who should be in the group?
• What can information gathered from a small group of people tell us about the voting outcome in the population in general?
• Won't the measured effect or outcome depend on which people are in the group? Won't it change from one group to the next? If so, how can any useful information be found?
• How many people should be in such a group to obtain useful information?
To answer these questions, we need statistics and we need to know how to do research.
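To make the idea concrete, here is a minimal sketch of my own (not part of the original text, with invented numbers) in Python: a hypothetical population of 100,000 voters is polled repeatedly with different randomly chosen groups of 1,000 people. Each poll gives a slightly different answer, yet all cluster near the true value.

```python
# Illustrative sketch with invented numbers: why a small random sample
# carries useful information about a large population.
import random

random.seed(42)

# Hypothetical population of 100,000 voters; 52% truly support candidate A.
population = [1] * 52_000 + [0] * 48_000

for trial in range(5):
    sample = random.sample(population, 1_000)  # ask 1,000 randomly chosen people
    estimate = sum(sample) / len(sample)
    print(f"Poll {trial + 1}: estimated support = {estimate:.1%}")

# The estimates differ from group to group (sampling variation), but all
# fall close to the true 52%, which is what makes polling a sample useful.
```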

Introduction to Research Design
"It is the theory that decides what can be observed." Albert Einstein

Experiments, if conducted correctly, can enable a better understanding of the relationship between a causal hypothesis and a particular phenomenon of theoretical or practical interest. One of the biggest challenges is deciding which research methodology to use. "Research that tests the adequacy of research methods does not prove which technique is better; it simply provides evidence relating to the potential strengths and limitations of each approach." (Howard, 1985).

In research and evaluation, a true experimental design (also known as a random experimental design) is the preferred method of research. It provides the highest degree of control over an experiment, enabling the researcher to draw causal inferences with a high degree of confidence.

A true experimental design is a design in which subjects are randomly assigned to program and control groups, so that every member of the target population has an equal chance of being selected for the sample. This property makes the design the strongest method for establishing equivalence between a program group and a control group.

Quasi-experimental group design differs from true experimental group design by the omission of random assignment of subjects to a program and control group. As a result, you cannot be sure that the program and control groups are equivalent.

The use of random experimental design to randomly assign subjects to a program and control group controls for all threats to internal validity. Issues of internal validity arise when the groups in a study are nonequivalent: your ability as a researcher to say that your treatment caused the effect is compromised.

In most causal hypothesis tests, the central inferential question is whether any observed outcome differences between groups are attributable to the program or instead to some other factor. In order to argue for the internal validity of an inference, the analyst must attempt to demonstrate that the program, and not some plausible alternative explanation, is responsible for the effect. In the literature on internal validity, these plausible alternative explanations or factors are often termed "threats" to internal validity (Trochim, 1997).

Let us consider an instance in which an investigator wishes to determine whether a program designed to reduce prejudice is effective. In this instance, the independent variable is a lecture on prejudice for grammar school students. For the dependent measure, the researcher will use a standard self-report test of prejudice. To conduct the study, the researcher selects a group of students from a local grammar school and administers the prejudice questionnaire to all of them. A week later, all the students receive the lecture on prejudice and, after the lecture, are tested again. The next step is to find out whether the prejudice scores collected before the intervention (call them the pretest scores) are substantially higher than the scores obtained following the lecture (the posttest scores). The researcher might conclude that, if the posttest responses are lower than the pretest responses, the intervention has reduced subjects' prejudice. As you can see, what the researcher has done is assume that changes in the dependent variable were caused by the introduction of the independent variable. But what possibilities other than the operation of the independent variable on the dependent variable might explain the observed relationship (Campbell & Stanley, 1963)? The section on experimental design explains several such threats to internal validity.

This is an important point to note. The research designs and methods used in an evaluation have a direct effect on whether or not a program is perceived as effective. Did the cause really produce the effect, or was it some other plausible explanation? If the cause produced the effect, can it be generalized to a different group in another location? These are questions of validity. "The first thing we have to ask is: 'validity of what?' When we think about validity in research, most of us think about research components. We might say that a measure is a valid one, or that a valid sample was drawn, or that the design had strong validity. All of those statements are technically incorrect. Measures, samples and designs don't 'have' validity -- only propositions can be said to be valid. Technically, we should say that a measure leads to valid conclusions or that a sample enables valid inferences, and so on. It is a proposition, inference or conclusion that can 'have' validity" (Trochim, 1997).


Validity
“We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Isaac Newton

Internal Validity and its Most Significant Threats
Threats to validity can be internal, external, or both. A threat to validity is, by definition, any factor that influences the results of the experiment. In research and evaluation, internal validity refers to the degree to which the treatment or intervention, rather than extraneous factors, effects change in the dependent variable. The greater the researcher's ability to attribute the effect to the cause rather than to extraneous factors, the higher the degree of confidence that the treatment or intervention caused the effect.

Internal validity is only relevant in studies that try to establish a causal relationship; it is not relevant in most observational or descriptive studies (Trochim, 2006). Controlling for potentially confounding variables minimizes the potential for an alternative explanation of the treatment effects. The most significant threats to internal validity are: history, maturation, testing, instrumentation, regression, selection, and experimental mortality.

History
History becomes a threat when other factors external to the subjects (in addition to the treatment variable) occur by virtue of the passage of time. For example, the reported effect of a year-long, institution-specific program to improve medical resident prescribing and order-writing practices may have been confounded by a self-directed continuing-education series on medication errors provided to residents by a pharmaceutical firm's medical education liaison.

Maturation
The maturation threat can operate when biological or psychological changes occur within subjects and these changes may account in part or in total for effects discerned in the study. For example, a reported decrease in emergency room visits in a long-term study of pediatric patients with asthma may be due to outgrowing childhood asthma rather than to any treatment regimen imposed. Both history and maturation are more of a concern in studies of children and in longitudinal studies.

Testing
The testing threat may occur when changes in test scores occur not because of the intervention but because of repeated testing. This is of particular concern when researchers administer identical pretests and posttests. Researchers whose subjects perform skill-based tasks or tests of memory, IQ, or manual dexterity must take the testing threat into account when designing their posttest. For example, the reported improvement in medical resident prescribing behaviors and order-writing practices in the study previously described may have been due to repeated administration of the same short quiz. That is, the residents simply learned to provide the right answers rather than truly achieving improved prescribing habits.

Instrumentation
The instrumentation threat is in operation when study results are due to changes in instrument calibration or observer changes rather than to a true treatment effect; this is especially a concern when the measuring instruments are human observers. For example, a human observer might become more proficient as an observer, noticing patterns and nuances in an observed subject that existed at pretest but are only noticed at posttest. As a result, the observer incorrectly attributes the observed change to the treatment.

Regression
Statistical regression is a threat to internal validity when subjects are assigned to treatments on the basis of extreme (low or high) scores on a test. On retest, the scores of extreme scorers tend to regress toward the mean even without treatment. For example, if a group of subjects is recruited on the basis of extremely high or low scores and an educational intervention is conducted, any post-intervention improvement could be due, partly or entirely, to regression rather than to the educational treatments presented in the program. Conceptually, part of an extremely high initial test score is attributable to measurement error (reflected in the variability of test scores). When that error varies randomly on the next test, the high scores are no longer as high as before. The result is regression toward the mean.
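The following short Python sketch (my own illustration, with invented numbers) demonstrates the regression threat: subjects selected for extremely high pretest scores score closer to the mean on retest even though no treatment is given.

```python
# Regression toward the mean: select extreme scorers, retest, no treatment.
import random

random.seed(1)

def observed_score(true_ability):
    """An observed score is true ability plus random measurement error."""
    return true_ability + random.gauss(0, 10)

# 1,000 subjects drawn from the same ability distribution (mean 100).
abilities = [random.gauss(100, 15) for _ in range(1000)]
pretest = [(a, observed_score(a)) for a in abilities]

# Select the top ~5% of pretest scorers (the "extreme" group).
cutoff = sorted(score for _, score in pretest)[950]
extreme = [(a, s) for a, s in pretest if s >= cutoff]

retest = [observed_score(a) for a, _ in extreme]

print("Extreme group, mean pretest:", sum(s for _, s in extreme) / len(extreme))
print("Extreme group, mean retest: ", sum(retest) / len(retest))
# The retest mean drops back toward 100 with no intervention at all; in a
# study, that drop could be mistaken for (or could mask) a treatment effect.
```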

Selection
The selection threat is of utmost concern when subjects cannot be randomly assigned to treatment groups, particularly if groups are unequal in relevant variables before treatment intervention. For example, one obstetrics and gynecology clinic's patients receive a pharmacy-based educational intervention and another clinic's patients receive a mailed pamphlet; both methods are designed to encourage calcium supplementation. When the outcome is measured at the end of the study, it may be confounded by the fact that the groups were not equal with respect to relevant variables (e.g., age, race, income status, hysterectomy status, and menopausal status) before the educational program was implemented.

Experimental Mortality
Experimental mortality, also known as attrition, occurs when subjects drop out of an experiment or treatment before the study is completed. Experimental mortality is a threat to internal validity when there is a differential loss of subjects from comparison groups, resulting in unequal groups (Campbell and Stanley, 1963: 5). One example is a study designed to compare the effectiveness of a drug on a randomly selected group of sick participants: one group receives the drug and the other group receives a placebo. If the subjects with the most severe symptoms drop out of the active treatment group, the treatment may appear more effective than it really is.

External Validity
External validity is the degree to which the results of a study can be generalized to a population other than the one studied. External validity is widely treated as an issue to be addressed through methodological procedures. In a study, it is usually impossible to measure an entire population; as a result, measurements are taken from a sample of that population. If the subjects in a sample are not randomly selected from the population, then their particular demographic characteristics, for example their household, age, socio-economic, ethnic, racial, religious, and/or income characteristics, may bias their performance, and the study's results may not be applicable to the population or to another comparable group.
The purpose of research is to learn something about the behavior of people. This knowledge is useful only to the extent that we can generalize the information to a larger population. However, the more we control the environment of the subjects (the subpopulation) in a study, the more the subjects in the experimental and control groups can become different from those in the general population. Consequently, the results may have high internal validity but lack external validity, meaning that they cannot be generalized beyond the particular groups used in the experiment.
Random assignment to treatment and control groups addresses the threats to internal validity but often creates threats to external validity. When designing an experiment, each experimenter has to decide which requirement is more important, internal or external validity, and seek a balance between the two. The design and execution of an experiment is the most effective way of testing for the effects of one variable on another variable.
Threats to External Validity
Del Siegle, Ph.D., Neag School of Education, University of Connecticut. Web:
http://www.gifted.uconn.edu/siegle/research/Samples/externalvalidity.html

Population Validity is defined as the extent to which the results of a study can be generalized from the specific sample that was studied to a larger group of subjects. Its threats are:
The extent to which one can generalize from the study sample to a defined population. If the sample is drawn from an accessible population rather than from the target population, generalizing the research results from the accessible population to the target population is risky.
The extent to which personological variables interact with treatment effects. Thus, if the study is an experiment, different results might be found with students at different grade levels (a personological variable).

Ecological Validity is defined as the extent to which the results of an experiment can be generalized from the set of environmental conditions created by the researcher to other environmental conditions (settings and conditions). Its threats are:

Explicit description of the experimental treatment (not sufficiently described for others to replicate). If the researcher fails to adequately describe how he or she conducted a study, it is difficult to determine whether the results are applicable to other settings.
Multiple-treatment interference (catalyst effect); if a researcher were to apply several treatments, it is difficult to determine how well each of the treatments would work individually. It might be that only the combination of the treatments is effective.

Hawthorne effect (attention causes differences); subjects perform differently because they know they are being studied. "...External validity of the experiment is jeopardized because the findings might not generalize to a situation in which researchers or others who were involved in the research are not present" (Gall, Borg, & Gall, 1996, p. 475)

Novelty and disruption effect (anything different makes a difference); a treatment may work because it is novel and the subjects respond to the uniqueness rather than to the actual treatment. The opposite may also occur: the treatment may not work because it is unique, but given time for the subjects to adjust to it, it might have worked.

Experimenter effect (it only works with this experimenter); the treatment might have worked because of the person implementing it. Given a different person, the treatment might not work at all.

Pretest sensitization (pretest sets the stage); a treatment might only work if a pretest is given. Because they have taken a pretest, the subjects may be more sensitive to the treatment. Had they not taken a pretest, the treatment would not have worked.
Posttest sensitization (posttest helps treatment "fall into place"); the posttest can become a learning experience. "For example, the posttest might cause certain ideas presented during the treatment to 'fall into place'" (p. 477). If the subjects had not taken a posttest, the treatment would not have worked.

Interaction of history and treatment effect (...to everything there is a time...); not only should researchers be cautious about generalizing to other populations, caution should also be taken in generalizing to other time periods. As time passes, the conditions under which treatments work change.

Measurement of the dependent variable (it may only work with multiple-choice tests); a treatment effect may only be evident with certain types of measurement. A teaching method may produce superior results when its effectiveness is tested with an essay test, but show no differences when effectiveness is measured with a multiple-choice test.
Interaction of time of measurement and treatment effect (it takes a while for the treatment to kick in); it may be that the treatment effect does not occur until several weeks after the end of the treatment. In this situation, a posttest at the end of the treatment would show no impact, but a posttest a month later might show an impact.

Bracht, G. H., & Glass, G. V. (1968). The external validity of experiments. American Education Research Journal, 5, 437-474.
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction. White Plains, NY: Longman.

Random Assignment
"...from the same principles, I now demonstrate the frame of the System of the World." Isaac Newton (1642-1727)
What is necessary for an assignment to be considered random? The most important requirement is that each participant or subject in the study have an equal chance of being assigned to each group. Statistically, "equal chance" is expressed as ".5" or "50%." One method is to randomly assign subjects to two groups by flipping a coin: if the result is heads, the subject is assigned to one group; if tails, to the other. A more efficient method is to use a personal computer with a statistical program such as Minitab or SPSS to randomly assign the subjects to groups.
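As a sketch of what such a program does, the following Python fragment (an illustration of mine, not tied to Minitab or SPSS) randomly assigns a roster of subjects to two equal-sized groups. Shuffling and splitting is a common computerized stand-in for repeated coin flips, with the added benefit of equal group sizes.

```python
# Minimal random-assignment sketch for a hypothetical roster of subjects.
import random

random.seed(2024)

subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]

shuffled = subjects[:]      # copy so the original roster is untouched
random.shuffle(shuffled)    # each ordering is equally likely

midpoint = len(shuffled) // 2
program_group = shuffled[:midpoint]
control_group = shuffled[midpoint:]

print("Program group:", program_group)
print("Control group:", control_group)
```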
A random sample is not a complete representation of the population from which it was drawn; the random variation in the results is known as sampling error. Sampling error, or estimation error, is the error caused by observing a sample instead of the whole population. Mathematical theory is available to assess the sampling error: estimates obtained from random samples are accompanied by measures of the uncertainty associated with the estimate. This can take the form of a standard error, or, if the sample is large enough for the central limit theorem to take effect, confidence intervals may be calculated.
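As a hedged example of these uncertainty measures, the sketch below (invented numbers, large-sample normal approximation) computes the standard error of a sample proportion and a 95% confidence interval.

```python
# Standard error and 95% confidence interval for a sample proportion.
import math

n = 1000          # sample size
successes = 520   # e.g., respondents favoring a candidate

p_hat = successes / n
standard_error = math.sqrt(p_hat * (1 - p_hat) / n)

z = 1.96  # normal critical value for a 95% confidence level
lower, upper = p_hat - z * standard_error, p_hat + z * standard_error

print(f"Estimate: {p_hat:.3f}")
print(f"Standard error: {standard_error:.4f}")
print(f"95% CI: ({lower:.3f}, {upper:.3f})")
```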
The following link is an exercise designed to teach about random sampling and assignment: Random Assignment Tutorial. If you would like to generate your own random numbers for your project, Research Randomizer provides a form to instantly generate random numbers.

Design Notation
"The greatest challenge to any thinker is stating the problem in a way that will allow a solution." Bertrand Russell

Design notation is a way to indicate the number of factors and the number of levels of each factor. This system provides the researcher with all the information necessary for describing an experimental design. Note that design notation is a very flexible system with respect to symbol definition: in order to accurately interpret the results of another researcher's work that utilizes design notation, that researcher must provide a legend defining the meaning of the symbols. The elements of design notation are:


· “O” is the symbol for an observation or measurement.
· “X” is the symbol for a program or treatment group.

· The “Groups” (program and control) are each given their own line for their notation symbol(s).
For example, if there are three lines there are three groups.

· “Assignment groups” - at the beginning of each line you will see letters (called notation symbols); the first letter describes how the group was assigned:

R = RANDOM ASSIGNMENT
N = NONEQUIVALENT GROUP DESIGN
C = CUTOFF POINT FOR ASSIGNMENT

Note: If you do not see one of the above three letters on a notation line and the line begins with an “O”, that line indicates a nonequivalent group.
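To show how the legend is applied, here is a small Python sketch of my own (not part of the standard notation system) that expands a notation line into words:

```python
# Expand a design-notation line into a plain-language description.
ASSIGNMENT = {"R": "random assignment", "N": "nonequivalent groups",
              "C": "cutoff-point assignment"}
EVENTS = {"O": "observation/measurement", "X": "treatment"}

def describe(line):
    symbols = line.split()
    if symbols and symbols[0] in ASSIGNMENT:
        parts = [ASSIGNMENT[symbols[0]]]
        symbols = symbols[1:]
    else:
        parts = ["nonequivalent group (no assignment letter)"]
    parts += [EVENTS.get(s, s) for s in symbols]
    return " -> ".join(parts)

print(describe("R O X O"))  # randomly assigned, pretest, treatment, posttest
print(describe("O X O"))    # one-group pretest-posttest, no assignment letter
```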


Experimental (Notation) Designs and Their Meaning
"To understand is to perceive patterns." Isaiah Berlin, Historical Inevitability

There are three types of experimental designs. They are:
* Pre-experimental designs
* True experimental designs
* Quasi-experimental designs
Pre-experimental designs lack random assignment to the program group and the control group. This class of designs has some inherent weaknesses in terms of establishing internal validity. The better designs are called true experimental designs and quasi-experimental designs. True experimental designs are more complex and use randomization and other techniques to control the threats to internal validity. Quasi-experimental designs (Trochim, 2006) are special designs for use in the approximation of true experimental control in nonexperimental settings. The closer your nonequivalent group approximates a true experimental population, the stronger the internal validity.

Pre-experimental Designs and Their Meaning
Pre-experimental Designs
On the surface, the design below appears to be adequate. The subjects are pretested, exposed to a treatment, and then posttested. It would seem that any differences between the pretest measures and the posttest measures would be due to the program treatment.


The One-Group Pretest-Posttest Design
Experimental group: O X O
However, there are serious weaknesses in this design. With the exception of the selection and mortality threats to internal validity, which are not factors due to the lack of a control group, this design is subject to five other threats to internal validity. If a historical event related to the dependent variable intervenes between the pretest and the posttest, its effects could be confused with those of the independent variable. Maturation changes in the subjects could also produce differences between pretest and posttest scores. If the same paper-and-pencil measure is used at both pretest and posttest, practice effects could shift scores from pretest to posttest, resulting in a testing threat. Regardless of the measurement process utilized, instrumentation changes could produce variation in the pretest and posttest scores. Finally, if the subjects were selected because they possessed some extreme characteristic, differences between pretest and posttest scores could be due to regression toward the mean.
In all of these cases, variation in the dependent variable produced by one or more of the validity threats could easily be mistaken for variation due to the independent variable. The fact that plausible alternative explanations cannot be ruled out makes it very difficult to say with any confidence that the treatment given caused the observed effect.
The next pre-experimental design involves comparing one group that experiences the treatment with another group that does not.
Experimental group: X O
Control group: O
In considering this design, it is important to recognize that the comparison group that appears to be a control group is not, in the true sense, a control group. The major validity threat to this design is selection. Note that the absence of random assignment (the omission of the letter "R") indicates that the comparison group is nonequivalent. In the above design, the comparison group is picked up only for the purpose of comparison; there is no assurance of comparability between it and the experimental group. For example, we might wish to test the impact of a new math program by comparing a school in which the program exists with one that does not have the program. Any conclusions we might reach about the effects of the program might be inaccurate because of other differences between the two schools.
Despite their weaknesses, pre-experimental designs are used when resources do not permit the development of true experimental designs. The conclusions reached from this type of design should be regarded with the utmost caution and the results viewed as suggestive at best (Dooley, 1990).

True Experimental Designs and Their Meaning
True Experimental Designs
Probably the most common design is the Pretest-Posttest Control Group Design with random assignment. This design is used so often that it is frequently referred to by its popular name: the "classic" experimental design. In a true experimental design, the proper test of a hypothesis is the comparison of the posttests between the treatment group and the control group.

Experimental group: R O X O
Control group: R O O
This design utilizes a control group and random assignment to equalize the comparison groups, which eliminates all the threats to internal validity except mortality. Because of this, we can have considerable confidence that any differences between the treatment group and the control group are due to the treatment.
Why are threats to internal validity removed by this design? History is removed as a rival explanation of differences between the groups on the posttest because both groups experience the same events. Maturation effects are removed because the same amount of time passes for both groups. Instrumentation threats are controlled because, although any unreliability in the measurement could cause a shift in scores from pretest to posttest, both groups would experience the same effect. By removing threats to internal validity, you maintain equivalence between the groups. This enables you to conclude with a high degree of confidence that your treatment, and not some plausible alternative explanation, caused the observed effect.
With respect to regression, the classic experimental design controls for regression through random assignment of subjects with extreme characteristics. This ensures that whenever regression does take place, both groups experience its effect equally; regression toward the mean should not, therefore, account for any differences between the groups on the posttest. Randomization also controls for the selection threat to internal validity by making sure that the comparison groups are equivalent.
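As an illustration of the "proper test" described above, the sketch below (invented posttest scores; assumes the SciPy library is available) compares posttest scores between the randomly assigned groups with an independent-samples t-test:

```python
# Compare posttests of treatment vs. control groups (invented data).
from scipy import stats

treatment_posttest = [78, 82, 75, 88, 91, 84, 79, 86]
control_posttest   = [70, 74, 68, 77, 72, 75, 69, 73]

t_stat, p_value = stats.ttest_ind(treatment_posttest, control_posttest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value means the posttest difference is unlikely to be chance;
# with random assignment, it can be attributed to the treatment with
# greater confidence.
```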

Another true experimental design is the Solomon Four-Group Design, which is more sophisticated in that four different comparison groups are used.
Experimental group 1: R O X O
Control group 1: R O O
Experimental group 2: R X O
Control group 2: R O

The major advantage of the Solomon design is that it can tell us whether changes in the dependent variable are due to some interaction effect between the pretest and the treatment. For example, let's say we wanted to assess the effect on attitude about police officers (the dependent variable) after receiving positive information about a group of police officers' community service work (the independent variable). During the pretest, the groups are asked questions regarding their attitudes toward police officers. Next, they are exposed to the experimental treatment: newspaper articles reporting on civic deeds and rescue efforts of members of the police department.

If experimental group 1 scores lower on the attitude test than control group 1, it might be due to the independent variable. But it could also be that filling out a pretest questionnaire has sensitized people to the difficulties of being a police officer. The people in experimental group 1 are alerted to the issues, and they react more strongly to the experimental treatment than they would have without such pretesting. If this is true, then experimental group 2 should show less change than experimental group 1. If the independent variable has an effect separate from its interaction with the pretest, then experimental group 2 should show more change than control group 1. If control group 1 and experimental group 2 show no change but experimental group 1 does show a change, then the change is produced only by the interaction of pretesting and treatment.

The Solomon design enables us to make a more complex assessment of the cause of changes in the dependent variable. The combined effects of maturation and history can not only be controlled but also measured: by comparing the posttest of control group 2 with the pretests of experimental group 1 and control group 1, these effects can be assessed. In most designs, however, our concern with history and maturation effects is usually only in terms of controlling them, not measuring them.

The Solomon design is often bypassed because it requires twice as many groups. This effectively doubles the time and cost of conducting the experiment. Many researchers decide that the advantages are not worth the added cost and complexity (Graziano and Raulin, 1996).
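The arithmetic behind the Solomon comparisons can be sketched directly. In this hedged example (the posttest means are invented), the four group means are combined to separate the treatment effect, the pretesting effect, and their interaction:

```python
# Decompose Solomon Four-Group posttest means (invented numbers).
posttest_means = {
    "exp1":  62,  # R O X O : pretested and treated
    "ctrl1": 50,  # R O   O : pretested only
    "exp2":  58,  # R   X O : treated only
    "ctrl2": 49,  # R     O : neither
}

treatment_effect = posttest_means["exp2"] - posttest_means["ctrl2"]
pretest_effect   = posttest_means["ctrl1"] - posttest_means["ctrl2"]
interaction      = (posttest_means["exp1"] - posttest_means["ctrl1"]) - treatment_effect

print("Treatment effect alone:          ", treatment_effect)
print("Pretesting effect alone:         ", pretest_effect)
print("Pretest-by-treatment interaction:", interaction)
# A large interaction term signals pretest sensitization: part of the
# change in experimental group 1 comes from taking the pretest, not from
# the treatment itself.
```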

Social Researchers
We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to a "contentious group of truth seekers" that agrees to organize it in this way - an agreement that holds through the research community and is codified in the patterns of our language of research.

Our challenge as "truth seekers" is to see things as they are and ask why? Hopefully our solutions will have a positive effect on change and not be adversely affected by it.
David Abrahams

Bibliography
Campbell, Donald T., and Julian C. Stanley. Experimental and Quasi-Experimental Designs for Research. Rand McNally, 1966.
Graziano, Anthony, and Michael Raulin. Research Methods: A Process of Inquiry. Longman, Inc., 1996.
Howard, George. Basic Research Methods in the Social Sciences. Scott, Foresman and Company, 1985.

Trochim, William M. K. Social Research Methods. Available at: http://www.socialresearchmethods.net

StatSoft, Inc. (2007). Electronic Statistics Textbook (electronic version). Tulsa, OK: StatSoft. Web: http://www.statsoft.com/textbook/stathome.htm
