- Random Assignment
- By: Richard D. Harvey & Jessica P. Harvey
- In: The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation
- Chapter DOI: https://doi.org/10.4135/9781506326139.n570
- Subject: Education
Random assignment is a technique for assigning participants to experimental conditions and a prerequisite of true experimental designs. It requires the use of randomization methods to place participants of a particular study into experimental conditions (e.g., treatment vs. control). It ensures that each participant has an equal chance of being placed into either of the experimental groups.
Systematic differences at the outset of an experiment can hurt internal validity, the degree to which the effects of the experiment can be attributed solely to the experimental treatment. Randomly assigning study members to groups is required to alleviate initial systematic differences between experimental groups. However, random assignment does not guarantee that there will be no initial differences between groups, only that any initial differences will not be systematic.
Random assignment is commonly confused and used interchangeably with random selection. However, the terms denote different foci. Random assignment refers to the method by which study participants are randomly assigned to experimental conditions. By comparison, random selection refers to the method by which the sample is selected from the population for inclusion in a particular study.
Although random assignment is a necessary component of an experimental design, random selection can be used with any research design. In an experiment, however, random assignment typically follows random selection.
There are two distinct forms of random assignment: simple and matched. Simple random assignment ensures that the participants are independently assigned to an experimental condition. Although simple random assignment improves internal validity, the experiment may be vulnerable to extraneous variables (i.e., individual differences). Matched random assignment controls for individual differences by pairing participants in “sets” based on a shared characteristic and subsequently assigning them to different experimental conditions. If an experiment has multiple conditions or the sample size is relatively small, participants can be allocated in “blocks” to ensure equal sample size distribution (block randomization).
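Block randomization, described above, can be sketched in a few lines of Python. This is a minimal illustration, not from the encyclopedia entry; the function name, block size, and condition labels are assumptions made for the example:

```python
import random

def block_randomize(participants, conditions, block_size):
    """Assign participants to conditions in shuffled blocks so that
    group sizes stay balanced as each block is filled."""
    if block_size % len(conditions) != 0:
        raise ValueError("block size must be a multiple of the number of conditions")
    assignments = {}
    for start in range(0, len(participants), block_size):
        block = participants[start:start + block_size]
        # Each block contains every condition equally often, in random order.
        labels = conditions * (block_size // len(conditions))
        random.shuffle(labels)
        for person, condition in zip(block, labels):
            assignments[person] = condition
    return assignments

groups = block_randomize([f"P{i}" for i in range(20)],
                         ["treatment", "control"], block_size=4)
```

Because every block of four contains each condition twice, the two groups stay the same size throughout enrollment, which is the point of block randomization for small samples.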
Simple random assignment can be achieved with a computerized randomizer or a manual technique (e.g., flipping a coin). With small samples or possible confounding variables, however, a researcher would use block or matched random assignment. For example, if a researcher wanted to examine the effect of a new curriculum on academic performance, participants could be paired into sets based on GPA and then randomly assigned to the experimental conditions. Pairing the sets by GPA prevents an unequal distribution of skill across conditions.
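The GPA example can be sketched as matched random assignment: rank participants by GPA, pair adjacent participants into sets, and randomly split each set between conditions. The student names and GPAs below are invented purely for illustration:

```python
import random

def matched_assign(students):
    """Pair students with adjacent GPAs, then randomly send one member
    of each matched pair to each condition."""
    ranked = sorted(students, key=lambda s: s[1])  # sort (name, gpa) by GPA
    treatment, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        random.shuffle(pair)  # the coin flip within the matched set
        treatment.append(pair[0][0])
        control.append(pair[1][0])
    return treatment, control

students = [("Ana", 3.9), ("Ben", 2.1), ("Cal", 3.8), ("Dee", 2.2)]
treatment, control = matched_assign(students)
```

Each condition ends up with one low-GPA and one high-GPA student, so skill is balanced across groups while chance still decides each individual placement.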
See also: Experimental Designs; Generalizability; Random Selection; Threats to Research Validity; Validity; Validity Generalization
Random Assignment in Psychology: Definition & Examples
Julia Simkus
Editor at Simply Psychology
BA (Hons) Psychology, Princeton University
Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She is pursuing a Master's degree in Counseling for Mental Health and Wellness, beginning in September 2023. Julia's research has been published in peer-reviewed journals.
Saul McLeod, PhD
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
Olivia Guy-Evans, MSc
Associate Editor for Simply Psychology
BSc (Hons) Psychology, MSc Psychology of Education
Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.
In psychology, random assignment refers to the practice of allocating participants to different experimental groups in a study in a completely unbiased way, ensuring each participant has an equal chance of being assigned to any group.
In experimental research, random assignment, or random placement, organizes participants from your sample into different groups using randomization.
Random assignment uses chance procedures to ensure that each participant has an equal opportunity of being assigned to either a control or experimental group.
The control group does not receive the treatment in question, whereas the experimental group does receive the treatment.
When using random assignment, neither the researcher nor the participant can choose the group to which the participant is assigned. This ensures that any differences between and within the groups are not systematic at the outset of the study.
In a study to test the success of a weight-loss program, investigators randomly assigned a pool of participants to one of two groups.
Group A participants participated in the weight-loss program for 10 weeks and took a class where they learned about the benefits of healthy eating and exercise.
Group B participants read a 200-page book explaining the benefits of weight loss.
The researchers found that those who participated in the program and took the class were more likely to lose weight than those in the other group that received only the book.
Importance
Random assignment helps ensure that the groups in an experiment are comparable before the independent variable is applied.
In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable while controlling for other variables. Random assignment increases the likelihood that the treatment groups are the same at the outset of a study.
Thus, any changes that result from the independent variable can be assumed to be a result of the treatment of interest. This is particularly important for eliminating sources of bias and strengthening the internal validity of an experiment.
Random assignment is the best method for inferring a causal relationship between a treatment and an outcome.
Random Selection vs. Random Assignment
Random selection (also called probability sampling or random sampling) is a way of randomly selecting members of a population to be included in your study.
On the other hand, random assignment is a way of sorting the sample participants into control and treatment groups.
Random selection ensures that everyone in the population has an equal chance of being selected for the study. Once the pool of participants has been chosen, experimenters use random assignment to assign participants into groups.
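The two steps can be sketched in sequence with Python's standard library. The population size, sample size, and labels here are invented for illustration only:

```python
import random

population = [f"person_{i}" for i in range(1000)]

# Random selection: every member of the population has an equal
# chance of entering the 100-person sample.
sample = random.sample(population, 100)

# Random assignment: shuffle the chosen sample, then split it into
# equal-sized control and experimental groups.
random.shuffle(sample)
control, experimental = sample[:50], sample[50:]
```

Selection happens once against the population; assignment happens afterward, entirely within the sample.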
Random assignment is only used in between-subjects experimental designs, while random selection can be used in a variety of study designs.
Random Assignment vs. Random Sampling
Random sampling refers to selecting participants from a population so that each individual has an equal chance of being chosen. This method enhances the representativeness of the sample.
Random assignment, on the other hand, is used in experimental designs once participants are selected. It involves allocating these participants to different experimental groups or conditions randomly.
This helps ensure that any differences in results across groups are due to manipulating the independent variable, not preexisting differences among participants.
When to Use Random Assignment
Random assignment is used in experiments with a between-groups or independent measures design.
In these research designs, researchers will manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables.
There is usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable at the outset of the study.
How to Use Random Assignment
There are a variety of ways to assign participants into study groups randomly. Here are a handful of popular methods:
- Random Number Generator : Give each member of the sample a unique number; use a computer program to randomly generate a number from the list for each group.
- Lottery : Give each member of the sample a unique number. Place all numbers in a hat or bucket and draw numbers at random for each group.
- Flipping a Coin : Flip a coin for each participant to decide whether they will be in the control group or the experimental group (this method can only be used when you have just two groups).
- Rolling a Die : For each person on the list, roll a die to decide which group they will be in. For example, rolling 1, 2, or 3 could place them in the control group, while rolling 4, 5, or 6 places them in the experimental group.
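The first method above maps directly onto code. The sketch below uses Python's random module to give every participant a random sort key and then deal the shuffled list into groups; the participant IDs and group count are made up for the example:

```python
import random

def assign_by_random_numbers(names, n_groups=2):
    """Give every participant a random sort key, then deal the shuffled
    list round-robin into n_groups groups -- the software equivalent of
    drawing numbered slips from a hat."""
    shuffled = sorted(names, key=lambda _: random.random())
    return [shuffled[i::n_groups] for i in range(n_groups)]

participants = [f"P{i:02d}" for i in range(1, 21)]
control, experimental = assign_by_random_numbers(participants)
```

The lottery, coin-flip, and die-roll methods are manual analogues of the same shuffle, so in practice a single randomizer like this covers all four.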
When is Random Assignment not used?
- When it is not ethically permissible: Randomization is only ethical if the researcher has no evidence that one treatment is superior to the other or that one treatment might have harmful side effects.
- When answering non-causal questions : If the researcher is just interested in predicting the probability of an event, the causal relationship between the variables is not important and observational designs would be more suitable than random assignment.
- When studying the effect of variables that cannot be manipulated: Some risk factors cannot be manipulated and so it would not make any sense to study them in a randomized trial. For example, we cannot randomly assign participants into categories based on age, gender, or genetic factors.
Drawbacks of Random Assignment
While randomization assures an unbiased assignment of participants to groups, it does not guarantee the equality of those groups. Extraneous variables can still differ between groups, and group differences can arise purely from chance.
Thus, researchers cannot produce perfectly equal groups for any specific study. Differences between the treatment group and the control group might still exist, and the results of a randomized trial may sometimes be wrong, but this is expected.
Scientific evidence is a long and continuous process, and the groups will tend to be equal in the long run when data is aggregated in a meta-analysis.
Additionally, external validity (i.e., the extent to which the researcher can use the results of the study to generalize to the larger population) is compromised with random assignment.
Random assignment is challenging to implement outside of controlled laboratory conditions and might not represent what would happen in the real world at the population level.
Random assignment can also be more costly than simple observational studies, where an investigator is just observing events without intervening with the population.
Randomization also can be time-consuming and challenging, especially when participants refuse to receive the assigned treatment or do not adhere to recommendations.
What is the difference between random sampling and random assignment?
Random sampling refers to randomly selecting a sample of participants from a population. Random assignment refers to randomly assigning participants to treatment groups from the selected sample.
Does random assignment increase internal validity?
Yes, random assignment ensures that there are no systematic differences between the participants in each group, enhancing the study's internal validity.
Does random assignment reduce sampling error?
Not directly. Random assignment gives each participant an equal chance of being placed in a control or an experimental group, which makes the groups comparable, but sampling error arises from how the sample is drawn, not how it is divided.
A sample only approximates the population from which it is drawn, so random assignment cannot eliminate sampling error; random sampling is the way to minimize it.
When is random assignment not possible?
Random assignment is not possible when the experimenters cannot control the treatment or independent variable.
For example, if you want to compare how men and women perform on a test, you cannot randomly assign subjects to these groups.
Participants are not randomly assigned to different groups in this study, but instead assigned based on their characteristics.
Does random assignment eliminate confounding variables?
Random assignment reduces the influence of confounding variables by distributing them at random among the study groups, which breaks any systematic relationship between a confounding variable and the treatment. It cannot guarantee, however, that a chance imbalance will never occur in a single study.
Why is random assignment of participants to treatment conditions in an experiment used?
Random assignment is used to ensure that all groups are comparable at the start of a study. This allows researchers to conclude that the outcomes of the study can be attributed to the intervention at hand and to rule out alternative explanations for study results.
Further Reading
- Bogomolnaia, A., & Moulin, H. (2001). A new solution to the random assignment problem. Journal of Economic Theory, 100(2), 295-328.
- Krause, M. S., & Howard, K. I. (2003). What random assignment does and does not do. Journal of Clinical Psychology, 59(7), 751-766.
8.1 Experimental design: What is it and when should it be used?
Learning Objectives
- Define experiment
- Identify the core features of true experimental designs
- Describe the difference between an experimental group and a control group
- Identify and describe the various types of true experimental designs
Experiments are an excellent data collection strategy for social workers wishing to observe the effects of a clinical intervention or social welfare program. Understanding what experiments are and how they are conducted is useful for all social scientists, whether they actually plan to use this methodology or simply aim to understand findings from experimental studies. An experiment is a method of data collection designed to test hypotheses under controlled conditions. In social scientific research, the term experiment has a precise meaning and should not be used to describe all research methodologies.
Experiments have a long and important history in social science. Behaviorists such as John Watson, B. F. Skinner, Ivan Pavlov, and Albert Bandura used experimental design to demonstrate the various types of conditioning. Using strictly controlled environments, behaviorists were able to isolate a single stimulus as the cause of measurable differences in behavior or physiological responses. The foundations of social learning theory and behavior modification are found in experimental research projects. Moreover, behaviorist experiments brought psychology and social science away from the abstract world of Freudian analysis and towards empirical inquiry, grounded in real-world observations and objectively-defined variables. Experiments are used at all levels of social work inquiry, including agency-based experiments that test therapeutic interventions and policy experiments that test new programs.
Several kinds of experimental designs exist. In general, designs considered to be true experiments contain three basic key features:
- random assignment of participants into experimental and control groups
- a “treatment” (or intervention) provided to the experimental group
- measurement of the effects of the treatment in a post-test administered to both groups
Some true experiments are more complex. Their designs can also include a pre-test and can have more than two groups, but these are the minimum requirements for a design to be a true experiment.
Experimental and control groups
In a true experiment, the effect of an intervention is tested by comparing two groups: one that is exposed to the intervention (the experimental group, also known as the treatment group) and another that does not receive the intervention (the control group). Importantly, participants in a true experiment must be randomly assigned to either the control or experimental group. Random assignment uses a random number generator or some other random process to assign people into experimental and control groups. Random assignment is important in experimental research because it helps to ensure that the experimental group and control group are comparable and that any differences between them are due to random chance. We will address more of the logic behind random assignment in the next section.
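The shuffle-and-split logic behind random assignment can be sketched in a few lines of Python using only the standard library. The function name and group labels here are illustrative, not taken from the text:

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split participants into experimental and control groups.

    Shuffling the full list and splitting it in half gives each
    participant an equal chance of landing in either group while
    keeping the group sizes as equal as possible.
    """
    rng = random.Random(seed)  # seed only for reproducible examples
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "experimental": shuffled[:midpoint],
        "control": shuffled[midpoint:],
    }

groups = randomly_assign(range(1, 21), seed=42)
print(len(groups["experimental"]), len(groups["control"]))  # 10 10
```

In a real study the `seed` would be omitted (or drawn from a trusted random source) so the assignment is genuinely unpredictable.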
Treatment or intervention
In an experiment, the independent variable is receipt of the intervention being tested, such as a therapeutic technique, a prevention program, or access to some service or support. Although less common in social work research, social science research may also use a stimulus, rather than an intervention, as the independent variable. For example, an electric shock or a reading about death might be used as a stimulus to provoke a response.
In some cases, it may be unethical to withhold treatment completely from a control group within an experiment. If you recruited two groups of people with severe addiction and only provided treatment to one group, the untreated group would likely suffer. In these cases, researchers use a control group that receives “treatment as usual.” Experimenters must clearly define what treatment as usual means. For example, a standard treatment in substance abuse recovery is attending Alcoholics Anonymous or Narcotics Anonymous meetings. A substance abuse researcher conducting an experiment might assign twelve-step programs to the control group and the experimental intervention to the experimental group. The results would show whether the experimental intervention worked better than treatment as usual, which is useful information.
The dependent variable is usually the intended effect the researcher wants the intervention to have. If the researcher is testing a new therapy for individuals with binge eating disorder, the dependent variable may be the number of binge eating episodes a participant reports. The researcher likely expects the intervention to decrease the number of binge eating episodes reported by participants. Thus, the researcher must, at a minimum, measure the number of episodes that occur after the intervention, which is the post-test. In a classic experimental design, participants are also given a pretest to measure the dependent variable before the experimental treatment begins.
Types of experimental design
Let’s put these concepts in chronological order so we can better understand how an experiment runs from start to finish. Once you’ve collected your sample, you’ll need to randomly assign your participants to the experimental group and control group. In a common type of experimental design, you will then give both groups your pretest, which measures your dependent variable, to see what your participants are like before you start your intervention. Next, you will provide your intervention, or independent variable, to your experimental group, but not to your control group. Many interventions last a few weeks or months to complete, particularly therapeutic treatments. Finally, you will administer your post-test to both groups to observe any changes in your dependent variable. What we’ve just described is known as the classical experimental design and is the simplest type of true experimental design. All of the designs we review in this section are variations on this approach. Figure 8.1 visually represents these steps.
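The four steps above can be simulated end to end with hypothetical data. Everything below, including the episode counts and the size of the intervention's effect, is invented for illustration and is not data from the text:

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

# Hypothetical outcome: weekly binge eating episodes, echoing the
# example above. All numbers here are invented for illustration.

# Step 1: random assignment of 40 participants into two groups of 20.
participants = list(range(40))
rng.shuffle(participants)
experimental, control = participants[:20], participants[20:]

# Step 2: pretest -- measure the dependent variable in both groups.
pretest = {p: rng.randint(4, 10) for p in participants}

# Step 3: intervention for the experimental group only. The simulated
# effect (a drop of 0-3 episodes) stands in for real study data; the
# control group just fluctuates slightly.
def posttest_score(p):
    change = rng.randint(0, 3) if p in experimental else rng.randint(-1, 1)
    return max(pretest[p] - change, 0)

# Step 4: post-test -- measure the dependent variable in both groups again.
posttest = {p: posttest_score(p) for p in participants}

def mean_change(group):
    """Average pretest-to-post-test drop in episodes for a group."""
    return sum(pretest[p] - posttest[p] for p in group) / len(group)

print(f"experimental mean change: {mean_change(experimental):.2f}")
print(f"control mean change:      {mean_change(control):.2f}")
```

Because assignment was random, a clearly larger mean drop in the experimental group is attributed to the intervention rather than to pre-existing differences between the groups.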
An interesting example of experimental research can be found in Shannon K. McCoy and Brenda Major’s (2003) study of people’s perceptions of prejudice. In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Participants in the experimental group were then asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Clearly, these were not meant to be interventions or treatments to help depression, but were stimuli designed to elicit changes in people’s depression levels. Upon measuring depression scores during the post-test period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their own racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.
In addition to classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, 2010; Campbell & Stanley, 1963). The posttest-only control group design is almost the same as classic experimental design, except it does not use a pretest. Researchers who use posttest-only designs want to eliminate testing effects, in which participants’ scores on a measure change because they have already been exposed to it. If you took multiple SAT or ACT practice exams before you took the real one you sent to colleges, you’ve taken advantage of testing effects to get a better score. Considering the previous example on prejudice and depression, participants who are given a pretest about depression before being exposed to the stimulus would likely assume that the intervention is designed to address depression. That knowledge could cause them to answer differently on the post-test than they otherwise would. In theory, as long as the control and experimental groups have been determined randomly and are therefore comparable, no pretest is needed. However, most researchers prefer to use pretests in case randomization did not result in equivalent groups and to help assess change over time within both the experimental and control groups.
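Because a posttest-only design collects no baseline, the analysis simply compares post-test means across the two groups, relying on random assignment to have made them comparable beforehand. A minimal sketch with made-up depression scores (not McCoy and Major's actual data):

```python
import statistics

# Hypothetical post-test depression scores (higher = more depressed),
# loosely modeled on the prejudice example above. These numbers are
# invented for illustration only.
experimental_scores = [14, 17, 15, 18, 16, 19, 15, 17]
control_scores = [12, 13, 11, 14, 12, 13, 12, 11]

# With no pretest, the comparison is a single difference in post-test
# means; random assignment justifies attributing it to the stimulus.
diff = statistics.mean(experimental_scores) - statistics.mean(control_scores)
print(f"difference in post-test means: {diff:.3f}")  # prints 4.125
```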
Researchers wishing to account for testing effects but also gather pretest data can use a Solomon four-group design. In the Solomon four-group design, the researcher uses four groups. Two groups are treated as they would be in a classic experiment—pretest, experimental group intervention, and post-test. The other two groups do not receive the pretest, though one receives the intervention. All groups are given the post-test. Table 8.1 illustrates the features of each of the four groups in the Solomon four-group design. By having one set of experimental and control groups that complete the pretest (Groups 1 and 2) and another set that does not complete the pretest (Groups 3 and 4), researchers using the Solomon four-group design can account for testing effects in their analysis.
Solomon four-group designs are challenging to implement in the real world because they are time- and resource-intensive. Researchers must recruit enough participants to create four groups and implement interventions in two of them.
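One way to picture the four groups is as a small lookup table. This encoding, and the helper that pairs each group with its non-pretested (or pretested) counterpart, is an illustrative assumption rather than code from the text:

```python
# The Solomon four-group layout (per Table 8.1 in the text): every
# group gets the post-test; pretest and intervention vary.
solomon_groups = {
    1: {"pretest": True,  "intervention": True,  "posttest": True},
    2: {"pretest": True,  "intervention": False, "posttest": True},
    3: {"pretest": False, "intervention": True,  "posttest": True},
    4: {"pretest": False, "intervention": False, "posttest": True},
}

def counterpart(group_id):
    """Return the group with the same intervention status but the
    opposite pretest status. Comparing a group with its counterpart
    isolates the testing effect, since only the pretest differs."""
    target = solomon_groups[group_id]
    for gid, g in solomon_groups.items():
        if gid != group_id and g["intervention"] == target["intervention"]:
            return gid

print(counterpart(1))  # prints 3: same intervention, no pretest
```

If Groups 1 and 3 (or 2 and 4) end up with different post-test scores, the pretest itself, not the intervention, is the likely cause.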
Overall, true experimental designs are sometimes difficult to implement in a real-world practice environment. It may be impossible to withhold treatment from a control group or randomly assign participants in a study. In these cases, pre-experimental and quasi-experimental designs, which we will discuss in the next section, can be used. However, the differences in rigor from true experimental designs leave their conclusions more open to critique.
Experimental design in macro-level research
You can imagine that social work researchers may be limited in their ability to use random assignment when examining the effects of governmental policy on individuals. For example, it is unlikely that a researcher could randomly assign some states to decriminalize recreational marijuana and others not to in order to assess the effects of the policy change. There are, however, important examples of policy experiments that use random assignment, including the Oregon Medicaid experiment. The wait list for Medicaid in Oregon was so long that state officials conducted a lottery to determine who from the wait list would receive Medicaid (Baicker et al., 2013). Researchers used the lottery as a natural experiment that included random assignment: people selected to receive Medicaid were the experimental group, and those who remained on the wait list were the control group. There are some practical complications with macro-level experiments, just as with other experiments. For example, the ethical concern with using people on a wait list as a control group exists in macro-level research just as it does in micro-level research.
Key Takeaways
- True experimental designs require random assignment.
- Control groups do not receive an intervention, and experimental groups receive an intervention.
- The basic components of a classic experimental design include a pretest, posttest, control group, and experimental group.
- Testing effects may cause researchers to use variations on the classic experimental design.
Glossary
- Classic experimental design: uses random assignment, an experimental and a control group, and pre- and posttesting
- Control group: the group in an experiment that does not receive the intervention
- Experiment: a method of data collection designed to test hypotheses under controlled conditions
- Experimental group: the group in an experiment that receives the intervention
- Posttest: a measurement taken after the intervention
- Posttest-only control group design: a type of experimental design that uses random assignment and an experimental and control group, but does not use a pretest
- Pretest: a measurement taken prior to the intervention
- Random assignment: using a random process to assign people into experimental and control groups
- Solomon four-group design: uses random assignment, two experimental and two control groups, pretests for half of the groups, and posttests for all
- Testing effects: when a participant’s scores on a measure change because they have already been exposed to it
- True experiments: a group of experimental designs that contain independent and dependent variables, pretesting and posttesting, and experimental and control groups
Image attributions
exam scientific experiment by mohamed_hassan CC-0
Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.