Part D: Experimental Design
D-1: Distinguish between dependent and independent variables
Dependent Variable: A measurable, dimensional quantity of the target behaviour that the experiment aims to change (e.g., rate, frequency, duration…)
Independent Variable: An intervention or treatment that the experimenter manipulates to determine whether the target behaviour changes.
D-2: Distinguish between internal and external validity
Internal Validity: Efficacy of the experimental design; a question to ask yourself when evaluating internal validity is whether the treatment/intervention was responsible for the observed effects, or whether the effects could be due to confounding factors. It’s like the intervention asking, “Is it me? Am I the problem?”
Threats to Internal Validity:
Subject Confounds
Maturation: changes in the subject over the course of an experiment
Prior learning history and current situation
Drop-outs
Setting Confounds
Variables are easier to control in a laboratory than in natural settings
Measurement Confounds
Observer drift
Observer bias
Change in measurement methodology
Repeated assessment
Sequential treatments
Regression to the mean (see the sketch below)
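To make that last confound concrete, here is a minimal Python simulation of regression to the mean; every number in it is hypothetical. Subjects are screened on a noisy measure, only those with extreme baseline scores are "enrolled", and on retest, with no intervention at all, the enrolled group's average drifts back toward the population mean, which can masquerade as a treatment effect.

```python
import random
from statistics import mean

random.seed(1)

def noisy_score(true_level):
    """One observation = stable true level + random measurement error."""
    return true_level + random.gauss(0, 10)

# Each subject has a stable true level near the population mean of 50.
true_levels = [random.gauss(50, 5) for _ in range(1000)]
baseline = [noisy_score(t) for t in true_levels]

# "Enroll" only subjects whose baseline score looked extreme (>= 70).
enrolled = [t for t, b in zip(true_levels, baseline) if b >= 70]
enrolled_baseline = [b for b in baseline if b >= 70]
retest = [noisy_score(t) for t in enrolled]  # no treatment given

print(f"Enrolled baseline mean:     {mean(enrolled_baseline):.1f}")
print(f"Retest mean (no treatment): {mean(retest):.1f}")
# The retest mean is noticeably lower despite no intervention.
```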
External Validity: The degree to which a functional relation demonstrated in an experiment generalizes; that is, whether the treatment/intervention remains effective across other subjects, settings, and behaviours, under various conditions
Threats to External Validity:
Nonrandomized/non-representative selection of subjects
Nonrandomized assignment of subjects
Limited number of therapists/settings
Reactive effects of pretesting
Reactive effects of being in an experiment
Drop-outs
D-3: Identify the defining features of single subject experimental designs (e.g., individuals serve as their own controls, repeated measures, prediction, verification, replication)
Individuals serve as their own controls: Behaviour is an individual, dynamic phenomenon; it changes for each person over time and is influenced by environmental events. Each individual's unique learning history is taken into account when conducting single-subject experimental designs.
Repeated Measures: Measuring the dependent variable multiple times during the baseline and intervention phases to determine whether the intervention is effectively changing behaviour.
Prediction: The hypothesis; the anticipated outcome of a presently unknown or future measurement.
Verification: Demonstrates that the behaviour would have remained at baseline levels had the independent variable not been introduced (e.g., behaviour reverts to baseline when the intervention is withdrawn).
Replication: Demonstrates reliability by reverting to baseline/intervention conditions multiple times.
D-4: Describe the advantages of single-subject experimental designs compared to group designs
Because individuals serve as their own controls, no treatment needs to be withheld from a control group. Repeated measures reveal individual variability that group averages can mask, results apply directly to the individual rather than to a group mean, and the design can be adjusted while the study is in progress.
D-5: Use single-subject experimental designs (e.g., reversal, multiple baseline, multi-element, changing criterion)
Reversal Design: A-B-A design; baseline (A) is established, then the independent variable (B) is introduced, then conditions revert to baseline (A) to determine whether the change in behaviour was due to the independent variable. The preferred version of a reversal design is A-B-A-B, which demonstrates replication of the behaviour-change effects of the independent variable.
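As a minimal sketch of this phase logic, the hypothetical session data below walk through an A-B-A-B design: steady-state baseline supports a prediction, the second A phase provides verification, and the second B phase provides replication.

```python
from statistics import mean

# Hypothetical responses per session in each phase of an A-B-A-B design.
phases = {
    "A1 (baseline)":     [12, 14, 13, 15, 14],  # steady state -> prediction
    "B1 (intervention)": [7, 6, 5, 4, 4],        # behaviour change
    "A2 (withdrawal)":   [11, 13, 14, 13, 14],   # verification: reverts
    "B2 (intervention)": [5, 4, 3, 4, 3],        # replication: effect recurs
}

for label, data in phases.items():
    print(f"{label}: mean = {mean(data):.1f}, data = {data}")
```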
Multiple Baseline Design: An alternative to reversal designs, in which clinicians analyze the effects of an independent variable across multiple behaviours, participants, or settings without removing the treatment; the intervention is introduced to each baseline in a staggered fashion. This design may be weaker than a reversal design for demonstrating experimental control.
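A minimal sketch of the staggered logic, using idealized hypothetical data across three behaviours: the intervention begins at a different session for each behaviour, and only the behaviour currently receiving treatment changes.

```python
from statistics import mean

SESSIONS = 15
onsets = {"hitting": 5, "yelling": 8, "grabbing": 11}  # staggered starts

for behaviour, onset in onsets.items():
    # Idealized data: ~10 incidents/session at baseline, ~3 once treated.
    data = [10 if s < onset else 3 for s in range(SESSIONS)]
    pre, post = data[:onset], data[onset:]
    print(f"{behaviour}: baseline mean {mean(pre):.0f} -> "
          f"treatment mean {mean(post):.0f} (intervention at session {onset})")
```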
Alternating Treatments Design/Multi-element Design: This experimental design rapidly alternates two or more independent variables to compare their behaviour-change effects. Removal of the treatment is not required, different interventions can be compared quickly, sequence effects are minimized, and intervention can begin immediately. Limitations are that the rapid alternation of treatments is unnatural, the design is limited to about four conditions, and some treatments may not show an effect because they require more time to demonstrate learning.
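A minimal sketch with hypothetical data: two assumed treatments are alternated rapidly in randomized blocks and their data paths compared, with no withdrawal phase required.

```python
import random
from statistics import mean

random.seed(2)
effects = {"treatment_1": 8, "treatment_2": 3}  # assumed response levels

# Alternate the two conditions rapidly, randomizing order within blocks
# of two sessions so neither condition always comes first.
order = []
for _ in range(10):
    block = list(effects)
    random.shuffle(block)
    order.extend(block)

results = {name: [] for name in effects}
for name in order:
    results[name].append(effects[name] + random.gauss(0, 1))

for name, data in results.items():
    print(f"{name}: {len(data)} sessions, mean = {mean(data):.1f}")
# The condition with the consistently lower data path (for a behaviour
# targeted for reduction) would be selected for ongoing use.
```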
Changing Criterion Design: Used to evaluate the effects of an intervention applied in a systematic, stepwise format. A formal baseline is conducted prior to the first intervention phase, and then each phase serves as the baseline for the following phase. This design requires careful manipulation of the length of phases, the magnitude of criterion changes, and the number of criterion changes.
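A minimal sketch with hypothetical numbers: each phase sets a new criterion, and behaviour that tracks each criterion shift is the evidence of experimental control.

```python
criteria = [5, 8, 12, 16]       # e.g., target responses per session
SESSIONS_PER_PHASE = 4

for phase, criterion in enumerate(criteria, start=1):
    # Idealized data: responding rises to roughly meet each new criterion.
    data = [criterion - 1 + (s % 2) for s in range(SESSIONS_PER_PHASE)]
    met = sum(d >= criterion for d in data)
    print(f"Phase {phase} (criterion {criterion}): data = {data}, "
          f"{met}/{SESSIONS_PER_PHASE} sessions at criterion")
```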
D-6: Describe rationales for conducting comparative, component, and parametric analyses
Comparative Analysis: A comparison of two different treatments to determine which is more effective.
Example: Using an alternating treatment design to compare performance of a learner with mediators vs staff for PECS.
Component Analysis: Experimental designs that compare different parts of a treatment package, or two or more independent variables, to identify which component is responsible for behaviour change. Clinicians can use two types of component analysis: drop-out, in which parts of a treatment package are systematically removed to determine their effects on the behaviour, and add-in, in which parts of a treatment package are systematically added to determine their effects on the behaviour.
Example: A treatment package of a functional communication response, teaching a replacement skill, a visual schedule, and a first-then board. Introduce all components, then systematically remove each one to determine which part was most effective at changing behaviour.
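A minimal drop-out sketch of that example, where each component's contribution to reducing incidents is an assumed, hypothetical number: removing a component with a large contribution produces a visible rise in behaviour.

```python
# Assumed per-component reductions in incidents/session (hypothetical).
package = {
    "functional_communication_response": 4,
    "replacement_skill": 2,
    "visual_schedule": 1,
    "first_then_board": 1,
}
BASELINE = 12  # incidents/session with no treatment at all

full = BASELINE - sum(package.values())
print(f"Full package: {full} incidents/session")
for component in package:
    # Drop one component; the remaining parts still contribute.
    remaining = sum(v for k, v in package.items() if k != component)
    print(f"  without {component}: {BASELINE - remaining} incidents/session")
```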
Parametric Analysis: Seeks to discover the differential effects of a range of values of the independent variable.
Example: How long should a time-out procedure be implemented to be effective at reducing challenging behaviour, 1 minute, 3 minutes, or 5 minutes?
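A minimal sketch of that time-out example, with hypothetical incident counts under each assumed duration: several values of one independent variable are compared on the same dependent variable.

```python
from statistics import mean

# Assumed challenging-behaviour counts per session under each duration.
results = {
    "1 minute":  [9, 8, 9, 10],
    "3 minutes": [5, 4, 5, 4],
    "5 minutes": [4, 5, 4, 4],
}

for duration, data in results.items():
    print(f"{duration}: mean = {mean(data):.1f} incidents/session")
# If 3 and 5 minutes yield similar means, the least restrictive effective
# value (3 minutes) would typically be chosen.
```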