“If we research practices to identify project constraints, then we can identify ways to improve project performance, but how do we know we can trust our findings before we commit irrevocable resources to the performance improvement effort?”
The Integrated PM perspective is well suited to true experimental research and to performance improvement. It allows a program center to explore the correlations and interconnections among all facets of a project, and between that project and other projects in the program and portfolio. True experimental research investigates possible cause-and-effect relationships by exposing one or more experimental groups to one or more treatment conditions and comparing the results to one or more control groups that do not receive the treatment. Examples include:
- To investigate the effects of two methods of new practice implementation as a function of project size (Maintenance & Utility, Enhancement & Improvement, and Transformational) and levels of PM experience (high, average, low), using random assignment of projects and project manager experience levels to method and business unit.
- To investigate the effects of a new practice training program on the organization’s project managers, using experimental and control groups that are exposed or not exposed to the program, respectively, together with a pretest-posttest design in which only half of the project managers, chosen at random, receive the pretest. This makes it possible to determine how much of any performance change is attributable to the pretest itself rather than to the training program.
- To investigate the effects of two methods of project value realization evaluation on the performance of project teams within the business unit. N in this study would be the number of project teams rather than project managers, and the methods would be assigned by stratified random techniques so that the two methods are distributed in balanced fashion across projects in the business unit.
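The stratified random assignment described in the third study can be sketched as follows. This is a minimal illustration, not a prescribed procedure from the text; the team identifiers, unit names, and method labels are hypothetical. Within each business unit, teams are shuffled and the methods are dealt out in round-robin order, so each method appears a balanced number of times per unit:

```python
import random
from collections import defaultdict

def stratified_assign(projects, methods, seed=0):
    """Assign evaluation methods to project teams, stratified by business unit.

    `projects` is a list of (team_id, business_unit) pairs. Within each unit,
    teams are shuffled and methods are dealt out round-robin, so each method
    is assigned a balanced number of times per unit.
    """
    rng = random.Random(seed)
    by_unit = defaultdict(list)
    for team, unit in projects:
        by_unit[unit].append(team)

    assignment = {}
    for unit, teams in by_unit.items():
        rng.shuffle(teams)                      # randomize order within the stratum
        for i, team in enumerate(teams):
            assignment[team] = methods[i % len(methods)]  # balanced round-robin
    return assignment
```

Stratifying by business unit before randomizing prevents one method from clustering in a single unit by chance, which would confound method effects with unit effects.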
Characteristics of Experimental Designs:
- True experimental research requires rigorous management of experimental variables and conditions, either through direct manipulation and control or through randomization.
- Typically uses a control group as a baseline against which to compare the group(s) receiving the experimental treatment.
- Concentrates on the control of variance:
  - To maximize the variance of the variable(s) associated with the research hypotheses.
  - To minimize the variance of extraneous or "unwanted" variables that might affect the experimental outcomes, but are not themselves the object of study.
  - To minimize the error or random variance, including so-called errors of measurement.
Random selection of subjects, random assignment of subjects to groups, and random assignment of experimental treatments to groups together provide the best control of such variance.
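The two randomization steps named above can be sketched in a few lines. This is an illustrative sketch only; the group count, treatment labels, and seed are hypothetical, and the number of groups is assumed to equal the number of treatments (including the control):

```python
import random

def randomize(subjects, treatments, seed=42):
    """Randomly assign subjects to groups, then randomly assign treatments
    (including a control condition) to those groups."""
    rng = random.Random(seed)
    n_groups = len(treatments)

    pool = subjects[:]
    rng.shuffle(pool)                              # random assignment of subjects
    groups = [pool[i::n_groups] for i in range(n_groups)]

    labels = treatments[:]
    rng.shuffle(labels)                            # random assignment of treatments
    return dict(zip(labels, groups))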
Internal validity is the ‘sine qua non’ of research design and the first objective of experimental methodology. It asks the question: Did the experimental manipulation in this study really make a difference?
External validity is the second objective of experimental methodology. It asks the question: How representative are the findings and can the results be generalized to similar circumstances and subjects?
In classic experimental design, all variables of concern are held constant except a single treatment variable, which is deliberately manipulated or allowed to vary. Advances in methodology, such as factorial designs, analysis of variance, and multiple regression, now allow the experimenter to manipulate or vary more than one variable concurrently across more than one experimental group.
This permits the simultaneous determination of:
- the effects of the principal independent variables (treatments),
- the variation associated with classificatory variables, and
- the interaction of selected combinations of independent and/or classificatory variables.
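The factorial idea above can be made concrete with a small sketch (not from the source): for a 2x2 design, the main effects and the interaction can be read directly off the cell means, where factor A might be the treatment and factor B a classificatory variable such as PM experience level. The cell values below are hypothetical:

```python
def factorial_effects(cells):
    """Main effects and interaction for a 2x2 factorial design.

    `cells[(a, b)]` is the mean outcome for treatment level a (0 or 1)
    and classificatory level b (0 or 1).
    """
    grand = sum(cells.values()) / 4
    # Main effect of A: mean at a=1 minus mean at a=0
    a_eff = (cells[(1, 0)] + cells[(1, 1)] - cells[(0, 0)] - cells[(0, 1)]) / 2
    # Main effect of B: mean at b=1 minus mean at b=0
    b_eff = (cells[(0, 1)] + cells[(1, 1)] - cells[(0, 0)] - cells[(1, 0)]) / 2
    # Interaction: does the effect of A change across levels of B?
    inter = (cells[(1, 1)] - cells[(1, 0)]) - (cells[(0, 1)] - cells[(0, 0)])
    return {"grand_mean": grand, "A": a_eff, "B": b_eff, "AxB": inter}
```

A nonzero interaction term means the treatment effect differs across levels of the classificatory variable, which is exactly what a single-variable classic design cannot detect.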
While the experimental approach is the most powerful because of the control it allows over relevant variables, it is also the most restrictive and artificial. This is a major weakness in applications involving human subjects in real-world situations, since resources often act differently if their behavior is artificially restricted, manipulated, or exposed to systematic observation and evaluation.
Seven Steps in Experimental Research:
- Survey the literature relating to the problem.
- Identify and define the problem.
- Formulate a research hypothesis, deduce its consequences, and define basic terms and variables.
- Construct an experimental plan:
  - Identify all nonexperimental variables that might contaminate the experiment, and determine how to control them.
  - Select a research design.
  - Select a sample of subjects to represent a given population, assign subjects to groups, and assign experimental treatments to groups.
  - Select or construct and validate instruments to measure the outcome of the experiment.
  - Outline procedures for collecting the data, and possibly conduct a pilot or "trial run" test to perfect the instruments or design.
  - State the statistical or null hypothesis.
- Conduct the experiments.
- Reduce the raw data in a manner that will produce the best appraisal of the effect that is presumed to exist.
- Apply an appropriate test of significance to determine the confidence one can place on the results of the study.
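The final step can be illustrated with a standard-library sketch; this is one common choice, not a method prescribed by the text. Welch's t statistic compares a treatment group to a control group without assuming equal variances; the statistic and its degrees of freedom would then be compared against a t critical value at the chosen significance level:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom for two
    independent samples (e.g. treatment vs. control group outcomes)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

With equal sample sizes and equal variances, the degrees of freedom reduce to the classic pooled value na + nb - 2; Welch's form protects the test when those assumptions fail.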