
An Integrated Project Management blog post in the Performance Management category, focused on increasing competitive advantage.

Take the Portfolio Management Leap

“If I wait to deploy portfolio management practices until the organization is using more mature Program and Project practices, then it will be easier to adopt portfolio management, but we won’t be able to leverage the capability of portfolio management to help mature other PM practices as well.”

Following Project Portfolio principles within your organization, regardless of your job title, enables and strengthens many of today’s best practices. Typically, a higher level of Project and Program maturity must be achieved before focusing on Project Portfolio practices. Within Integrated PM, these practices are integrated and treated as a single system operating simultaneously.

Given this system, the following capabilities are developed using Integrated PM. An iterative spiral of increasing process maturity develops from a reinforcing feedback loop in your portfolios, one you may not even be aware of: greater PM process maturity increases portfolio management capability, which enables better practices, which in turn increases process maturity further.

The message here is DON’T WAIT, start now, and your process will begin to mature, slowly at first, and more rapidly as capabilities grow. Focus your efforts in the following areas for the biggest initial benefit.

Organizational Change Management

Rapid changes in the economy, markets, technology, and regulations are forcing organizations to formulate new strategies or fine-tune the current ones more frequently than ever. As these strategies are translated into new initiatives supported by new programs and projects, portfolio management offers a framework to manage the change effectively. It helps you make the right investment decisions to generate value for stakeholders. It provides you with the right tools to rapidly alter the course of action in response to fast changes in the environment. As Portfolio Management policies and practices are used, opportunities to strengthen change management practices will increase.

Clear Alignment

A well-designed and managed formal portfolio management process ensures that projects are aligned with the organizational strategy and goals at all times. New project ideas are evaluated for their alignment with the strategy and goals, and no projects are funded unless there is clear alignment. In addition, the degree of alignment is continuously monitored as the selected projects go through their individual life cycles. If an ongoing project no longer shows strong alignment, it may be terminated and the resources allocated to other higher priority projects. As Portfolio Management policies and practices are used, opportunities to strengthen strategic alignment practices will increase.

Value Creation

Portfolio management helps you deliver value to your stakeholders by managing project investments through a structured and disciplined process. The justification for the projects is clearly identified by quantifying the expected benefits (both tangible and intangible) and costs. Only those projects that promise high-value and rank high against the competing ones throughout their life cycles are funded. Portfolio management gives you a bigger bang for your investment buck in the long run because you are managing the investments in a systematic fashion. As Portfolio Management policies and practices are used, opportunities to strengthen value creation practices will increase.
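
As a concrete illustration of the ranking idea, here is a minimal sketch in Python with hypothetical project names and figures: candidate projects are ranked by a simple benefit-to-cost score and funded in rank order until a budget is exhausted. A real portfolio process would also weigh intangible benefits, risk, and strategic alignment.

```python
# Minimal sketch (hypothetical data): rank candidate projects by a simple
# benefit-to-cost score so that only the highest-value proposals are funded.
candidates = [
    # name, expected_benefit, expected_cost (same currency units)
    ("CRM upgrade",         500_000, 200_000),
    ("Mobile self-service", 320_000, 100_000),
    ("Data-center move",    250_000, 220_000),
]

budget = 350_000
ranked = sorted(candidates, key=lambda p: p[1] / p[2], reverse=True)

funded, spent = [], 0
for name, benefit, cost in ranked:
    if spent + cost <= budget:      # fund in rank order until the budget is exhausted
        funded.append(name)
        spent += cost

print("Funding order:", [n for n, *_ in ranked])
print("Funded this cycle:", funded, "| committed:", spent)
```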

Value Balancing

For a profit-driven company, the organizational goal may be to generate the maximum financial returns possible for the owners or shareholders. But if the projects selected for investment are based solely on financial value generation potential, interests of other key stakeholders may be compromised. PPM will help you create a balance among the projects to deliver not only financial value but other value forms as well. As Portfolio Management policies and practices are used, opportunities to strengthen value balancing practices will increase.

Long-Term Risk Management

When projects are initiated and implemented without the portfolio framework, project sponsors and managers typically focus on the short-term risks related to the completion of the project and do not pay enough attention to the long-term risks and rewards. Furthermore, they are oblivious to the collective risk profile of the project investments. Under a portfolio structure, the risk-reward equation is examined for projects individually as well as collectively in the context of the overall business. By diversifying the investments and balancing the portfolio, you are able to create a proper mix of projects of different risk profiles and manage the risks more effectively. As Portfolio Management policies and practices are used, opportunities to strengthen long-term risk management practices will increase.

Termination of Projects

Just because a project initially shows a strong business case does not necessarily mean it should continue to receive funding through its completion. Projects that no longer hold a strong business case as they go through their life cycles should be terminated. This helps you focus on those projects that will generate value and kill the others, thereby maximizing the value of the portfolio as a whole. In most organizations, once a project receives authorization and enters the implementation phase, it will most likely continue to receive funding until its completion. Terminating projects is taboo in most organizations. It is a highly political and emotional issue for many decision makers and executives. Portfolio management helps make project termination decisions more objective and less political or emotional. As Portfolio Management policies and practices are used, opportunities to strengthen project end-of-life practices will increase.

Better and Faster Decision Making

Portfolio management brings more focus to the decision-making process, making it faster and more effective. An integral part of PPM, portfolio and project governance provides a formal structure and process for making go/no-go project investment decisions. It places the responsibility for decision making in the hands of independent parties, rather than project sponsors with possible self-interest, who can evaluate competing projects more objectively using the same measurements, metrics, and standards. As Portfolio Management policies and practices are used, opportunities to strengthen decision making practices will increase.

Reducing Redundancies

It is not uncommon in relatively large organizations to face a situation where the "left hand doesn't know what the right is doing." Organizational resources are sometimes wasted on different projects trying to produce the same output. The PPM process helps you eliminate or reduce redundancy, yielding significant savings to the organization. When the portfolio management process is standardized across the whole enterprise, projects become more transparent, and adequate checks and balances can help you detect redundancies early. As Portfolio Management policies and practices are used, opportunities to strengthen redundancy-identification practices will increase.

Better Communications and Roadmapping

Many organizations have "silos" around each function that block effective communication, a critical ingredient of success for cross-functional projects. PPM is a mechanism that opens the channels of communication for people from various business and technical functions. Silos also undermine innovation that is critical to organizational success in today's hypercompetitive environment. PPM breaks the silo barriers creating opportunities for people to learn new insights from each other and become more innovative. As Portfolio Management policies and practices are used, opportunities to strengthen communications and roadmapping practices will increase.

Efficient Resource Allocation

One of the biggest challenges for any organization is the efficient allocation of resources, both monetary and human. While every manager claims that she would like to see the biggest bang for her buck, only a few organizations have proper systems in place to prioritize projects based on their return on investment. Allocation of the right people to the right projects at the right time is an even bigger challenge. You can rarely find a detailed inventory of the human resources vs. the project needs, that is, supply vs. demand. The situation becomes even worse when the project resources also have administrative and other operational responsibilities. One of the advantages of portfolio management is that it provides the structure and tools for efficient advance planning, needs prioritization, and resource allocation. As Portfolio Management policies and practices are used, opportunities to strengthen resource allocation practices will increase.
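
As a small illustration of the supply-vs.-demand inventory idea, here is a minimal sketch with hypothetical roles and headcounts: role demand is aggregated across approved projects and shortfalls are flagged against the available supply.

```python
# Minimal sketch (hypothetical roles and numbers): compare the supply of
# people in each role against the demand coming from approved projects.
supply = {"developer": 12, "business analyst": 4, "tester": 6}

project_demand = {
    "Project A": {"developer": 5, "business analyst": 2},
    "Project B": {"developer": 9, "tester": 4},
}

# Aggregate demand per role across the portfolio.
demand = {}
for needs in project_demand.values():
    for role, count in needs.items():
        demand[role] = demand.get(role, 0) + count

for role in sorted(set(supply) | set(demand)):
    gap = supply.get(role, 0) - demand.get(role, 0)
    status = "OK" if gap >= 0 else f"SHORT by {-gap}"
    print(f"{role:18s} supply={supply.get(role, 0):2d} demand={demand.get(role, 0):2d} {status}")
```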

Consistent Performance and Growth over Time

One of the key portfolio management processes is the evaluation of projects for their financial value-generating merit. This involves forecasting the future cash flows of every project and its outputs over its life cycle. Cash flow analysis for the entire portfolio over future time increments enables you to estimate any investment gaps and corresponding projects to match the growth targets of the organization. The portfolio thus can offer consistent long-term growth and performance for the organization. As Portfolio Management policies and practices are used, opportunities to strengthen performance and growth practices will increase.
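
A minimal sketch of the portfolio cash flow roll-up described above, using hypothetical quarterly forecasts: project cash flows are summed per quarter and compared to a growth target to expose investment gaps.

```python
# Minimal sketch (hypothetical cash flows): aggregate forecast cash flows per
# project over future quarters and compare the portfolio total to a growth target.
forecasts = {   # project -> expected net cash flow per quarter
    "Project A": [-200, -50, 120, 180],
    "Project B": [-80,   40,  60,  60],
    "Project C": [-150, -150, 100, 300],
}
growth_target = [0, 0, 200, 450]   # desired portfolio net cash flow per quarter

quarters = len(growth_target)
portfolio = [sum(cf[q] for cf in forecasts.values()) for q in range(quarters)]
gaps = [target - actual for target, actual in zip(growth_target, portfolio)]

for q, (total, gap) in enumerate(zip(portfolio, gaps), start=1):
    note = f"gap of {gap}" if gap > 0 else "on target"
    print(f"Q{q}: portfolio={total:5d}  target={growth_target[q-1]:5d}  {note}")
```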


Action Research

“If I look for better ways to manage projects, then I can increase competitive advantage for our entire organization, but it’s hard for organizations to learn from their activities.”

Whatever way you look at Action Research, you’ll find it the favorite of project managers, and by the way, the most valuable part of being in a community. The purpose of action research is to develop new skills or new approaches and to solve problems with direct application to your project, Integrated PM methods, or within the working PM setting.

Examples:

  • An Integrated PM training program to help train PMs to work more effectively with project team members; to develop an exploratory program leveraging fewer approval points for efficiency; to solve the problem of apathy in chartering meetings with the project sponsor; to test a fresh approach to getting more clients interested in project progress prior to project completion.
  • A community initiative to get more community members contributing their thoughts and solutions in the community forum called Straight Talk.
  • Teaching site visitors how to use the pmNERDS’ website annotation features to further their Integrated PM studies.

Characteristics:

  • Practical and directly relevant to an actual situation in the working Integrated PM world. The subjects are the project managers, the sponsors, community members, or others with whom you are primarily involved.
  • Provides an orderly framework for problem-solving and new developments that is superior to the impressionistic, fragmentary approach that otherwise typifies developments in project management.
  • Empirical in the sense that it relies on actual observations and behavioral data, and does not fall back on subjective committee "studies" or opinions of people based on their past experience.
  • Flexible and adaptive, allowing changes during the trial period and sacrificing control in favor of responsiveness and on-the-spot experimentation and innovation.
  • While attempting to be systematic, action research lacks scientific rigor because its internal and external validity are weak. Its objective is situational, its sample is restricted and unrepresentative, and it has little control over independent variables. Hence, its findings, while useful within the practical dimensions of the situation, do not directly contribute to the general body of Integrated PM knowledge.

Steps:

  1. Define the problem or set the goal. What is it that needs improvement or that might be developed as a new skill or solution?
  2. Review the literature to learn whether others have met similar problems or achieved related objectives.
  3. Formulate testable hypotheses or strategies of approach, stating them in clear, specific, pragmatic language.
  4. Arrange the research setting and spell out the procedures and conditions. What are the tasks you will perform in an attempt to meet your objectives?
  5. Establish evaluation criteria, measurement techniques, and other means of acquiring useful feedback.
  6. Analyze the data and evaluate the outcomes.

Quasi Experimental Research

“If we used a true experimental research method, then we would have the best results, but we can’t control all the relevant state variables.”

Many times, our clients don’t have the time or the variable control required for true experimental research, but still need estimates to provide direction for their performance improvement initiatives. In these cases, they can approximate the conditions of the true experiment in a setting which does not allow the control and/or manipulation of all relevant variables. The researcher must clearly understand what compromises exist in the internal and external validity of the design, and proceed within these limitations.

Examples:

  1. To investigate the effects of spaced versus massed task execution in the to-do lists for project teams without being able to assign team members to the test at random or to supervise closely the execution of their tasks.
  2. To assess the effectiveness of three approaches to teaching basic principles and concepts in Integrated PM when some of the teachers could inadvertently volunteer for one of the approaches because of its impressive-looking materials.
  3. Project value realization research involving a pretest-posttest design in which such variables as maturation, effects of testing, statistical regression, selective attrition, and stimulus novelty or adaptation, are unavoidable or overlooked.
  4. Most studies of business unit problems such as late projects, poor quality, cost overruns, or canceled projects, where control and manipulation are not always feasible.

Characteristics:

  1. Quasi-experimental research typically involves applied settings where it is not possible to control all the relevant variables but only some of them. The researcher gets as close to the true experimental rigor as conditions allow, carefully qualifying the important exceptions and limitations. Therefore, this research is characterized by methods of partial control based on a careful identification of factors influencing both internal and external validity.
  2. The distinction between true and quasi-experimental research is tenuous, particularly where human subjects are involved, as in projects. A careful study shows the distinction to be relative, a matter of approximation on a continuum ranging from "one-shot case studies" of an action research nature to experimental-control group designs with randomization and rigorous management of all foreseeable variables influencing internal and external validity.
  3. While action research can have quasi-experimental status, it is often so informal as to deserve separate recognition. Once the research plan systematically examines the validity question, moving out of the intuitive and exploratory realm, the beginnings of experimental methodology are visible.

Steps in Quasi-Experimental Research: The same as with true experimental research, carefully recognizing each limitation to the internal and external validity of the design.


True Experimental Research

“If we research practices to identify project constraints, then we can identify ways to improve project performance, but how do we know we can trust our findings before we commit irrevocable resources into the performance improvement effort?”

The perspective of Integrated PM is perfect for true experimental research and performance improvement. This is how a program center can finally explore the correlations and interconnections between all the facets of a project, and other projects within the program and portfolio. True experimental research is the investigation of possible cause-and-effect relationships by exposing one or more experimental groups to one or more treatment conditions, and comparing the results to one or more control groups not receiving the treatment.

Examples:

  1. To investigate the effects of two methods of new practice implementation as a function of project size (Maintenance & Utility, Enhancement & Improvement, and Transformational) and levels of PM experience (high, average, low), using random assignment of projects and project manager experience levels to method and business unit.
  2. To investigate the effects of a new practice training program on the organization’s project managers using experimental and control groups who are either exposed or not exposed to the program, respectively, and using a pretest-posttest design in which only half of the project managers randomly receive the pretest to determine how much of a performance change can be attributed to pretesting or to the training program.
  3. To investigate the effects of two methods of project value realization evaluation on the performance of project teams within the business unit. N in this study would be the number of project teams, rather than project managers, and the method would be assigned by stratified random techniques such that there would be a balanced distribution of the two methods to projects across the business unit.

Characteristics of Experimental Designs:

  1. True experimental research requires rigorous management of experimental variables and conditions, either through direct control and manipulation or through randomization.
  2. Typically uses a control group as a baseline against which to compare the group(s) receiving the experimental treatment.
  3. Concentrates on the control of variance:
    • To maximize the variance of the variable(s) associated with the research hypotheses.
    • To minimize the variance of extraneous or "unwanted" variables that might affect the experimental outcomes, but are not themselves the object of study.
    • To minimize the error or random variance, including so-called errors of measurement.

Random selection of subjects, random assignment of subjects to groups, and random assignment of experimental treatments to groups yield the best solution.
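
As a minimal sketch of the randomization step, assuming project teams are the experimental subjects, the following assigns teams to treatment and control groups at random.

```python
# Minimal sketch (hypothetical subjects): randomly assign project teams to a
# treatment group and a control group, the core of a true experimental design.
import random

random.seed(42)                      # fixed seed only so the sketch is reproducible
teams = [f"Team {i}" for i in range(1, 13)]

random.shuffle(teams)                # random assignment of subjects to groups
midpoint = len(teams) // 2
treatment, control = teams[:midpoint], teams[midpoint:]

print("Treatment (receives the new practice):", treatment)
print("Control (business as usual):          ", control)
```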

Internal validity is the ‘sine qua non’ of research design and the first objective of experimental methodology. It asks the question: Did the experimental manipulation in this study really make a difference?

External validity is the second objective of experimental methodology. It asks the question: How representative are the findings and can the results be generalized to similar circumstances and subjects?

In classic experimental design, all variables of concern are held constant except a single treatment variable which is deliberately manipulated or allowed to vary. Advances in methodology such as factorial designs, analysis of variance, and multiple regression now allow more than one variable to be manipulated or varied concurrently across more than one experimental group.

This permits the simultaneous determination of

  • the effects of the principal independent variables (treatments),
  • the variation associated with classificatory variables, and
  • the interaction of selected combinations of independent and/or classificatory variables.
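
To illustrate, here is a minimal sketch of a two-factor (method x experience) design analysed with a two-way analysis of variance. The data are simulated and the numpy, pandas, and statsmodels libraries are assumed; it is an illustration of the factorial idea, not a prescribed analysis.

```python
# Minimal sketch with simulated data; assumes numpy, pandas, and statsmodels.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
method = np.repeat(["Method A", "Method B"], 30)                     # treatment factor
experience = np.tile(np.repeat(["low", "average", "high"], 10), 2)   # classificatory factor

# Simulated outcome: a method effect, an experience effect, and random noise.
score = (70
         + np.where(method == "Method B", 5, 0)
         + np.select([experience == "low", experience == "high"], [-4, 4], 0)
         + rng.normal(0, 3, size=60))

df = pd.DataFrame({"method": method, "experience": experience, "score": score})
model = ols("score ~ C(method) * C(experience)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects of each factor plus their interaction
```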

While the experimental approach is the most powerful because of the control it allows over relevant variables, it is also the most restrictive and artificial. This is a major weakness in applications involving human subjects in real world situations, since resources often act differently if their behavior is artificially restricted, manipulated, or exposed to systematic observation and evaluation.

Seven Steps in Experimental Research:

  1. Survey the literature relating to the problem.
  2. Identify and define the problem.
  3. Formulate a problem hypothesis, deducing the consequences, and defining basic terms and variables.
  4. Construct an experimental plan:
    • Identify all nonexperimental variables that might contaminate the experiment, and determine how to control them.
    • Select a research design.
    • Select a sample of subjects to represent a given population, assign subjects to groups, and assign experimental treatments to groups.
    • Select or construct and validate instruments to measure the outcome of the experiment.
    • Outline procedures for collecting the data, and possibly conduct a pilot or "trial run" test to perfect the instruments or design.
    • State the statistical or null hypothesis.
  5. Conduct the experiments.
  6. Reduce the raw data in a manner that will produce the best appraisal of the effect which is presumed to exist.
  7. Apply an appropriate test of significance to determine the confidence one can place on the results of the study.

Correlational Research

“If we compare project managers to project performance, then we might understand what is causing good project performance, but we don’t understand how to conduct good correlational studies.”

Have you ever wondered just what caused your bad hair day? That isn’t really the purpose of ‘Correlational Research’, but we can still benefit from the method. Its purpose is to investigate the extent to which variations in one factor of Integrated PM correspond with variations in one or more other project factors, based on correlation coefficients.

Examples:

  • A study investigating the relationship between charter existence as the criterion variable and a few of the variables for successful projects.
  • A factor-analytic study of several personality tests of the project manager.
  • A study to predict success in project performance based on intercorrelation patterns for task variables.

Characteristics: Appropriate where variables are very complex and/or do not lend themselves to the experimental method and controlled manipulation. Correlational Research permits the measurement of several variables and their interrelationships simultaneously and in a realistic setting. It gets at the degrees of relationship rather than the all-or-nothing question posed by experimental design: "Is an effect present or absent?"
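
A minimal sketch, using hypothetical project data and the pandas library, of measuring several project factors at once and computing their correlation coefficients. A coefficient near +1 or -1 indicates a strong relationship; it still says nothing about cause and effect.

```python
# Minimal sketch (hypothetical project data): a correlation matrix across
# several project factors measured simultaneously. Assumes pandas is installed.
import pandas as pd

projects = pd.DataFrame({
    "charter_quality":   [3, 5, 2, 4, 5, 1, 4, 3],   # 1-5 rating
    "sponsor_meetings":  [2, 6, 1, 4, 5, 1, 3, 2],   # per month
    "schedule_variance": [0.20, 0.02, 0.35, 0.10, 0.05, 0.40, 0.08, 0.18],
})

# Pearson correlation coefficients: degree of relationship, not cause and effect.
print(projects.corr().round(2))
```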

Weaknesses: Among its limitations are the following:

  • It only identifies what goes with what; it does not necessarily identify cause-and-effect relationships.
  • It is less rigorous than the experimental approach because it exercises less control over the independent variables.
  • It is prone to identify spurious relational patterns or elements which have little or no reliability or validity.
  • The relational patterns are often arbitrary and ambiguous.
  • It encourages a "shotgun" approach to research, indiscriminately throwing in data from miscellaneous sources and defying any meaningful or useful interpretation.

Steps:

  1. Define the problem.
  2. Review the literature.
  3. Design the approach:
    • Identify the relevant variables.
    • Select appropriate subjects.
    • Select or develop appropriate measuring instruments.
    • Select the correlational approach that fits the problem.
  4. Collect the data.
  5. Analyze and interpret the results.



Causal-Comparative Research

“If we want to reduce the amount of changes made at the end of our projects, then we need to find out the causes of so many changes, but that means we should do causal-comparative research.”

Nowadays they tell us that the Boston Massacre wasn’t really what it was claimed to be. What was the cause? There are times that we might feel discovering the cause of project problems might be just as unpopular as digging around in the Boston Massacre. But if you must, then do it right.

One of the most common research methods used in Integrated PM is the Causal-Comparative research method. It’s used to investigate possible cause-and-effect relationships by observing some existing consequence (effect) and searching back through the data for plausible causal factors. This contrasts with the experimental method which collects its data under controlled conditions in the present.

Examples:

  1. To identify factors characterizing persons having either high or low approval rates, using data from past project records.
  2. To determine the attributes of effective project sponsors as defined, for example, by their realized project value. Portfolio and program records over the past five years are then examined, comparing these data to the number of innovative projects or several other factors.
  3. To look for patterns of behavior and achievement associated with project manager experience differences, using descriptive data on project behavior and project value achievement.

Principal Characteristics: Causal-comparative research is "ex post facto" in nature, which means the data are collected after all the events of interest have occurred. The investigator then takes one or more effects (dependent variables) and examines the data by going back through time, seeking out causes, relationships, and their meanings.

Strengths: The causal-comparative method is appropriate in many circumstances where the more powerful experimental method is not possible. It is used when it is not always possible to select, control, and manipulate the facts necessary to study cause-and-effect relations directly. It also can be used when the control of all variations except a single independent variable may be highly unrealistic and artificial, preventing the normal interaction with other influential variables. Of course, with most project conditions, it is used when laboratory controls for many research purposes would be impractical, costly, or ethically questionable.

Note: The experimental method involves both an experimental and a control group. Some treatment "A" is given the experimental group, and the result "B" is observed. The control group is not exposed to "A" and their condition is compared to the experimental group to see what effects "A" might have had in producing "B." In the causal-comparative method, the investigator reverses this process, observing a result "B" which already exists and searches back through several possible causes ("A" type of events) that are related to "B."

  1. It yields useful Integrated PM information concerning the nature of phenomena: what goes with what, under what conditions, in what sequences and patterns, and the like.
  2. Improvements in recent years in techniques, statistical methods, and designs with partial control features, including in Integrated PM, have made these studies more defensible.
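
To make the ex post facto logic concrete, here is a minimal sketch with hypothetical past project records: the effect, late delivery, already exists, and the analysis looks back through recorded factors, comparing the late and on-time groups.

```python
# Minimal sketch with hypothetical past project records (ex post facto data).
records = [
    # (late?, charter_signed?, scope_changes)
    (True,  False, 9), (True,  False, 7), (True,  True,  8), (True,  False, 6),
    (False, True,  2), (False, True,  3), (False, False, 4), (False, True,  1),
]

def summarize(group):
    """Average each candidate causal factor within one outcome group."""
    n = len(group)
    charter_rate = sum(1 for _, charter, _ in group if charter) / n
    avg_changes = sum(changes for _, _, changes in group) / n
    return charter_rate, avg_changes

late = [r for r in records if r[0]]
on_time = [r for r in records if not r[0]]

# Compare the groups on each recorded factor, looking back for plausible causes.
for label, group in (("late", late), ("on time", on_time)):
    charter_rate, avg_changes = summarize(group)
    print(f"{label:8s} charter rate={charter_rate:.2f}  avg scope changes={avg_changes:.1f}")
```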

Weaknesses: The main weakness of any ex post facto design is the lack of control over independent variables. Within the limits of selection, the investigator must take the facts as they are found, with no opportunity to arrange the conditions or manipulate the variables that influenced the facts in the first place. To reach sound conclusions, the investigator must consider all the other possible reasons or plausible rival hypotheses which might account for the results obtained. To the extent that the conclusions can be successfully justified against these alternatives, the investigator is in a position of relative strength. A further difficulty lies in being certain that the relevant causative factor is actually included among the many factors under study.

  1. A complication is that no single factor is the cause of an outcome; some combination and interaction of factors go together under certain conditions to yield a given outcome. A phenomenon may result not only from multiple causes but also from one cause in one instance and from another cause in another instance.
  2. When a relationship between two variables is discovered, determining which is the cause and which the effect may be difficult.
  3. The fact that two or more factors are related does not necessarily imply a cause-and-effect relationship; they may all simply be related to an additional factor not recognized or observed.
  4. Classifying subjects into dichotomous groups (e.g., "Achievers" and "Non-achievers") for comparison is fraught with problems, since categories like these are vague, variable, and transitory. Such investigations often do not yield useful findings.
  5. Comparative studies in natural situations do not allow controlled selection of subjects. Locating existing groups of subjects who are similar in all respects except for their exposure to one variable is extremely difficult.

Steps:

  1. Define the problem.
  2. Survey the literature.
  3. State the hypotheses.
  4. List the assumptions upon which the hypotheses and procedures will be based.
  5. Design the approach:
    • Select appropriate subjects and source materials.
    • Select or construct techniques for collecting the data.
    • Establish categories for classifying data that are unambiguous, appropriate for the study, and capable of bringing out significant likenesses or relationships.
  6. Validate the data-gathering techniques.
  7. Describe, analyze, and interpret the findings in clear, precise terms.

Case & Field Study Research

“If I gain a holistic understanding of why projects succeed and fail, then I can take measures to encourage success, but I’m not sure how to conduct this kind of investigation.”

I don’t know. Living out with the wild creatures doesn’t seem like my cup of tea. But I’m glad someone is doing it. My wife and I love watching the films, but don’t need the dirt and danger part. It’s a good thing that while studying Integrated PM you rarely need to live with the lions. Can you imagine hiding behind a rock or bush building a case study for someone?

The case and field study method is used to intensively study the background, status, and cultural interactions of a given business unit: an individual, project team, program, or enterprise.

Examples:

  • Many of my clients have benefited from a study of past projects to define a set of common project life-cycle stages.
  • An in-depth study of an individual working in a specific job role who either excels above others or is consistently a low performer, to explain the anomaly.
  • An intensive study of a "project team" culture and communication styles in a business unit.
  • A study of projects within a portfolio examining the interconnections of state variables and objective dependencies.

Characteristics: Case studies are in-depth investigations of a given business unit resulting in a complete, well-organized picture of that unit. Depending upon the purpose, the scope of the study may encompass an entire project life cycle or only a selected segment; it may concentrate upon specific factors or take in the totality of elements and events.

Compared to a survey study which tends to examine a small number of variables across a large sample of units, the case study tends to examine a small number of units across a large number of variables and conditions.

Strengths: Case studies are particularly useful as background information for planning major capital projects and programs. Because they are intensive, they bring to light the important variables, processes, and interactions that deserve more extensive attention. They are a part of strategic planning, may pioneer new ground, and often are the source of fruitful hypotheses for further projects. Case study data provide useful anecdotes or examples to illustrate more generalized statistical findings.

Weaknesses: Because of their narrow focus on a few business units, case studies are limited in their representativeness. They do not allow valid generalizations to the enterprise, or beyond the setting from which they came, until appropriate follow-up research is accomplished, focusing on specific hypotheses and using proper sampling methods.

Case studies are particularly vulnerable to subjective biases. The case itself may be selected because of its dramatic, rather than typical, attributes, or because it neatly fits the researcher’s preconceptions. To the extent selective judgments rule certain data in or out, or assign a high or low value to their significance, or place them in one context rather than another, subjective interpretation is influencing the outcome.

Steps:

  1. State the objectives. What is the unit of study and what characteristics, relationships, and processes will direct the investigation?
  2. Design the approach. How will the units be selected? What sources of data are available? What data collection methods will be used?
  3. Collect the data.
  4. Organize the information to form a coherent, well-integrated reconstruction of the unit being studied.
  5. Report the results and discuss their significance.

The case study is a perfect launching point for providing focus in further studies and for establishing internal common ground in programs.


Descriptive Research

“If I use observation, surveys, and market evidence to test developed hypotheses, then I may discover important concepts, but I’m not sure how to do it, and I may lead myself off track.”

Keith Goffin, author of ‘Identifying Hidden Needs’ indicates “that a survey researcher asks people questions in a written questionnaire … or during an interview, then records answers. The researcher manipulates no situation or condition; people simply answer questions.” The trick is to gather information without influencing it.

The descriptive research method is a common practice for performance and process improvement within the world of projects. The goal is to describe systematically, factually, and accurately the characteristics of a given team, organization, or set of projects.

Some examples of this method include:

  • An opinion survey to assess the perceived value of an IT line of service.
  • A team survey to review the effectiveness of a project workflow.
  • A study and definition of all job roles within a business unit.
  • A report of project completion status ‘On Time’ vs. ‘Late.’

Descriptive research is used in the literal sense of describing situations or events. It is the accumulation of a database that is solely descriptive; it does not necessarily seek or explain relationships, test hypotheses, make predictions, or get at meanings and implications, although research aimed at these more powerful purposes may incorporate descriptive methods. In this way, historic records of completed projects, and the related values of state variables, support descriptive research.
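
A minimal sketch of such a descriptive summary, using hypothetical completion records: it simply counts outcomes per business unit and makes no causal claims.

```python
# Minimal sketch with hypothetical completion records; purely descriptive.
from collections import Counter

completed = [
    {"business_unit": "IT",  "status": "On Time"},
    {"business_unit": "IT",  "status": "Late"},
    {"business_unit": "Ops", "status": "On Time"},
    {"business_unit": "Ops", "status": "On Time"},
    {"business_unit": "IT",  "status": "Late"},
]

# Count completions by business unit and status; no hypotheses are tested.
by_unit_status = Counter((p["business_unit"], p["status"]) for p in completed)
for (unit, status), count in sorted(by_unit_status.items()):
    print(f"{unit:4s} {status:8s} {count}")
```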

Research authorities, however, are not in agreement on what constitutes "descriptive research" and often broaden the term to include all forms of research except historical and experimental. In this broader context, the term "survey studies" is often used to cover the examples listed above.

Typical purposes of these ‘Survey Studies’ include:

  • To collect detailed information that describes existing phenomena.
  • To identify problems or justify current conditions and practices.
  • To make comparisons and evaluations.
  • To determine what others are doing with similar problems or situations and benefit from their experience in making plans and decisions.

The steps for conducting Descriptive Research are:

  1. Define the objectives in clear, specific terms. What facts and characteristics are to be uncovered?
  2. Design the approach. How will the data be collected? How will the subjects be selected to ensure they represent the population to be described? What instruments or observation techniques are available or will need to be developed? Will the data collection methods need to be field-tested and will data gatherers need to be trained?
  3. Collect the data.
  4. Report the results.

Developmental Research

“If I studied past projects, then I could identify areas of improved performance, but I’d like an overview of the process before beginning.”

OK, I’ve got a real fancy-pants set of words for you today: “Developmental Research.” Yes, it is an R&D method, but most of us are already doing it less formally at times. In fact, that’s the case with most of the methods in this series.

The purpose of developmental research methods is to investigate patterns and sequences of growth and/or change as a function of time. Of course, this makes it an essential method for most project performance improvement initiatives.

Some examples of Developmental Research would include:

  • A project study that involves repeated observations of the same set of state variables (e.g., project or task durations) over long periods of time.
  • Quality studies directly measuring the nature and rate of changes in a sample of the same work products being delivered in different quarters.
  • Cross-sectional growth studies indirectly measuring the nature and rate of the same state variable changes by drawing samples of different projects from representative business units over time.
  • Trend studies designed to establish patterns of change in the past to predict future patterns or conditions.

Developmental research focuses on the study of variables and their development over a period of months or years. It asks, "What are the patterns of growth, their rates, their directions, their sequences, and the interrelated factors affecting these characteristics?"

The sampling problem in this method is complicated by the limited number of subjects it can follow over the years; any selective factor affecting attrition biases the study. If the threat of attrition is avoided by sampling from a stable population, this introduces unknown biases associated with such populations. Furthermore, once underway, the method does not lend itself to improvements in techniques without losing the continuity of the procedures. Finally, this method requires the continuity of staff and financial support over an extended period and typically is confined to a specific business unit or program that can maintain such an effort.

Cross-sectional studies involving developmental research usually include more subjects, but describe fewer growth factors than single unit studies. While the latter is the only direct method of studying project team development, the cross-sectional approach is less expensive and faster since the actual passage of time is eliminated by sampling different project managers across organizations.

Sampling in the cross-sectional method is complicated because the same project managers are not involved with the same projects and may not be comparable. To generalize intrinsic developmental patterns from these sequential samples of projects runs the risk of confusing differences due to development with other differences between the groups that are artifacts of the sampling process.

Trend studies are vulnerable to unpredictable factors that modify or invalidate trends based on the past. In general, long-range prediction is an educated guess while short-range prediction is more reliable and valid.
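
As a small illustration of a trend study, here is a minimal sketch with a hypothetical quarterly history (assuming numpy is available) that fits a linear trend to average task duration and extrapolates one quarter ahead; as noted above, only such short-range forecasts should be trusted.

```python
# Minimal sketch (hypothetical history): fit a linear trend to average task
# duration per quarter and extrapolate one quarter ahead. Assumes numpy is installed.
import numpy as np

quarters = np.arange(1, 9)                       # eight past quarters
avg_duration = np.array([12.0, 11.6, 11.9, 11.2, 10.8, 10.9, 10.3, 10.1])  # days

slope, intercept = np.polyfit(quarters, avg_duration, deg=1)
next_quarter = 9
forecast = slope * next_quarter + intercept

print(f"Trend: {slope:+.2f} days per quarter")
print(f"Short-range forecast for Q{next_quarter}: {forecast:.1f} days")
```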

The steps for conducting developmental research are:

  1. Define the problem or state the objectives.
  2. Review the literature to establish a baseline of existing information and to compare research methodologies including available instruments and data collection techniques.
  3. Design the approach.
  4. Collect the data.
  5. Evaluate the data and report the results.

A Score Card for Product Management

The business intent of Product Management is to increase an organization's competitive advantage through various product initiatives. Competitive advantage is defined by product differentiation and process efficiency. An XY chart displays a competitive advantage score in terms of differentiation and process efficiency. This type of score card provides actionable information that a Product Management team can use for self-diagnosis in process improvement initiatives, and for performance comparisons between business quarters or between different teams within the same organization. Standard charts enable comparisons across an industry.


You can use this score card as a sample. Create one table for each of the Seven Pillars. You will want to adjust the strategies to be in context of each pillar. Take the aggregated scores for each of the Pillars and combine them into one score for Differentiation. Computing a percentage achieved out of total possible will make your graphic representation easier to read and comparisons more meaningful.

After you’ve completed the assessment in each Pillar you switch and look at process efficiency. We use the same Seven Pillars to establish business goals. The objectives of processes within each Pillar are listed at the end of this post for reference. Remove the ones not valid for your organization. For each remaining objective, you'll need to identify specific strategies your team is performing to accomplish the specific objective.


At this point you’re ready to identify metrics and conduct the self-assessment of process efficiency.



Aggregate your scores as you did for Differentiation. When you have your percentage, subtract it from 1. This converts a high raw score, which indicates poor performance, into a low Efficiency score. When you're done you'll have two scores, one for Differentiation and one for Efficiency. We map these on an XY diagram as shown here.
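
Here is a minimal sketch of the score card arithmetic with hypothetical pillar scores (assuming matplotlib is available for the XY chart; which score goes on which axis is also an assumption):

```python
# Minimal sketch with hypothetical pillar scores; axis assignment is an assumption.
import matplotlib.pyplot as plt

# Achieved vs. possible points per Pillar (seven Pillars, hypothetical numbers).
diff_achieved = [12, 9, 15, 10, 8, 11, 14]
diff_possible = [20] * 7
eff_achieved = [6, 10, 7, 9, 8, 5, 7]       # raw efficiency scores: high = bad
eff_possible = [20] * 7

differentiation = sum(diff_achieved) / sum(diff_possible)
efficiency = 1 - sum(eff_achieved) / sum(eff_possible)   # invert so high = good

print(f"Differentiation: {differentiation:.0%}  Efficiency: {efficiency:.0%}")

# Plot the pair on an XY chart; the upper right is the best quadrant.
plt.scatter([efficiency], [differentiation])
plt.xlabel("Process Efficiency")
plt.ylabel("Differentiation")
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.title("Competitive Advantage Score Card")
plt.show()
```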


You can use this diagram to compare snapshots of your team's product management over time, or to compare different teams within the same organization at one point in time. The closer you are to the upper right quadrant, the better your Product Management Team is doing.

Efficiency Objectives

Market Sensing

Market sensing through market analysis methods

Market sensing through competitive analysis methods

Market sensing through customer satisfaction methods

Market input selection through consensus & voting methods

Market input selection through analytical scoring & portfolio balance methods

Market input selection through qualitative assessment methods

Problem Definition

Conceptualization methods used for problem definition

Analysis methods used for problem definition

Refinement methods used for problem definition

Target market sampling methods used for problem validation

Deductive reasoning and test methods used for problem validation

Expert opinion methods used for problem validation

Opportunity Definition

Conceptualization methods used for opportunity definition

Analysis methods used for opportunity definition

Refinement methods used for opportunity definition

Tracing methods used for strategic alignment

Scoring methods used for strategic alignment

Decomposition methods used for strategic alignment

Direct and proxy approval methods used for buyer-gate management

Analytical scoring & portfolio balance methods used for buyer-gate management

Modeling and forecasting methods used for buyer-gate management

Consensus & voting methods used for opportunity selection

Analytical scoring & portfolio balance methods used for opportunity selection

Qualitative assessment methods used for opportunity selection

Feature Definition

Conceptualization methods used for feature definition

Refinement methods used for feature definition

Analysis methods used for feature definition

Target market sampling methods used for feature validation

Deductive reasoning and test methods used for feature validation

Expert opinion methods used for feature validation

Roadmap Definition

Conceptualization methods used for roadmap definition

Refinement methods used for roadmap definition

Analysis methods used for roadmap definition

Direct and proxy approval methods used for buyer-constraint management

Analytical scoring & portfolio balance methods used for buyer-constraint management

Modeling and forecasting methods used for buyer-constraint management

Demand driven methods used for resource planning

Resource driven methods used for resource planning

Portfolio driven methods used for resource planning

Consensus & voting methods used for roadmap selection

Analytical scoring & portfolio balance methods used for roadmap selection

Qualitative assessment methods used for roadmap selection

Requirement Definition

Conceptualization methods used for requirement definition

Analysis methods used for requirement definition

Refinement methods used for requirement definition

Target market sampling methods used for requirement validation

Deductive reasoning and test methods used for requirement validation

Expert opinion methods used for requirement validation

Expert opinion methods used for requirement estimation

Modeling methods used for requirement estimation

Historical comparison methods used for requirement estimation

Launch Definition

Conceptualization methods used for launch definition

Refinement methods used for launch definition

Analysis methods used for launch definition

Target market sampling methods used for launch validation

Deductive reasoning & test methods used for launch validation

Expert opinion methods used for launch validation

On-Demand methods used for launch execution

Calendar paced methods used for launch execution

Milestone paced methods used for launch execution
