The Pricing Handbook
19. Software Pricing

19.8 Model Selection

The purpose of this section is to outline a process that can aid the estimator in selecting the best model for his or her particular needs. With the multitude of software cost and sizing models available, the model user faces the dilemma: "Which model should I use?" In his article "A Common Sense Approach to Software Cost Estimation," Daniel Ferens outlines a four-step approach to help the user answer this often perplexing question. The four steps are:

1. Determine Needs
2. Select Candidate Models
3. Choose the Appropriate Model
4. Reevaluate the Model Choice
19.8.1 Step 1: Determine Needs

This first step is the most crucial. Several studies have concluded that different models are best for different applications; therefore, a user must understand his or her unique situation before selecting a model. The user should first write a general statement of the organization's needs, then attempt to describe the situation in more detail. A "weighted factors approach," as illustrated in Table 19-11, can help a user better define his or her unique situation. Accuracy requirements should also be defined. The list of factors and weightings in Table 19-11 reflects the importance of these factors to the user's organization. They are presented only as an example; the factors and weightings for other organizations will often be quite different. Also, the listing of factors and the assignment of weightings can be subjective. Still, such a listing can provide the user or organization a framework for considering qualitative evaluation factors. When deciding which factors to include, consideration should be given not only to the use of the model for cost estimating, but also to how useful it will be in supporting the software pricing process outlined in Section 19.7; the data requirements discussed there are especially important.

Table 19-11. The Weighted Factors Approach
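The arithmetic behind the weighted factors approach can be sketched as follows. The factor names, weights, and ratings here are illustrative assumptions, not values from Table 19-11; an organization would substitute its own factors from Step 1.

```python
# Sketch of the weighted factors approach (cf. Table 19-11).
# All factor names, weights, and ratings below are illustrative
# assumptions, not values from the handbook's table.

def weighted_score(weights, ratings):
    """Multiply each model rating (1-10) by its factor weight and sum."""
    return sum(weights[factor] * ratings[factor] for factor in weights)

# Hypothetical importance weights assigned by the organization in Step 1.
weights = {"input data availability": 10, "cost": 6, "ease of use": 8}

# Hypothetical ratings (1-10) of how well each candidate model
# addresses each factor.
model_a = {"input data availability": 7, "cost": 9, "ease of use": 6}
model_b = {"input data availability": 9, "cost": 6, "ease of use": 8}

scores = {name: weighted_score(weights, ratings)
          for name, ratings in [("Model A", model_a), ("Model B", model_b)]}

# The highest total suggests the better fit; models with close totals
# warrant further scrutiny, since the ratings are subjective.
best = max(scores, key=scores.get)
```

With these illustrative numbers, Model B scores 190 against Model A's 172, so Model B would be carried forward for closer examination.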
According to the NASA Handbook, the accuracy of the estimates a model provides depends on the input data supplied by the user; input data availability is therefore critical. For government estimators, a contractor's software development personnel capabilities are usually unknown, especially before contract award. The other cost drivers in the model should be carefully reviewed to determine what data are, or will be, available to the government. Major functional capabilities that should be considered when selecting a software estimating tool are listed below. Depending on the organization's needs, the significance of these capabilities may differ and should be weighed accordingly. In addition, the organization should analyze its own needs, identify any additional desired capabilities specific to it, and then match available tools with overall needs. In general, the tool should:
19.8.2 Step 2: Select Candidate Model

The second step is to select a set of candidate models that meet the needs determined in Step 1. An examination of needs can point out which types of models are most suitable. For size estimation, the user can choose from various categories of models (e.g., analogy, grass roots, expert judgment, or parametric); for cost models, however, the choices will probably be restricted to parametric models, as explained in Section 19.5.5. Once the category or categories have been identified, candidate models should be selected. For example, the parametric cost models discussed in Appendix 19D should probably be considered, along with any others the user determines may be appropriate.
19.8.3 Step 3: Choose Appropriate Model

The user should perform both qualitative and quantitative (accuracy) assessments of the candidate models selected in Step 2, and choose the best model or models for his or her organization. It is recommended that two models be selected for routine use: one as the primary model and one for cross-checking the results of the primary model. A study by George A. Coggins and Roy C. Russell, "Software Cost Estimating Models: A Comparative Study of What the Models Estimate," showed that cost models, even given "equivalent" inputs, produce significantly different cost and schedule estimates. Their conclusion was that a user should learn one or two models well instead of trying to use several different models. Nevertheless, other models can still be used occasionally, if only for consideration for future use as discussed in Step 4. To perform this step effectively, the user must become familiar with each of the candidate models. This often involves attending a training course and using the model for several months, if time permits. (A well-planned, ongoing training program can benefit a cost estimating organization tremendously in terms of knowledge of models.) Once the user becomes sufficiently familiar with the models, the selection process can begin. It is highly desirable that the user conduct his or her own studies for this process and not rely solely on outside information. Validation studies performed by outside agencies can certainly help the user in the model selection process. An excellent example is a study by Elizabeth K. Bailey, et al., "A Descriptive Evaluation of Automated Software Cost-Estimation Models," which compared and evaluated features of most of the cost models described in Appendix 19D.
While outside studies such as this can provide valuable information, the user should treat them as supplementary material, not as a substitute for his or her own efforts, since they do not reflect the unique facets of the user's environment. For qualitative assessments of candidate models, the weighted factors approach shown in Table 19-11 can help. The user assigns a weight to each factor (in Step 1), assigns each model a rating between 1 and 10 on how well it addresses each factor, multiplies each rating by the corresponding weight, and sums the results. The highest total can indicate the best model (e.g., Model B in Table 19-11); however, other models with close totals (e.g., Model A in Table 19-11) should be scrutinized further. Since there is some subjectivity in this process, small differences may be negligible. Again, while the weighted factors approach is somewhat subjective, it can help a user consider what is important in model selection and quantify the rating process. For quantitative assessments, that is, determining whether the models meet accuracy requirements, the user should calibrate the models, then run them against projects for which the user has historical data and which were not used in calibration. An example is Ourada's study of military ground programs with the REVIC model, performed on an early version of the Air Force Space and Missiles Center database. For the 28 military ground programs, Ourada calibrated REVIC using 14 programs selected at random, then ran the calibrated model against the other 14 to assess its probable accuracy on military ground programs outside the database. This approach is often arduous, but it is essential if a user truly wishes to determine which model is most suitable from an accuracy standpoint.
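The calibrate-then-validate procedure described above can be sketched in miniature. The power-law effort model, the synthetic "historical" data, and the error metric (mean magnitude of relative error) are all assumptions for illustration; REVIC itself is far richer, and a real study would use the organization's actual project database.

```python
# Sketch of a calibration/holdout accuracy check in the spirit of the
# Ourada REVIC study: calibrate on half of the historical programs,
# then measure estimating error on the programs held out.
# The effort model (effort = a * size**b), the synthetic data, and the
# MMRE metric are illustrative assumptions, not the handbook's method.
import math
import random

def fit_power_law(data):
    """Fit effort = a * size**b by least squares in log-log space."""
    n = len(data)
    xs = [math.log(size) for size, _ in data]
    ys = [math.log(effort) for _, effort in data]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

random.seed(1)
# 28 synthetic "historical" programs: (size in KSLOC, effort in
# person-months), with multiplicative noise standing in for real-world
# variation. Mirrors the 28-program split in the Ourada example.
programs = [(s, 3.0 * s ** 1.1 * random.uniform(0.8, 1.25))
            for s in range(5, 61, 2)]
random.shuffle(programs)
calibration, holdout = programs[:14], programs[14:]

a, b = fit_power_law(calibration)

# Mean magnitude of relative error (MMRE) on the 14 unseen programs:
# the model's probable accuracy outside its calibration set.
mmre = sum(abs(effort - a * size ** b) / effort
           for size, effort in holdout) / len(holdout)
```

The point of the split is that error measured on the calibration programs flatters the model; only the holdout error speaks to how the model will perform on future programs.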
19.8.4 Step 4: Reevaluate Model Choice

User needs and models change over time. Many commercial models, such as PRICE-S and SEER-SEM, are updated every year, and major refinements occur every few years. New models occasionally appear; they may simply fade away, or they may become widely used and prove more suitable than current models. Therefore, a user should reevaluate his or her selection every few years. There is no reason to be "married" to a particular model or models for life unless they continue to be the best available.
19.8.5 Model Selection Summary

In summary, the four-step approach presented here can be an invaluable aid to a user in model selection, especially when the choice is not dictated by those in higher authority. The most crucial step is the first, needs determination; all other steps hinge on its success. The four-step approach is sometimes laborious, but the benefits of improved estimating can make it worthwhile.