GUIDANCE
The Pricing Handbook

19. Software Pricing


19.7 Software Pricing Process

 


Pricing generates the data procurement officials need to make sound decisions. The Federal Aviation Administration’s Acquisition Management System (FAA AMS) provides direction on the use of pricing within the acquisition process. Pricing, as defined in the FAA Pricing Handbook (FAA PH), is the set of techniques used by the procuring official to establish a "fair and reasonable price". Price analysis and cost analysis are the two basic techniques used to accomplish this purpose. Price analysis should always be performed; under certain circumstances, cost analysis must be performed as well. This section summarizes selected information concerning pricing in general and then the particulars of software pricing. Software pricing is based upon the same principles as software cost estimating, so an understanding of the previous sections is important. Detailed information regarding pricing in general is contained in the other chapters of the FAA PH.

 

19.7.1 Software Price Analysis

Price analysis is the process of examining and evaluating a proposed price without evaluating its separate cost elements and proposed profit. It may be accomplished by: 1) a comparison of submitted quotations; 2) a comparison of price quotations and contract prices with current quotations for the same or similar items; 3) the use of parameters (SLOC per person-month, for instance); and/or 4) a comparison of proposed prices with independently developed cost estimates.

In order to conduct software price analysis, the terms for comparing the data must be properly defined. Referring to The Department of the Navy Center for Cost Analysis (NCCA) Software Development Estimating Handbook, Phase One, dated February 1998, there are two basic types of information required to develop a quality software price analysis: 1) technical and programmatic information for the program proposal being analyzed and 2) technical and programmatic information for the analogous/similar historical programs used to develop the estimate/ICE that will be compared to the proposal.

Based on the analysis in NCCA’s Handbook, a software development estimate requires, at a minimum, the following information for the estimates being analyzed:

  • Some measure of the work to be performed with associated units (e.g., SLOC counts, words, function points)
  • If SLOC is utilized as the unit of measure, the associated counting convention (e.g., physical, physical with comments, logical)
  • The programming language utilized (at a minimum, Assembly versus Higher Order Languages (HOLs, e.g., FORTRAN, Jovial, CMS-2) versus Fourth Generation Languages (4GLs)).
  • The condition of the code (i.e., percent new, percent reused (modified, verbatim, translated, rehosted, COTS, etc.)), with associated definitions.
  • The phases of the software development life cycle to be estimated (e.g., System Design Review (SDR) through Formal Qualification Test (FQT)).
  • The development mode (at a minimum, embedded versus non-embedded). Embedded software is software inside a physical object that controls the object’s behavior; the term typically covers software inside navigational devices, radar sets, oscilloscopes, and other instruments, and such software has its own characteristic productivity and quality profiles (Source: Capers Jones). It has also been described as software that determines the functionality of microprocessors and other programmable devices used to control electronic, electrical, and electro-mechanical equipment and sub-systems, where the programmable devices are often "invisible" to the user (Source: The TickIT Guide).
  • Especially for historical costs, if known, the name of the contractor responsible for developing the program. NCCA contends that contractor-specific data holds the greatest possibility for increasing the accuracy and decreasing the variance associated with software estimating tools.
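The minimum data set above can be captured in a simple record so that proposal and historical data points are compared on the same terms. This is only an illustrative sketch; the class and field names are hypothetical, not taken from the NCCA Handbook:

```python
from dataclasses import dataclass

# Hypothetical record for the minimum data items listed above; the class
# and field names are illustrative, not taken from the NCCA Handbook.
@dataclass
class SoftwareEstimateData:
    size: float               # e.g., a SLOC count
    size_unit: str            # "SLOC", "function points", ...
    counting_convention: str  # "logical", "physical", ...
    language_mix: dict        # language -> fraction of the code
    percent_new: float        # condition of the code
    percent_reused: float
    phases: str               # e.g., "SDR through FQT"
    development_mode: str     # "embedded" or "non-embedded"
    contractor: str = "unknown"  # most valuable for historical data points

    def is_complete(self) -> bool:
        """Basic sanity check before the data point is used in a comparison."""
        return (self.size > 0
                and abs(self.percent_new + self.percent_reused - 100.0) < 1e-6
                and abs(sum(self.language_mix.values()) - 1.0) < 1e-6)
```

Collecting both the proposal and each analogous historical program in the same structure makes missing items (and mismatched counting conventions) obvious before any productivity comparison is attempted.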

In addition to the above items, there are four other areas that can be software cost drivers and that are input parameters for most parametric software cost models:

  1. Project Application – MIS, communications, radar, etc.
  2. Specification Level - MIL-STD-2167A, commercial, etc.
  3. Development Model - Waterfall, Spiral, etc.
  4. Project Scope - rehost, development, maintenance, etc.

Since all of the information requested affects the projected productivity of the development effort, it is crucial that the information gathered be as specific as possible. In addition to the aforementioned information on the program being estimated, the analyst must compile the same information for the analogous historical programs that will be used to develop the price analysis comparison. Furthermore, the actual effort, schedule, and cost (price) to develop the software, by software development phase if possible, should be obtained. With this information the most accurate productivity, schedule, and labor rate metrics can be developed. If the Screening Information Request (SIR) was developed correctly, the Software Development Plan (SDP) Contract Data Requirements List (CDRL) item is an excellent source of historical data. The SDP typically requires a list of previously delivered programs developed by the contractor, with the associated technical and programmatic data. If, however, the SDP is not available, this type of information can and should still be obtained from the contractor in whatever form is available. When collecting historical data, the analyst must ensure that the information is for completed programs because projections of on-going efforts are often mixed in with the list of the contractor’s programs. Since software development is continuously evolving, the analyst should always try to obtain the most recent data available.

It is important to know how the SLOC was counted so that any productivity or effort estimating relationships developed will be valid. There are two main categories of code counting conventions: physical and logical. Counting physical SLOC is accomplished by tallying the number of carriage returns in the source document. Logical SLOC are counted by tallying logical units (for example, an IF-THEN-ELSE statement is considered one logical unit).

The impact of the code counting convention is emphasized in the NCCA Handbook, which referenced two studies. An Institute for Defense Analyses (IDA) study found that on average, physical SLOC produce a code size that is about 20 percent higher than counting the same code using a logical SLOC definition. NASA's Software Engineering Laboratory (SEL) also found wide differences between physical and logical code counts. They found that a FORTRAN program’s ratio of physical lines to logical statements ranged from 2.5 to 5 due to variations in the number of comments. Likewise, Ada programs exhibited a similar ratio of 2.5 to 6 physical lines per logical statement. Not only is knowing the amount of source code necessary, but knowing the "condition" of the code is also important. NCCA used the term "condition" to describe the composition of the source code (i.e., % new, % reused). Sections 19.2.4 (Technology Insertion) and 19.4.3 (Reused Code) contain additional information concerning reused code. The amount of HOL a program contains is also an important factor to consider. All programming languages, except Assembly language, are defined as HOLs. Analysts should ask for the new and reused SLOC by language so as to avoid having to derive these values.
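The difference between the two counting conventions can be illustrated with a toy counter. Real code counters are far more sophisticated; the keyword list and comment handling below are simplifying assumptions, using C-style syntax:

```python
# Toy illustration of the two counting conventions for C-style source.
# Real code counters are far more sophisticated; the keyword list and
# comment handling here are simplifying assumptions.

def physical_sloc(source: str, include_comments: bool = True) -> int:
    """Count non-blank physical lines (i.e., carriage returns)."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    if not include_comments:
        lines = [ln for ln in lines if not ln.strip().startswith("//")]
    return len(lines)

def logical_sloc(source: str) -> int:
    """Count logical statements: semicolon-terminated statements plus
    lines opening with a control-flow keyword."""
    statements = source.count(";")
    for keyword in ("if", "else", "for", "while"):
        statements += sum(1 for ln in source.splitlines()
                          if ln.strip().startswith(keyword))
    return statements

sample = """\
// compute absolute value
if (x < 0)
    y = -x;
else
    y = x;
"""
ratio = physical_sloc(sample) / logical_sloc(sample)  # > 1, as in the studies
```

Even on this five-line fragment the physical count exceeds the logical count, which is the direction of difference the IDA and SEL studies reported.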

When using historical software effort data, it is important to consider the level of requirements under which the software was developed. A major program may have several software development efforts spanning different acquisition phases. For example, typical acquisition strategies require development of prototypes and associated software during a Prototype Phase. After a competitive selection process, one contractor's design is chosen for further development. Final development takes place for the deployable software by the winning contractor. The contractor probably reused code from the Prototype Phase that may not have undergone the same level of documentation, testing, or review as software developed for deployment. As a result, using historical prototype data points to estimate effort prior to deployment may not be appropriate without some adjustment.

Therefore, it is recommended that the data shown in Table 19-11 be requested from the offerors (if not already in the SIR) and from the sources of other data/ICE to allow a valid comparison.

 

Table 19-11. Software Estimate Comparison Parameters

Data Area – Description
Project Application – The software’s functional purpose
Project Size – Most commonly in SLOC, but could be in Function Points or Object Points
Size Counting Convention – Defines what is included in the SLOC/FP/OP count
Project SW Language – If coded in more than one language, try to get percentages; at a minimum, know Assembly versus HOL
Project Phasing – What phases of the SW development schedule were included in the contract price
Code Condition – % new, % modified (definition needed, e.g., less than 20% recoded), % unmodified, % COTS
Development Mode – Embedded versus non-embedded as a minimum
Specification Type – MIL-STD-2167A, commercial, etc.
Development Model – Waterfall, Spiral, etc.
Project Scope – Rehost, development, maintenance, etc.

If, at a minimum, the information in Table 19-11 is obtained for the proposed project and the item(s) to be used as the basis of comparison (ICE, market survey, similar items, etc.), then there should be good substantiation for the comparison.

 

19.7.2 Price Analysis Techniques

The FAA AMS Toolbox Guidance T.3.2.3, Section A-3, par. a-f, provides the following techniques to use when performing price analysis. The contract type for which each technique is appropriate is indicated in parentheses.

  • Comparison of proposed prices with Independent Cost Estimates. (Any type)
  • Comparison of proposed prices of competitors received in response to the solicitation. (Competition)
  • Comparison of proposed prices received in response to the Screening Information Request (SIR) or Market Survey. (Competition)
  • Comparison of prior proposed prices and contract prices with current proposed prices for the same or similar end items and services in comparable quantities. (Sole Source or Cost Plus)
  • Comparison with competitive published catalogs or lists, published market prices or commodities, similar indexes, and discount or rebate arrangements. (Sole Source or Cost Plus)
  • Application of Software Cost Estimating Model parameters or Rules of Thumb (such as person-months per SLOC, or other units; see section 19.7.5) (AMS used the term "rough yardsticks" for this technique) to highlight significant inconsistencies that warrant additional pricing inquiry. (Sole Source or Cost Plus)
  • Ascertaining that the price is set by law or regulation.
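The "rough yardsticks" technique above can be sketched as a simple screening calculation. The yardstick value and the 2x tolerance below are illustrative assumptions, not AMS-prescribed figures:

```python
# Screening proposals against a rule-of-thumb productivity yardstick.
# The yardstick value and the 2x tolerance below are illustrative
# assumptions, not AMS-prescribed figures.

def yardstick_flags(proposals, yardstick_hours_per_sloc, tolerance=2.0):
    """proposals maps offeror -> (total_hours, total_sloc). Returns the
    offerors whose implied hours/SLOC deviate from the yardstick by more
    than the tolerance factor in either direction."""
    flagged = {}
    for offeror, (hours, sloc) in proposals.items():
        rate = hours / sloc
        if not (yardstick_hours_per_sloc / tolerance
                <= rate <= yardstick_hours_per_sloc * tolerance):
            flagged[offeror] = round(rate, 2)
    return flagged

offers = {"Offeror A": (50_000, 100_000),   # 0.5 hours/SLOC
          "Offeror B": (300_000, 100_000)}  # 3.0 hours/SLOC
suspect = yardstick_flags(offers, yardstick_hours_per_sloc=0.5)
```

A flag here does not make a price unreasonable; it only highlights an inconsistency that warrants additional pricing inquiry, as the technique description states.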

 

19.7.3 Price Analysis Tools

Independent Cost Estimates (ICEs)

ICEs are the key estimates required by and for cost and pricing evaluations. This estimate is an assessment of the total cost to be incurred by the offeror if the contract is awarded. ICEs should be developed early in a procurement cycle and be used to determine the reasonableness and completeness of the proposals and to detect buy-ins, unbalanced pricing, and other gaming techniques. Additionally, independent estimates can be used for future budget estimates and procurement planning and decision-making. ICEs may also be complemented by economic analysis and/or cost studies. Without these key elements, structuring of Sections B, L, and M can be difficult. The IPT will determine when an ICE is appropriate. The Contracting Officer may request an ICE prior to issuance of a Screening Information Request (SIR).

The evolution of these estimates progresses as the procurement concept grows. Estimates must be conducive to the pricing evaluation. Too often, estimates are done only to support budget or technical trade-offs and cannot be translated to contract line item number (CLIN) and work breakdown structure (WBS) break-outs. In order for the analyst to develop a realistic pricing evaluation, initial estimates must support the development of Section B CLINs and provide evidence of establishing marketable prices that relate to these CLINs.

Comparison with a Similar Item’s Proposal/Price/Cost Estimate

This section is based on a U.S. Air Force Materiel Command (AFMC) white paper, "Methods for Evaluating Similar Items", dated January 1996. The ability to compare a proposal with a similar item for price analysis presumes the price for that similar item is reasonable and acceptable. The Government might have purchased the item previously on the basis of adequate price competition, catalog or market pricing, commercial item pricing, or negotiations using cost or pricing data. If so, the Government would already possess documentation that demonstrates price reasonableness for the item. Any of these should generate the confidence necessary to support the contention that price analysis will produce a reasonable price. However, it is also possible that the Government has never purchased the similar item. In that situation, it would be necessary to establish the reasonableness of the similar item’s price before any further price analysis on the offered item could be conducted. It is imperative to determine a suitable basis; otherwise, the price analysis would be without merit.

Once comfortable with the reasonableness of the similar item’s price, an understanding of the technical similarities and differences between the offered item and the similar item is necessary. The pertinent characteristics (e.g., size, language, application type, etc. from Table 19-11) of each item must be identified to facilitate comparison.

If a direct comparison is not possible, break down the offered and similar items until a common baseline is reached. It may be as simple as segregating options and upgrades to the same basic, lower-level unit, along with two lists of adders that would complete the items. The automobile industry is the clearest example where this method can be applied. However, the same thing can be done with software, isolating differences such as functionality to leave the same basic operating system and application type. Then, proceed with establishing prices for the baseline unit and each adder.

The most complex, and perhaps most frequent, situation to be encountered, especially for a new development, involves the inability to reach a common, identifiable baseline unit. In this case compare the characteristics of the two items and determine some relationship between them. Examples could be found where similar items might be compared in areas such as size, language, development phases, and specification level. The price of the similar item, having been determined reasonable, is used as the baseline, and the differences between the two items are considered as pluses and minuses to that baseline.
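The baseline-plus-adjustments approach described above reduces to simple arithmetic once each difference has been priced. All dollar figures below are hypothetical:

```python
# Baseline-plus-adjustments comparison: the similar item's (reasonable)
# price is the baseline, and each identified technical difference is
# priced as a plus or minus. All figures are hypothetical.

def adjusted_price(baseline_price, adjustments):
    """adjustments maps each difference to a signed dollar amount."""
    return baseline_price + sum(adjustments.values())

similar_item_price = 1_200_000  # previously determined to be reasonable
differences = {
    "additional display functionality": +150_000,
    "reused operating environment":      -90_000,
    "reduced documentation level":       -40_000,
}
benchmark = adjusted_price(similar_item_price, differences)
```

The hard analytical work is in pricing each plus and minus defensibly; the roll-up itself is trivial, which is why the substantiation for each adjustment should be documented alongside the total.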

When the procured and similar items are broken down for evaluation, any suitable price analysis tools and techniques may be used to substantiate prices of the segregated pieces. Utilize purchase history or catalog, market, or commercial price assessment as applicable. By closely aligning characteristics into comparable categories, some parametric relationship might be disclosed to explain the impact of a characteristic. Of course, if reasonableness of a portion cannot be established using price analysis techniques, the negotiator should request cost information (cost or pricing data if the threshold is met) on that portion and perform a cost analysis.

The Cost/Price Model

The analyst should plan for the development of a cost/price model. This is not a Cost Estimation Model as discussed in Appendix 19D, but a spreadsheet type model used specifically for cost/price analysis and/or proposal evaluation. When preparing a negotiation position, a cost/price model should consist of spreadsheets for the basic items being negotiated (usually CLINs or WBS elements), summary sheets, and sheets containing backup data (other direct costs, rates etc.). Essentially, all of the elements that are to be analyzed should be represented in the model.

In a competitive procurement it is often useful to provide an automated model with the SIR for direct input by the contractors. This reduces evaluation time and minimizes errors. The construction of the model will depend heavily on the type(s) of contract, CLIN structure, WBS structure, and quantity of data required. The analyst must work closely with the CO and IPT to ensure that mutual goals are met. A technical description and an in-depth explanation of cost/price models are provided in Chapter 3, "Automated Cost Models", of this handbook.

For software procurements, the model can be especially useful since it should contain the input parameter requirements (Table 19-11 as a minimum) needed to compare proposals to the ICE and/or similar items, plus enter data into a Software Cost Estimation Model.
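A cost/price model of the spreadsheet type described above can be sketched in a few lines. The CLIN names, labor categories, and rates are illustrative:

```python
# A toy cost/price model of the spreadsheet type described above: one
# "sheet" of rows per CLIN, a backup sheet of labor rates, and a summary
# roll-up. CLIN names, categories, and rates are illustrative.

rates = {"engineer": 95.0, "tester": 70.0}  # backup data "sheet"

clins = {
    "CLIN 0001 - SW Development": [("engineer", 10_000), ("tester", 2_000)],
    "CLIN 0002 - SW Test":        [("tester", 4_000)],
}

def clin_total(rows):
    """Rows are (labor_category, hours); a real model would add ODCs,
    indirect costs, and fee on top of direct labor."""
    return sum(rates[category] * hours for category, hours in rows)

summary = {clin: clin_total(rows) for clin, rows in clins.items()}
total_price = sum(summary.values())
```

Structuring the model by CLIN from the start is what allows proposal data, the ICE, and the negotiation position to be compared line for line.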

 

19.7.4 Software Cost Analysis

Cost analysis is the review and evaluation of the separate cost elements and proposed profit/fee of:

  • An offeror’s or contractor’s cost or pricing data or information other than cost or pricing data and
  • The judgmental factors applied in projecting from the data to the estimated costs.

The purpose of the evaluation is to form an opinion on the degree to which the proposed costs represent what the cost of the contract should be, assuming reasonable economy and efficiency. However, cost analysis does not necessarily provide a picture of what the market is willing to pay for the product involved. For that, price analysis is needed.

The cost areas analyzed in a Cost Analysis, each discussed in detail in a separate chapter of the FAA PH, are as follows:

    • Direct Labor
    • Material
    • Other Direct Costs (ODCs)
    • Indirect Costs
    • Facilities Capital Cost of Money
    • Profit/Fee

Common terms used in cost analysis that are applicable to software are contained in Table 19-12 below.

 

Table 19-12. Cost Analysis Terms and Definitions

Term – Definition
Basis of Estimate – A statement of the rationale used by a contractor to generate a cost estimate for a specific task or item to be produced.
Direct Labor – Work performed by individuals that is directly related to a specific cost objective. This work is readily identifiable with a particular product or service.
Indirect Labor – Work performed by individuals that is not identifiable with a single final cost objective but is identified with two or more final cost objectives or an intermediate cost objective. One example of indirect labor is the work expended by the controller of a company. The controller’s work is not directly identifiable in the production of a specific product or service, since his or her work is spread across several projects or tasks.
Labor Hour – The unit of time by which direct labor activity is measured.
Labor Rate – The dollar amount paid to an individual per a given amount of time in consideration of work accomplished.
Labor Cost – The product (i.e., result) of multiplying labor hours by appropriate labor rates.
Labor Category – A grouping of workers with similar skills or expertise.
Labor Mix – The combination of functional skills and levels of worker experience required to accomplish a given task.
Other Direct Costs (ODC) – Other costs charged directly to the Government that have not been included in proposed material, direct labor, indirect costs, or any other category of cost. ODCs can include but are not limited to: special tooling, travel expenses, relocation expenses, pre-production and start-up costs, packaging and transportation costs, royalties, spoilage and rework, computer expenses, federal excise taxes, and reproduction costs.
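The labor terms defined above combine arithmetically: labor cost is the product of labor hours and labor rates, summed over the labor mix. A brief illustration with hypothetical categories, hours, and rates:

```python
# Labor cost = labor hours x labor rate, summed over the labor mix.
# The categories, hours, and rates below are hypothetical.

labor_mix = [
    # (labor category, labor hours, labor rate in $/hour)
    ("senior software engineer", 1_500, 110.0),
    ("software engineer",        4_000,  85.0),
    ("test engineer",            1_200,  75.0),
]

labor_cost = sum(hours * rate for _category, hours, rate in labor_mix)
```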

 

19.7.5 Considerations for Software Cost Analysis

In addition to general cost analysis considerations, there are three key considerations that apply to most proposals for a software program: software development productivity, code condition (percent new and reused code), and the software defect rate by program phase.

Productivity

Productivity is a primary indicator of how efficient the contractor is in developing software. Productivity relates software development effort to the organizational capabilities, experience, and individual talents of the team that will perform the software development. This rate should be based upon historical data. The productivity rate tends to remain constant for a given organization, so previous proposals by the same contractor are especially valuable. It is much more difficult to compare rates between different contractors because it is so dependent upon organization and personnel. Productivity is also affected by programming language, processes, specification level and software tools.

Referring to NCCA’s Handbook, two calculations should be made for software development productivity:

  1. Productivity expressed as hours per SLOC, based on the following formula:

     Productivity (hours per SLOC) = (PM × 152) / Total SLOC

  2. Productivity expressed as hours per new SLOC. This metric is important because new SLOC tends to drive the effort:

     Productivity (hours per new SLOC) = (PM × 152) / New SLOC

     where PM = person-months of effort and 152 is assumed to be the average work-hours in a month.

This rate, along with the code size, usually forms the basis for the software development effort. Therefore, it is important to compare productivity against a Rule of Thumb estimate, other programs, and/or the contractor’s past performance to determine if it is reasonable. Then, over the life of the contract, compare the rate from proposal to proposal. The rate does tend to increase (become less efficient) gradually over the life of the contract as additional requirements are added to the program (new code has to be integrated with more and more old code).
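The two NCCA productivity metrics can be computed directly from the definitions above, using the stated 152 work-hours-per-month assumption:

```python
# The two NCCA productivity metrics, using the 152 work-hours-per-month
# assumption stated above.

HOURS_PER_PERSON_MONTH = 152

def hours_per_sloc(person_months, total_sloc):
    """Overall productivity: development hours per delivered SLOC."""
    return person_months * HOURS_PER_PERSON_MONTH / total_sloc

def hours_per_new_sloc(person_months, new_sloc):
    """Productivity against new code only, since new SLOC drives effort."""
    return person_months * HOURS_PER_PERSON_MONTH / new_sloc
```

For example, 200 person-months against 100,000 total SLOC implies 0.304 hours per SLOC; computed against the new code alone, the same effort yields a higher (less flattering) rate, which is why both metrics should be calculated.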

Code Condition

Code condition, although mentioned in section 19.7.1 in regards to price analysis, is also important to cost analysis. The proposal should separate the total software development effort by percent of new code, modified code, COTS and unmodified code (see Sections 19.1.2, 19.2.4 (Technology Insertion), and 19.4.3 for more details on the definitions of these terms and Appendix 19E for COTS considerations).

Reused code (modified, unmodified and COTS fall into this category) is included in a program to reduce effort, cost and schedule. According to NASA’s Software Engineering Lab (SEL), "Cost and Schedule Estimation Study Report", dated Nov 1993, for projects with moderate to low code reuse (less than 70 percent), the post-CDR growth in SLOC due to requirement changes and TBDs is commensurate with past SEL experience: 40 percent. For projects with high code reuse (70 percent or more), the post-CDR growth in SLOC is only about half as much (20 percent). For projects with moderate to low code reuse, the post-CDR growth in schedule is 35 percent. For projects with high reuse, the post-CDR growth in schedule is 5 percent.

As mentioned in Section 19.1.2, once the percentage of modifications in a block of code exceeds 20%, it is usually less expensive to write new code. Contractors are often optimistic about the amount of code that can be reused. When monitoring metrics of software program growth (increasing cost and schedule), there are typically two sources of new code increase: requirements growth from the user and less modified/unmodified code than projected by the contractor. As the contractor gets into the process of actually developing the software, the task of reusing software typically becomes more difficult than planned. Therefore, it is important to track these percentages over the life of a contract, both as indicators of future problems and for cost analysis. Table 19-4 of Section 19.3.1 includes code size as a process effectiveness metric.
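The 20% threshold from Section 19.1.2 can serve as a simple decision aid. This sketch only flags candidates for rewriting; it is not a cost model:

```python
# Decision aid for the 20 percent rule from Section 19.1.2: once the
# modification fraction of a code block exceeds the threshold, writing
# new code is usually less expensive. This flags candidates only; it is
# not a cost model.

def prefer_new_code(modified_sloc, block_sloc, threshold=0.20):
    """True when modifications exceed the threshold fraction of the block."""
    return modified_sloc / block_sloc > threshold
```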

Figure 19-10 below is an example of new code growth at the expense of reused code taken from an actual FAA program. Although total SLOC grew only 5% during the 17 month period shown, the amount of new code that had to be developed grew from 59% of the total code to 89%. There was also about 25% schedule growth on this program.

Figure 19-10. Comparison of New Code versus Reused Code

image

Software Defects

The contractor should indicate in the proposal the number of defects expected to be discovered and fixed for each phase of the program. As discussed in Section 19.6.2, the earlier in the development cycle that a software bug is discovered/detected, the less expensive it will be to fix. The quantity of software defects should be expressed in or converted to a rate since this is how defects are compared across the industry.

SW Defect Rate (defects per KSLOC) = Number of Defects Discovered / KSLOC

There should be separate rates calculated by phase using the SW Defect Rate equation for new code and modified code, plus a rate for defects remaining in the existing, unmodified code. These calculations would then become the basis for the effort required for testing/engineering to find and to fix the defects in each phase. Defect metrics can be useful during software cost analysis to aid the analyst in determining if the contractor’s estimated effort for identifying and fixing defects is consistent with the contractor’s/industry’s historical averages.
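Calculating separate rates for new and modified code, as described above, is straightforward. The phase names and defect counts below are hypothetical:

```python
# Separate defect rates (defects per KSLOC) by phase for new and
# modified code. Phase names and counts are hypothetical.

def defect_rate(defects, sloc):
    """Defects discovered per KSLOC of the relevant code base."""
    return defects / (sloc / 1_000)

new_sloc, modified_sloc = 40_000, 10_000
phase_defects = {
    # phase: (defects found in new code, defects found in modified code)
    "integration test": (120, 30),
    "system test":      (60,  15),
}

rates_by_phase = {
    phase: (defect_rate(new, new_sloc), defect_rate(mod, modified_sloc))
    for phase, (new, mod) in phase_defects.items()
}
```

Rates computed this way can be compared directly against the contractor's historical averages or industry figures, phase by phase.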

Figure 19-11 shows a typical trend for detecting software defects based on studies of completed software programs by Musa, Iannino, and Okumoto in their book Software Reliability: Measurement, Prediction, Application. The defects discovered during coding and unit test are usually not formally reported/tracked by the contractor, so expect that only the estimates for the phases after Unit Test will be provided in a proposal. Musa et al. organized the data to predict the number of defects remaining in the software per KSLOC at the start of each phase. Each defect quantity includes the defects for the remaining phases (the 6.01 is included in the 19.70). The book indicated that program size (SLOC) was the most important defect prediction factor, with specification change activity, programmer skill level, and design documentation thoroughness being the other most significant predictive factors. Defects continue to exist in the software even after site installations are completed and are gradually discovered as changes in software use or hardware occur.
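Because the Musa-style figures are cumulative (the 6.01 is included in the 19.70), the defects expected to surface within a single phase are the difference between consecutive entries. The phase labels and the final value below are illustrative, not Musa's published numbers:

```python
# Musa-style defect figures are cumulative: the defects/KSLOC remaining
# at the start of a phase include all later phases. Defects expected to
# surface within one phase are therefore the difference between
# consecutive entries. The 19.70 and 6.01 come from the text; the phase
# labels and the 1.50 delivery value are illustrative.

remaining_per_ksloc = [
    ("system test start",      19.70),
    ("operational test start",  6.01),
    ("delivery",                1.50),
]

found_in_phase = {
    remaining_per_ksloc[i][0]:
        round(remaining_per_ksloc[i][1] - remaining_per_ksloc[i + 1][1], 2)
    for i in range(len(remaining_per_ksloc) - 1)
}
```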

 

Figure 19-11. Mean Software Defect Rate by Phase

image

Figure 19-12 below shows a defect prediction curve from a previous FAA software system. The contractor updated the defect prediction curve (top line) at proposal time (based upon 6 defects/KSLOC of new code at the start of system testing for that program) and then tracked the defect detection curve as the program progressed. The points where the top curve stepped up occurred because of increases in program requirements (new SLOC being added). Even though the amount of code increased in steps, notice that the discovery rate remained basically constant because it is proportional to the amount of testing being conducted. Although there will be more defects to discover as the amount of code increases, the testers are able to identify defects only so fast. Because this program spanned many years with the same contractor, personnel experience and processes improved over time, so the defect rate decreased (the distance between the middle and top lines grew larger).

 

Figure 19-12. Software Defect Prediction versus Actual Curves

image

As will be discussed in Section 19.10.3 there are Product Quality metrics that track the defect discovery rate and defect solution (i.e., fixed or closed). These are indicators of backlog and rework effort, as well as the contractor’s process capability in finding and fixing defects, and the potential quality of the software product. The bottom curve of Figure 19-12 shows the number of defects that were fixed. Since the delta of unfixed defects increased, this would normally be an area of concern. For this program, a decision was made near the end of the program to not spend the funds to fix low-priority defects.

The defect rates in the proposal should approximate the trends of Figures 19-11 and 19-12, or this could be an area for further investigation. If the system has significantly more defects than predicted by the industry averages, then the contractor’s software process and quality assurance activities should be looked at. Defect detection and correction tend to drive schedule (and cost) during the system test phase and maintenance costs during operations. According to NRaD’s Manual, the cost of corrective maintenance will depend on the quality of the software. Software that is fielded with a great number of defects will result in a greater number of user generated software trouble reports which in turn could potentially result in high maintenance costs.

 

19.7.6 Cost Analysis of a Parametric Estimate

Since most software cost estimates and proposal estimates today are based at least in part on parametric models, this section summarizes some points to consider when analyzing a parametric estimate. This section, along with the REVIC example (REVIC is a software cost estimating model based on COCOMO; see Appendix 19D for details), is extended from NASA’s Handbook.

With the proliferation of parametric cost estimating models and tools, both commercial models and "home grown" versions, it is impossible to describe what to look for in every model and cost parameter. However, some generalizations can be made. An analyst confronted with a parametric cost estimate, should take a few steps to ensure a fair review. These are:

  • Understand the cost model used. Do not hesitate to ask questions and to consult with the experts. A user-level mastery of the model is not necessary, just a general knowledge of how it works. Appendices 19D and 19E can be of assistance, and most commercial model builders welcome calls from users, analysts, and auditors.
  • Review the program inputs to the model. Is the schedule correct? Does the WBS adequately describe the product being estimated? Is anything missing? Are there duplications? Does the WBS follow the statement of work?
  • Review the technical inputs to the model with government engineers or IPT. Check them for reasonableness and benchmark them using the experience of the resident experts.
  • Understand the model’s cost drivers. Generally, there are a few select parameters that are the predominant cost estimating factors that drive cost. Many of the others are just "tweaks" to the model. Concentrate on the cost drivers when performing the analysis.
  • Be aware of the assumptions the cost estimator made when the model was built. Are they still reasonable for this procurement?
  • Be knowledgeable of the historical cost basis for the model, if any. Be sure to review the source documentation. Be wary of any model used that has no basis in historical cost. How was the data "tuned" or normalized? Were data outliers disregarded? If so, why? Was the model calibrated? How was the calibration performed? Were any universal or "standard" cost factors used? Would they be applicable in this case?
  • Question how the future environment might be different from the historical case. Have these differences been accounted for?
  • Review the track record of the estimator or the estimating organization. What is their past performance? Have their estimates been reliable?
  • Understand the economics involved with the model. What are the effective costing rates used? Are they reasonable? Do they reflect the organization and skill levels being estimated? Are the skill levels appropriate for the type of work being performed?
  • Identify what cost factors have been "tweaked" and why. Focus on the "big ticket" items. Using expert opinion and "rules of thumb," are any significant cost factors outside the range of reasonableness? For instance, it is very easy to calibrate a software cost model’s cost (hours or dollars) per line of software code. Is the CER reasonable for the estimate? Since models may define a software line of code differently, it is important to understand the definitions used in the model being evaluated.

 

19.7.7 Rules Of Thumb Use in Cost Analysis

Continuing to refer to NASA’s Handbook, rules of thumb are general rules, CERs, or other cost factors that are widely used but do not apply precisely to a specific case. Each estimating organization will have its own peculiar set. Most long-time estimators can develop a rapid estimate (a ROM) based on their own personal rules of thumb, drawn from their own experience. Examples of rules of thumb are the "default" factors that populate many parametric models, including the commercial ones. If no factor input is available, the model uses a universally derived factor, one taken from a universal database or expert opinion.

Although there is nothing wrong with the use of these rules per se, the danger is evident. The rules are only norms or benchmarks, and they will rarely apply exactly to a specific estimate. Their value lies in the comparison of a factor in a model to the universal case. Too much deviation should be investigated: why is the actual case so different from the rule, or why doesn’t the rule apply here?

For instance, if the rule of thumb is that a line of software code (for a particular type of software) should take about one-half hour to write (on average), and the parametric model indicates two hours, then some investigation is in order. Although either CER could be the correct one, there is far too much deviation from the norm, and suspicions are aroused. However, if the model indicates six-tenths of an hour compared to one-half hour, the rule has performed the "sanity" check. Use of the rule does not preclude a thorough cost analysis, however, if a cost driver has been identified.
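The deviation check described above can be sketched in a few lines. This is an illustrative sketch only: the twofold tolerance is an assumption for the example, not a prescribed threshold, and an analyst would pick a tolerance appropriate to the organization and product.

```python
# Sketch of a rule-of-thumb "sanity" check. The twofold tolerance is an
# illustrative assumption, not a prescribed threshold.

def passes_sanity_check(model_cer: float, rule_of_thumb_cer: float,
                        max_ratio: float = 2.0) -> bool:
    """Return True if the model's CER is within max_ratio of the rule of
    thumb (in either direction); False means investigation is in order."""
    hi, lo = max(model_cer, rule_of_thumb_cer), min(model_cer, rule_of_thumb_cer)
    return hi / lo <= max_ratio

# Two hours per line vs. a half-hour rule of thumb: 4x deviation
print(passes_sanity_check(2.0, 0.5))  # False -> investigate
# Six-tenths of an hour vs. a half-hour: 1.2x deviation
print(passes_sanity_check(0.6, 0.5))  # True -> passes the sanity check
```

Note that a failed check does not mean the model is wrong, only that the deviation from the norm warrants an explanation.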

Rules of thumb are important, but they must be used wisely. Each analyst should develop his/her own list, through experience or by consulting with experts in the field. Many rules exist embedded within commercial parametric models and are available to users and modelers. In any event, it is important to have a list of benchmark factors. There will probably be a different list for each organization; organizational and product differences require it. There are no shortcuts or magic formulas, so use the rules as a quality check or for quick, rough order of magnitude estimates.

 

19.7.8 Parametric Cost Analysis Example

What follows is an example of some of the above discussion, based upon an example in NASA’s Handbook. The example here uses the software cost estimating model, REVIC.

The fundamental REVIC equation is: Man-Months = A × (KDSI)^B × Π Fi, where:

  • A and B are coefficients that vary with the software development mode (for instance, embedded, organic, Ada);
  • KDSI is Thousand Delivered Source Instructions, essentially SLOC in thousands;
  • Π Fi is the product of the various environmental factors the model uses to adjust the cost estimate. These environmental factors include: analyst and programmer capabilities, applications experience, programming language, storage constraints, requirements volatility, reliability requirements, database size, product complexity, the use of modern programming practices and software tools, platform (airborne, space, etc.), and schedule compression. Each factor is given a value from very low through nominal to very high.

Suppose the following cost estimate were included in the proposal:

    REVIC: IMS EP 1 = 3.312 × (DSI/1000)^1.2 × Π Fi

    IMS EP 1 = 3.312 × (1337/1000)^1.2 × 0.874 = 4.1 PM
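The effort figure above can be reproduced with a short script. This is a sketch only: REVIC is a standalone model, and the function below is an illustrative reconstruction of its fundamental equation, not the model itself.

```python
# Sketch of the fundamental REVIC effort equation:
#   Man-Months = A * (KDSI)^B * product(Fi)
# A = 3.312 and B = 1.2 are the embedded-mode coefficients used in the
# example; 0.874 is the proposal's product of environmental factors.

def revic_man_months(a: float, b: float, dsi: int, ef_product: float) -> float:
    """Effort in person-months for a given count of delivered source
    instructions (DSI)."""
    kdsi = dsi / 1000.0  # convert DSI to thousands (KDSI)
    return a * kdsi ** b * ef_product

mm = revic_man_months(a=3.312, b=1.2, dsi=1337, ef_product=0.874)
print(round(mm, 1))  # 4.1 person-months, matching the proposal
```

Recomputing a proposal's bottom line this way is a quick first step before questioning the inputs themselves.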

The basis of estimate (BOE) says this: "The Integrated Management System Evaluation Package for Release 1 (IMS EP 1) requires 1337 new lines of embedded source code of new algorithms and uses complex interfaces. Our superior programming staff using state-of-the-art software tools, modern programming practices, and possessing significant application experience allows the product of the environmental factors to be less (13%) than nominal." Some observations can be made about this estimate. First, this particular estimate is relatively small (4.1 person-months). Although there may be more important cost issues, this one is analyzed as a "spot check". Additional information about REVIC is available in Appendices 19D and 19E and on the Internet. REVIC is available free to government agencies/contractors, so a lot of information is available, including a copy of the model and its user manual. The resident expert can confirm that the technical description in the BOE is correct. That, and a comparison with Table 19D-2, indicates that the model’s values for A and B are the correct ones for an embedded system. A quick calculation can compare the estimate with a rule of thumb supplied by the resident expert:

4.1 PM × 155 hrs/PM ÷ 1337 DSI = 0.475 hrs/DSI

Let’s assume that the resident expert believes this factor to be quite low, possibly by a factor of two. He also indicates that the DSI number needs to be verified for accuracy, since it is a major cost driver for this estimate. The environmental factors (EFs) for the model do not appear to have been calibrated except for the vague reference "...superior programming staff..." in the BOE. Since the product of all the EFs’ nominal values equals 1.4 (referring to a copy of the REVIC 9.2 program), it would appear that about 75% of the twenty EFs were set to the nominal value. He is also suspicious of the "...less (13%) than nominal" environmental factors product. (Nominal here, he believes, should equal at least 1.5 due to the type of platform involved.) Therefore "13% less" appears to be a mistake of some kind (1.4 - 0.874 = 0.526, or about 38% less than nominal), but it will have to be investigated. He also argues that since this is a competitive proposal that is almost 80% software, all software estimates may be "aggressive" and should be carefully reviewed.
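The two numerical cross-checks in this paragraph can be scripted as follows. This is a sketch; the 155 hours per person-month conversion and the 1.4 nominal EF product are the values quoted in the example, not universal constants.

```python
# Sketch of the analyst's two cross-checks: the effective productivity
# implied by the estimate, and the deviation of the proposed environmental
# factor (EF) product from its nominal value.

def hours_per_dsi(person_months: float, hrs_per_pm: float, dsi: int) -> float:
    """Effective hours per delivered source instruction."""
    return person_months * hrs_per_pm / dsi

def ef_deviation_pct(nominal_product: float, proposed_product: float) -> float:
    """Percent reduction of the EF product relative to nominal."""
    return (nominal_product - proposed_product) / nominal_product * 100

print(round(hours_per_dsi(4.1, 155, 1337), 3))  # 0.475 hrs/DSI
print(round(ef_deviation_pct(1.4, 0.874)))      # 38, well beyond the claimed 13%
```

Simple recomputations like these are often enough to surface the kind of discrepancy the resident expert flagged.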

All these questions could be investigated as part of the analysis. Although this is a relatively easy example, it demonstrates the general idea of what to look for in a parametric estimate.

 

19.7.9 COTS Special Pricing Considerations

See Appendix 19E for a detailed discussion of the areas that are special and should be considered in pricing a proposal that is for or includes COTS items.

 

19.7.10 Software Pricing Summary

Pricing is used by procuring officials to establish a "fair and reasonable price". Price analysis and cost analysis are the two basic techniques used to accomplish this purpose. Price analysis should always be performed; under certain circumstances, cost analysis also needs to be performed. Price analysis plays the lead role in determining price reasonableness and fairness, and it becomes the responsibility of the analyst to research and gather pricing data from sources other than the contractor. As a result, the analyst must understand the fundamentals of performing price analysis along with the peculiar aspects associated with software cost estimation. In order to conduct software price analysis, the terms for comparing the data must be properly defined. There are three common pricing tools normally used for price analysis: ICEs, comparison with a similar program, and an automated model (spreadsheet). In addition to the general cost analysis considerations, there are three key considerations that apply to most proposals for a software program: software development productivity, code condition (percent new versus reused code), and the software defect rate by program phase. The software cost estimation techniques/methods contained in this chapter should provide the analyst with the background required to understand a software program cost estimate, and this section provides specific areas on which to focus the software pricing effort.