Thursday, December 20, 2007

Risk Management

Risk Exposure = Level of Impact × Probability

Level of Impact: An estimate of the overall scale of the impact if the risk occurs. This is rated on a scale such as Very high impact, High impact, Medium impact, and so on.

Probability: Likelihood of occurrence of an event.

Mitigation: An action taken to avoid the occurrence of the risk, or to lessen its chances or impact. Mitigation is preventive: you apply the mitigation plan before the risk has occurred, to avoid its occurrence. If mitigation fails, the contingency plan is used.

Early warning: An assessment of known parameters that indicate risk; if these parameters are not controlled, the risk can occur. Early warnings are useful triggers for mitigation actions.

Contingency: Alternative actions to be taken in case the risk occurs despite the mitigation strategy. Risk realization generally leads to changes in schedule, effort, cost, project control, customer satisfaction, defects, etc. Hence, contingency planning identifies which parameters would be affected by the risk and how they would be handled. Contingency is corrective: you apply it after the risk has occurred.

  1. When a project starts, prepare its risk profile (the threat the project poses to the business).
  2. Pick up (from the PPDB) the most commonly occurring risks the project might be exposed to. Record them in the risk tracker.
  3. Identify other probable sources of risk.
  4. Classify the risks and identify their impact on the business.
  5. Identify their probability of realization.
  6. Identify mitigation and contingency plans.
  7. Define the thresholds for mitigation and contingency.
  8. Revisit the risks at regular intervals (during weekly team meets / client calls).
  9. Note: At any point in time, the PM and team should be aware of the top three risks plaguing the project.
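The exposure formula and the top-three rule above can be sketched as a small risk tracker. This is only an illustration: the 1-5 rating scale, the risk names, and the numbers are assumptions, not drawn from any real PPDB.

```python
# Minimal risk tracker sketch: exposure = impact x probability,
# and the PM keeps an eye on the top three risks at all times.
# Ratings (1-5) and risk names below are illustrative only.

def risk_exposure(impact, probability):
    """Risk Exposure = Level of Impact x Probability."""
    return impact * probability

def top_risks(risks, n=3):
    """Return the n risks with the highest exposure, highest first."""
    return sorted(
        risks,
        key=lambda r: risk_exposure(r["impact"], r["probability"]),
        reverse=True,
    )[:n]

tracker = [
    {"name": "Attrition of key developer", "impact": 4, "probability": 3},
    {"name": "Unstable requirements", "impact": 5, "probability": 4},
    {"name": "Hardware delivery delay", "impact": 3, "probability": 2},
    {"name": "Unfamiliar technology", "impact": 4, "probability": 4},
]

for r in top_risks(tracker):
    print(r["name"], "exposure:", risk_exposure(r["impact"], r["probability"]))
```

Revisiting the tracker at the weekly meet is then just a matter of re-rating impact and probability and re-sorting.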

Thursday, December 13, 2007

Process implementation...

For long-running projects without defined processes, do not attempt a big-bang approach. People are resistant to change, and a big-bang approach will only lead to greater resistance. Instead, work out the as-is process and suggest improvements to it. Apply metrics to see the current level of performance. This helps in understanding which areas can be oiled for maximum efficiency. The measurements also allow you to assess the weakest links, the strengthening of which will enhance performance.

After you have basic metrics in place, reach out to the client team to understand their expectations, and report the metrics to them. Facts provide a handle to justify the development team's efforts and to recognize whether a process tweak is necessary.

Talk with the development team to understand the challenges of implementing a defined process. Come up with a basic ETVX (Entry-Task-Verification-eXit) model for the as-is process and suggest improvements to it. After the process flow has been established in consultation with the project manager, disseminate the information to the entire team via classroom sessions. Randomly check each developer's work to see if the defined process is being followed. Capture metrics at each stage of the defined process.
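As a sketch, the as-is process could be captured in ETVX form like this. The phase name, tasks, and criteria are illustrative assumptions, not a prescribed template.

```python
# ETVX = Entry criteria, Tasks, Verification, eXit criteria.
# One hypothetical phase of an as-is process, captured as data.

etvx = {
    "Coding": {
        "entry": ["Design document baselined"],
        "tasks": ["Write code", "Unit-test code"],
        "verification": ["Peer code review"],
        "exit": ["Code checked in", "Unit tests pass"],
    },
}

def phase_complete(phase, done):
    """A phase is complete only when all of its exit criteria are met."""
    return all(criterion in done for criterion in etvx[phase]["exit"])

print(phase_complete("Coding", {"Code checked in", "Unit tests pass"}))
```

Spot-checking a developer then reduces to asking which entry, verification, and exit criteria were actually satisfied for the phase at hand.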

Compare the new metrics with the older ones to see the improvement in process performance.

Wednesday, December 12, 2007

SQA Interview Questions...

  1. What is software quality assurance?
  2. What is a process?
  3. What are the two views of Quality?
  4. What is CoQ, and what are the types of Cost of Quality?
  5. What is the difference between COQ and COPQ?
  6. What is the difference between Verification and Validation?
  7. What is Baselining?
  8. How is a Walkthrough different from a Review?
  9. Differentiate between a Bug and a Defect.
  10. Define the terms - Code Coverage, Code Inspection, Code Walkthroughs.
  11. How is White box testing different from Black box testing?
  12. What is Regression testing?
  13. What is a metric?
  14. Define: Quality Circle, Quality Control, Quality Assurance, Quality Policy
  15. Differentiate Static Testing from Dynamic Testing.
  16. Differentiate between QA and Testing.
  17. What are the common problems in developing software?
  18. What is the role of a Quality Assurance professional in an organization?
  19. What is Configuration Management?
  20. What are the two types of configuration audits? How is functional config audit different from physical config audit?
  21. What do you understand by Acceptable Quality Level?
  22. What is Benchmarking?
  23. What do you understand by Earned Value Management / Return on Investment?
  24. How is a procedure different from a process?
  25. What is Risk Exposure?
  26. Differentiate between a Risk and an Issue.
  27. How is small q different from Big Q?
  28. How do you differentiate these one from the other - Quality Model, Process Model, Lifecycle Model?
  29. What is DAR? Give a few instances of DAR you have implemented in your projects.
  30. What is CAR? Give a few instances of CAR you have implemented in your projects.
  31. Tell us about a few risks in your projects. How was risk prioritization done?
  32. What is the difference between Contingency and Mitigation?
  33. What metrics do you collect for your projects? How are they aligned with your project goals, engagement goals, and organizational goals?
  34. How did you analyze the metrics you collected?
  35. What is Process Compliance Index? How is PCI derived?
  36. What do you mean by PCB? How do you derive PCBs for your company?
  37. What software lifecycle models does your company use?
  38. What are the various size estimation techniques you know, and which ones were you involved in? / How is the size of a project estimated?
  39. How is a Generic Goal different from Specific Goal wrt CMMI?
  40. How is CMMI for Development different from CMMI V1.1?
  41. In what ways is Level 3 different from Level 2; Level 2 from the levels below it; Level 3 from Level 4; and Level 4 from Level 5?
  42. What do you understand by Bi-directional Traceability Matrix? What is forward traceability, and what is backward traceability?
  43. What is Horizontal Traceability, and what is Vertical Traceability? What is their significance?
  44. Why should a process be tailored / what is process tailoring? / How is process tailoring done?
  45. How do you facilitate process understanding within your organization?
  46. Right from project initiation through project closure, give a sequence of activities you carry out as SQA for your project.

Process Capability Baseline...

PCB specifies, based on data from past projects, what the performance of a process is; that is, what a project can expect by following that process. The performance factors of a process are primarily those that relate to quality and productivity. PCBs define Productivity, Quality, Effort Distribution, Defect Distribution, Defect Injection Rate, CoQ, etc.

Using the capability baseline, a project can predict, at a gross level, the effort that will be needed for various stages, the defects likely to be observed during various development activities, and the quality and productivity of the project.
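As a sketch of how a PCB is derived and used, with invented field names and numbers (real baselines would use your organization's own measures and many more projects):

```python
# Derive a toy Process Capability Baseline from past project data,
# then use it for a gross-level prediction on a new project.
# Sizes (function points), efforts (person-days), and defect counts
# below are illustrative assumptions.
from statistics import mean, stdev

past_projects = [
    {"size_fp": 200, "effort_pd": 400, "defects": 120},
    {"size_fp": 350, "effort_pd": 735, "defects": 230},
    {"size_fp": 150, "effort_pd": 285, "defects": 80},
]

productivity = [p["size_fp"] / p["effort_pd"] for p in past_projects]  # FP per person-day
injection = [p["defects"] / p["size_fp"] for p in past_projects]       # defects per FP

pcb = {
    "productivity_mean": mean(productivity),
    "productivity_stdev": stdev(productivity),
    "defect_injection_mean": mean(injection),
}

# A new 250-FP project predicts effort and defects at a gross level:
predicted_effort = 250 / pcb["productivity_mean"]
predicted_defects = 250 * pcb["defect_injection_mean"]
print(round(predicted_effort), "person-days,", round(predicted_defects), "defects expected")
```

The standard deviation kept in the baseline is what lets a project judge how much to trust the point prediction.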

Tuesday, December 11, 2007

Process Assets & Process Database...

Process Assets: Process Assets form the repository to facilitate dissemination of engagement learnings across an organization. A process asset could be any information from an engagement, which can be re-used by future engagements. Typically these include project plans, CM plans, requirements docs, design docs, test plans, standards, checklists, CAR reports, utilities, etc.

Process Database: The Process Database is a software engineering database for studying the processes in an organization with respect to productivity and quality. Its intents are:

a. To aid estimation of efforts and defects.
b. To get the productivity and quality data on different types of projects.
c. To aid in creating process capability baselines (PCBs).

Friday, November 30, 2007

NFT - Non functional testing...


Smoke Test

Smoke testing refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright, the assembly is ready for more stressful testing.

· In plumbing, a smoke test forces actual smoke through newly plumbed pipes to find leaks, before water is allowed to flow through the pipes.

In computer programming and software testing, smoke testing is a preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical.

A smoke test covers the core functionality in a short time to ensure the application works at a basic level. Smoke tests are a narrow set of tests that determine whether more extensive testing of the product is warranted. For example, in an OOP framework, one might instantiate an object from each class in the End User API and arbitrarily invoke a single method from each.
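The OOP-framework example might look like this in Python; `Account` and `Ledger` are hypothetical stand-ins for real End User API classes.

```python
# Each End User API class gets instantiated once and one method is
# invoked arbitrarily; any exception fails the smoke test fast.

class Account:
    def balance(self):
        return 0

class Ledger:
    def entries(self):
        return []

# Map each API class to one arbitrarily chosen method to invoke.
API_CLASSES = {Account: "balance", Ledger: "entries"}

def smoke_test():
    """Return True if every API class can be built and poked once."""
    for cls, method in API_CLASSES.items():
        obj = cls()              # instantiate an object from each class
        getattr(obj, method)()   # arbitrarily invoke a single method
    return True

print(smoke_test())
```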

Daily Build and Smoke Test

If you want to create a simple computer program consisting of only one file, you merely need to compile and link that one file. On a typical team project involving dozens, hundreds, or even thousands of files, however, the process of creating an executable program becomes more complicated and time consuming. You must "build" the program from its various components.
A common practice at Microsoft and some other shrink-wrap software companies is the "daily build and smoke test" process. Every file is compiled, linked, and combined into an executable program every day, and the program is then put through a "smoke test," a relatively simple check to see whether the product "smokes" when it runs.

BENEFITS. This simple process produces several significant benefits.
It minimizes integration risk. One of the greatest risks that a team project faces is that, when the different team members combine or "integrate" the code they have been working on separately, the resulting composite code does not work well. Depending on how late in the project the incompatibility is discovered, debugging might take longer than it would have if integration had occurred earlier, program interfaces might have to be changed, or major parts of the system might have to be redesigned and reimplemented. In extreme cases, integration errors have caused projects to be cancelled. The daily build and smoke test process keeps integration errors small and manageable, and it prevents runaway integration problems.
It reduces the risk of low quality. Related to the risk of unsuccessful or problematic integration is the risk of low quality. By minimally smoke-testing all the code daily, quality problems are prevented from taking control of the project. You bring the system to a known, good state, and then you keep it there. You simply don't allow it to deteriorate to the point where time-consuming quality problems can occur.

It supports easier defect diagnosis. When the product is built and tested every day, it's easy to pinpoint why the product is broken on any given day. If the product worked on Day 17 and is broken on Day 18, something that happened between the two builds broke the product.
It improves morale. Seeing a product work provides an incredible boost to morale. It almost doesn't matter what the product does. Developers can be excited just to see it display a rectangle! With daily builds, a bit more of the product works every day, and that keeps morale high.

USING THE DAILY BUILD AND SMOKE TEST. The idea behind this process is simply to build the product and test it every day. Here are some of the ins and outs of this simple idea.
Build daily. The most fundamental part of the daily build is the "daily" part. As Jim McCarthy says (Dynamics of Software Development, Microsoft Press, 1995), treat the daily build as the heartbeat of the project. If there's no heartbeat, the project is dead. A little less metaphorically, Michael Cusumano and Richard W. Selby describe the daily build as the sync pulse of a project (Microsoft Secrets, The Free Press, 1995). Different developers' code is allowed to get a little out of sync between these pulses, but every time there's a sync pulse, the code has to come back into alignment. When you insist on keeping the pulses close together, you prevent developers from getting out of sync entirely.

Some organizations build every week, rather than every day. The problem with this is that if the build is broken one week, you might go for several weeks before the next good build. When that happens, you lose virtually all of the benefit of frequent builds.
Check for broken builds. For the daily-build process to work, the software that's built has to work. If the software isn't usable, the build is considered to be broken and fixing it becomes top priority.

Each project sets its own standard for what constitutes "breaking the build." The standard needs to set a quality level that's strict enough to keep showstopper defects out but lenient enough to disregard trivial defects, an undue attention to which could paralyze progress.

At a minimum, a "good" build should

· compile all files, libraries, and other components successfully;
· link all files, libraries, and other components successfully;
· not contain any showstopper bugs that prevent the program from being launched or that make it hazardous to operate; and
· pass the smoke test.
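The four criteria above can be sketched as a simple gate. The step functions here are stubs standing in for real compile, link, launch, and smoke-test commands; in practice each would shell out to the actual toolchain.

```python
# Gate sketch: each step is a callable returning True/False. The
# stub bodies below are assumptions standing in for real commands.

def compile_all():
    return True  # compile all files, libraries, and other components

def link_all():
    return True  # link all files, libraries, and other components

def launches():
    return True  # no showstopper prevents launch or makes it hazardous

def smoke_test():
    return True  # end-to-end sanity check of the built program

GATE = [compile_all, link_all, launches, smoke_test]

def good_build():
    """A build is 'good' only if every gate step passes, in order."""
    return all(step() for step in GATE)

print("good build" if good_build() else "BUILD BROKEN: fixing it is top priority")
```

Because `all()` short-circuits, the gate stops at the first failing step, which is also the first thing the build group should fix.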

Smoke test daily. The smoke test should exercise the entire system from end to end. It does not have to be exhaustive, but it should be capable of exposing major problems. The smoke test should be thorough enough that if the build passes, you can assume that it is stable enough to be tested more thoroughly.

The daily build has little value without the smoke test. The smoke test is the sentry that guards against deteriorating product quality and creeping integration problems. Without it, the daily build becomes just a time-wasting exercise in ensuring that you have a clean compile every day.
The smoke test must evolve as the system evolves. At first, the smoke test will probably test something simple, such as whether the system can say, "Hello, World." As the system develops, the smoke test will become more thorough. The first test might take a matter of seconds to run; as the system grows, the smoke test can grow to 30 minutes, an hour, or more.
Establish a build group. On most projects, tending the daily build and keeping the smoke test up to date becomes a big enough task to be an explicit part of someone's job. On large projects, it can become a full-time job for more than one person. On Windows NT 3.0, for example, there were four full-time people in the build group (Pascal Zachary, Showstopper!, The Free Press, 1994).

Add revisions to the build only when it makes sense to do so. Individual developers usually don't write code quickly enough to add meaningful increments to the system on a daily basis. They should work on a chunk of code and then integrate it when they have a collection of code in a consistent state, usually once every few days.

Create a penalty for breaking the build. Most groups that use daily builds create a penalty for breaking the build. Make it clear from the beginning that keeping the build healthy is the project's top priority. A broken build should be the exception, not the rule. Insist that developers who have broken the build stop all other work until they've fixed it. If the build is broken too often, it's hard to take seriously the job of not breaking the build.
A light-hearted penalty can help to emphasize this priority. Some groups give out lollipops to each "sucker" who breaks the build. This developer then has to tape the sucker to his office door until he fixes the problem. Other groups have guilty developers wear goat horns or contribute $5 to a morale fund.

Some projects establish a penalty with more bite. Microsoft developers on high-profile projects such as Windows NT, Windows 95, and Excel have taken to wearing beepers in the late stages of their projects. If they break the build, they get called in to fix it even if their defect is discovered at 3 a.m.

Build and smoke even under pressure. When schedule pressure becomes intense, the work required to maintain the daily build can seem like extravagant overhead. The opposite is true. Under stress, developers lose some of their discipline. They feel pressure to take design and implementation shortcuts that they would not take under less stressful circumstances. They review and unit-test their own code less carefully than usual. The code tends toward a state of entropy more quickly than it does during less stressful times.

Against this backdrop, daily builds enforce discipline and keep pressure-cooker projects on track. The code still tends toward a state of entropy, but the build process brings that tendency to heel every day.

Who can benefit from this process? Some developers protest that it is impractical to build every day because their projects are too large. But what was perhaps the most complex software project in recent history used daily builds successfully. By the time it was released, Microsoft Windows NT 3.0 consisted of 5.6 million lines of code spread across 40,000 source files. A complete build took as many as 19 hours on several machines, but the NT development team still managed to build every day (Zachary, 1994). Far from being a nuisance, the NT team attributed much of its success on that huge project to their daily builds. Those of us who work on projects of less staggering proportions will have a hard time explaining why we aren't also reaping the benefits of this practice.

Courtesy: http://www.stevemcconnell.com/, http://www.wikipedia.org/

Saturday, September 08, 2007

Verification & Validation Techniques

Verification and validation techniques are of two types: Formal and Informal as shown in the diagram below.




Formal V&V Techniques

Formal V&V techniques are based on formal mathematical proofs of correctness. If attainable, a formal proof of correctness is the most effective means of model V&V. Unfortunately, “if attainable” is the sticking point. Current formal proof of correctness techniques cannot even be applied to a reasonably complex simulation; however, formal techniques can serve as the foundation for other V&V techniques. The most commonly known techniques are briefly described below.

Induction, Inference, and Logical Deduction are simply acts of justifying conclusions on the basis of premises given. An argument is valid if the steps used to progress from the premises to the conclusion conform to established rules of inference. Inductive reasoning is based on invariant properties of a set of observations; assertions are invariants because their value is defined to be true. Given that the initial model assertion is correct, it stands to reason that if each path progressing from that assertion is correct and each path subsequently progressing from the previous assertion is correct, then the model must be correct if it terminates. Birta and Ozmizrak (1996) present a knowledge-based approach for M&S validation that uses a validation knowledge base containing rules of inference.

Inductive Assertions assess model correctness based on an approach that is very close to formal proof of model correctness. It is conducted in three steps.

  1. Input-to-output relations for all model variables are identified
  2. These relations are converted into assertion statements and are placed along the model execution paths so that an assertion statement lies at the beginning and end of each model execution path
  3. Verification is achieved by proving for each path that, if the assertion at the beginning of the path is true and all statements along the path are executed, then the assertion at the end of the path is true

If all paths plus model termination can be proved, by induction, the model is proved to be correct.
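A toy illustration of the three steps, assuming a trivial "model" with a single execution path (a real application places assertions along every path of the simulation):

```python
# Toy model: one execution path through double_all. The input relation
# is asserted at the beginning of the path and the input-to-output
# relation at the end; if both hold for every path and the model
# terminates, the model is argued correct by induction.

def double_all(xs):
    # Assertion at the beginning of the execution path
    assert all(isinstance(x, int) for x in xs)
    out = [2 * x for x in xs]
    # Assertion at the end of the execution path
    assert len(out) == len(xs) and all(o == 2 * x for o, x in zip(out, xs))
    return out

print(double_all([1, 2, 3]))  # -> [2, 4, 6]
```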

Lambda Calculus is a system that transforms the model into formal expressions by rewriting strings. The model itself can be considered a large string. Lambda calculus specifies rules for rewriting strings to transform the model into lambda calculus expressions. Using lambda calculus, the modeler can express the model formally to apply mathematical proof of correctness techniques to it.

Predicate Calculus provides rules for manipulating predicates. A predicate is a combination of simple relations, such as completed_jobs > steady_state_length. A predicate will be either true or false. The model can be defined in terms of predicates and manipulated using the rules of predicate calculus. Predicate calculus forms the basis of all formal specification languages.

Predicate Transformation verifies model correctness by formally defining the semantics of the model with a mapping that transforms model output states to all possible model input states. This representation is the basis from which model correctness is proved.

Formal Proof of Correctness expresses the model in a precise notation and then mathematically proves that the executed model terminates and satisfies the requirements with sufficient accuracy. Attaining proof of correctness in a realistic sense is not possible under the current state of the art. The advantage of realizing proof of correctness is so great, however, that, when the capability is realized, it will revolutionize V&V.

Informal V&V Techniques

Informal techniques are among the most commonly used. They are called informal because they rely heavily on human reasoning and subjectivity without stringent mathematical formalism. The informal label should not imply, however, a lack of structure or formal guidelines in their use. In fact, these techniques should be applied using well-structured approaches under formal guidelines. They can be very effective if employed properly.

The following techniques are discussed in the paragraphs below:

  • Audit
  • Desk Checking / Self-inspection
  • Face Validation
  • Inspection
  • Review
  • Turing Test
  • Walkthroughs
  • Inspection vs Walkthrough vs Review

1. Audit: An audit is a verification technique performed throughout the development life cycle of a new model or simulation or during modification made to legacy models and simulations. An audit is a staff function that serves as the "eyes and ears of management". An audit is undertaken to assess how adequately a model or simulation is used with respect to established plans, policies, procedures, standards, and guidelines. Auditing is carried out by holding meetings and conducting observations and examinations. The process of documenting and retaining sufficient evidence about the substantiation of accuracy is called an audit trail. Auditing can be used to establish traceability within the simulation. When an error is identified, it should be traceable to its source via its audit trail.

2. Desk Checking / Self-inspection: Desk checking, or self-inspection, is an intense examination of a working product or document to ensure its correctness, completeness, consistency, and clarity. It is particularly useful during requirements verification, design verification, and code verification. Desk checking can involve a number of different tasks, such as those listed in the table below.

Typical Desk Checking Activities:

· Syntax review
· Cross-reference examination
· Convention violation assessment
· Detailed comparison to specifications
· Code reading
· Control flowgraph analysis
· Path sensitizing

To be effective, desk checking should be conducted carefully and thoroughly, preferably by someone not involved in the actual development of the product or document, because it is usually difficult to see one’s own errors.

3. Face Validation: The project team members, potential users of the model, and subject matter experts (SMEs) review simulation output (e.g., numerical results, animations, etc.) for reasonableness. They use their estimates and intuition to compare model and system behaviors subjectively under identical input conditions and judge whether the model and its results are reasonable.

Face validation is regularly cited in V&V efforts within the Department of Defense (DoD) M&S community. However, the term is commonly misused as a more general term and misapplied to other techniques involving visual reviews (e.g., inspection, desk check, review). Face validation is useful mostly as a preliminary approach to validation in the early stages of development. When a model is not mature or lacks a well-documented VV&A history, additional validation techniques may be required.

4. Inspection: Inspection is normally performed by a team that examines the product of a particular simulation development phase (e.g., M&S requirements definition, conceptual model development, M&S design). A team normally consists of four or five members, including a moderator or leader, a recorder, a reader (i.e., a representative of the Developer) who presents the material being inspected, the V&V Agent, and one or more appropriate subject matter experts (SMEs).

Normally, an inspection consists of five phases:

1. Overview
2. Preparation
3. Inspection
4. Rework
5. Follow up

5. Review: A review is intended to evaluate the simulation in light of development standards, guidelines, and specifications and to provide management, such as the User or M&S PM, with evidence that the simulation development process is being carried out according to the stated objectives. A review is similar to an inspection or walkthrough, except that the review team also includes management. As such, it is considered a higher-level technique than inspection or walkthrough.

A review team generally comprises management-level representatives of the User and the M&S PM. Review agendas should focus less on technical issues and more on oversight than an inspection does. The purpose is to evaluate the model or simulation relative to specifications and standards, recording defects and deficiencies. The V&V Agent should gather and distribute the documentation to all team members for examination before the review.

The V&V Agent may also prepare a checklist to help the team focus on the key points. The result of the review should be a document recording the events of the meeting, deficiencies identified, and review team recommendations. Appropriate actions should then be taken to correct any deficiencies and address all recommendations.

6. Turing Test: The Turing test is used to verify the accuracy of a simulation by focusing on differences between the system being simulated and the simulation of that system. System experts are presented with two blind sets of output data created under the same input conditions, one obtained from the model representing the system and one from the system itself, and are asked to differentiate between the two. If they cannot, confidence in the model's validity is increased. If they can, they are asked to describe the differences. Their responses provide valuable feedback regarding the accuracy and appropriateness of the system representation.

7. Walkthrough: The main thrust of the walkthrough is to detect and document faults; it is not a performance appraisal of the Developer. This point must be made to everyone involved so that full cooperation is achieved in discovering errors. A typical structured walkthrough team consists of:

  • Coordinator, often the V&V Agent, who organizes, moderates, and follows up the walkthrough activities
  • Presenter, usually the Developer
  • Recorder
  • Maintenance oracle, who focuses on long-term implications
  • Standards bearer, who assesses adherence to standards
  • Accreditation Agent, who reflects the needs and concerns of the User
  • Additional reviewers, such as the M&S PM and auditors

Inspection vs Walkthrough vs Review:

Inspections differ significantly from walkthroughs. An inspection is a five-step, formalized process. The inspection team uses a checklist approach for uncovering errors. A walkthrough is less formal, has fewer steps, and uses neither a checklist to guide the team's work nor a written report to document it. Although the inspection process takes much longer than a walkthrough, the extra time is justified because an inspection is extremely effective for detecting faults early in the development process, when they are easiest and least costly to correct.

Inspections and walkthroughs concentrate on assessing correctness. Reviews seek to ascertain that tolerable levels of quality are being attained. The review team is more concerned with design deficiencies and deviations from the conceptual model and M&S requirements than it is with the intricate line-by-line details of the implementation. The focus of a review is not on discovering technical flaws but on ensuring that the design and development fully and accurately address the needs of the application. For this reason, the review process is effective early on during requirements verification and conceptual model validation.

Wednesday, February 21, 2007

CM Audits

There are two types of CM Audits:

  1. Physical Configuration Audit: To determine that the configuration item conforms to physical characteristics expected.
  2. Functional Configuration Audit: To verify that a Configuration Item’s actual performance agrees with its software requirements.

Configuration audits are performed differently for development and maintenance projects.

For development projects:

  • When software is being delivered / major release
  • End of each phase

For maintenance projects:

  • Periodically, considering that there are no major releases.

Monday, February 19, 2007

CMMI L2 and L3 - Critical Differences

Level 2 organizations do not plan for managing project risks. Risks can become issues as the project progresses. Therefore, it is important to identify potential risks, prioritize them according to the impact they might have on the project, and revisit them at regular intervals. While this happens at Level 3, it is missing at Level 2.

Wednesday, February 14, 2007

1. Define Phase

In the Define phase, we identify and get approval for Six Sigma projects; that is, we identify potential projects, shortlist them based on risk / effort / impact, and create a charter for the selected project.

So, the steps involved in the Define phase are:

1. Identify potential projects
2. Define the problem and objective
3. Shortlist projects based on risk, effort, impact
4. Create process overview maps (bird's eye view of the process)
5. Define project scope using SIPOC (Supplier, Input, Process, Output, and Customer)

When identifying projects for a Six Sigma Green Belt, you should take into account not only the voice of business (VOB) but also the voice of customer (VOC), the cost of poor quality (COPQ), and service quality gaps. This leads to the identification of factors critical to quality for your company. Against each project, list the related VOC, VOB, COPQ, and service quality gaps.

The ideal project will have the following characteristics:

a. Chronic problem (problem keeps recurring)
b. Huge impact on business (the problem should have direct benefits in terms of impact on the company's business)
c. Low risk (the project should carry low risk)
d. Less effort (you should be able to finish the project within a given timeframe)

This combination is not realistic, in the sense that it is difficult to find projects that have all these characteristics. A more realistic approach is to check each project's risk vs. effort vs. impact, and choose projects that have high impact, low effort, and low risk.
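The risk-vs.-effort-vs.-impact check can be sketched as a simple scoring pass; the candidate projects and 1-5 ratings below are illustrative assumptions.

```python
# Prefer high impact, low effort, low risk when shortlisting
# Six Sigma candidate projects. Names and ratings are made up.

candidates = [
    {"name": "Reduce build breakages", "impact": 5, "effort": 2, "risk": 2},
    {"name": "Rewrite legacy module", "impact": 4, "effort": 5, "risk": 4},
    {"name": "Automate status report", "impact": 2, "effort": 1, "risk": 1},
]

def score(project):
    """Higher is better: reward impact, penalize effort and risk."""
    return project["impact"] - project["effort"] - project["risk"]

best = max(candidates, key=score)
print("Shortlisted:", best["name"])
```

A real shortlist would weight the three factors per the business context rather than treating them equally.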

Tuesday, February 13, 2007

8 D Problem-solving Technique

  1. D0 - Prepare
  2. D1 - Establish the team
  3. D2 - Describe the problem
  4. D3 - Interim containment action (place a temporary solution to safeguard the customer)
  5. D4 - Define and verify root cause and escape point (the point where the problem could have been caught but was not)
  6. D5 - Permanent Corrective Action (PCA): this solution solves the problem for the team facing it, not for the entire organization.
  7. D6 - Implement and validate the PCA: check that the solution put in place works.
  8. D7 - Prevent recurrence: prevent the event from ever occurring again, so that it is prevented across all projects in the organization.
  9. D8 - Recognize

5 Whys...

You will reach the root cause within 5 whys.

1. Why did X happen?
Because of some Y...
2. Why did Y occur?
Because of factor Y1.
3. Why did Y1 occur?
Because of Y2.
4. Why did Y2 occur?
Because of Y3.
5. Why did Y3 occur?
ANSWER: the root cause.

Past - Current - Futuristic

Most companies spend their energies on the following kinds of tasks:

1. Past-related (rework, correction, pending items)
2. Current (routine work that is supposed to be done, say, on Mondays as usual)

They have no time to focus on future-oriented activities, or to plan for them in the first place. The ratio of (past + current) : (future) for most companies is 70:30. This was once the same for Motorola, while it was 30:70 for Toyota. So Motorola investigated further. It found that its rework, correction, and current tasks stemmed from inefficient handling of issues: issues kept reappearing even though they were supposed to have been solved once. This was not the case at Toyota.

According to Six Sigma, resolve problems once and for all, so that they never recur.

McDonald's...process control

McDonald's has transformed the art of cooking into a science. Across all its outlets worldwide, the taste of the food it serves is almost identical; the variation is negligible. This has been achieved by carefully controlling the process that goes into the preparation of the food.

ICICI - Addressing variation in customer needs...

At reservation counters and banks offering single-window services, it is often seen that one line moves faster than the rest. (Let us set aside the capability, or otherwise, of the people who staff the counters.) Someone who arrives late but is queued in a fast-moving line gets his work done sooner than someone who arrived earlier but is held up in a slow-moving line. This often leads to customer dissatisfaction.

ICICI has addressed this problem by introducing the token system: a foolproof solution that removes this source of customer dissatisfaction.

DMAIC and DMADV

DMAIC - Define - Measure - Analyze - Improve - Control

DMAIC is a structured, repeatable process-improvement methodology that focuses on reducing defects in existing products and processes. DMAIC is a defect-reduction strategy.

DMADV - Define - Measure - Analyze - Design - Verify

Unlike DMAIC, DMADV is for designing new products/processes or redesigning existing ones. DMADV focuses on preventing errors and defects.

How smart customers quiz companies claiming 6sigma certification...

  1. Which processes of your company are at six sigma level?
  2. What are the specification limits of those processes? (Note that spec limits are set by the customer, while UCL and LCL sit three standard deviations from the mean.)
  3. Is your company willing to accept a penalty for defects?

Companies are NOT certified Six Sigma...their processes are...

Companies are not certified six sigma, their processes could be at six sigma level. Implementation, monitoring and control of six sigma are all done within the company implementing the six sigma program. No one outside comes and inspects it.

Companies focus on critical processes (as per VOC /VOB) and then take them to levels of six sigma efficiency.

Sunday, February 11, 2007

Mistake proofing...

Mistake proofing means taking precautions so that there is no possibility of an error occurring. In England, it was found that people confused the petrol and diesel fillers while fuelling their cars at gas stations (where drivers fill up themselves). Manned gas stations did not solve the problem completely, because the possibility of human failure always remained. Ultimately, the car manufacturers fitted petrol cars with a positive-polarity fuel tank mouth and diesel cars with a negative-polarity one, while the gas stations had exactly the opposite polarity: petrol dispensers were fitted with negative polarity, and diesel dispensers with positive. (Like charges repel, unlike charges attract!)

DPMO View

What is the need to view defects "per million"? Would a percentage view not suffice?

When defects are quoted as a percentage, their severity does not register. You can say that your process has "only 1% defects", and the defects look manageable!

Parts per million magnifies the defects and shows a more realistic picture. Translated to PPM, 1% equals 10,000 defects per million! Now that seems too big a value to neglect.

Rather than counting defective parts, a better way to denote variation is to count defects against opportunities. Thus we have DPMO (defects per million opportunities) instead of PPM.
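The percentage-to-DPMO arithmetic above can be sketched in a few lines (the function name is ours, chosen for the example):

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# A "1% defects" process: 1 defect in 100 single-opportunity units.
print(dpmo(defects=1, units=100, opportunities_per_unit=1))  # 10000.0
```

The same 1% that looked manageable as a percentage reads as 10,000 DPMO.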

Same mean, diff std dev curves...

This image shows curves having the same mean but different standard deviations. The mean is the line around which most of the values are crowded. The curve with the highest peak represents the most capable process.

As the curve becomes steeper, less of its area falls outside the specification limits; the taller the peak, the fewer the defects. The curve with the highest peak has fewer issues to address than the one with the lowest peak.

The higher the value preceding the sigma sign, the lower the possibility of defects occurring. Thus an 8-sigma process has a lower possibility of defects than a 6-sigma process, which in turn has a lower possibility than a 4-sigma process.
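The claim that a taller (lower standard deviation) curve leaves less area outside the specification limits can be checked numerically with the normal CDF, using only the standard library. The mean, limits, and standard deviations below are arbitrary illustrative values.

```python
import math

def fraction_outside(mean, stdev, lsl, usl):
    """Fraction of a normal distribution falling outside [lsl, usl]."""
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # normal CDF
    return phi((lsl - mean) / stdev) + 1 - phi((usl - mean) / stdev)

# Same mean (50) and same spec limits (44, 56); only the spread differs.
for stdev in (1, 2, 3):
    print(stdev, fraction_outside(50, stdev, 44, 56))
```

With stdev = 2 the limits sit 3 sigma away and about 0.27% of the output falls outside; with stdev = 1 they sit 6 sigma away and the outside fraction is negligible.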

Friday, February 09, 2007

Data Types

There are two data types: 1. Attribute 2. Continuous. Attribute data has countable quality characteristics: for example, the number of defects, the number of defectives, the number of NCs, etc. Continuous data, on the other hand, has measurable quality characteristics: for example, the length of a spark plug, the weight of a spark plug, the temperature at which a spark plug has maximum efficiency, etc.

If a software project collects data only on whether each milestone is met or not met, it is collecting attribute data. This does not tell us by how much we have overshot or fallen short of expectations.

Another example that shows the difference between attribute and continuous data: in a factory manufacturing drinking glasses, two teams check that each glass is of the stipulated length. The first team uses vernier calipers to measure the length; if the glass is of the stipulated length, it passes the quality check, otherwise not. Because the length is MEASURED, this yields continuous data. The second team uses a go/no-go gauge: each glass is passed through two separate raised platforms, the first sized to admit glasses of the stipulated length, the second sized to admit only shorter ones. Each item is passed through both platforms, one after the other. A glass that passes through both is shorter than desired; a glass that passes through neither is longer than desired. This way of gauging relies on ATTRIBUTE data, because the team checks a yes/no condition for each glass.

Attribute data does not need costly instruments. In our example, the go/no-go method is far cheaper than engaging vernier calipers, but we lose a lot of detail.

Note the difference between defects and defectives. "Defects" is the total number of defects found across all the pieces inspected; "defectives" is the count of items that have one or more defects. For example, in a water glass manufacturing plant, in a lot of 100, one inspected item had these defects: improper length and cracks. Another inspected item had this defect: a malformed shape.

So, out of 100, the defectives here are 2 glasses, while the defects number three (length and cracks for glass 1, and the malformed shape for glass 2). "Defects" is therefore always a better representative of the abnormalities / deviations than "defectives".

Note that continuous data can be converted to attribute data, but not vice versa. So it is always better to collect continuous data whenever it is possible to measure it.
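The defects-versus-defectives count from the glass example can be sketched directly (the inspection data below simply mirrors the example):

```python
# Inspection results for the glass example: each entry lists the
# defects found in one inspected item (most items are defect-free).
inspected = [
    ["improper length", "cracks"],   # glass 1: two defects
    ["malformed shape"],             # glass 2: one defect
] + [[] for _ in range(98)]          # remaining 98 glasses: no defects

defectives = sum(1 for item in inspected if item)   # items with any defect
defects    = sum(len(item) for item in inspected)   # total defects found

print(defectives, defects)  # 2 defectives, 3 defects in a lot of 100
```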

Quartile Deviations – Sample & Analysis

This table represents the marks scored in a math exam by students in sections XII-A, XII-B, and XII-C. Q4, Q3, Q2, Q1, and Q0 are the quartile boundaries. In lay terms, the quartiles divide the range of marks into four sections.


Q0-Q1 is the first quartile
Q1-Q2 is the second quartile
Q2-Q3 is the third quartile
Q3-Q4 is the fourth quartile


For XII-A, there is not much variation between Q3 and Q4, meaning there is little variation among the top performers of the class. Q2 and Q3 show some variation.

Mean, Median, and Mode

All three are measures of central tendency: a central score among a set of scores. The mean is heavily influenced by extreme values and is hence not suitable for measuring process performance.

[The mean is also called the average; the median is the middle value in a sorted set of data; the mode is the value that repeats most often.]

An illustration representing the fallibility of Mean and merit of Median is given below:

These are the marks obtained by students in mathematics in a particular class.

Marks
95
45
34
67
78
99
87
89
67
56
45
65
65
67
87
84
96

Here, the mean works out to about 72.1, and the median is 67. If the mathematics teacher is asked to IMPROVE THE MEAN MARKS BY 20, it would be quite an easy task. Since the mean can be boosted by inflating the extreme values, the teacher might pick the brightest students of the class (those who have already scored quite high) and improve their performance; for example, a student at 87% can easily be trained to perform at 100%. (Meanwhile the weak students are neglected, since training them well enough to boost the overall mean is quite time consuming, and comes with no promise of success.)

If, on the other hand, the teacher had been asked to raise the median by 20, it would not have been so easy. For the median to increase, the performance of at least half the class must improve: half the class has to score above the new target.

In Customer satisfaction index, for example, it is better to focus on median than on mean.

So, median is a better representation of a set of data compared to mean.
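Using the marks listed above, both central tendencies can be computed with Python's standard library (note that for this particular list the mean works out to about 72.1):

```python
import statistics

marks = [95, 45, 34, 67, 78, 99, 87, 89, 67,
         56, 45, 65, 65, 67, 87, 84, 96]

print(round(statistics.mean(marks), 2))  # sensitive to extreme values
print(statistics.median(marks))          # 67: half the class is at or below this
```

Inflating a couple of top scores moves the mean noticeably, while the median stays put unless half the class improves.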

Formula for the median in Excel: =MEDIAN(A2:A12)

Formulas for the quartiles:

Q1 = QUARTILE(A2:A12, 1)
Q2 = QUARTILE(A2:A12, 2)
Q3 = QUARTILE(A2:A12, 3)
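The same quartiles can be computed in Python with `statistics.quantiles`; `method="inclusive"` follows the same convention as Excel's QUARTILE. The eleven sample values below are arbitrary illustration data, standing in for cells A2:A12.

```python
import statistics

marks = [95, 45, 34, 67, 78, 99, 87, 89, 67, 56, 45]  # e.g. cells A2:A12

# n=4 returns the three cut points [Q1, Q2, Q3];
# method="inclusive" matches Excel's QUARTILE interpolation.
q1, q2, q3 = statistics.quantiles(marks, n=4, method="inclusive")
print(q1, q2, q3)  # 50.5 67.0 88.0
```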

Standard deviation gives a measure of dispersion (the extent to which values vary from the mean). Standard deviation is therefore a better measure of process performance than the mean, median, or mode.

2. Measure

This is the second of the DMAIC phases of Six Sigma. A measurement system is created, which helps in knowing the Ys and identifying potential Xs for the Six Sigma initiative; it is established to ensure that the data collected for the Six Sigma project is accurate.

In the Define phase, the phase prior to Measure, the potential projects (problems/opportunities, the Ys) are identified, and approximations of the size of the Six Sigma project are made to draft a schedule. In the Measure phase, the actual indicators are identified and the quantum of work is established. This gives a correct estimate of the volume of work on hand, which in turn enables accurate estimation.

The data collected for the six sigma green belt project should have the following characteristics:
  • Accurate: the observed value should be equal to the actual value.
  • Repeatable: when the same person performs the measurement twice, the results should be the same.
  • Reproducible: when two persons measure the same item, the results should be identical. Software estimates are an example of reproducibility: no matter who prepares them, the estimates should be in close proximity to one another (i.e. they should not vary much).
  • Stable: the results should be stable over a period of time.
The roadmap for "Measuring" is as shown in the diagram above.

Note that the first three steps could have been done in the Define phase itself. In the Define phase, an approximate volume of Y is taken, while in the Measure phase, the actual volume of Y is calculated. If only an approximate idea of the size of Y is known from the Define phase, the first three steps are required in the Measure phase; if the size is already known from the Define phase, the first three steps of the Measure phase can be skipped.

To summarize, we carry out the following under the Measure phase:

1. To select the appropriate Y, we use the following:

a. Sigma Level (Performance of Y)
b. RTY (Rolled Throughput Yield)
c. Cp (inherent process capability), and Cpk (resultant process capability)

2. To identify the Xs and prioritize them, we use the following:

a. Process mapping
b. Fish bone diagrams
c. Pareto analysis
d. FDM (Function Deployment Method)

At the end of step 2, we have the list of prioritized Xs.

Y = f(X)

An alternate way to look at Six Sigma

Y = f(X)

Here we shall talk about what this simple function is all about and how it leads us to DMAIC.

Y is a function of X. Its value depends on the value assigned to X. Y, thus is dependent, while X is not. Y is called KPOV (Key Process Output Variable). X is called KPIV (Key Process Input Variable).

Y is the output; X is the input. To get results, we should focus on the inputs (Xs), not on the outputs (Ys). For example, companies commonly focus on the sales target, but not on the variables/processes that affect it. When the variables/processes that control the sales target are identified and fine-tuned, the sales target is automatically brought under control.

Talking in terms of software defects: if all the causes of bugs (all the Xs) are identified and addressed, then there is no need to test the final product. The final testing can be skipped. Though this is an idealistic statement, it is what Six Sigma tries to achieve: reduce the causes of errors so that final inspection can be dispensed with.

Inspections, manual ones in particular, are never error-free. No matter how many review cycles code undergoes, the possibility of an error being overlooked remains. Inspections alone, therefore, do not really help.

Dell, for example, packages its computer components such that there is no chance of wrong fitting of parts: incompatible system elements simply will not fit. The errors are proofed out. (Dell's call-center staff are therefore confident about letting customers open the system and repair it themselves, per instructions given online.)

In the software industry, the analogue is modular programming, which yields good benefits. Modules are pre-tested, self-contained entities that only need to be integrated, followed by a final system integration test.

Let us say, for example, that to improve process performance Eureka Forbes has several Ys to choose from: sales, number of products sold per month, etc. Of them, let's consider Sales.

Y = Sales
For this Y, following are the possible Xs
X1 = Product Quality
X2 = Product Features
X3 = Price
X4 = Advertisement Effectiveness
X5 = Sales Force Effectiveness

Of these Xs, let's pick X5 (sales force effectiveness) and treat it as a Y in its own right. For this Y, the possible Xs are:

X1 = Training Effectiveness
X2 = Recruitment and Selection Effectiveness
X3 = Attrition

Next, let's take X1 (training effectiveness) as Y. For this, the possible Xs are:

X1 = Trainer Competence
X2 = Duration of Training
X3 = Training Content

Thus, Y = f(X) helps us drill down from output to input when selecting green belt projects. Green belt projects usually have a fixed time frame; they have to be chosen such that they can be completed well within it. Y = f(X) helps in choosing the Xs, and the corresponding Ys that depend on those Xs.

The challenges faced while drilling down for Xs are:

1. Identification of Ys (Which Ys to choose)
2.

a. Measurability of Y
- Current Y
- Target Y

b. Identification of Xs

3. Identification of the vital Xs among those identified: focusing on all Xs may not pay off. There may be a few vital Xs whose fine-tuning would give results.

4. Improve vital Xs and verify their impact on Y

5. Sustaining the improvements

The above five points are nothing but D-M-A-I-C.

1 = D
2 = M
3 = A
4 = I
5 = C

Thursday, February 08, 2007

Process performance and sigma levels...

Some processes have to operate at levels above six sigma: for example, mission-critical applications like satellite launches. So the expected performance (and likewise the tolerance for defects) depends on the task at hand. Mission-critical applications cannot afford even a single defect.

Same mean diff sigma, diff mean and same sigma...examples

Same mean different sigma

In a normal curve, most of the values tend to crowd around the mean. Curves with the same mean but different standard deviations are shown in the first diagram (diagram to be uploaded later). Since the means of all these curves coincide, the resulting figure looks like one curve mounted on top of another.

The process represented by the curve with the steepest incline is the most capable, because fewer values fall outside the USL and LSL. So the sharper the normal curve, the better the process it represents.

Different mean, but same standard deviation

A good example of this is the heights of defence recruits in Russia, Japan, and India: the Russians are the tallest, the Indians average, and the Japanese the shortest. When plotted, the curves (figure to be uploaded later) look like mountain ranges of the same height, with the curve for the Japanese recruits at the extreme left, the Indians in the middle, and the Russians at the extreme right.

* Note that in this example, the standard deviation in all three groups is taken to be the same.

Another view of six sigma

How many standard deviations wide is the gap between the mean and the specification limit? If it is 2, the process is at 2 sigma; if 3, at 3 sigma; if 4, at 4 sigma; if 5, at 5 sigma.

If six standard deviations can be fitted between the mean and the USL, the process is at 6 sigma. So the smaller the standard deviation, the more of them fit between the mean and the USL, pushing the sigma level up.
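This view can be written down directly (the function name and sample numbers are ours, for illustration):

```python
def sigma_level(mean, usl, stdev):
    """How many standard deviations fit between the mean and the USL."""
    return (usl - mean) / stdev

# The smaller the standard deviation, the more of them fit, the higher the level.
print(sigma_level(mean=20, usl=30, stdev=5.0))  # 2.0 -> a 2-sigma process
print(sigma_level(mean=20, usl=30, stdev=2.5))  # 4.0 -> a 4-sigma process
```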

ISO Vs. Six Sigma

Will write later...

TQM Vs. Six Sigma

Will write later...

Green belt and black belt projects

Both green and black belt initiatives are process centric and driven towards process optimization. Green belt projects cover a single functional area, whereas black belt projects are cross-functional.

Green belt and black belt projects are taken up to achieve the process goals demanded by business objectives. Identify the internal tasks where variation can be controlled and take them up as green belt projects. For example, the time lag between a "ready pizza" and its "pick up" can be targeted, and management techniques applied to optimize it.

USL/LSL & UCL/LCL

USL is the upper specification limit, while LSL is the lower specification limit. USL and LSL are dictated by / based on customer expectations. Some processes have only USL, some others have only LSL, and still others have both USL and LSL. (for example, the pizza example has only USL). Software processes have both USL and LSL.

Companies may have different kinds of processes for different customers, or may have the same processes for different customers. The types of processes to be followed are dictated by business demands, as customers have varying expectations.

UCL is the upper control limit, LCL the lower control limit. Control limits are set three standard deviations (always 3 sigma) on either side of the mean.
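Computing control limits as mean ± 3 standard deviations, on a made-up sample:

```python
import statistics

# Hypothetical sample of a process measurement (e.g. delivery times in minutes).
data = [20, 22, 19, 21, 20, 23, 18, 20, 21, 19]

mean  = statistics.mean(data)
sigma = statistics.stdev(data)   # sample standard deviation

ucl = mean + 3 * sigma           # upper control limit
lcl = mean - 3 * sigma           # lower control limit
print(lcl, mean, ucl)
```

Unlike the USL/LSL, which come from the customer, these limits come from the process's own data.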

Pizza Delivery Example

Imagine there are two companies delivering pizzas in a city; their average delivery times (in minutes) are shown in the table. Say the upper specification limit is 30: the pizza has to be delivered within 30 minutes no matter where in the city the client resides. (This limit is self-imposed by the pizza outlets.)
The average for both outlets is 20. But as the values show, Outlet A is more consistent and shows less variability in cooking and delivering pizzas than Outlet B. The mean, therefore, is not a proper measure for comparing variation; a better measure is the standard deviation.

Instead of comparing process variability through the mean, compare the sigma levels, which give a better insight into process variability.

For Outlet B to improve its process performance, it can target: 1. the mean, 2. the standard deviation, 3. the sigma level. Note that the specification limits cannot be changed, as they derive from customer expectations. By focusing on the internal processes responsible for delays (cooking time, the time lapse between a pizza being dished out and its pick-up for delivery, etc.), Outlet B can reduce the variability in its delivery time.
Sigma Level = (difference between mean and spec limit) / sigma


Note: In this example, the sigma level we get is Zlt (long term). To get Zst (short term), add 1.5. So, per Zst, Outlet A's process is at (7.91 + 1.5) = 9.41, and Outlet B's is at (1.41 + 1.5) = 2.91.
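The Zlt-to-Zst conversion in the note is a one-liner; the 1.5-sigma shift is the conventional long-term/short-term adjustment, and the function name is ours:

```python
def z_short_term(z_long_term, shift=1.5):
    # Conventional 1.5-sigma shift between long-term and short-term sigma levels.
    return z_long_term + shift

print(z_short_term(7.91))  # outlet A: about 9.41
print(z_short_term(1.41))  # outlet B: about 2.91
```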

Variability and Bell-shaped Curves

Every human activity has variability. The natural pattern of data from any process is a bell-shaped curve; most human processes follow one. Take, for example, internet connectivity speed. Even though the rated speed may be 64 kbps, the actual speed is not 64 kbps at every point in time. It keeps varying, sometimes overshooting the specified speed and at other times staying below it. On average, however, the connectivity is 64 kbps.

The mean is the area around which most of the data points tend to cluster. Moving away from the mean on either side of the curve, the clustering of values gradually thins out.

Six Sigma - Green & Black Belts

Green & Black Belts: Green belts can handle most of the common situations, while black belts can address even the complex situations.

Six Sigma - Introduction

Six Sigma, as is widely known, means 3.4 defects per million products / operations / opportunities. Sigma levels can be at 2, 3, 4, 5, and 6. The sigma levels, their defects per million, and the corresponding percentages are shown in the table.
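The sigma-level-to-DPMO table can be reproduced from the normal distribution, applying the conventional 1.5-sigma long-term shift. This is a sketch using only the standard library.

```python
import math

def dpmo_for_sigma(sigma_level, shift=1.5):
    """Approximate DPMO for a given sigma level, with the conventional
    1.5-sigma long-term shift (so 6 sigma -> about 3.4 DPMO)."""
    z = sigma_level - shift
    tail = 0.5 * (1 - math.erf(z / math.sqrt(2)))  # one-sided area beyond z
    return tail * 1_000_000

for level in (2, 3, 4, 5, 6):
    print(level, round(dpmo_for_sigma(level), 1))
```

The loop prints the familiar ladder: roughly 308,538 DPMO at 2 sigma down to 3.4 DPMO at 6 sigma.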

Six Sigma has two views: one as a measure of performance, the other as a methodology/philosophy for bringing in process improvements. The first view, Six Sigma as a measure of performance, is the myopic one: current process performance is mapped onto sigma levels (e.g. the statement "The current process is operating at a 4 sigma level"). The broader view sees Six Sigma as a methodology; it does not require mapping everything to sigma levels. The Six Sigma methodology can be used to measure current process performance and scale it up to a targeted level of acceptable process performance (USL/LSL). Not all companies want to pursue sigma levels of maturity (doing so may not be aligned with their business goals). Six Sigma is a philosophy that changes the way of thinking within a company: it brings in process awareness, helps in understanding problems at the process level, and inculcates process thinking across the organization.

Six Sigma is strictly a business improvement methodology. It uses the concept of the normal curve (also known as the Gaussian or bell curve) + Shewhart's control charts + Ishikawa (fishbone) diagrams + other management and statistical tools & techniques to bring down defects.

Bill Smith is the father of Six Sigma; he coined the term.

The Six Sigma methodology is to be applied wherever there is a likelihood of errors occurring (i.e., not at final inspection, but at the intermediate stages before final delivery of the product). Through Six Sigma, we try to resolve problems permanently so that they never recur. This lets us spend our time planning future projects and thinking ahead, instead of staying in an endless loop of working and reworking.

Usually, the results of a Six Sigma implementation are gauged by the financial gains, which are direct indicators of the program's effectiveness. Sometimes, however, improvement cannot be shown directly in financial terms; alternate key performance indicators (KPIs) are then monitored to evaluate the effectiveness of the Six Sigma program.
Companies ARE NOT CERTIFIED SIX SIGMA. A company's processes may be at six sigma level, never the company itself. Companies focus on critical processes and then take them up to six sigma level.

Monday, February 05, 2007

Customer Satisfaction Index

Customer satisfaction has to be driven by the solution provider, not by the client. Customer satisfaction is better tracked through a web-based interface, so that frequent exchanges of email can be avoided. In over 90% of cases the client is non-committal on feedback, so raise the matter while on a call discussing technical issues. Fill in the feedback yourself in consultation with the client, send the client a copy, and baseline the data.

Risks & Issues

Risk is something that is likely to occur and that has a direct bearing on the cost, schedule, or quality (or all three) of a project. Risks can be prevented from becoming issues through proper risk management: risks should be identified, categorized, prioritized, and discussed on a regular basis (in team meetings).

An issue is something that is already there: a "risk that has occurred". Issues can only be dealt with; they cannot be prevented. Issues are limitations that we either have to live with or have to find a suitable workaround for.

Friday, February 02, 2007

Sunset of CMMI V1.1

The SW-CMM and related products (e.g., CBA IPI and SCE) were fully retired by December 31, 2005.

SW-CMM appraisal results from CBA IPI and SCE appraisals expire on December 31, 2007.

Sunset of CMMI Version 1.1

The sunsetting period for CMMI Version 1.1 will commence when V1.2 is released. To allow the user community a reasonable amount of time to upgrade to Version 1.2, a measured approach will be used for retiring training and appraisal materials, with the sunset period ending on December 31, 2007.

Fixed Bid and T&M

Courtesy : http://weblogs.sqlteam.com

Thursday, February 01, 2007

Configuration Management

Software configuration management is a set of activities designed to control change by identifying the work products that are likely to change, establishing the relationships among them, defining mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made (Roger Pressman).

It consists of the following 4 activities:

1. Configuration Identification
2. Configuration Control
3. Configuration Status Accounting
4. Configuration Audits

Configuration Identification: Project teams (and also the SEPG, OT, OID, QAG teams) are required to identify the work products (in their respective teams) that need to be put under configuration control. Data required / used in a project can be placed under two categories: configurable and non-configurable items. Configurable items are those work products, which are likely to undergo changes and will have multiple versions at any given time during the project execution. Project plans, CM Plans, code, etc. are configurable items. On the contrary, data like emails, client chat scripts, audit reports, etc. do not change. They need not be version controlled. Such items are put under Data Management Plan.

Configuration Control: is the systematic evaluation, coordination, approval / disapproval and dissemination of proposed changes and implementation of all approved changes in the configuration of any item after formal establishment of its configuration baseline.

Change requests to process / products have to be routed through configuration control board (CCB) for approval before they can be used. Product change requests are analyzed for impact by the CCB. The mandated changes are implemented in the respective artifacts and then baselined. The CM issues a communication to the team on the baselined artifacts.

As the project advances, multiple versions of the baselined configurable items will exist. Configuration control is essential to keep the latest CCB-approved set of work products in circulation. For code, it ensures that all developers work on the same baselined version.

Configuration Status Accounting: the recording and reporting of the configuration information is called configuration status accounting. This activity includes:

1. The list of identified configurable items (numbers and names)
2. Changes / deviations / waivers to the configuration
3. Implementation status of approved changes (configuration control status)
4. Version and baseline status

Configuration Audits: these are necessary to verify that the integrity of the work products is being maintained. Checks are done on baselining, configuration item identification, configuration control status, etc.

Run Charts and Control Charts

Run Charts

A run chart shows the trend of a process: its performance over a period of time, measured against previous performance. Run charts help in spotting aberrations in process performance and its progression over time. We can add an average line (parallel to the X axis) to the Y values to see how the data deviates from the average.

We can also have multiple run charts where the trends of process compliance of several projects can be compared.

Since run charts depict process trends, significant trends or patterns can be identified and investigated for the root causes. Similarly, special variations (significant deviant data points) can be spotted and their causes identified & addressed.

The above chart tells us nothing about the tolerance limits for PCI (what the organization would put up with); it merely shows the PCI trend for the months stated. Without guiding process performance limits, it is not of much use.
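A minimal run chart in code: print values against their average line and flag notable deviations. The PCI values and the ±5 threshold are invented for illustration.

```python
import statistics

# Hypothetical monthly PCI (process compliance index) values.
pci = [82, 85, 79, 88, 91, 76, 84, 90]

average = statistics.mean(pci)  # the centre line of the run chart

# Flag points that deviate noticeably from the average line.
for month, value in enumerate(pci, start=1):
    marker = " <-- deviation" if abs(value - average) > 5 else ""
    print(f"month {month}: {value}{marker}")
```

Flagged points are candidates for special-cause investigation; the chart itself still says nothing about tolerance limits.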

Tuesday, January 30, 2007

CMMI and ISO

ISO is a standard. Companies have to comply with standards. ISO has eight principal clauses.

CMMI is a process meta-model. The process model is a guideline for organizations to improve their processes. CMMI is a collection of best practices from the software industry. CMMI tells you WHAT to do; the HOW is left to the choice of the organization implementing it.

CMMI has 25 process areas (including IPPD, SS) as per v1.1

Common metrics for maintenance projects...

1. Client satisfaction index
2. Effort metrics
-Overall effort variance
-COQ
-COPQ
-Rework effort index
3. Schedule metrics
-Overall schedule variance
-On-time delivery index
4. Quality metrics
-Overall defect density
-Residual defect density
5. Performance metrics
-PCI of last audit
6. Support metrics
-FTR

Bi-directional Traceability

Traceabilities are of two kinds:

1. Vertical Traceability
2. Horizontal Traceability

Bidirectional traceability is tracing requirements both forward and backward (horizontally) and both top-to-bottom and bottom-to-top (vertically) through all the phases of the software development lifecycle.

Horizontal traceability is tracing customer requirements from the requirements phase through the System Test / UAT phase, and from System Test / UAT back to the customer requirements. These traceabilities can be established only if requirements are well managed. Horizontal traceability helps determine that all the source requirements have been completely addressed and that all lower-level requirements and selected work products can be traced to a valid source.

Forward traceability ensures all requirements have been captured, while backward traceability ensures no additional functionality (gold plating) has been introduced into the system.

If a requirement is added or withdrawn, the development team must be aware of the design elements / development modules / test cases, etc. where the impact would be felt and changes need to be carried out.

How does bidirectional traceability help?

  1. To measure the impact of requirement changes. One requirement may spawn multiple design elements. In this case, each such design element should be traceable backwards to the same requirement.
  2. To know that all requirements have been implemented in the system. (Forward traceability ensures this).
  3. To ensure that no unwanted functionality has been incorporated into the system. (Backward traceability ensures this).

Horizontal Traceability: Within a work product (BRS, SRS, HDL, LDL, Test Plan, etc.), the inter-relationship between various requirements is called horizontal traceability. Each requirement is related to other requirement/s by virtue of functionality. Horizontal traceability ensures that all requirements have been captured, gaps in functionality have been identified, and that there is no duplication of requirements.
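
The forward and backward checks described above can be expressed directly over a traceability matrix. A minimal sketch in Python (the requirement and design IDs are invented for illustration):

```python
# Hypothetical requirements and the design elements that realize them
requirements = {"REQ-1", "REQ-2", "REQ-3"}
rtm = {
    "REQ-1": ["DES-1", "DES-2"],   # one requirement may spawn several elements
    "REQ-2": ["DES-3"],
    "REQ-3": [],                   # not yet implemented
}
design_elements = {"DES-1", "DES-2", "DES-3", "DES-4"}  # DES-4 has no source

# Forward traceability: every requirement maps to at least one design element
unimplemented = [r for r in requirements if not rtm.get(r)]

# Backward traceability: every design element traces to a valid requirement
traced = {d for elems in rtm.values() for d in elems}
gold_plating = sorted(design_elements - traced)

print("Unimplemented requirements:", unimplemented)  # ['REQ-3']
print("Gold plating:", gold_plating)                 # ['DES-4']
```

The same two checks also drive impact analysis: when a requirement is added or withdrawn, its entry in the matrix points to every design element that must be revisited.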

Monday, January 29, 2007

The Mythical Man-month

Frederick Brooks, in his 1975 book The Mythical Man-Month, contends that adding more people to a behind-schedule project doesn't actually speed it up. On the contrary, ramping up resources adds to the communication complexity (an overhead, in fact) within the development group. This is because inducting new people takes a certain amount of time and resources for orientation before they can be deployed for production. Diverting time and resources from the project towards training/orientation results in further delay.

Saturday, January 27, 2007

Function Points

Function points are a measure of the size of computer applications and the projects that build them. The size is measured from a functional, or user, point of view. It is independent of the computer language, development methodology, technology or capability of the project team used to develop the application.

In the late seventies, IBM felt the need to develop a language independent approach to estimating software development effort. It tasked one of its employees, Allan Albrecht, with developing this approach. The result was the function point technique.

In the early eighties, the function point technique was refined and a counting manual was produced by IBM's GUIDE organization. The International Function Point Users Group (IFPUG) was founded in the late eighties. This organization produced its own Counting Practices Manual. In 1994, IFPUG produced Release 4.0 of its Counting Practices Manual. While the GUIDE publication and each release of the IFPUG publications contained refinements to the technique originally presented by Albrecht, they always claimed to be consistent with his original thinking. In truth, it is still very close considering the nearly two decades that have elapsed since Albrecht's original publication!

During the eighties and nineties, several people have suggested function point counting techniques intended to substantially extend or completely replace the work done by Albrecht. Some of these will be briefly discussed in this FAQ. However, unless otherwise specified, information in this FAQ is intended to be consistent with IFPUG Release 4.0.
Function points measure size. The fact that Albrecht originally used them to predict effort is simply a consequence of the fact that size is usually the primary driver of development effort.

It is important to stress what function points do NOT measure. Function points are not a perfect measure of the effort to develop an application or of its business value, although the size in function points is typically an important factor in measuring each. This is often illustrated with an analogy to the building trades. A three thousand square foot house is usually less expensive to build than one that is six thousand square feet. However, many attributes like marble bathrooms and tile floors might actually make the smaller house more expensive. Other factors, like location and number of bedrooms, might also make the smaller house more valuable as a residence.

Read more about Function Points here:
http://ourworld.compuserve.com/homepages/softcomp/fpfaq.htm
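
A minimal sketch of an unadjusted function point (UFP) count, using the IFPUG average-complexity weights for the five function types. Note that a real count assigns low/average/high weights to each individual function and then applies a value adjustment factor, so the flat average weights and the counts below are illustrative only:

```python
# IFPUG average-complexity weights for the five function types
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

# Hypothetical counts for an application
counts = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 4, "EIF": 2}

ufp = sum(counts[t] * AVG_WEIGHTS[t] for t in counts)
print("Unadjusted function points:", ufp)
```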

PCMM

People Capability Maturity Model® Framework

The People Capability Maturity Model® (People CMM®) is a tool that helps you successfully address the critical people issues in your organization. The People CMM employs the process maturity framework of the highly successful Capability Maturity Model® for Software (SW-CMM®) [Paulk 95] as a foundation for a model of best practices for managing and developing an organization's workforce. The Software CMM has been used by software organizations around the world for guiding dramatic improvements in their ability to improve productivity and quality, reduce costs and time to market, and increase customer satisfaction. Based on the best current practices in fields such as human resources, knowledge management, and organizational development, the People CMM guides organizations in improving their processes for managing and developing their workforce. The People CMM helps organizations characterize the maturity of their workforce practices, establish a program of continuous workforce development, set priorities for improvement actions, integrate workforce development with process improvement, and establish a culture of excellence. Since its release in 1995, thousands of copies of the People CMM have been distributed, and it is used worldwide by organizations, small and large, such as IBM, Boeing, BAE Systems, Tata Consultancy Services, Ericsson, Lockheed Martin and QAI (India) Ltd.

The People CMM consists of five maturity levels that establish successive foundations for continuously improving individual competencies, developing effective teams, motivating improved performance, and shaping the workforce the organization needs to accomplish its future business plans. Each maturity level is a well-defined evolutionary plateau that institutionalizes new capabilities for developing the organization's workforce. By following the maturity framework, an organization can avoid introducing workforce practices that its employees are unprepared to implement effectively.
Courtesy: SEI CMU

Cost of Quality

Cost of Quality (COQ) includes all costs incurred in the pursuit of quality or in performing quality-related activities. Cost of quality studies are conducted to determine the current COQ, to find opportunities for reducing it, and to provide a basis for comparison.

Types of COQ

Quality costs may be divided into:

1. Preventive Cost
2. Appraisal Cost
3. Failure Cost
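
A minimal sketch (with hypothetical cost figures) of breaking project cost into the three buckets listed above. COPQ (cost of poor quality) is here taken as the failure component, while COQ covers all three:

```python
# Hypothetical quality-related costs for a project, in $K
costs = {
    "prevention": 120.0,  # training, process definition, defect prevention
    "appraisal":  200.0,  # reviews, testing, audits
    "failure":    180.0,  # rework, defect fixes, warranty support
}
total_project_cost = 2000.0  # $K

coq = sum(costs.values())    # cost of quality: all three buckets
copq = costs["failure"]      # cost of poor quality: the failure bucket

print(f"COQ  = {coq:.0f}K ({coq / total_project_cost:.1%} of project cost)")
print(f"COPQ = {copq:.0f}K ({copq / total_project_cost:.1%} of project cost)")
```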

QA Vs. QC

Software Quality Assurance (SQA) is defined as a planned and systematic approach to the evaluation of the quality of, and adherence to, software product standards, processes, and procedures. SQA includes the process of assuring that standards and procedures are established and are followed throughout the software acquisition lifecycle. Compliance with agreed-upon standards and procedures is evaluated through process monitoring, product evaluation, and audits. Software development and control processes should include quality assurance approval points, where an SQA evaluation of the product may be done in relation to the applicable standards.

Quality assurance consists of the auditing and reporting functions of management. The aim of quality assurance is to provide management with the data necessary to be informed about whether product quality is meeting its goals.

How is it different from Quality Control?

Quality control is the process of variation control. Quality control is the series of inspections, reviews, and tests conducted throughout the development lifecycle to guarantee that each work product meets the requirements placed upon it.

Quality control activities may be fully automated, entirely manual, or a combination of automated tools and human interaction.

Friday, January 26, 2007

PPQA Vs. VER

PPQA provides staff and management with objective insight into the performance and compliance of the defined process. E.g. Audits look into process adherence and process performance.

Verification ensures that the selected work products meet their specified requirements. E.g. Peer review of SRS ensures that all the details captured thru BRS are appropriately addressed in the requirement specs.

These two process areas address the same work product from different perspectives - PPQA from process perspective, and VER from the technical aspect.

Typical Audit Process

  1. Prepare and publish the yearly audit calendar (in consultation with project teams).
  2. Publish quarterly audit schedule.
  3. Issue audit notification to all relevant stakeholders and the final schedule (1 week prior to audits).
  4. Review previous findings and check the project artifacts before the actual audit. Skim thru the VSS folders of each project looking out for discrepancies and note them down (note down variations in naming conventions also).
  5. Take printouts of audit checklist. (Always better to have a checklist for audit rather than a random non-focused check.)
  6. Actual Audit

a. Check closure of previous NCs.
b. CM audit check (change requests, CI)
c. Internal audit report check
d. Check against the organization process

  7. Get the affirmation of auditees on the noted NCs.
  8. Send the filled-in checklist to auditees, giving them scope to come back with their objections, if any.
  9. Prepare the audit report. Do a self review. Check for Process Area mappings.
  10. Get the audit report peer reviewed.
  11. Publish the audit report to project teams, delivery head, and sponsor.
  12. Prepare and conduct follow-up audits.
  13. Conduct a review with higher management once a month and report deviations in process.

Thursday, January 25, 2007

PA Categories


Project Indicators

An indicator is a metric or a combination of metrics that provides insight into the software process, a software project, or the product itself. The insight helps the project manager or software engineers adjust the process or the project and make it more efficient.

Project indicators enable a project manager to:
  • Assess the status of an ongoing project
  • Track potential risks
  • Uncover problem areas before they go critical
  • Adjust workflows or tasks
  • Evaluate the project team’s ability to control the quality of software work products

I made use of the following project indicators for my projects:


Process Institutionalization

Institutionalization is embedding a process within the organization as an established norm or practice. The process is ingrained in the way the work is performed and there is commitment and consistency to performing the process.

The degree of institutionalization is expressed by the names of the Generic Goals. The following table gives an overview of institutionalization:

For a Level 2 organization (Staged representation), all projects have their own framework (derived from org-level policy) for carrying out the processes. But each one differs from the other. For a Level 3 Organization however, the processes are tailored from the organization’s QMS as per the defined tailoring standards. Institutionalization of a process is accomplished at Level 3.
At level 4, only special causes of variations are addressed, while at level 5 even common causes of variations are addressed.

Requirements Management & Requirements Development

Requirements Management (RM) is at Level 2, while Requirements Development (RD) is at Level 3 in the staged representation. An often-asked question is how one can manage requirements without first developing them. Logically, requirements are to be developed (both implicit and explicit customer needs are elicited), and later on managed (thru horizontal and vertical RTM).

However, from a staged representation perspective, an organization at Level 2 need only manage requirements. Different projects would take up this activity in their own way (No Institutionalization). Also, customer needs are not elicited at this level. So, requirements from customers pour in, and the organization merely manages them thru a bi-directional traceability matrix.

For an organization at Level 3, however, requirements ought to be developed first. Both implicit and explicit customer needs are elicited. The requirements are then managed thru a bidirectional traceability matrix. The needs development & management activities at this level are uniform across all projects in the organization (Institutionalization).

Interpretation of Level IV & V in CMMI V1.2

Level IV and V process areas remain the same in CMMI V1.2. The way these process areas should be interpreted, however, has changed.

OPP and QPM are at Maturity Level IV, while OID and CAR occupy Maturity Level V. At Maturity Level III, the focus is on institutionalization. For a company moving from Level II towards Level III, it would simply mean practicing the same processes across all the projects and bringing in uniformity. CAR is done to identify the causes of any repeating problem, and measures are taken to contain it. This is the CAR at Level III.

Through OPF (at Level III), the company defines at a macro level what the EPG responsibilities are; through OPD, the company defines how its QMS is established and functions. Process performance is monitored using M&A. Note that there is no process stability at this stage. Projects are quantitatively managed when the processes driving them are stable.

At Level IV, the project’s quality and process performance objectives are defined, and only those processes that are stable are picked up (using past data… in our case, the Process Capability Baseline sheet). Sub-processes of the process are defined and monitored (depending on the organization’s and project’s quality objectives); variation is tracked using statistical techniques. The statistical tools help in identifying variation and setting up corrective action.
Spikes (significant deviations from the normal) in the process performance are identified; those sub-processes which are vital, and which contribute to process control and improvement, are picked up for improvement. CAR is done to find out the root cause of the variation. Measures are taken to ensure that these are not repeated. [This is Level IV CAR.]

Level V CAR is done to shift the mean. Shifting the mean is a process improvement activity. After shifting the mean, the sub-process performance is monitored for stability and is fine-tuned.
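
A minimal sketch (with invented review-effectiveness data) of what "shifting the mean" looks like numerically, contrasting the Level IV question (is the sub-process stable around its mean?) with the Level V question (did the improvement move the mean?):

```python
from statistics import mean

# Hypothetical sub-process data: % of defects caught in peer review
before = [68, 72, 70, 69, 71, 70, 73, 69]   # stable performance at Level IV
after  = [78, 80, 79, 81, 77, 80, 79, 82]   # after a Level V improvement

shift = mean(after) - mean(before)
print(f"mean before: {mean(before):.2f}%, mean after: {mean(after):.2f}%")
print(f"mean shift:  {shift:+.2f} percentage points")
# A real study would re-establish control limits around the new mean and
# confirm stability before treating the improvement as institutionalized.
```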

Verification & Validation


SCAMPI - A

  1. Briefing by lead appraiser to appraisal team, sponsor, delivery head, and appraisal participants.
  2. Forming of mini teams based on PA (Project Management, Engineering, and Support Process Areas)
  3. Document reviews: Review of the filled-in PIID (Practice Implementation Indicators Description). Gather evidence of direct and indirect artifacts. Report strengths and weaknesses.
  4. Conduct individual and group interviews, note down affirmations, and tag the notes.
    - Individual interviews for Delivery Head, followed by Project Managers, OT Head, OID Head
    - Group Interviews for Project Teams
  5. Characterize Practice Implementation (Fully Implemented/Largely Implemented/Partially Implemented/Not Implemented/Not Yet Implemented)
  6. Aggregate practice implementation to OU (Organization Unit) Level.
  7. Report preliminary findings to project teams (excluding sr. management) and address concerns/objections, if any.
  8. Rate Maturity Levels.
  9. Presentation of Final Findings (to all the members including sr. management)
  10. Executive Session
  11. Sign Appraisal Disclosure Statement (as per SCAMPI V1.2)
  12. Fill SCAMPI – A Feedback form in the SEI Appraisal System: http://sas.sei.cmu.edu/AppSys/ (a web application through which SEI sets up and tracks SCAMPI-A appraisals).
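
Steps 5 and 6 above can be sketched as a small aggregation function. The "weakest characterization wins" rule used here is a simplifying assumption for illustration; the actual aggregation rules are defined in the SCAMPI Method Definition Document:

```python
# Characterization scale, weakest to strongest:
# Not / Partially / Largely / Fully Implemented
ORDER = ["NI", "PI", "LI", "FI"]

def aggregate(instance_chars):
    """OU-level characterization = weakest across instances (assumed rule)."""
    return min(instance_chars, key=ORDER.index)

# Hypothetical per-project characterizations for two practices
practice_data = {
    "REQM SP1.1": ["FI", "FI", "LI"],
    "PP SP2.2":   ["FI", "PI", "FI"],
}
for practice, chars in practice_data.items():
    print(practice, "->", aggregate(chars))
```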

Software Quality Gap Analysis

Software Quality Gap Analysis is based on customer's Process Improvement requirements mapped to Quality Models and the Desired/Targeted State. The customer may want to benchmark against a quality model, or may want to target a specific area like PPQA, and restrict to that area alone. Gap Analysis in such case is performed against the best practices and industry standards in that area.

Gap Analysis can be done against models like CMMI, ISO, COBIT, PCMM, ITIL, etc., apart from targeting specific process areas.

Sometimes, in-house quality gap analysis is also carried out. SEPG studies the process improvement plans and comes up with changes (or in a few cases redefines the existing process) or deploys a new process. The feasibility is studied and plans are made for a pilot deployment. Measures are defined; projects are selected on which the new process would be experimented. Subsequent to training the selected project teams, and implementation of the process, metrics are collected regularly to study the process performance. If it is along expected / projected lines, then the process is institutionalized.