Monday, November 22, 2010

Configuration Management Audit...

There are two meanings for the project management process of configuration management.
  1. It can refer to the process of identifying, tracking, and managing all the physical assets of a project. The items you track under configuration management are called “configuration items” in the Capability Maturity Model Integration (CMMI).
  2. It can also refer to the process of identifying, tracking, and managing all the characteristics of the assets of a project. These characteristics can also be referred to as product “metadata.” This is closer to the definition of configuration management in the Project Management Body of Knowledge (PMBOK®) from the Project Management Institute.
The following model describes the five major aspects of configuration management.

Planning. You need to plan ahead to create the processes, procedures, tools, files, and databases for managing the project assets or the metadata. You also may need to gain an agreement on exactly what assets are important, how you will define them, how they will be categorized, classified, numbered, reported, etc. The results of this up-front planning are documented in a Configuration Management Plan.
Part of your planning process should be to assign configuration tracking numbers to each type of configuration item.

Tracking. It’s important to understand the baseline for all configuration items. In other words, for each configuration item, you need to understand what you have at the beginning of the project. In many cases, you may have nothing to start with. In other cases, like physical assets, you may have some assets to begin with. The purpose of your tracking processes is to ensure that you can track all changes to a configuration item throughout the project.

You need processes and systems designed to identify when assets are assigned to your project, where they go, what becomes of them, who is responsible for them and how they’re disposed of. Since a project has a beginning and end, ultimately all the assets need to go somewhere. This could be in a final deliverable, into the operations/support area, scrapped, etc. You should be able to dissect each major deliverable of the project and show where all the pieces and parts came from, and where they reside after the project ends.

Managing. Managing assets means ensuring that they’re secure, protected, and used for the right purposes. For example, it doesn’t do any good to track purchased assets that your project doesn’t need in the first place. Also, your tracking system may show expensive components sitting in an unsecured storage room, but is that really the proper place for them? Managing assets has to do with acquiring what you need and only what you need. You also have to make sure you have the right assets at the right place at the right time.

Reporting. You need to be able to report on the project assets, usually in terms of what you have and where they are, as well as financial reporting that can show cost, budget, depreciation, etc.

Auditing. Auditing involves validating that the actual configuration elements (whatever they are) at any given time are the same as what you expect. Many projects get into trouble when they start to lose track of physical assets (for instance, material, supplies, code, or other configuration items) or when the physical characteristics (metadata) of their deliverables are different from what they expect.

The auditing process is used to validate that the configuration elements match up with your expectations. These expectations are based on the original baseline, plus any change requests that you have processed up to the current time.

Source: http://blogs.techrepublic.com.com/tech-manager/?p=370&tag=content;leftCol

Wednesday, August 18, 2010

Stat tools - when to use what

  1. Box Plots: tell whether a data set is skewed by looking at the relative positions of the median, quartiles, and tails. Easy to spot outliers.
  2. Scatter Plots: used when we are interested in the relationship between two attributes. Offer a quick visual assessment.
  3. Control Charts: show whether a process is staying within acceptable bounds over time.
  4. Correlation: measures the strength of the relationship/association between two variables.
  5. Linear Regression: expresses the relationship as a linear formula.
  6. Multivariate Regression: investigates the relationship between one dependent variable and two or more independent variables.
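The correlation and regression entries above can be sketched in a few lines of Python using the standard Pearson and least-squares formulas. The data here (effort vs. defects) is invented purely for illustration:

```python
# Pearson correlation and simple linear regression from the
# textbook formulas -- no external libraries needed.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def linear_fit(x, y):
    """Return (slope, intercept) of the least-squares line y = slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical data: effort (person-days) vs. defects found
effort  = [2, 4, 6, 8, 10]
defects = [5, 9, 13, 17, 21]

print(pearson_r(effort, defects))   # ~1.0, a perfect linear association
print(linear_fit(effort, defects))  # (2.0, 1.0)
```

A correlation near 1 (or -1) justifies moving on to linear regression, which is exactly the progression items 4 and 5 describe.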

Tuesday, March 02, 2010

M&A Needs

Measurement and Analysis (M&A) requires you to identify needs:

1. Management needs
2. Technical needs
3. Project needs
4. Process improvement needs
5. Product needs

To start an M&A program, identify what your management requires as measures from the following categories:

a. Schedule & progress
b. Resources & cost
c. Product size & stability
d. Product quality
e. Process Performance
f. Technology Effectiveness
g. Customer Satisfaction

1. Once you have established these measures, you might need to be more granular, which may lead to further decomposition of the measures
2. You will have to review the measurements periodically, as some of them might not be as useful as you had thought earlier

Monday, March 01, 2010

Sources of Variation

Example: how many text messages, on average, a 19-year-old sends in a day.

  1. Natural variation: Not all students will send the same number of text messages in a day. Taking one sample will not give all the information we need.
  2. Explainable variation: Are there any other reasons why asking one student will not give the average for all 19-year-olds? There are different types of students: girls seem to send more texts, and maybe rich students send more texts too. This is explainable variation.
  3. Sampling error: I take a sample of 5 students. You take a sample of 5 students. Do you think our sample means will be the same? Likely not, because we will choose different people. There are infinitely many different samples we can take from a population. Sampling error exists because a sample is not a true representative of the entire population. Sample size matters: the larger the sample size, the greater the confidence.
  4. Biased sampling: Bias happens when you select your sample based on prejudice, such as excluding poor students in the above example.
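The sampling-error point can be demonstrated with a small simulation. The population below is invented for illustration; the key observation is that every sample of 5 yields a different mean, and larger samples cluster much more tightly around the true mean:

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is repeatable

# Hypothetical population: daily text-message counts for 1,000 19-year-olds
population = [random.randint(20, 120) for _ in range(1000)]
true_mean = statistics.mean(population)

# Draw many independent samples of 5; each sample gives a different mean
small_means = [statistics.mean(random.sample(population, 5)) for _ in range(500)]
# With samples of 50, the sample means spread out far less
large_means = [statistics.mean(random.sample(population, 50)) for _ in range(500)]

print(round(statistics.stdev(small_means), 1))  # wide spread of sample means
print(round(statistics.stdev(large_means), 1))  # noticeably smaller spread
```

The spread of the size-5 sample means is several times larger than that of the size-50 means, which is exactly the "larger sample size, greater confidence" claim in point 3.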

Thursday, February 25, 2010

CMMI V1.3 CONSTELLATIONS

Adapted from SEI's V1.3 status presentation...

RD: Requirements Development
RM: Requirements Management
TS: Technical Solution
VAL: Validation
VER: Verification

SSAD: Solicitation & Supplier Agreement Development
ARD: Acquisition Requirements Development
ATM: Acquisition Technical Management
AM: Agreement Management
AVAL: Acquisition Validation
AVER: Acquisition Verification

SD: Service Delivery
CAM: Capacity and Availability Management
SSD: Service System Development
SSM: Strategic Service Management
SST: Service System Transition
SC: Service Continuity
IRP: Incident Resolution & Prevention

Wednesday, February 24, 2010

CMMI V1.3 Webcast Notes

Source: Kishore Budde...

- Changes are mostly based on CRs, and the intention was not to raise the bar
- Bring clarity to High Maturity practices (finalized version will be ready by end of March or early April)
- Simplify Generic Practices
- Increase appraisal efficiency
- Improve commonality across constellations (Dev, Svc, Acq)
- Officially, CMMI v1.3 is expected to be launched by November 2010
Within the model: (cannot recollect changes to individual PAs at L2 and L3; the speaker was moving quickly between topics due to limited time)
  • We will get to see informative material on Customer Satisfaction (there was no mention of it in earlier and current versions). However, there is no specific direct requirement in any PA
  • Elimination of GG4 and GG5 (proposed - final decision is pending with SEI's CCB)
  • Informative material will be added on confusing/dicey-to-interpret terms such as: process model, business objective, subprocess, etc.
  • While the number of PAs at Levels 2, 3, and 4 will remain as is, we can expect an additional PA at L5 - OPM, Organizational Performance Management (part of OPP will move here; no additional requirements)
  • SPs in QPM are modified to link them well with CAR
  • The 'common cause' concept is shifted from L5 to L4
  • We can expect to see informative material on Agile in about 9 PAs (so Agile officially takes its place in the CMMI model)
  • Wording in GG1 and GP3.2 to be simplified for better understanding
  • Changes from v1.2 to v1.3:
      Item    v1.2    v1.3
      Pages   560     461
      GP      17      13
      SG      50      48
      SP      173     165
      PA      22      22/23
      GG      3       3

Appraisal related: (A few are listed below. More clarification will be provided on 18th March in a separate webinar that focuses only on appraisal tasks)

  • There will just be artifacts and Affirmation (concept of DA and InDA will be eliminated)
  • Expect Minimum Scoping rules
  • First Appraisal based on CMMI V1.3 can be expected in November 2011
  • During the one-year sunset period (Nov 2010 - Oct 2011), organizations may choose to go for implementation based on v1.2 or v1.3
  • v1.2 appraisal result taken anytime before October 2011 will be valid for 3 years
  • CMMI v1.3 appraisal rating validity period may be increased (under consideration)
  • SEI plans to bring guidelines for SCAMPI A, B, and C into one book. If that's not feasible, changes will be made to the B, C handbook to reflect v1.3 changes

Transition & Training related:

  • Transition from v1.2 to v1.3 will be launched first (v1.1 to v1.3 to be launched later). One takes an online upgrade course to complete this transition
  • Lead Appraisers and Instructors, in addition to taking the online course, should attempt and clear a test
  • The 3-day Introduction to CMMI-SVC will be made available and DEV will become a supplement (it has been the other way around till now)
  • Training material may not undergo too much of a change
  • Training costs are expected to remain constant
  • SEI looking at conducting 3 day Intro to CMMI SVC training through its partners (will be finalized by SEPG Conf in NA next month)
Note: All the above pointers are just to give a quick feel for what to expect in CMMI v1.3. Some of them may not see the light of day, and a few new surprises can be expected.

Sunday, February 21, 2010

Available statistical tests


Courtesy: Intuitive Biostatistics, Harvey Motulsky. Copyright © 1995 - Oxford University Press Inc.

CMMI V1.3

  1. A relatively stable release compared to the previous versions (in software terminology)
  2. Clarifying and modernizing L4 and L5 practices - increase the depth and clarity of CAR and OID practices; clarify the connection between statistical management of subprocesses and project management
  3. Create a lower-ML version of CAR, as it's felt CAR is valuable at any level of maturity.
  4. Release scheduled for 1st Nov, 2010
  5. Significant overlap in its adoption (v1.2 to v1.3 in a year)
  6. Consistency of practices across constellations - CMMI-Dev (22 PAs), CMMI-Acq (22 PAs), CMMI-SVC (23 PAs) presently. Proposed: 16 PAs common to all, plus PAs shared between constellations.
  7. Expanded coverage of the model to include Agile and Lean Six Sigma
  8. IPPD will not be optional.
  9. Try to reduce the number of GPs
  10. Appraisal methodology changes to make it more effective
  11. Changes to the appraisal methodology so that Dev + SVC can be appraised in a single appraisal - multiple-model implementation...
  12. Reduce offsite effort for appraisals.
  13. REQM going back to PM

Wednesday, February 17, 2010

Unit Testing

What is a Unit Test Plan?


This document describes the Test Plan - in other words, how the tests will be carried out.

This will typically include the list of things to be tested, roles and responsibilities, prerequisites to begin testing, the test environment, assumptions, what to do after a test is successfully carried out, what to do if a test fails, a glossary, and so on.

What is a Test Case?

Simply put, a Test Case describes exactly how the test should be carried out.

For example the test case may describe a test as follows:
Step 1: Type 10 characters in the Name Field

Step 2: Click on Submit


Test Cases clubbed together form a Test Suite.

Test Case Sample


Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks

Additionally the following information may also be captured:

a) Unit Name and Version Being tested
b) Tested By

c) Date
d) Test Iteration (One or more iterations of unit testing may be performed)

Steps to Effective Unit Testing:


1) Documentation: Early on, document all the Test Cases needed to test your code. This task is often not given due importance. Document the Test Cases, the actual results when executing them, and the response time of the code for each test case. There are several important advantages if the test cases and their actual execution are well documented.

a. Documenting Test Cases prevents oversight.
b. Documentation clearly indicates the quality of test cases

c. If the code needs to be retested we can be sure that we did not miss anything
d. It provides a level of transparency of what was really tested during unit testing. This is one of the most important aspects.

e. It helps in knowledge transfer in case of employee attrition
f. Sometimes Unit Test Cases can be used to develop test cases for other levels of testing

2) What should be tested when Unit Testing: A lot depends on the type of program or unit that is being created. It could be a screen or a component or a web service. Broadly the following aspects should be considered:

a. For a UI screen include test cases to verify all the screen elements that need to appear on the screens
b. For a UI screen include Test cases to verify the spelling/font/size of all the “labels” or text that appears on the screen

c. Create Test Cases such that every line of code in the unit is tested at least once in a test cycle
d. Create Test Cases such that every condition in case of “conditional statements” is tested once

e. Create Test Cases to test the minimum/maximum range of data that can be entered. For example what is the maximum “amount” that can be entered or the max length of string that can be entered or passed in as a parameter

f. Create Test Cases to verify how various errors are handled
g. Create Test Cases to verify if all the validations are being performed

3) Automate where Necessary: Time pressures/Pressure to get the job done may result in developers cutting corners in unit testing. Sometimes it helps to write scripts, which automate a part of unit testing. This may help ensure that the necessary tests were done and may result in saving time required to perform the tests.
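As a sketch of point 3, the boundary, error-handling, and validation checks listed above can be automated with Python's built-in unittest module. The validate_name function is a made-up unit under test, not something from the original text:

```python
import unittest

def validate_name(name):
    """Hypothetical unit under test: accept names of 1 to 10 characters."""
    if not isinstance(name, str) or not name:
        raise ValueError("name must be a non-empty string")
    if len(name) > 10:
        raise ValueError("name must be at most 10 characters")
    return name

class ValidateNameTests(unittest.TestCase):
    def test_minimum_length(self):          # boundary: shortest legal input (point e)
        self.assertEqual(validate_name("A"), "A")

    def test_maximum_length(self):          # boundary: longest legal input (point e)
        self.assertEqual(validate_name("ABCDEFGHIJ"), "ABCDEFGHIJ")

    def test_too_long_is_rejected(self):    # error handling (point f)
        with self.assertRaises(ValueError):
            validate_name("ABCDEFGHIJK")

    def test_empty_is_rejected(self):       # validation (point g)
        with self.assertRaises(ValueError):
            validate_name("")

# Run the suite as a script, so the tests execute after every change
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ValidateNameTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Running such a script after each change gives the documented, repeatable evidence of testing that points 1 and 3 call for.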

Integration Testing

Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of little or no value if the component cannot be successfully integrated with the rest of the application.

Once unit tested components are delivered we then integrate them together.

These “integrated” components are tested to weed out errors and bugs caused due to the integration. This is a very important step in the Software Development Life Cycle.

It is possible that different programmers developed different components.

A lot of bugs emerge during the integration step.

In most cases a dedicated testing team focuses on Integration Testing.

Prerequisites for Integration Testing:

Before we begin Integration Testing it is important that all the components have been successfully unit tested.

Integration Testing Steps:


Integration Testing typically involves the following Steps:
Step 1: Create a Test Plan

Step 2: Create Test Cases and Test Data
Step 3: If applicable create scripts to run test cases

Step 4: Once the components have been integrated execute the test cases
Step 5: Fix the bugs if any and re test the code

Step 6: Repeat the test cycle until the components have been successfully integrated

What is an ‘Integration Test Plan’?

As you may have read in the other articles in the series, this document typically describes one or more of the following:


- How the tests will be carried out
- The list of things to be Tested

- Roles and Responsibilities
- Prerequisites to begin Testing

- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if test fails

- Glossary

How to write an Integration Test Case?

Simply put, a Test Case describes exactly how the test should be carried out.

The Integration test cases specifically focus on the flow of data/information/control from one component to the other.

So the Integration Test cases should typically focus on scenarios where one component is being called from another. Also the overall application functionality should be tested to make sure the app works when the different components are brought together.

The various Integration Test Cases clubbed together form an Integration Test Suite

Each suite may have a particular focus. In other words different Test Suites may be created to focus on different areas of the application.

As mentioned before a dedicated Testing Team may be created to execute the Integration test cases. Therefore the Integration Test Cases should be as detailed as possible.
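To make the idea concrete, here is a minimal sketch of an integration test where one component feeds another. The parser and pricing components are invented for illustration; in a real project they might come from different developers or teams:

```python
import unittest

# Component 1: a parser, perhaps written by one developer
def parse_order(line):
    """Parse 'item,quantity,unit_price' into a dict."""
    item, qty, price = line.split(",")
    return {"item": item, "qty": int(qty), "price": float(price)}

# Component 2: a pricing engine, perhaps written by another developer
def order_total(order):
    return order["qty"] * order["price"]

# The integration test exercises the flow of data from one component
# into the other -- exactly where integration bugs typically hide.
class OrderFlowIntegrationTest(unittest.TestCase):
    def test_parsed_order_feeds_pricing(self):
        order = parse_order("widget,3,2.50")
        self.assertEqual(order_total(order), 7.5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderFlowIntegrationTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True when the components integrate cleanly
```

Each component may pass its own unit tests and still fail here, for example if the parser emits a field name the pricing engine does not expect.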

Sample Test Case Table:

Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks




Additionally the following information may also be captured:

a) Test Suite Name

b) Tested By
c) Date

d) Test Iteration (One or more iterations of Integration testing may be performed)

Working towards Effective Integration Testing:

There are various factors that affect Software Integration and hence Integration Testing:

1) Software Configuration Management: Since Integration Testing focuses on Integration of components and components can be built by different developers and even different development teams, it is important the right version of components are tested. This may sound very basic, but the biggest problem faced in n-tier development is integrating the right version of components. Integration testing may run through several iterations and to fix bugs components may undergo changes. Hence it is important that a good Software Configuration Management (SCM) policy is in place. We should be able to track the components and their versions. So each time we integrate the application components we know exactly what versions go into the build process.

2) Automate the Build Process where Necessary: A lot of errors occur because the wrong version of a component was sent for the build, or components are missing. If possible, write a script to integrate and deploy the components; this helps reduce manual errors.
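A tiny sketch of the SCM and build-check idea: before integrating, verify that every component in the build matches the approved manifest. The component names and version numbers here are hypothetical:

```python
# Compare the versions about to be built against the approved manifest,
# so the wrong component version never silently enters a build.

approved_manifest = {"auth": "1.4.2", "billing": "2.0.1", "ui": "3.1.0"}

def check_build(candidate_versions):
    """Return a list of problems; an empty list means the build may proceed."""
    problems = []
    for component, expected in approved_manifest.items():
        actual = candidate_versions.get(component)
        if actual is None:
            problems.append(f"missing component: {component}")
        elif actual != expected:
            problems.append(f"{component}: expected {expected}, got {actual}")
    return problems

print(check_build({"auth": "1.4.2", "billing": "2.0.1", "ui": "3.1.0"}))  # []
print(check_build({"auth": "1.4.2", "billing": "1.9.9"}))  # mismatch + missing ui
```

Running a check like this at the start of every integration cycle catches the "wrong version in the build" problem before any test case is executed.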

3) Document: Document the Integration process/build process to help eliminate the errors of omission or oversight. It is possible that the person responsible for integrating the components forgets to run a required script and the Integration Testing will not yield correct results.

4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked correctly. Each defect should be documented and tracked. Information should be captured as to how the defect was fixed. This is valuable information. It can help in future integration and deployment processes.

System Testing

System Testing: Why?

Why is System Testing done? What are the necessary steps to perform System Testing? How can it be made successful?

How does System Testing fit into the Software Development Life Cycle?


In a typical Enterprise, ‘unit testing’ is done by the programmers. This ensures that the individual components are working OK. The ‘Integration testing’ focuses on successful integration of all the individual pieces of software (components or units of code).

Once the components are integrated, the system as a whole needs to be rigorously tested to ensure that it meets the Quality Standards.

Thus the System testing builds on the previous levels of testing namely unit testing and Integration Testing.

Usually a dedicated testing team is responsible for doing ‘System Testing’.

Why is System Testing important?

System Testing is a crucial step in Quality Management Process.

- In the Software Development Life cycle System Testing is the first level where the System is tested as a whole
- The System is tested to verify if it meets the functional and technical requirements
- The application/System is tested in an environment that closely resembles the production environment where the application will be finally deployed
- The System Testing enables us to test, verify and validate both the Business requirements as well as the Application Architecture

Prerequisites for System Testing:

The prerequisites for System Testing are:

- All the components should have been successfully Unit Tested
- All the components should have been successfully integrated and Integration Testing should be completed
- An Environment closely resembling the production environment should be created.

When necessary, several iterations of System Testing are done in multiple environments.

Steps needed to do System Testing:

The following steps are important to perform System Testing:

Step 1: Create a System Test Plan
Step 2: Create Test Cases
Step 3: Carefully build the data used as input for System Testing
Step 4: If applicable, create scripts to
- build the environment and
- automate execution of the test cases
Step 5: Execute the test cases
Step 6: Fix the bugs, if any, and retest the code
Step 7: Repeat the test cycle as necessary

What is a ‘System Test Plan’?

As you may have read in the other articles in the testing series, this document typically describes the following:

- The Testing Goals
- The key areas to be focused on while testing
- The Testing Deliverables
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if test fails
- Glossary

How to write a System Test Case?

A Test Case describes exactly how the test should be carried out.

The System test cases help us verify and validate the system.

The System Test Cases are written such that:

- They cover all the use cases and scenarios
- The Test cases validate the technical Requirements and Specifications
- The Test cases verify if the application/System meet the Business & Functional Requirements specified
- The Test cases may also verify if the System meets the performance standards

Since a dedicated test team may execute the test cases, it is necessary that the System Test Cases be detailed. Detailed Test Cases help the test executors do the testing as specified without any ambiguity.

The format of the System Test Cases may be like all other Test cases as illustrated below:

· Test Case ID
· Test Case Description:

o What to Test?
o How to Test?

· Input Data
· Expected Result
· Actual Result

Sample Test Case Format:

Test Case ID | What To Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail


Additionally the following information may also be captured:

a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration (The Test Cases may be executed one or more times)

There are various factors that affect success of System Testing:

1) Test Coverage: System Testing will be effective only to the extent of the coverage of Test Cases. What is Test coverage? Adequate Test coverage implies the scenarios covered by the test cases are sufficient. The Test cases should “cover” all scenarios, use cases, Business Requirements, Technical Requirements, and Performance Requirements. The test cases should enable us to verify and validate that the system/application meets the project goals and specifications.

2) Defect Tracking: The defects found during the process of testing should be tracked. Subsequent iterations of test cases verify if the defects have been fixed.

3) Test Execution: The Test cases should be executed in the manner specified. Failure to do so results in improper Test Results.

4) Build Process Automation: A lot of errors occur due to an improper build. A 'build' is a compilation of the various components that make up the application, deployed in the appropriate environment. The Test results will not be accurate if the application is not 'built' correctly or if the environment is not set up as specified. Automating this process may help reduce manual errors.

5) Test Automation: Automating the Test process could help us in many ways:

a. The test can be repeated with fewer errors of omission or oversight

b. Some scenarios can be simulated if the tests are automated, for instance simulating a large number of users or simulating increasingly large amounts of input/output data

6) Documentation: Proper Documentation helps keep track of Tests executed. It also helps create a knowledge base for current and future projects. Appropriate metrics/Statistics can be captured to validate or verify the efficiency of the technical design /architecture.
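The "large number of users" simulation mentioned in point 5 can be sketched with a thread pool. The handle_request function below is a stand-in for calls to the deployed system, so this is only a runnable illustration of the pattern, not a real load test:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical system under test: in reality this would call the deployed
# application; here it is a stand-in function so the sketch is runnable.
def handle_request(user_id):
    return {"user": user_id, "status": "ok"}

def simulate_users(n_users):
    """Fire n_users concurrent requests and collect the responses."""
    with ThreadPoolExecutor(max_workers=20) as pool:
        return list(pool.map(handle_request, range(n_users)))

responses = simulate_users(200)
print(sum(1 for r in responses if r["status"] == "ok"))  # 200
```

In a real System Test, the assertion would also cover response times and error rates, not just the success count.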

User Acceptance Testing

User Acceptance Testing


Once the application is ready to be released the crucial step is User Acceptance Testing.

In this step a group representing a cross section of end users tests the application.

The user acceptance testing is done using real world scenarios and perceptions relevant to the end users.


What is User Acceptance Testing?


User Acceptance Testing is often the final step before rolling out the application.

Usually the end users who will be using the applications test the application before ‘accepting’ the application.

This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.

This testing also helps nail bugs related to usability of the application.


User Acceptance Testing – Prerequisites:


Before User Acceptance Testing can be done, the application must be fully developed.

Various levels of testing (Unit, Integration and System) are already completed before User Acceptance Testing is done. As various levels of testing have been completed most of the technical bugs have already been fixed before UAT.


User Acceptance Testing – What to Test?


To ensure an effective User Acceptance Testing Test cases are created.

These Test cases can be created using various use cases identified during the Requirements definition stage. The Test cases ensure proper coverage of all the scenarios during testing.

During this type of testing the specific focus is the exact real world usage of the application. The Testing is done in an environment that simulates the production environment.

The Test cases are written using real world scenarios for the application


User Acceptance Testing – How to Test?


The user acceptance testing is usually a black box type of testing. In other words, the focus is on the functionality and the usability of the application rather than the technical aspects. It is generally assumed that the application would have already undergone Unit, Integration and System Level Testing.

However, it is useful if the User acceptance Testing is carried out in an environment that closely resembles the real world or production environment.


The steps taken for User Acceptance Testing typically involve one or more of the following:


  1. User Acceptance Test (UAT) Planning
  2. Designing UA Test Cases
  3. Selecting a Team that would execute the UAT Test Cases
  4. Executing Test Cases
  5. Documenting the Defects found during UAT
  6. Resolving the issues/Bug Fixing
  7. Sign Off


User Acceptance Test (UAT) Planning:


As always the Planning Process is the most important of all the steps. This affects the effectiveness of the Testing Process. The Planning process outlines the User Acceptance Testing Strategy. It also describes the key focus areas, entry and exit criteria.


Designing UA Test Cases:


The User Acceptance Test Cases help the Test Execution Team to test the application thoroughly. This also helps ensure that the UA Testing provides sufficient coverage of all the scenarios.

The Use Cases created during the Requirements definition phase may be used as inputs for creating Test Cases. Inputs from Business Analysts and Subject Matter Experts are also used when creating them.

Each User Acceptance Test Case describes in a simple language the precise steps to be taken to test something.

The Business Analysts and the Project Team review the User Acceptance Test Cases.


Selecting a Team that would execute the (UAT) Test Cases:


Selecting a Team that would execute the UAT Test Cases is an important step.

The UAT Team is generally a good representation of the real world end users.

The Team thus comprises the actual end users who will be using the application.


Executing Test Cases:


The Testing Team executes the Test Cases and may additionally perform random tests relevant to them.


Documenting the Defects found during UAT:


The Team logs their comments and any defects or issues found during testing.


Resolving the issues/Bug Fixing:


The issues/defects found during Testing are discussed with the Project Team, Subject Matter Experts and Business Analysts. The issues are resolved as per the mutual consensus and to the satisfaction of the end users.


Sign Off:


Upon successful completion of the User Acceptance Testing and resolution of the issues, the team generally indicates acceptance of the application. This step is important in commercial software sales. Once the users "accept" the software delivered, they indicate that it meets their requirements.

The users are now confident in the software solution delivered, and the vendor can be paid for the same.


What are the key deliverables of User Acceptance Testing?


In the Traditional Software Development Lifecycle successful completion of User Acceptance Testing is a significant milestone.


The Key Deliverables typically of User Acceptance Testing Phase are:


1. The Test Plan- This outlines the Testing Strategy

2. The UAT Test cases – The Test cases help the team to effectively test the application

3. The Test Log – This is a log of all the test cases executed and the actual results.

4. User Sign Off – This indicates that the customer finds the product delivered to their satisfaction

Regression Testing...

What is Regression Testing?

If a piece of Software is modified for any reason testing needs to be done to ensure that it works as specified and that it has not negatively impacted any functionality that it offered previously. This is known as Regression Testing.

Regression Testing attempts to verify:

1. That the application works as specified even after the changes/additions/modification were made to it

2. The original functionality continues to work as specified even after changes/additions/modification to the software application

3. The changes/additions/modification to the software application have not introduced any new bugs

When is Regression Testing necessary?

Regression Testing plays an important role in any scenario where a change has been made to previously tested software code. Regression Testing is hence an important aspect of various software methodologies where software changes and enhancements occur frequently.

Any Software Development Project is invariably faced with requests for changing Design, code, features or all of them.

Some Development Methodologies embrace change.

For example ‘Extreme Programming’ Methodology advocates applying small incremental changes to the system based on the end user feedback.

Each change implies more Regression Testing needs to be done to ensure that the System meets the Project Goals.

Why is Regression Testing important?

Any Software change can cause existing functionality to break.

Changes to a Software component could impact dependent Components.

It is commonly observed that a Software fix could cause other bugs.

All this affects the quality and reliability of the system. Hence Regression Testing, since it aims to verify all this, is very important.

Making Regression Testing Cost Effective:

Every time a change occurs one or more of the following scenarios may occur:

- More Functionality may be added to the system

- More complexity may be added to the system

- New bugs may be introduced

- New vulnerabilities may be introduced in the system

- System may tend to become more and more fragile with each change

After the change the new functionality may have to be tested along with all the original functionality.

With each change Regression Testing could become more and more costly.

To make the Regression Testing Cost Effective and yet ensure good coverage one or more of the following techniques may be applied:

- Test Automation: If the Test Cases are automated, they may be executed using scripts after each change is introduced in the system. Executing test cases this way helps eliminate oversight and human error. It may also result in faster and cheaper execution of Test Cases. However, there is a cost involved in building the scripts.

- Selective Testing: Some Teams choose to execute the test cases selectively. They do not execute all the Test Cases during Regression Testing; they test only what they decide is relevant. This helps reduce the testing time and effort.

Regression Testing – What to Test?

Since Regression Testing tends to verify the software application after a change has been made everything that may be impacted by the change should be tested during Regression Testing. Generally the following areas are covered during Regression Testing:

- Any functionality that was addressed by the change

- Original Functionality of the system

- Performance of the System after the change was introduced

Regression Testing – How to Test?

Like any other testing, Regression Testing needs proper planning.

For an Effective Regression Testing to be done the following ingredients are necessary:

- Create a Regression Test Plan: The Test Plan identifies Focus Areas, Strategy, and Test Entry and Exit Criteria. It can also outline Testing Prerequisites, Responsibilities, etc.

- Create Test Cases: Test Cases that cover all the necessary areas are important. They describe what to Test, Steps needed to test, Inputs and Expected Outputs. Test Cases used for Regression Testing should specifically cover the functionality addressed by the change and all components affected by the change. The Regression Test case may also include the testing of the performance of the components and the application after the change(s) were done.

- Defect Tracking: As in all other Testing Levels and Types, it is important that Defects are tracked systematically; otherwise it undermines the Testing Effort.