Software quality may be defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
Software quality measures how well the software is designed (quality of design) and how well it conforms to that design and its requirements (quality of conformance). While quality of conformance is concerned with implementation, quality of design measures how valid the design and requirements are in producing a worthwhile product.
Software quality has different dimensions, derived from the perspectives of producers and consumers. From the producer's view, quality means meeting the software's requirements. From the consumer's view, quality means meeting customers' needs.
Software quality can be achieved only through continuous improvement, which is best illustrated by Deming's cycle, also called the PDCA cycle. The plan–do–check–act cycle is a four-step model for carrying out change. Just as a circle has no end, the PDCA cycle should be repeated again and again for continuous improvement.
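The repeating nature of the cycle can be sketched as a simple loop. This is a toy illustration only: the step functions and the numeric "quality score" are invented assumptions, not part of Deming's model.

```python
# Hypothetical sketch: the PDCA cycle as a repeating loop.
# The quality score and the arithmetic in each step are illustrative assumptions.

def pdca_iteration(quality, improvement=0.1):
    """One pass through Plan-Do-Check-Act; returns an improved quality score."""
    planned_target = quality + improvement  # Plan: set an improvement goal
    achieved = planned_target * 0.9         # Do: carry out the change (imperfectly)
    gap = planned_target - achieved         # Check: compare the result to the plan
    return achieved + gap * 0.5             # Act: adjust and consolidate the gains

quality = 0.5
for _ in range(5):                          # like the circle, the cycle is repeated
    quality = pdca_iteration(quality)
print(round(quality, 3))
```

Each pass closes only part of the gap, which is why the cycle must keep turning: improvement accumulates across iterations rather than arriving in one step.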
Software Quality Attributes
Software Quality is commonly described in terms that are known as Quality Attributes. A Quality Attribute is a property of a software product that will be judged directly by stakeholders. Quality attributes are—and should be—quantifiable in specifications by appropriate and practical scales of measure. The ISO/IEC 9126 (2001) software-quality model defines six quality-attribute categories: functionality, reliability, usability, efficiency, maintainability, and portability. The categories, in turn, are further subdivided into sub-characteristics.
• Functionality – A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
• Reliability – A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
• Usability – A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
• Efficiency – A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
• Maintainability – A set of attributes that bear on the effort needed to make any changes to the existing software.
• Portability – A set of attributes that bear on the ability of software to adapt to various operating environments.
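As noted above, quality attributes should be quantifiable on practical scales of measure. A minimal sketch of measuring two of the categories (the data and the particular metrics chosen here are illustrative assumptions, not prescribed by ISO/IEC 9126):

```python
# Illustrative measurements for two of the six attribute categories.
# The figures and metric definitions below are assumptions for the sketch.

uptime_hours = 720   # observed operating time under stated conditions
failures = 3         # failures observed in that period
mtbf = uptime_hours / failures  # reliability: mean time between failures

requests_served = 90_000
cpu_seconds_used = 1_800
throughput = requests_served / cpu_seconds_used  # efficiency: work per unit resource

print(f"MTBF: {mtbf:.0f} h, efficiency: {throughput:.0f} requests/CPU-second")
```

Putting numbers like these directly into a specification ("MTBF shall exceed 200 hours under stated conditions") is what makes the attribute testable rather than aspirational.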
How Does Testing Improve Software Quality?
Testing is essential to developing high-quality software and ensuring smooth business operations. It cannot be treated as an afterthought; the consequences are too dire. Businesses, and in some cases lives, are at risk when a company fails to adequately and effectively test software for bugs and performance issues, or to determine whether the software meets business requirements or end users' needs.
Testing helps to measure the quality of software in terms of the number of defects found, the tests run, and the system requirements covered by the tests. Testing finds defects, and the quality of the software increases when those defects are fixed, thereby reducing the overall level of risk in the system.
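The measures just mentioned (defects found, tests run, requirements covered) can be computed directly from test records. A minimal sketch, using made-up test and requirement identifiers:

```python
# Hypothetical test records: which requirement each test covers and whether it passed.
test_results = [
    {"test": "T1", "requirement": "REQ-1", "passed": True},
    {"test": "T2", "requirement": "REQ-1", "passed": False},  # a defect found
    {"test": "T3", "requirement": "REQ-2", "passed": True},
]
all_requirements = {"REQ-1", "REQ-2", "REQ-3"}

tests_run = len(test_results)
defects_found = sum(1 for r in test_results if not r["passed"])
covered = {r["requirement"] for r in test_results}
coverage = len(covered) / len(all_requirements)

print(f"{tests_run} tests run, {defects_found} defect(s) found, "
      f"{coverage:.0%} of requirements covered")
```

Tracking these three numbers over successive test cycles shows whether quality is actually improving, and the uncovered requirement (REQ-3 here) points to where risk remains.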
Testing helps to improve the quality by:
• Meeting conformance standards and guidelines
• Meeting performance standards
• Providing stability to the system
Verification is the process of ensuring that the software being developed will satisfy the functional specifications and conform to the standards. This process answers the question "Are we building the product right?" (according to the functional and technical specifications).
The major verification activities are reviews, including inspections and walkthroughs.
- Reviews are conducted during and at the end of each phase of the life cycle to determine whether established requirements, design concepts, and specifications have been met. Reviews consist of the presentation of material to a review board or panel. Reviews are most effective when conducted by personnel who have not been directly involved in the development of the software being reviewed.
a) Formal reviews are conducted at the end of each life cycle phase. The acquirer of the software appoints the formal review panel or board, who may make or affect a go/no-go decision to proceed to the next step of the life cycle. Formal reviews include the Software Requirements Review, the Software Preliminary Design Review, the Software Critical Design Review, and the Software Test Readiness Review.
b) Informal reviews are conducted on an as-needed basis. The developer chooses a review panel and provides and/or presents the material to be reviewed. The material may be as informal as a computer listing or hand-written documentation.
- An inspection or walkthrough is a detailed examination of a product on a step-by-step or line-of-code by line-of-code basis. The purpose of conducting inspections and walkthroughs is to find errors. The group that does an inspection or walkthrough is composed of peers from development, test, and quality assurance.
Validation is the process of ensuring that the software being developed will satisfy user needs. This process answers the question "Are we building the right product?" (according to the needs of the end user).
Difference between Verification & Validation:
Verification testing ensures that the expressed user requirements, gathered in the Project Initiation Phase, have been met in the Project Execution Phase. One way to do this is to produce a user requirements matrix or checklist and indicate how each requirement will be tested.
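The requirements matrix described above can be kept as a simple table mapping each expressed requirement to the tests that verify it; any requirement with no associated test is flagged as a verification gap. A minimal sketch (the requirement IDs and test names are invented for illustration):

```python
# Hypothetical user-requirements matrix: requirement -> tests that verify it.
requirements_matrix = {
    "UR-01 User can log in":      ["test_login_valid", "test_login_invalid"],
    "UR-02 Report export to CSV": ["test_export_csv"],
    "UR-03 Audit trail retained": [],  # no test assigned yet: a verification gap
}

# Flag every requirement that has no test indicating how it will be verified.
gaps = [req for req, tests in requirements_matrix.items() if not tests]
for req in gaps:
    print(f"NOT VERIFIED: {req}")
```

Reviewing the gap list at the end of each phase gives the review board concrete evidence of whether every expressed requirement has a planned verification step.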