ITECH7409 Federation University Software Testing and Standards Research


Software Testing
Assignment 1
Research on Software Testing and Standards
According to Standards Australia:
“Standards are published documents setting out specifications and procedures
designed to ensure products, services and systems are safe, reliable and
consistently perform the way they were intended to. They establish a common
language which defines quality and safety criteria.”
There are several standards, international and national, that relate specifically to software
testing. Standards formalize industry best practice and they are agreed upon by
professionals in the industry in which the standards apply to.
This assignment is an investigation into those standards.
The purpose of the assignment is to help you to:
• improve your research and comprehension skills
• develop a good understanding of professional industry standards for software testing
• appreciate the value of various processes and methods used in industry to test and evaluate software systems
Timelines and Expectations
Marks and Percentage Value of Task: 100 marks
Due: Thursday, May 2, 2019 @16:00 (Week 7)
Minimum time expectation: 10 hours
Learning Outcomes Assessed
• Critically evaluate software requirements as inputs toward testing of the final
• Analyse and critically evaluate appropriate tools and techniques to support the testing process;
• Develop a good understanding of selecting and applying measures and models used for testing that are compliant with the appropriate professional industry standards such as the ACS and IEEE;
• Analyse and critically evaluate software requirements and proposed solutions;
• Apply complex decision making to select appropriate testing techniques;
• Write professional level management, planning, quality assurance and testing documentation for a software system;
• Apply effective testing techniques to software systems of varying scales and test
• Develop and maintain plans for scheduling quality assurance tasks including software testing;
CRICOS Provider No. 00103D
ITECH7409 Assig 1 Sem 1 2019-07
Assessment Details
You will need to:
• locate a research paper related to software testing that refers to at least one standard,
• research, comprehend and analyse each document (both the paper and the chosen standard) to find relevant details to answer a set of questions, and
• prepare a written summary report of findings.
As a suggestion, commence your search for a research paper at the Federation University website. There is a QuickSearch link on the library home page.
There is also a listing of Databases A-Z ( ) which has a link for Australian Standards Online.
Questions for the standard:
• What is the standard name?
• Who holds the copyright for the standard?
• Amongst the acknowledged contributors to the document, which universities were involved (if any)?
• What is the scope or intent of the standard?
• What are key terms and understandings needed for the standard to be understood and applied?
• In your own words, what does application of the standard result in? Or, in other words, what does the standard do?
• Finally, what specific relevance to software testing does the standard have?
Discuss the paper and how it relates to the standard. For example: does the paper suggest how
to improve the standard? Does the paper highlight issues in applying the standard?
Attached is a sample paper (Wichmann and Cox 1992) which refers to the ANSI/IEEE Standard 829 and ANSI/IEEE Standard 1008. Although this paper is somewhat outdated, it serves to illustrate the task for this assignment.
Prepare a report of no more than 1,500 words answering all questions. The report should have the following structure:
• an introduction to standards and a brief description of the research paper and chosen standard
• responses to questions for the standard
• listing and a discussion of commonalities and differences between the two documents
• a conclusion summarizing the report findings
Note: ANSI is the American National Standards Institute; IEEE is the Institute of Electrical and Electronic Engineers. Current versions of both these standards are available online at the FedUni library.
Marking Criteria/Rubric
Student ID:
Student name:
Assessment component
1. Introduction
2. Responses to questions
3. Listing and discussion of commonalities and differences between the research paper
and the chosen standard
4. Conclusion
5. Spelling, grammar and report presentation
Your assignment should be completed according to the guides for your assessments.
You are required to provide documentation, contained in an appropriate file, which includes a front page indicating:
• the title of the assignment
• the course ID and course name
• student name and student ID
• a statement of what has been completed
• acknowledgement of the names of all people (including other students and people outside of the university) who have assisted you, and details on what parts of the assignment they have assisted you with
• a list of references used (APA style)
Using the link provided in Moodle, please upload your report as a Word file. Name your Word file in
the following manner:
e.g. Aravind_ADIGA_30301234.docx
Also, upload a copy of the standard used by the research paper.
Assessment marks will be made available in fdlMarks; feedback to individual students will be provided via Moodle or as direct feedback during your tutorial class.
Plagiarism is the presentation of the expressed thought or work of another person as though it is
one’s own without properly acknowledging that person. You must not allow other students to copy
your work and must take care to safeguard against this happening. More information about the
plagiarism policy and procedure for the university can be found at:
Your support material must be compiled from reliable sources such as the academic resources in the Federation University library, which might include, but are not limited to: the main library collection, library databases and the BONUS+ collection, as well as any reputable online resources (you should confirm this with your tutor).
Federation University General Guide to Referencing:
The University has published a style guide to help students correctly reference and cite information
they use in assignments. A copy of the University’s citation guides can be found on the university’s
web site. It is imperative that students cite all sources of information. The General Guide to
Referencing can be purchased from the University bookshop or accessed online at:
Wichmann, B. A. and M. G. Cox (1992). “Problems and strategies for software component testing standards.”
Software Testing, Verification and Reliability 2(4): 167-185.
Problems and Strategies for Software Component Testing Standards
B. A. Wichmann and M. G. Cox
National Physical Laboratory, Teddington, Middlesex, TW11 0LW, U.K.
What does it mean to say that an item of software has been tested? Unfortunately, currently accepted
standards are inadequate to give the confidence the user needs and the meaningful objective for the
supplier. This paper assesses the currently available standards, mainly in the component testing area
and advocates that the British Computer Society proto-standard should be taken as a basis for a
formal standard. The paper is intended for those concerned with software quality.
KEY WORDS: Component testing; Quality assurance; Standards
This paper considers the issue of software testing in the narrow sense, i.e. the execution
of the code of a system to attempt to find errors. The objective is to quantify or assess
the quality of the software under test, particularly in an objective manner which can
therefore be agreed by both a supplier and customer.
The reliance placed upon dynamic testing varies significantly from sector to sector. It
appears from current drafts that the revision of the avionics standard for safety-critical
software depends almost exclusively on testing (RTCA, 1993), while the U.K. Interim
Defence Standard (MOD, 1991) places much greater emphasis on static analysis.
Dynamic testing has been widely used within industry since the advent of the first
computers. However, the contribution that testing makes to software quality in practice
is hard to judge. Everybody producing software will claim that it is ‘tested’, and therefore
one must look beyond such superficial claims.
The classic work on testing is that of Myers (1979). In the author’s view, little progress
has been made in the practical state of the art since the publication of Myers’ book. For
more recent summaries of the art of testing (see Ince, 1991 and White, 1987). This paper
is an attempt to revisit the issues to see if some advance can be made; preferably so that
those who undertake testing can claim some measurable quality objective.
Nomenclature in this topic is not uniform and it is unfortunate that an Alvey document
covering this point has not been formally published (Alvey, 1985). However, this gap
will be filled by a BCS proto-standard (Graham, 1990), assuming this is published.
Although it is known that the absence of bugs can never be demonstrated by testing,
it is equally known that effective software testing is a good method of finding bugs.
© 1992 by John Wiley & Sons, Ltd.
Received 22 October 1992
Revised 26 February 1993
Would any academic, even those who discount testing, be prepared to fly in an aircraft
in which the software had only been subjected to static analysis? On the other hand,
there are good statistical reasons for questioning figures for the reliability of ‘highly’
reliable software (see Littlewood and Strigini, 1993). Professor Littlewood claims that
dynamic testing alone can only justify a reliability figure commensurate with the amount
of testing undertaken. This implies that the reliability requirements of some applications
areas, such as the most critical avionics systems, must be justified in terms of the quality
of the development process.
The paper concentrates upon component testing since this is the most incisive testing
method, and the method which is best understood, again from Myers. Of course, other
testing methods have an important part to play at different points in the life-cycle.
The central thesis of testing is that if sufficient effort is put into the testing process,
then confidence can be gained in the software, although it cannot guarantee correctness.
The confidence depends, of course, on new tests being undertaken without faults being found.
Specialized aspects of testing are not considered in this paper. Examples are as follows.
(1) Testing the ‘look and feel’ of a system. This can involve the use of a mouse, a
windows environment, etc. This is being studied under an ESPRIT project (MUSiC,
(2) Testing performance. With complex modern architectures, an analysis of performance can be quite difficult.
(3) Testing real-time and concurrent systems. These require special techniques, such
as in-circuit emulators, which are too specialized to be considered here.
2.1. Research
Much of the research in software testing has focused on the coverage of various aspects
of software, such as control flow features or data flow features. For example, Hennell
et al. (1976) proposed three levels of component testing based on control flow:
(1) the execution of statements;
(2) the execution of branches (both ways);
(3) the execution of ‘linear code sequence and jumps’ (LCSAJs).
The degree of testedness of each level is the percentage of items at that level which
have been executed during the tests. This hierarchy of so-called ‘test effectiveness ratios’
(TERs) was later extended in the work of Woodward et al. (1980). Rapps and Weyuker
(1985) proposed various levels of component testing based on data flow features, specifically the interaction between definitions and uses of variables. A good critique of coverage
metrics and an analysis of the theory of testing is to be found in the work of White
(1987). In this context, the point of interest is that coverage metrics clearly provide
objective measures and can be applied on a routine basis in industry.
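As an illustration of the coverage ratios just described (the module items and executed sets below are invented for this sketch, not drawn from the paper or any real tool), a TER at any level is simply the fraction of items at that level that the tests executed:

```python
# Minimal sketch of the 'test effectiveness ratio' (TER) idea:
# TER at a given level = items executed during testing / total items at that level.

def ter(executed: set, total: set) -> float:
    """Coverage ratio for one level (statements, branches, LCSAJs)."""
    if not total:
        return 1.0  # nothing to cover at this level
    return len(executed & total) / len(total)

# Hypothetical module with 6 statements and 4 branch outcomes.
statements = {"s1", "s2", "s3", "s4", "s5", "s6"}
branches = {"b1-true", "b1-false", "b2-true", "b2-false"}

# Items observed while running the test suite.
run_statements = {"s1", "s2", "s3", "s4", "s6"}
run_branches = {"b1-true", "b1-false", "b2-true"}

print(f"TER1 (statements): {ter(run_statements, statements):.0%}")  # 83%
print(f"TER2 (branches):   {ter(run_branches, branches):.0%}")      # 75%
```

The same ratio applies at the LCSAJ level; only the item sets change, which is why the measure is objective enough to be quoted between supplier and customer.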
Considering control flow testing further, the purpose of the LCSAJ is to provide an
achievable test objective beyond that of branches. Since a program with ‘while’ loops will
have infinitely many paths, it is not useful to measure a percentage of paths covered.
However, there are a finite number of LCSAJs in a program and hence 100% coverage
is feasible (in simple cases, at least). This is the conventional metric for statement, branch
or path testing, which is the ratio of items executed to the total in the module. If 100%
statement coverage were to be a requirement, this would have to be taken into account
in design and coding, since it would exclude defensive programming methods. In practice,
the coverage that can be obtained depends critically upon the nature of the code and is
not easy to predict in advance.
The company called Program Analyses provides the LDRA Testbed™ system which allows the above TERs to be determined. Verilog's LOGISCOPE™ and Software Research's TestWorks™ are able to provide similar testedness metrics. Since these tools work in many environments, there is little impediment to the above measures, or similar, being used in practice.
It appears that relatively few organizations require or quote testedness metrics. Those
that do are mainly in the safety-critical or security sectors where a major driving force
is certification by an independent body for which the testedness metrics have an obvious
The main limitation to the wider use of testedness metrics seems to be the lack of an obvious value to the purchaser. If company A purchases software from company B, what value is it to require a specific level of testedness? This question is addressed further in section 6.
2.2. Accredited Testing
At first sight, testedness metrics seem to provide an excellent basis for objective testing,
which is the main requirement for accredited testing, as administered by NAMAS (the
U.K. national service for accreditation of testing services).
Several years ago, NAMAS commissioned a study (NATLAS, 1985), performed by
Data Logic, to see if such a scheme would be viable. The study concluded that from a
theoretical viewpoint, both statement and branch coverage measures would be feasible.
The results of the study were presented by NAMAS to industry to see if it required
such accredited test services. The conclusion was negative for reasons which have never
been totally resolved. The following points indicate some aspects of concern.
(1) It is important to check that the output from each test is correct, which can be a
significant cost factor.
(2) Devising tests to execute all statements or branches can also be expensive.
(3) If less than 100% coverage is obtained, the user may think that the software has
poor quality. In fact, it merely constitutes inadequate evidence of high quality.
(4) If 100% coverage is obtained, the user may place undue reliance on the correctness
of the software.
This experience seems to indicate a substantial gap between academic research and its
application in industry. This topic is addressed further in section 6.
2.3. Guidance
The British Standard BS 5887 (BSI, 1988) is a guidance document, as are both (ANSI,
1983) and (ANSI, 1987). Other more general guidance material is available under the
TickIT scheme (DTI, 1992), but this does not cover the area in depth. Such guidance
material is useful for those undertaking testing but cannot be used for objective measurement or independent tests.
An interesting application of testing is in the assessment of an organization using the
SEI Maturity Model (Kitson and Humphrey, 1989). The overall aim here is to provide
a method of determining the maturity of the software engineering process used within an
organization on a five point scale. Whereas TickIT is restricted to the ISO 9000 concept
of quality management, the SEI model is specific to software engineering and therefore
could be expected to address testing in some detail. The initial questionnaire used to
assess the maturity level of an organization used two questions concerned with regression
testing. Although this is probably the most important single aspect from the perspective
of overall project management, it does not meet the needs of an objective measure of
testedness for software quality.
A revision of the SEI Model (Paulk et al., 1991) handles testing in greater depth, but
only at level 3 in the model. (In fact, the lower levels are confined to management issues.)
This implies that no requirements are given for the lower levels of maturity, within which
the majority of companies actually fall. The main issues of testing are addressed at level
3, but there is no clear indication as to how a company can be assessed. For instance,
the key questions are as follows.
‘The adequacy of testing is determined based upon:
(1) the level of testing performed;
(2) the test strategy selected; and
(3) the test coverage to be achieved.’
At this point, a classic dilemma arises: if the test coverage is high, so will be the costs, but if it is low, software quality may be jeopardized.
3.1. General
It is not easy to assess current practice, since reports from suppliers naturally reflect
best practice. From questionnaire responses given by attendees at a testing conference,
Gelperin and Hetzel (1988) concluded that only 5% provide regular measurements of
code coverage and only 51% regularly save their tests for reuse after software changes.
Gelperin and Hetzel note that the results of this small survey are biased, but that ‘some
observers believe that general industry practice is much worse than the survey profile’.
The Institute of Software Engineering (in Northern Ireland) reports that only about
one quarter of companies use regression testing on a routine basis (Thompson, 1991).
Similar, rather less quantified experience has been reported to the author by the Centre
for Software Maintenance at Durham University (U.K.). They also state that since testing
is at the end of the life-cycle, there is a tendency for the amount of testing to be reduced
to allow the project to complete within budget.
The automation of regression testing is a substantial benefit in retaining the quality of
a software product over a long period. The problem is that setting up an appropriate
automatic test facility can be a significant investment. However, unless this automation
is undertaken, the temptation to cut corners by not performing testing on a ‘small’ change
is irresistible.
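As a minimal sketch of the kind of automation the paragraph above has in mind (the `average` function and its historical fault are invented for illustration), a regression suite is a set of saved tests that is re-run unchanged after every modification, however small:

```python
import unittest

# Hypothetical unit under test: suppose an earlier release raised
# ZeroDivisionError on an empty list; the fix below must not regress.
def average(values):
    if not values:
        return 0.0  # behaviour pinned down after the reported fault
    return sum(values) / len(values)

class RegressionTests(unittest.TestCase):
    """Tests saved for reuse after software changes."""

    def test_typical_input(self):
        self.assertAlmostEqual(average([1.0, 2.0, 3.0]), 2.0)

    def test_empty_input_regression(self):
        # Guards against the previously fixed fault reappearing.
        self.assertEqual(average([]), 0.0)
```

Once such a file exists, running it under `python -m unittest` on every change removes the temptation to skip testing for a 'small' modification, which is exactly the corner-cutting the text describes.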
No information is available about other forms of testing, such as module and integration
testing. It is reasonable to assume that th …