This question is asked often, but there is no single answer. Reading the many reactions in one of the LinkedIn community groups, I noticed that the answer is strongly related to the (test) experience of the responder. A number of the reactions endorse the question, and of course other people have a different opinion.
There is a definition of software testing: "Software testing is the process of executing a program or system with the intent of finding errors." According to this definition there is no difference between verifying that the software works and trying to break it. The goal in both situations is to find bugs.
Another point of view is that the software must follow the business processes and should be error free. From this perspective the purpose of testing is to find bugs and to prove correctness, not to break the software. I found terms like the software should be 'working' and should be 'robust'. But these terms are too vague: first you have to describe what 'working' and 'robust' mean, and how to determine them.
In another comment I read about 'sunny day' and 'rainy day' test scenarios. A sunny day scenario tests all the common situations (including the possible or expected error situations). A rainy day scenario tests the behavior of the system in exceptional situations. And so on.
So, as you can see, there are a lot of reactions with different approaches to this subject (more than 80 reactions!).
I will add another point of view. In the Netherlands, TMap Next© (Test Management Approach, Sogeti) is one of the testing standards. TMap defines testing as: "Testing is a process that provides information about, and advice on, the quality and the related risks."
According to this definition we have to focus 'only' on quality and risk. But what is quality and what is risk? Quality is defined as "the totality of properties and characteristics of a product or service that bear on its ability to meet the given and implied needs" (ISO, 1994) (TMap Next©).
The given needs – the (business) requirements etc. – are mostly described in the Functional Design (FD) and are more or less complete and clear. The implied ('obvious') needs, or non-functional specifications (ISO 9126), are the hardest part. Non-functional specifications include, for instance, performance, availability, and usability.
Risk is defined as: "the chance that a failure occurs, in relation to the expected damage should the failure actually occur" (Risk = chance of failure × damage).
For this risk-based test approach you first have to determine the product risks, and then check the system against that risk description.
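To make the formula concrete, here is a minimal sketch of risk-based test prioritization using Risk = chance of failure × damage. The feature names and numbers are hypothetical examples, not taken from any real project or from TMap itself.

```python
# Hypothetical product risks: (feature, chance of failure 0..1,
# damage if it fails, in arbitrary units). Example values only.
features = [
    ("payment processing", 0.2, 100),
    ("report export",      0.5,  10),
    ("login",              0.1,  80),
]

# Risk = chance of failure x damage
risks = {name: chance * damage for name, chance, damage in features}

# Spend test effort on the highest risks first.
for name, risk in sorted(risks.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: risk = {risk}")
```

Note how the ranking differs from intuition: the feature most likely to fail (report export) ends up last, because the damage it can cause is small.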
Another approach is to split the question into its two parts: verify that it works, and try to break it.
The verification of the software is mostly a task for the tester and is done in the FAT (Functional Acceptance Test). The UAT (User Acceptance Test) can be used for the second part: breaking it. These tests are mainly executed by one of the future users. The user follows the business processes, which is more or less his daily work. During these tests the user also 'tests' his 'feeling' for the system. Therefore he must test not only the functionality but also situations 'outside the lines', thus trying to break it, with the remark that these situations are common, not exceptional.
Back to the question. It is not possible to write 100% error-free code, nor to test 100%.
Yes, I think verification of the software is the main goal. The future user must be able to use the system to do, or to support, his daily work.
But to break it? Yes, but only in relation to "what can happen in his daily work".