Over the years I have managed and/or been involved in just about every level of application testing known to man. For most of our SAP projects we adopt the internal testing strategy dictated by our customer, so it's fair to say that every project is different. In many cases we provide input to our clients and peers on how to make testing more effective for the project, the development team and the company, and I thought I would share some ideas so that hopefully others can benefit as well. These have proven to provide cost and quality benefits for us and, for me personally, to improve the chances of success of the projects I manage. To get us grounded, there are a number of different types of testing that are typically dealt with on a systems project; check the links to get the formal definitions and understanding.
The most common approach to testing
The main source of problems I see in typical SAP software development is that testing is not considered up front. What normally happens on a typical development project is as follows:
- The Blueprint (SRS) is developed
- Using the Blueprint, the functional spec(s) are developed (I hope they are, anyway)
- The functional spec(s) are turned over to development
- Development occurs and, when complete, the work is turned over to the functional team for testing
- The functional team develops test scenarios and performs tests until no errors are found
- End users are trained (maybe)
- The development is placed into production
Of course this is an oversimplified view of the overall process, but it will do for our discussion. Some of the problems that occur with this typical process are:
- Missed requirements – the test scenarios, if they are actually developed, are typically put together sometime after the development is complete or almost complete. The purpose of the blueprint is to document the user's requirements (truly, it is), but there tends to be a disconnect between the blueprint and the development process, which surfaces as development groups feeling isolated and not part of the process. In this typical scenario some degree of requirements is missed because the validation of requirements is not linked to a test scenario; this tends to happen more often in larger software development projects with tens or hundreds of requirements than in a software app/report with only a couple of requirements.
- Rework before and after delivery – in the above scenario rework is common, and it is possible that this is considered the normal way things should work: we develop, then test, changes are kicked back to the developers and then we test again. But this should not be the normal process. It happens because of missed requirements (above) and because of usability or integration issues, which are either caught just before delivery or surface after delivery once users start to use the application, provide feedback and inevitably request changes.
- Missed project dates or poor quality – a lot has been blogged, written and spoken about the development estimation process, and the thinking is that a complex calculation is needed to develop estimates for a given software development project. That may be, but I also think that in many cases the impact to software development is the fault of the above process. In an informal analysis of projects we have been involved with, we found that development estimates for an experienced team land within roughly 85-90% of the original estimated delivery time. So it would appear that we can predict how long development for a typical project will take, but once you add the rework caused by the points made above you can see how difficult it becomes to deliver a project on time (a rough illustration of the math follows this list). Having to decide whether to deliver lower quality than expected or to deliver late is never fun.
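As a back-of-the-envelope illustration only: the 85-90% estimate accuracy is the informal figure quoted above, while the rework percentages below are assumed values purely for the sake of the example, not project data. The point is simply how quickly rework eats whatever accuracy the original estimate had.

```python
# Back-of-the-envelope illustration only: the 85-90% estimate accuracy is the informal
# figure quoted above; the rework percentages are assumed values, not project data.

estimate_days = 100                      # hypothetical original development estimate
slip_from_estimation = 0.10              # ~85-90% accuracy -> roughly 10-15% natural slip
build_days = estimate_days * (1 + slip_from_estimation)

for rework in (0.10, 0.20, 0.30):        # assumed rework driven by missed requirements
    total_days = build_days * (1 + rework)
    print(f"{rework:.0%} rework -> ~{total_days:.0f} days against the {estimate_days} promised")
```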
So you can probably see where I am going with this, and I am hoping it simply validates what others already think. So how do we make testing more effective?
How to make testing truly effective
- Make test scenarios part of the functional spec – unit and integration test scenarios should be developed as part of the functional spec before it is handed off to development. This provides two benefits:
- The test scenarios are developed to ensure that the requirements are all satisfied; this also validates for the business analyst that they have addressed all the open issues, functional requirements, business rules, etc. A requirement-to-test matrix should be used to document which test covers which requirements (a rough sketch of such a matrix follows at the end of this list).
- The test scenarios are communicated to the developer(s). Developers always test their work informally, whether they realize it or not (sometimes considered a unit test); with a list of test cases the developer can validate that the functionality will provide the expected results and can flag unexpected results prior to delivery, which saves time and rework.
- Incorporate User Acceptance Testing – User Acceptance Testing should be incorporated into every software development project. It is a critical step that many consulting companies (such as us) use to receive formal acceptance of an application. User acceptance tests can be a subset of the integration testing scenarios or a list of scenarios that the end users develop on their own, but either way this provides several benefits, including:
- End user buy-in – you are treating them as a customer and asking them to accept delivery of the end product, and they will test it the way they believe it will be used (there might be surprises here, but hopefully not).
- Training – training is sometimes very informal (not saying this is right, just being realistic), and the user acceptance test gives the end user, or the power user who represents the end users, the opportunity to play with the application, ask questions and generally get to know it before it is delivered. Note that user acceptance testing does not need to be very formal, but it goes a long way toward reducing rework after delivery.
- Automate testing – some might say that automated testing is the realm of larger companies, but in the case of SAP there is no additional cost for testing tools (SAP eCATT is part of the system), so there is no excuse. I have heard the argument that using testing tools will not save any effort, and though I won't speak to that specifically, what I would argue is that testing tools make it possible to perform more comprehensive and thorough testing than would otherwise be practical. An example of this is a recent project where we had over 50 test scenarios to be tested over a period of a week; if we had tested all the possible order types, our list would have expanded to 150+ scenarios. There was little incentive to test all the scenarios with all the order types, but because we were able to automate portions of the test steps we did in fact test 200+ scenarios in the same time frame, and the customer was able to perform negative impact tests as well at no additional cost beyond what was originally budgeted. The other benefit is that when a modification is needed, the same battery of tests can be executed with little effort/cost to ensure the results are not affected. A rough sketch of the parameterization idea follows below.
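To make the order-type example a little more concrete: the win comes from parameterizing one scenario over many order types instead of scripting every combination by hand. This is not eCATT syntax, just a generic, minimal sketch of the idea; the scenario names, order types and pass/fail check are invented for illustration.

```python
# Generic sketch of parameterized test scenarios (illustration only, not eCATT).
# Scenario names, order types and the pass/fail check are invented examples.

SCENARIOS = ["create_order", "change_order", "deliver_order", "bill_order"]
ORDER_TYPES = ["standard", "rush", "consignment", "free_of_charge", "returns"]

def run_scenario(scenario: str, order_type: str) -> bool:
    """Placeholder for the real test steps (recorded steps in an eCATT script)."""
    return True  # assume the step sequence passed

results = {}
for scenario in SCENARIOS:
    for order_type in ORDER_TYPES:            # 4 scenarios x 5 order types = 20 runs
        results[(scenario, order_type)] = run_scenario(scenario, order_type)

failed = [key for key, passed in results.items() if not passed]
print(f"{len(results)} combinations executed, {len(failed)} failed")
```

Adding one more order type to the list re-runs every scenario against it for free, which is exactly why the scenario count could grow from 50 to 200+ without growing the testing window.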
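And going back to the requirement-to-test matrix from the first point: it does not need to be anything more elaborate than a table mapping each requirement to the test scenario(s) that cover it, so uncovered requirements jump out before the spec goes to development. A minimal sketch, with made-up requirement and test IDs:

```python
# Minimal requirement-to-test matrix sketch (requirement and test IDs are made up).
requirements = ["REQ-001", "REQ-002", "REQ-003", "REQ-004"]

coverage = {
    "UT-01": ["REQ-001"],             # unit test scenarios from the functional spec
    "UT-02": ["REQ-002", "REQ-003"],
    "IT-01": ["REQ-001", "REQ-003"],  # integration test scenarios
}

covered = {req for reqs in coverage.values() for req in reqs}
uncovered = [req for req in requirements if req not in covered]

print("Uncovered requirements:", uncovered or "none")   # -> ['REQ-004']
```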
These are simple enough; the interesting thing for me is that one of the hardest things to get across is the development of test scenarios as part of the functional spec. I hope this gives you something to think about.
Later…