Comprehensive and Effective Testing of SAP HR

Successful HR systems rely on thorough testing. Because human resources is complex, SAP HR is complex. A change made to SAP HR can have implications in areas well beyond the specific module being changed. Problems may be immediately obvious, or they may go unnoticed for months or years, especially at larger companies. Incomplete testing can lead to emergency situations that need quick resolution, or to extensive inconsistencies that require many hours of reconciliation and repair. Testing up front eliminates wasted resources after go-live.


Common Testing Pitfalls

Not a representative sample

Often the scope of testing is too limited. The scope of the testing should match the complexity of the change. Suppose there is a production support issue where an error message is issued when an employee's master data is updated. In a perfect world, the problem would be corrected and you would simply test whether the error still occurs after the correction has been applied. However, problems are not always isolated. The change that fixes one employee may break another. Think of this like the side effects of a medication: you don't want to nauseate the end user while curing his headache, so to speak.

A larger view has to be taken with respect to testing. Think of enlarging the test plan in both a linear and a radial fashion. Thinking linearly, determine the processes that precede and follow the specific problem. Processes in the same line as the master data entry would include the interface programs supplying the data, relationships in PD, the employee's time and the payroll. Thinking radially, expand the situation beyond that specific employee. How are other employee groups handled? Basically, what is the effect on the rest of the population?
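
As a rough illustration of how quickly the scope grows, the sketch below builds a test matrix from a hypothetical list of upstream and downstream process steps (the linear dimension) and employee groups (the radial dimension). The step and group names are placeholders; the real lists come from your own process map and population.

    from itertools import product

    # Hypothetical process steps (linear) and employee groups (radial).
    linear_steps = [
        "inbound interface",   # data supplied to master data
        "master data update",  # the step where the error was reported
        "PD relationships",
        "time evaluation",
        "payroll",
    ]
    employee_groups = ["salaried", "hourly", "expatriate", "retiree"]

    # Each (step, group) pair is a candidate test case; prune the pairs that
    # clearly do not apply instead of starting from the single failing employee.
    test_matrix = list(product(linear_steps, employee_groups))
    for step, group in test_matrix:
        print(f"Test: {step} for {group} employees")
    print(f"{len(test_matrix)} candidate cases instead of 1")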

No review of results

Perhaps a very thorough test plan is followed and all the appropriate situations are tested, but the results aren't reviewed. Just because a process runs without errors does not mean the resulting data is correct. Without a review of the results, the desired outcome may be achieved while other data is affected negatively and nobody notices.

Not enough testing

This is quite similar to the first pitfall. Say there is a major change to the payroll and part of the test plan is to run through all payroll areas once. Although this will account for all employees, it will not account for time-dependent situations. Say deductions are taken only once a month but the payrolls are biweekly, and accruals also happen only once a month.
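
To make the gap concrete, the sketch below lays out a year of assumed biweekly pay periods and marks the one period per month in which a monthly deduction and accrual would be taken (the start date and the "first period ending in the month" rule are assumptions for illustration). Running each payroll area through a single arbitrary period exercises at most one of the two behaviours.

    from datetime import date, timedelta

    # Assumed biweekly period end dates for one year.
    period_end = date(2024, 1, 12)
    periods = []
    while period_end.year == 2024:
        periods.append(period_end)
        period_end += timedelta(days=14)

    # Assume the monthly deduction/accrual is taken in the first period
    # that ends in each calendar month.
    seen_months = set()
    for number, end in enumerate(periods, start=1):
        takes_deduction = end.month not in seen_months
        seen_months.add(end.month)
        label = "deduction + accrual" if takes_deduction else "no deduction"
        print(f"Period {number:02d} ending {end}: {label}")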

Special situations not accounted for

Holidays, retroactivity, tax exemptions, customer-specific changes to delivered schemas, rules, programs and so on. Review the whole year and identify time-dependent or company-specific situations that demand attention. Also review custom copies of standard programs to ensure recent changes by SAP are incorporated into the custom program. Keep a list of rules, schemas and programs that have been copied from the standard versions. When applying support packages or upgrading, compare each custom version to the standard version for modifications.
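
One low-tech way to do that comparison, assuming the standard and custom versions have been saved to plain-text files (the file names below are placeholders), is an ordinary line-by-line diff:

    import difflib
    from pathlib import Path

    # Hypothetical exports of the standard program and its custom copy.
    standard = Path("standard_version.txt").read_text().splitlines()
    custom = Path("custom_version.txt").read_text().splitlines()

    diff = difflib.unified_diff(standard, custom,
                                fromfile="standard", tofile="custom", lineterm="")
    for line in diff:
        print(line)

    # Review each difference: is it one of your intentional modifications, or a
    # change SAP delivered to the standard that the custom copy is now missing?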

Integration ignored

Similar again to the first pitfall, this relates more to the people involved in creating the test plan. If a single individual creates the test plan, she may know many of the dependencies on her part of the system. Payroll gets information from master data and time management, but sends information to FI/CO, Treasury, tax reporter and so on. However, this person usually won't know the complete picture. When warranted, it is important to include representatives from each module in creating the test plan. Granted, some changes are very specific to a module; changes to the earnings statement and read-only reports won't really affect other parts of the system.

Not enough time

The last thing the project team wants is to spend the whole night before a go-live reviewing test results. What happens when something wrong is found and there is not enough time to make the changes and redo the testing before the impending deadline?

Not double-checking with the user departments that the changes are as intended

So this exquisite payroll rule is configured to change certain wage types through the payroll driver. The person configuring the rule reads the specifications, determines the desired outcome, implements the changes and thoroughly tests the rule. The results match the desired outcome in all situations. However, when the change is sent to production, the user who requested the change calls wondering whether it was ever tested. The end user's expectations did not match the developer's view of the desired results. It is important to have each part of the testing reviewed by the requestor, and it is often helpful to have someone other than the developer test the changes.

Important Testing Methods

Unit testing

Although processes are rarely performed within a specific module without affecting other modules, it is necessary to establish integrity within each individual module. As is commonly said, garbage in, garbage out. If the outcome of the process is tainted within its originating module, the integrity of the secondary processes in the dependent modules is compromised. In addition, it will be difficult to distinguish whether the problem in the secondary module is from the original process or actually unique to the secondary module.

For instance, say taxes are paid through third-party remittance and each time there is an accounts payable check run, no tax payment checks are produced. The problem could be in accounts payable or in payroll. Perhaps the third-party configuration is incorrect, the vendor is blocked for payment or the tax wage type is incorrectly configured. If unit testing had been done on the third-party and wage type setup in the first place, the problem would probably have been identified before it ever reached accounts payable.
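
A minimal sketch of what such a unit check might look like, using made-up field names and dictionaries standing in for however the third-party remittance and vendor settings are extracted from your system:

    # Hypothetical extracts of the payroll-side configuration.
    tax_wage_type = {"code": "TAX_WT", "third_party_remittance": True, "vendor": "TAXAUTH1"}
    vendors = {"TAXAUTH1": {"payment_block": False}}

    def check_third_party_setup(wage_type, vendors):
        """Return a list of problems found in the payroll-side setup."""
        problems = []
        if not wage_type["third_party_remittance"]:
            problems.append("wage type not flagged for third-party remittance")
        vendor = vendors.get(wage_type["vendor"])
        if vendor is None:
            problems.append("assigned vendor does not exist")
        elif vendor["payment_block"]:
            problems.append("assigned vendor is blocked for payment")
        return problems

    # The unit test passes only if the payroll-side setup is clean.
    assert check_third_party_setup(tax_wage_type, vendors) == []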

Failure or negative testing

It isn't enough to test the normal processes for accuracy. Human resources processing is full of unique situations and exceptions. This type of testing tries to "break" the system. Exceptions may result from unique circumstances or incorrect data entry. In a programmer's world, this means enhancing the source code to anticipate errors or bad data. What happens when an employee is terminated and rehired in the same payroll period? What if an employee changes payroll areas? If a report pulls data from the payroll clusters, it isn't enough to verify the report against the regular payrolls. What happens with voids, correction checks, etc.?
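
The sketch below is a toy stand-in, not SAP code, for one of those edge cases: a simple function that counts the days an employee is active within a pay period, with a test for the "terminated and rehired in the same period" scenario alongside the normal case.

    from datetime import date, timedelta

    def payable_days(period_start, period_end, events):
        """Count days in the period on which the employee is active.

        events is a list of (date, action) pairs with actions 'hire',
        'rehire' or 'terminate'; the latest event on or before a given
        day determines the status for that day.
        """
        events = sorted(events)
        day, count = period_start, 0
        while day <= period_end:
            status = "inactive"
            for event_date, action in events:
                if event_date <= day:
                    status = "active" if action in ("hire", "rehire") else "inactive"
            if status == "active":
                count += 1
            day += timedelta(days=1)
        return count

    period = (date(2024, 3, 1), date(2024, 3, 14))

    # Normal case: active for the whole period.
    assert payable_days(*period, [(date(2023, 1, 1), "hire")]) == 14

    # Edge case: terminated and rehired within the same period.
    events = [(date(2023, 1, 1), "hire"),
              (date(2024, 3, 5), "terminate"),
              (date(2024, 3, 11), "rehire")]
    assert payable_days(*period, events) == 4 + 4  # active Mar 1-4 and Mar 11-14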

Integration testing

This ties each of the modules together to see the big picture. Basically, does the whole process work from start to finish? For instance, an applicant is recruited, a position is created, the recruit is hired, benefits added, training and events recorded, time entered, payroll is run and posted, checks cut and cleared, travel expenses paid, vendors paid and reports/extracts contain the correct data. Is there connectivity between the steps of the process or does it break down between modules?
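
The toy sketch below mimics that end-to-end idea on a very small scale (none of these functions are SAP transactions, and the flat tax rate is an assumption): each step consumes the previous step's output, so a break between "modules" shows up as a failure somewhere along the chain, and the final check confirms the posting balances.

    def hire(employee_id, hourly_rate):
        return {"id": employee_id, "rate": hourly_rate}

    def record_time(employee, hours):
        return {"employee": employee, "hours": hours}

    def run_payroll(timesheet):
        gross = timesheet["hours"] * timesheet["employee"]["rate"]
        tax = round(gross * 0.20, 2)          # assumed flat rate for illustration
        return {"gross": gross, "tax": tax, "net": gross - tax}

    def post_to_fi(result):
        # Debit wages expense, credit tax payable and net pay clearing.
        return [("wages_expense", result["gross"]),
                ("tax_payable", -result["tax"]),
                ("net_pay_clearing", -result["net"])]

    employee = hire("00001234", hourly_rate=25.0)
    result = run_payroll(record_time(employee, hours=80))
    postings = post_to_fi(result)
    assert round(sum(amount for _, amount in postings), 2) == 0.0  # document balances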

Parallel testing

Parallel testing verifies that a change has not made anything worse; it may even reveal inconsistencies that were present in the original source. Parallel testing compares the before and after pictures of a change. It requires two environments with the same data, or a process or program that can be run with the same data before and after the change. Where there are differences, they should be reconcilable to the intended results. In other words, the only differences between the parallel data should be the intended changes. If upgrading from one release to another without adding any functionality, the data between the two environments should be the same. A common way to parallel test is to copy the productive box and run the same processes in the test box where the changes have been applied. Be certain any productive data entered after the copy is made is also recorded in the test box to eliminate unnecessary differences.
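
A minimal sketch of the reconciliation step, with net pay results keyed by employee number standing in for whatever before/after extracts your parallel run produces, and an assumed list of employees the change was intended to affect:

    # Results from the production copy (before) and the changed system (after).
    before = {"00001234": 2150.00, "00005678": 1830.50, "00009012": 990.25}
    after  = {"00001234": 2150.00, "00005678": 1875.50, "00009012": 990.25}

    intended_changes = {"00005678"}   # e.g. the employee group the change targets

    unexplained = []
    for emp_id in sorted(set(before) | set(after)):
        if before.get(emp_id) != after.get(emp_id) and emp_id not in intended_changes:
            unexplained.append((emp_id, before.get(emp_id), after.get(emp_id)))

    if unexplained:
        print("Differences that are NOT part of the intended change:")
        for emp_id, old, new in unexplained:
            print(f"  {emp_id}: {old} -> {new}")
    else:
        print("All differences reconcile to the intended change.")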

Load or stress testing

Can the new environment handle the production capacity? Especially during an initial installation, far fewer users are on the system than will be in the productive environment. What happens when all the usual users log on, maintain transactions and run programs? Like a dress rehearsal, coordinate a load test that closely simulates the productive environment and have the users perform their normal duties. Be sure the users are trained by this point so they are actually performing meaningful transactions rather than staring blankly at the screen.
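
For a sense of the mechanics, the rough sketch below runs a stubbed transaction (a timed sleep in place of a real logon and dialog step) under increasing concurrency and reports the timings. A real load test uses dedicated tooling or the coordinated dress rehearsal described above; this only illustrates the shape of the measurement.

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    def simulated_transaction(user_id):
        """Stand-in for one user's dialog step; returns its elapsed time."""
        start = time.perf_counter()
        time.sleep(random.uniform(0.05, 0.20))   # placeholder for real work
        return time.perf_counter() - start

    for concurrent_users in (5, 50, 200):
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            timings = list(pool.map(simulated_transaction, range(concurrent_users)))
        average = sum(timings) / len(timings)
        print(f"{concurrent_users:>3} users: avg {average:.3f}s, worst {max(timings):.3f}s")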

Security testing

How many times does a new configuration work perfectly during testing, only to fail for the productive user because security was never tested? The persons configuring the changes will have different security than the end users. Either a test user should be created with the same security as the end user, or the end user should test the changes with the same security she has in production. Security problems are not always as obvious as "No authorization for transaction XXXX." In addition, both positive and negative situations should be tested. Essentially, ensure the user can do what is necessary and cannot do what is restricted according to company policy.
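
The sketch below shows the positive/negative idea against a made-up role-to-activity mapping; in the real system the same checks are performed by exercising the end user's actual roles in the test client (the role and activity names here are illustrative).

    role_permissions = {
        "HR_PAYROLL_CLERK": {"display_master_data", "run_payroll_simulation"},
        "HR_MANAGER": {"display_master_data", "approve_time"},
    }

    def is_allowed(role, activity):
        return activity in role_permissions.get(role, set())

    # Positive tests: the user can do what the job requires.
    assert is_allowed("HR_PAYROLL_CLERK", "run_payroll_simulation")

    # Negative tests: the user cannot do what company policy restricts.
    assert not is_allowed("HR_PAYROLL_CLERK", "approve_time")
    assert not is_allowed("HR_PAYROLL_CLERK", "change_own_salary")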

Steps for Successful Testing

Verify the changes and intended results - catch the problem at the source by double-checking that the changes were made according to the intended results. This is more obvious with straightforward table changes than with adding new functionality or creating custom configuration. If the annual 401k deduction limit is increased, recheck that the changed table entry corresponds to the increased amount. In other words, make sure there wasn't a keying error during configuration (a minimal sketch of this kind of sanity check appears after this list). For more complicated changes, discuss the intention of the changes with the person requesting them and make sure you are both on the same path.

Determine types of testing and backup plan - depending on the complexity and type of changes, different methods of testing may be used. Also devise an emergency backup plan according to the amount of risk in case testing does not go as planned or unforeseen scenarios cause glitches.

Choose a representative sample - determine types of situations and employees to test.

Outline a test plan - formally decide how the test should be carried out. Test plans can be reused and modified for similar changes.

Perform test - test according to the plan, revising it as needed. If critical situations weren't accounted for in the original plan, revise the plan and continue.

Review results - if the outcome doesn't match the intended results, make changes and retest. Involve the end user in the review.

Log issues - keep a record of notable issues that may arise in the future. This is especially helpful for recurring changes like upgrades and applying support packages.
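
As promised under the first step, here is a minimal sketch of the "verify the change at the source" sanity check, using the 401k example. The intended and configured amounts are placeholders; the point is simply to compare the re-read table value against what the change request asked for.

    INTENDED_LIMIT = 23000.00    # the amount the change request asked for
    configured_limit = 23000.00  # re-read from the changed table entry

    assert configured_limit == INTENDED_LIMIT, (
        f"Configured limit {configured_limit} does not match "
        f"intended limit {INTENDED_LIMIT} - possible keying error"
    )
    print("Configured 401k limit matches the intended result.")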

Even the best-laid plans may go awry, but focused and intentional testing will go far in preventing band-aids, late nights and frustrated employees. Although testing may seem tedious and costly, it usually costs less than repairing bad configuration. By following these steps, not only will your deliberately planned testing process increase in efficiency, but you will also maximize the benefits of testing and reduce the frustration of production problems.