By Mario Matthee
For all the benefits of performance testing your software, most (rigorous) performance testing is not done on your actual production systems. This somewhat diminishes those benefits – it is as if you were running ‘what-if’ scenarios on a simulator rather than the real thing.
While limited performance testing can safely be attempted on production systems (and sometimes is), there are good reasons for not running extreme performance testing (stress tests) on your production systems. Where basic performance testing might put a small load on your systems, stress testing is just that – stressful – on you and your systems. It necessarily pushes your systems to the limit, hitting them from different angles and pushing obscene amounts of traffic through narrow bandwidth pipes in a concerted effort to break them.
For most organisations, it is therefore impractical at best – and negligent at worst – to run anything even resembling a stress test on a live system. Except you can, and you should.
Don’t get me wrong; I’m not advocating the risky scenarios you’re probably imagining. In fact, while most commercial performance testing tools could conceivably run on production systems, they generally can’t work through the highly encrypted and secure tunnels that organisations like banks use to protect their systems from any and all forms of malicious intrusion.
But there is a way to performance test your production systems. Not only that, there is a way to automate the performance tests on your production systems, so that as soon as performance drops below a certain level, you will know about it. I call it Automated Functional Performance Testing, which very cleverly overcomes the limitations of traditional performance testing using very mainstream automated test scripts not necessarily designed to test performance.
Let me explain, using the aforementioned banking system as an example. As a bank, you use expensive proprietary software and secure protocols to guarantee your customers’ safety and verify the integrity of your transactions. The same software prevents performance testing tools from seeing – and therefore diagnosing – any problems that may occur inside the secure code.
In other words, if something is broken while a secure transaction is taking place, and that something is slowing down your system, a performance testing tool is not going to help you troubleshoot the problem because it can’t see it, so to speak.
However, install an automated script that keeps tabs on the time it takes the system to process a transaction request – the time between the customer issuing the request (before it goes through the security tunnel) and getting a result (on the other side of the tunnel) – and suddenly you have eyes on a critical part of your system’s performance without compromising security or adding any overhead to the process.
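As a minimal sketch of that idea – assuming a hypothetical `submit` client call standing in for whatever request your customers actually issue; the real banking client and its security tunnel are out of scope – the timing wrapper might look like this:

```python
import time

def timed_transaction(submit, *args, **kwargs):
    """Run one end-to-end transaction and return (result, elapsed_seconds).

    `submit` is whatever client call issues the request on the customer
    side of the secure tunnel. We only observe wall-clock time from
    request to result, so nothing inside the tunnel is touched.
    """
    start = time.monotonic()
    result = submit(*args, **kwargs)
    elapsed = time.monotonic() - start
    return result, elapsed

# Hypothetical usage: wrap the call your customers actually make, e.g.
#   result, seconds = timed_transaction(bank_client.pay, invoice)
```

The script adds no instrumentation to the secure path itself; it simply clocks the round trip, which is exactly the number the customer experiences.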
Not only that, but the automated script you would use to ‘test’ the performance of this aspect of your system will also cost significantly less than a typical performance testing tool, which, as you probably know, is not cheap.
The same technique can be used in different ways on different systems to the same effect. For example, run an automated script on your web server to test the response times on your newsletter signup form, or run a test script to let you know when a page click takes longer to return a result than your SLA demands.
Performance testers establish SLAs to make sure that apps reach certain performance benchmarks. For example, a typical SLA might include a clause that requires the first page of search results to be returned within three seconds when performing a product search.
You can even extend the script to send alerts to any number of people responsible for maintaining your business systems, giving you an early warning system to proactively rectify any faults before they start impacting your customers’ experience.
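Pulling the last few paragraphs together, a small sketch of such a watchdog might look like the following. The three-second figure echoes the SLA example above; the URL and recipient names are illustrative assumptions, not taken from any real system, and the alert delivery is deliberately left as a stub you would wire to email, SMS or chat:

```python
import time
import urllib.request

SLA_SECONDS = 3.0  # e.g. first page of search results within three seconds

def within_sla(elapsed_seconds, sla=SLA_SECONDS):
    """True if the measured response time meets the SLA."""
    return elapsed_seconds <= sla

def timed_fetch(url, timeout=10):
    """Fetch a URL and return the elapsed wall-clock time in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.monotonic() - start

def build_alerts(recipients, url, elapsed_seconds, sla=SLA_SECONDS):
    """Return one (recipient, message) alert per person when the SLA is breached."""
    if within_sla(elapsed_seconds, sla):
        return []
    text = f"{url} took {elapsed_seconds:.1f}s (SLA is {sla:.1f}s)"
    return [(who, text) for who in recipients]

# Hypothetical usage, run on a schedule (cron, CI job, etc.):
#   elapsed = timed_fetch("https://example.com/signup")
#   for who, msg in build_alerts(["ops@example.com"], "signup form", elapsed):
#       ...hand msg to your email/SMS/chat channel of choice...
```

Scheduled every few minutes, this gives you the early-warning behaviour described above without touching the production code path at all.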
If all of this sounds very much like the results you expect to get from performance testing, that’s because it is. No, you’re not using testing tools that have been specifically developed to isolate and remedy serious performance issues, but then you’re going one step further and working with production rather than development systems.
That said, I’m not advocating that one is necessarily better than the other. Dedicated performance testing and performance testing tools play a vital role in generating quality software from the very start of the development process right through its lifecycle. Performance testing tools are also increasingly becoming mainstream, and therefore becoming more cost effective to run.
This approach is almost like production monitoring from an infrastructure perspective, but focusing not on the detailed performance of memory usage and CPUs, rather on the customer experience from a speed/response perspective. It’s also handy for monitoring the uptime and response times of the third-party systems the application integrates with. In most cases, if these third-party systems are not available, the business transaction cannot be processed.
Automated Functional Performance Testing is not performance testing in the traditional sense, but gets you many of the advantages of performance testing with the additional advantages of live information from your live systems at a lower cost.
Is testing expensive? This is a deceptively simple question, because expensive means different things to different people. For a project manager, the expense tends to lie in testing done at the wrong time: at the end of a project, when budgets are depleted and fixing errors is time-consuming. For the call centre, it is the number of support calls from frustrated users.
In other words, do you and your stakeholders have the same definition of what expensive is?
My real point is that the concept of value is multi-dimensional. Thus, in order to determine whether testing is expensive, we need to consider all the dimensions. I think there are five key dimensions, which we can call the five Ws:
- What in general makes testing expensive?
- When is it expensive?
- Where is it expensive?
- Who makes it expensive?
- Why do we even require testing in the first place?
I am no expert on how to run a profitable business and satisfy all the stakeholders involved financially, but I have to admit that, as a quality assurance professional, I am surprised to find that those five Ws are not asked. It is as if business schools only teach people how to be successful when things are going well, and not to assess and mitigate risks. Maybe it’s easier and cheaper just to say “sorry”. I suppose I shouldn’t complain because this is probably why people like me have a career in testing and quality assurance in the first place!
To get an idea of what I am ranting about, let’s look at a typical process that is generally ignored during planning, because it is very unlikely that there will be sufficient control over information to effectively measure the tasks performed. The process I am referring to is the defect (bug) life cycle. How much does it really cost to fix a bug found during acceptance testing, or in production?
[Diagram: the nine stages of a typical defect life cycle, with the typical tasks performed at each stage listed on the right]
Looking at the rough sample diagram above, we can see about nine possible stages in a bug life cycle. On the right are lists of typical tasks performed during each stage. If you measure the effort spent and the cost of the people involved, you might be surprised by the numbers. Assume 25 bugs out of a total of 250 need to be fixed by the end of a two-week period. Would you be confident releasing to production without measuring the current quality through testing?
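To make the arithmetic concrete, here is a rough cost model for such a life cycle. Every stage, hour figure and hourly rate below is a made-up placeholder for illustration only; substitute the stages and rates from your own defect process:

```python
# Hypothetical per-stage effort (hours) and hourly rates (currency units).
STAGES = {
    "report": (1.0, 50),   # tester writes up the defect
    "triage": (0.5, 80),   # lead/PM reproduces and prioritises it
    "fix":    (4.0, 70),   # developer repairs the code
    "review": (1.0, 70),   # peer review of the change
    "retest": (2.0, 50),   # tester verifies the fix, runs regression
    "deploy": (0.5, 90),   # release engineer ships it
}

def cost_per_bug(stages=STAGES):
    """Sum of (hours x rate) across every stage of the defect life cycle."""
    return sum(hours * rate for hours, rate in stages.values())

def cost_of_backlog(bug_count, stages=STAGES):
    """Total cost of working a backlog of `bug_count` defects."""
    return bug_count * cost_per_bug(stages)
```

With these placeholder numbers a single bug costs 585 units to work end to end, and the 25-bug fortnight from the example above comes to 14,625 – before you count retesting churn, context switching or reputational damage.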
We are often challenged to demonstrate quality, and struggle to do so because it may only become visible once the system reaches production. Over the course of development, developers become less likely to test all of the things that can go wrong; I think it is muscle memory from constantly focusing only on getting the code to work. This explains the theory behind the graph that appears in all test-related training: bugs become exponentially more expensive to fix the later they are found in the SDLC.
[Graph: the cost of fixing a bug rising exponentially through the phases of the SDLC]
“How much is that bug worth to the people that matter?”
The irony is that little attention is given to guaranteeing quality appropriate to the context of the system – until something goes wrong and it negatively affects someone. During crunch periods, there tends to be an increase in the total number of testers involved on the project. Sometimes testers with the wrong skill sets are pulled in just to get the numbers up and add more eyes to the system. But is that sustainable, effective, efficient, or repeatable? At times you do require the rare skills of a test specialist, or a tester with a broad range of skills, but the contracting fees won’t fit in your already limited budget.
Many times I run into situations where people were employed for their personality rather than for the skills needed to make the project a success. Add to that the cost of performance testing tools and environments. Security testing is not cheap either. There are design risks if globalisation is not considered, which can result in major rework. There is also the dreadful maintenance risk of keeping the system running for the next three to five years. And how much is red tape crippling the efficiency and effectiveness of testing in some businesses?
Remember too that some people using the product will exploit your system. If you are still not convinced then have a read on sites like CNET.com and search for the term “glitch”.
“Could it be an intentional blindness forced onto decision makers by society to ignore the negativity of asking “what if” questions?”
Obviously there are solutions to many of these challenges – at a price, of course. However, changing cultures and processes over a short period seldom goes according to plan. No matter how much money you throw at some problems, they cannot be solved in the desired timeframes.
I believe testing can become cheaper or at the very least become better. Bring the minds of the right people together from the beginning to ensure quality throughout the SDLC with the required levels of testing. Involve all the stakeholders, including the clients, to collaborate, set realistic goals and take responsibility for the quality.
Respect the role and responsibilities of each other in the team but also understand and accept the team’s capabilities.
Designated as the ‘automation specialist of choice’ by Old Mutual S.A., veteran software testing group DVT is setting its sights on the UK, eager to help new clients in a post‑Brexit world.
Now that 2017 is fully underway, Editor of TEST Magazine Cecilia Rehn caught up with Chris Wilkins, CEO, DTH and Bruce Zaayman, Director: DVT United Kingdom, to discuss how this South African powerhouse is poised to help UK businesses optimise automation this year.

CEO of Dynamic Technology Holdings

Director: DVT United Kingdom
DVT is well known as one of the largest, privately‑owned software testing groups in the southern hemisphere, but can you give us an introduction for our European audience?
Chris Wilkins: DVT started in Cape Town in 1999 and we have built up our group to a staff of 600 professional software developers, testers, business analysts, project managers and architects.
At heart we are a software development company, and over the last 10 years we’ve recognised that software testing is becoming more and more important, so we built up a very large and very competent testing team. This is made up of 200–250 testing professionals, which includes our Global Test Centre facility in Cape Town, one of the largest specialised testing facilities in the southern hemisphere.
Our clientele spans from large finance and insurance firms and media companies, down to smaller organisations such as Doddle.
Our focus in testing is automation; we believe that the world will slowly move towards automation, and we believe that outsourcing software testing and commoditising it, and making life easier and allowing enterprises to focus on the more specialised, and possibly more interesting, QA jobs is the way to go.
What are the main services that DVT provide?
Bruce Zaayman: We provide agile software development, testing, consulting and training.
We have also built our own test automation framework, which means our clients don’t have to pay any license fees for their testing projects. The main reason we developed the Java-based UTA‑H (Unified Test Automation – Hybrid) framework is that a lot of companies don’t want to spend the money on the big players – you know, the HP titles or the CA-type tools. For that reason it is based on Selenium WebDriver, saving costs as we’re not limited by a license for one individual machine.
If we need to run through a massive amount of work in a short amount of time we spin up a couple of VMs and we can run on double, triple, the amount of machines in order to reduce the time. So that’s a major selling point and I think that that’s something our clients look for.
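The fan-out Bruce describes can be sketched in a few lines. To be clear, this is not DVT’s UTA‑H framework, just a generic illustration of spreading test cases over parallel runners; in a real setup `run_one` would drive a remote Selenium WebDriver session on one of those VMs (for example through Selenium Grid):

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite(test_cases, run_one, workers=4):
    """Run `run_one` over every test case on up to `workers` parallel runners.

    `run_one` is any callable that takes a test case and returns its
    result; in a distributed rig it would open a browser session on a
    remote node. Results come back in the same order as the inputs.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_one, test_cases))
```

Doubling or tripling `workers` (backed by more machines) is what shrinks the wall-clock time of a large regression run, which is the selling point described above.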
CW: Everybody wants flexibility; everybody wants scalability. We, as a company, are pragmatic delivery specialists; we’re not trying to play in that big generic space. We’re not looking for these massive deals; we’re just saying ‘we can get the job done for you.’ The framework’s been built with that in mind, to get the job done, and it’s 80/20. Once the process is more or less 80% complete, the learning curve has been so dramatic that it makes that last, more challenging 20%, that much quicker and easier.
BZ: We also use other tools for automation frameworks. We are agnostic: if a client has a tool, we are more than happy to augment that team with our service offerings and our skills. To us, an automation specialist is not just a functional software tester with some tech background; we use people with a Java-development background, which is valuable. We run test automation from a development point of view, and find that this flexible and scalable approach works very well.
What can a South African venture offer to the UK/European market?
CW: Post‑Brexit, we think Britain is looking at being more of a global citizen again, and we believe South Africa is a culturally and economically sound partner.
In terms of IT outsourcing, we believe the Indian model, although effective for some companies, is not specialised or boutique enough for most. And when you consider the euro’s recent increase, other Eastern European options have become more costly. In contrast, the South African rand is extremely competitive, which means there is a strong case to say that partnering with a Cape Town‑based firm can be a strategic cost-reduction and cost-mitigation exercise as well.
However, we consider our strengths to be based on more than economics. When it comes to cultural familiarity, there’s a strong link between Britain and South Africa. We’re part of the same Commonwealth, share a common language, and the same time zone so you can pick up the phone and talk to someone straight away. A lot of Brits travel to and from South Africa, and a lot of them have families there as well. So there’s a strong sense of it being part of the British framework.
And of course there are loads of South Africans working in London and in the UK. These cultural links are so important for IT outsourcing in particular, when miscommunication could have huge ramifications for a project. South Africans' first language is English, they are educated in a system that reflects the British educational system and our best practice, the way we do things, the way we work, the methodologies, the jargon, they are all exactly the same.
On the whole I think communication is as easy as it can get. We are a much easier country to work with than any of the other primary sources of offshore work at the moment in Eastern Europe and India.
A key part of DVT’s business is your Global Testing Centre. How does this support your offerings and clients?
CW: The Global Testing Centre is a natural extension of our testing service. It’s all about having your testing carried out remotely, so you don’t have to hold onto the headache of staff, you don’t have to manage your peaks and troughs as large projects come and go in quick succession. Our clients don’t have to worry about finding very specialised skills for 10 hours a month; we’ll find them internally.
So the logistical benefits are enormous, it just takes away the nuisance.
We will also make sure that the bridge between the clients and the test centre is built and that it is maintained, and that there is just the right flow of communication that goes on between them. Every client is on a different maturity curve when it comes to software testing, and we provide a tailored, bespoke service.
Because DVT’s focus and expertise has been on automation, we can consult and advise on how to tackle the more emotional aspects of automation with your staff, how to take them down that road, how to get them onto that first rung of the ladder, and then how to continually invest so that over time your automation gets faster and faster.
We ensure clients can get product to market faster and, most importantly, that expensive software developers aren’t held up waiting for testing to finish.
Our global test centre can facilitate all of that.
BZ: The GTC is structured into pods of 30‑50 odd people, run by senior technical managers. This structure ensures that there’s always senior technical knowledge onsite, in close contact. All resources allocated to clients have senior oversight. South Africans, in general, are very positive to working with international clients and forging global business links. So we ensure we have talented staff onsite with the technical knowhow to support clients, and the enthusiasm to go above and beyond.
CW: Enterprise firms like the GTC because we have the size and the scale of a larger organisation structure and start‑ups like us because we have agility in that centre and we can move around quickly. Also, the really good news is that we always have 10 to 15 people available at short notice. We would encourage any new client to work with us on an initial proof of concept, which we can often turn around in a couple of days or weeks. This would be an investment by DVT into a client, to demonstrate the way we work, the kind of experience they might get if they signed us up as a more strategic partner.
You’ve recently partnered with British TSG, what does this partnership look like?
CW: We were initially introduced through mutual acquaintances 18 months ago. This partnership makes sense: TSG went through an MBO last year, so with new ownership and invigorated management, they are tackling the market with fresh eyes. They’re British owned, British managed with blue chip clients.
As specialists in the UK market and with high‑end consultative skills, TSG really complements our proposition as an outsourcing destination. Partnering with TSG allows us close proximity with the client and senior boots on the ground, whilst we give TSG scale, flexibility, and dynamism, all in the same language and same time zone.
Working closely together from TSG’s City offices, we serve as the preferred offshore partner. I think every British software vendor or testing specialist needs this flexibility for their clients. To stay competitive, it’s an absolute necessity.
TSG is our partner of choice. We don’t want to have to have a shotgun approach to partnerships. We’d rather have just one very good partner, and of course, we want to accelerate this business together now and win new UK clients.
What are your thoughts on trends in outsourcing for 2017 and beyond?
CW: It is clear that organisations will need to invest in various different avenues to tackle testing challenges, including cost‑effective outsourced partners and a serious focus on automation.
What it boils down to is that we need fewer and fewer people to do more and more testing work. With the legacy that surrounds an enterprise today, there’s an enormous amount of software – lines and lines of code. I think automation is critical; otherwise costs, time and effort will balloon out of proportion, and you will not be able to keep up with more agile competition.
We’re offering a specialised solution; we’re not trying to do mass‑produced stuff. South African outsourced staff have opinions, they’re not just going to sit and do as they’re told and say ‘yes’. They will question and talk. So I think we could be a very refreshing option for people wanting to outsource.
We’re a good company to work with if you want to outsource gradually – first with a manual-oriented approach, then transitioning across into a more automated environment. So we tick the boxes on both sides, and over the 10 years of developing our QA competency, being software development specialists, we’ve introduced all the learnings, techniques and best practice. Not best practice in just a global, generic sense, but best practice in what we feel is the right way to test software.
Another key concern for organisations is how to cope when you need some specialised skill or opinion or consulting for a just a few hours a week or a month? If you’re not outsourcing, you’ve got to go find that skill somewhere if you don’t have it in‑house. We’ve got a big test team, an extended team through the company as well, so we’re more than likely to find it internally. Increasingly, organisations are finding out that this can be a big advantage. Immediate access to specialised knowledge and insight can clear log jams very quickly.
You’ve been in the industry for a long time, how do you think testing and QA is changing?
CW: I don’t think it’s changing fast enough. I think the extraordinarily high amount of software code out there means that regression testing is, or should be, one of the primary focuses for the enterprise – not just for quality, but also for speeding up the entire delivery lifecycle.
Automation testing products are reaching a better level of maturity. We’re seeing for the first time in the last few years that these products really can do the job, which means that testing automation will start coming into its own in the next five years.
So we believe in automation and offshoring, but with a more boutique flavour; not mass production ‘throw 20 more people at the project’ ideology that’s been adopted by other jurisdictions. This is a tired tactic, and we need a sharper, more adaptable approach now. And of course Brexit is going to introduce its own peak of regression testing where small code changes are going to have to be made to accommodate compliance for whatever Brexit regulations are agreed upon.
So, where is it going? There’s more formality around it. I think everyone agrees that getting an expensive Java developer to test is crazy. You actually need to make sure you have a separate team with a separate responsibility, with people who are trained to test, not to code. There’s always going to be some tension in the collaboration between those two teams. Software developers write their own code, so they’re inclined to test it and say it’s okay quite quickly.
And it’s not only the code; it’s the UX as well, which is also becoming more important as the end users’ expectations change.
We’re looking forward to showing the UK what we’ve got, and helping this market navigate post‑Brexit uncertainty with a strong, neighbourly partner!
For more information about DVT please visit: www.dvt.co.za
Source: This article was published in the March 2017 edition of TEST Magazine.