Is there a middle ground when it comes to the developer-tester scenario?
When Internet software giant Google released a seminal book on how it tests its own software – aptly titled "How Google Tests Software" – it set the cat among the proverbial pigeons of software developers and testers alike.
Until then, the two disciplines coexisted rather peacefully side-by-side, each knowing its place in the software development universe, and each quite comfortable with its own rules and responsibilities.
But, then (Google book co-author) James Whittaker blogged about his book thus: "Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Better yet, plan the tests while you code or even before. Test isn't a separate practice; it's part and parcel of the development process itself."
Picture thousands of developers and testers around the world exclaiming a collective: "Say what!?"
Testing 1, 2, 3
The truth is this testing conundrum has been brewing for a while, especially in SA, where both development and testing skills are in extremely short supply to begin with. So asking developers to suddenly take on a tester's mindset (and skill set), and vice versa, seems a step too far.
Much of this developer-tester thinking stems from a concerted effort by many large companies – not only software development companies and not only Google – to automate as much of the testing process as possible. To achieve this means testing while developing (as opposed to developing then testing then developing some more), so it follows that developers themselves would be doing most of the testing.
The net result of this not-so-insignificant strategy shift was a raft of big companies questioning the value they get from manual testing – all well and good, until they realise how difficult it is to find people with the skills to do both.
Again, this is particularly evident in SA (and other developing economies) where the skills base is already low. Local testers tend to be graduates and professionals who either didn't make the grade as developers, or didn't fancy development in the first place. At the same time, SA's developers tend to regard testing as something someone else does once they're done writing their code.
Scrambled or fried?
The mistake I see many companies make in trying to unscramble this developer-tester omelette is shifting test automation responsibilities onto manual testers. In other words: if developers who can test are impossible to find, they'll get their manual testers to add more value by automating.
This doesn't work. For all the romance of the perfect developer-tester ideal, manual testing remains a highly skilled and focused discipline, which will continue to play an important role in the software development life cycle, even if there is suddenly a glut of developer-tester skills flooding the market.
"Manual testing remains a highly skilled and focused discipline."
The same goes for test automation. An important factor often lost in the whole developer-tester debate is the value of regression testing. Because so many local companies outsource their development to offshore development centres in places like India and China, they're often left to deal with new code on a regular basis, which may or may not negatively impact their existing code.
This is where the real value of regression testing comes in: ensuring new code doesn't break old code. That said, companies can't expect their manual testers to work back through the legacy code every time to catch the features broken by newly minted code.
Even if they could, with new code being updated almost daily in some cases – especially in the mobile application space – and companies increasingly favouring the agile model of frequent incremental updates, manual regression testing becomes as costly as it is impractical in most cases.
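To make the idea concrete, here is a minimal sketch of what an automated regression check looks like in practice. The pricing function and its cases are invented purely for illustration, not taken from any real codebase:

```python
# Minimal sketch of an automated regression suite (plain Python, no framework).
# apply_discount() stands in for "old code" whose behaviour must not change
# when new code lands; the name and cases are hypothetical examples.

def apply_discount(price, percent):
    """Existing, trusted pricing logic."""
    return round(price * (1 - percent / 100), 2)

# Each case pins down behaviour the current release already exhibits.
REGRESSION_CASES = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((20.0, 50), 10.0),
]

def run_regression():
    """Re-run after every new drop of code; any mismatch flags a break."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = apply_discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures
```

Because the suite is just code, it can run automatically on every incoming drop of outsourced code, which is exactly the scenario where daily manual regression passes become impractical.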
The answer lies somewhere in the middle. As always, it is beneficial to look for the best possible balance between what's possible today, what works, and what can be achieved in the shortest amount of time with the highest quality.
For example, it makes more sense for companies that outsource their development to also outsource their testing – but not all their testing, because that would be prohibitively expensive.
In environments like banking and insurance, where any breaks in existing software can lead to substantial downtime and lost revenue, the more fluent the testing, the lower the risk and cost. So, rather outsource the part of the company's testing that also carries the highest risk – regression testing – and stick with the tried, tested and trusted manual testing processes already in place.
That way, companies are not compromising on quality while reducing risk and managing costs. In the meantime, they can work towards a more modern and robust testing culture through the development and advancement of their existing skills and new hires, which benefits everyone in the long run.
To stay relevant, software testers must focus on what makes manual testing essential to the software being tested: humanisation.
By Mario Matthee, COO at DVT Global Testing Centre
Software testing has evolved, along with the advent of highly efficient and progressive software development methodologies like agile and its derivatives.
Gone are the days when a software developer would 'test' their own code and send it into production, where it would be subjected to further testing. Iterative development has eclipsed this antiquated form of manual software testing with automated alternatives, and the role of the software tester itself is shifting, with business analysts increasingly taking on testing responsibilities for their own projects.
Despite these fractures, I'm not calling 'time of death' on manual testing anytime soon. In fact, manual testers will play an increasingly important – albeit far more specialised – role as software development fragments into specific niches, from enterprise-scale mainframe to consumer-grade mobility.
The advances in test automation have taken the spotlight off the bare-knuckles world of manual testing, but automating for the sake of it can be counterproductive and costly. While automation is ideally suited to the fast-paced world of agile development, and is increasingly critical for regression testing, it falls short in two scenarios: new builds and human experience.
It can be prohibitively expensive to design test automation scripts for new products that are still in development – and therefore constantly changing. Keeping automation scripts current could end up costing more than the software is worth, so the only way to get the application stable enough for automation is through manual testing.
Secondly – and particularly in the mobile space, where new applications are a dime a dozen these days – there's no substitute for human experience when it comes to usability testing. Sure, once an app has been out in the wild long enough, it can be stated fairly confidently that test automation will keep its future iterations fresh and bug-free, but by that time most of the usability issues will have been ironed out, and software testers are testing for stability and legacy compatibility rather than usability.
Hammer blow
But for new apps, or apps that have been completely redesigned, there are no automation scripts I know of that can see the 'bigger picture' or predict with any degree of certainty how real humans will react to them. For business-critical applications, leaving usability decisions to a script is like polishing glass with a hammer – it can be done for a while, but ultimately, it's going to crack.
Another aspect of manual testing that's difficult to automate is field testing. This is true for many mobile applications, but more so with the growing popularity of app-driven wearables like watches and fitness devices. Stability is one thing, but automation can't possibly account for the diversity of human demographics these devices are designed for.
This brings me to an interesting crossroads. Manual testing, at least in the traditional sense, is clearly in decline, automation is in the ascendancy, and the lines between developers, project owners and business analysts are blurring further.
Automation is very much a work in progress. It's ideal for simple applications, or in cases where new features are added to older applications that have already been tested to death. Manual testing, on the other hand, is the old curmudgeon in the room; it's been there, done that, and has all the experience, but is rapidly being replaced by the younger, fitter, better-looking model.
As such, there's only one clear path for manual testers to travel to avoid extinction: specialisation. By that, I don't mean taking up abstract roles limited to exotic devices or industries, but rather focusing on what makes manual testing essential to the software being tested: humanisation.
Handmade
The human touch is synonymous with quality, and quality assurance ultimately requires the human touch. Automated tests are just that: automated; they don't necessarily react to a design feature or usability cue as a human would. A manual tester uses the software as it will work on launch, and therefore provides a real-time view of any human issues an automation script might miss. Flaws that are only triggered by organic human use are far more likely to be found by manual testers.
Manual testing is the old curmudgeon in the room.
Of course, this is nothing new; history has thrown up hundreds of examples of traditional roles that were overtaken by progress, only to resurface in unexpected ways and in places the people who replaced them least expected. The shoemaker was replaced by the factory, but people don't send their shoes back to the factory when the stitching comes loose, and there's a reason the world's most coveted shoes are still lovingly made by hand.
Manual testing is also far more flexible than it is given credit for. When a developer (or client) has one of those 'lightbulb moments' that fundamentally changes the course of an application or project, ideally the change should take place (and be tested) immediately. Doing so without completely retooling the automation scripts and processes built for the previous iteration of the software is almost impossible without significant cost and time delays. Not so with manual testing: just test and see the results right there, without delay.
Despite the benefits, manual testers are still an endangered species. Modern training and courseware tends to steer testing into the developer's domain, and manual testing is often considered an afterthought or consequence of inadequate development skills. More resources are also funnelled into sophisticated test automation, with major advances in artificial intelligence set to drive the wedge between automation and manual testing even wider.
For manual testing to survive these changes, it needs to adapt quickly and show its value where it matters most: in the hands of everyday users.
The more people depend on their smart devices to think and act for them, the more they need real people to build software applications in their own image. Likewise, the more they rely on autopilot to manage every aspect of their lives, the more manual testers are needed to ensure they're heading in the right direction.
Many companies have the misperception that an app is an essential part of any attempt to enter the mobile space.
By Mario Matthee, Chief Operating Officer of the DVT Global Testing Centre.
The mobile software testing market is experiencing something of a Jekyll and Hyde complex at the moment.
On the one hand, mobile device growth is continuing apace, with predictions that mobile users will overtake PC and laptop users for most business and web-related tasks having long since been realised. On the other, the app boom seems to be well and truly over, with recent figures in the US showing users are downloading an average of zero apps per month.
This isn’t difficult to explain: the mobile market, which began its upward trajectory more than eight years ago, has reached saturation, and most users are now familiar with the apps they need and use every day. There are some exceptions – like Snapchat and Uber – that defy the trend and are still growing at a phenomenal rate, but unless you’re very good or very lucky, it’s going to be difficult to get your new app noticed and downloaded among the crowd.
How does this affect mobile testing, you ask? In two fundamental ways. First, the drop-off in new app development means companies have a decision to make when it comes to reaching out to their customers through mobile platforms. Apps are no longer the first step in creating a mobile presence; for many companies a responsive mobi (mobile-oriented) site makes more sense.
Secondly, device selection is vital, and increasingly so. The rapid growth and maturity of mobile devices in general, and smartphones in particular, has seen the market ultimately settle on two major platforms – iOS and Android. Smaller platforms such as Windows Mobile and BlackBerry are shrinking and even (in the case of BlackBerry) migrating their users to various flavours of Android.
Because of this polarity, and the loyalty of most users to one platform or the other, developing and testing native apps is not always the smart choice, especially for newcomers to the mobile space. But while testing on only a handful of devices is inadequate, testing on a very large number of devices is often prohibitively expensive.
And so the starting point for any conversation on mobility and mobile testing should always be a company’s digital strategy. Unfortunately, most companies I speak with today don’t have a fully formed digital strategy, and those that do are half-cooked or based on the perception that an app is part and parcel of any attempt to enter the mobile space. The truth is that a well-built, responsive and intuitive mobile website is almost as important – if not more so – and can also perform most if not all the functions of an app, depending on the type of business it’s used for.
Deciding between apps, websites, or a combination of the two is just one of the challenges. A comprehensive digital strategy also needs to cover factors like device management, device types, usability testing and automation.
From a testing perspective, device management is critical because mobile devices are more susceptible to damage, loss and theft than almost any other device type. It may seem inconsequential, but the cost of the devices themselves is high, and the risk of valuable IP taking a walk at a critical development stage is very real.
The issue of device types is important regardless of the software you’re testing, be it an app, a mobile site or a desktop site on mobile devices. Even a closed platform like iOS comes with the challenge of users with previous generations of iPhone and iPad, and multiple iterations of previous generations as well.
A modern, responsive mobi site or app might light up the screen of the latest iPhone, but could bring older iPhones running earlier versions of iOS to a standstill. And iOS is fairly straightforward compared to the permutations of the hundreds or thousands of Android devices from dozens of different manufacturers on the market today.
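To see how quickly those permutations add up, here is a minimal sketch of enumerating a device/OS coverage matrix explicitly, so coverage decisions are deliberate rather than ad hoc. Every device name and OS version below is an illustrative example, not a recommended coverage list:

```python
# Hypothetical device/OS coverage matrix; all names are examples only.
DEVICE_OS = {
    "iPhone 6": ["iOS 9", "iOS 10"],
    "iPhone 7": ["iOS 10"],
    "Galaxy S7": ["Android 6.0", "Android 7.0"],
    "Pixel": ["Android 7.1"],
}

def expand_matrix(device_os):
    """List every (device, OS version) pair a test run must cover."""
    return [(device, os_version)
            for device, versions in device_os.items()
            for os_version in versions]

# Even this tiny matrix yields six configurations to test;
# real-world Android fragmentation multiplies that dramatically.
```

Writing the matrix down also makes the trade-off discussed above explicit: every row added is more coverage, but also more devices to buy, manage and test on.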
Once device management and device types have been narrowed down to a manageable grouping, usability testing on those devices is the third major consideration. This is where mobile testing differs most from other forms of software testing, because it’s necessarily hands-on. Yes, developers can test their mobile apps or websites on simulators and the odd physical device, but neither option comes anywhere close to sufficient for a proper functional test of new software (or a new version of it).
It’s almost impossible to remotely test mobile software, not because the technology is lacking, but because usability is such a big factor in the success or otherwise of a mobile app or website. And when it comes to usability, that means testing by experienced human operators, not machines.
Which brings me to the last point, automation. Even if it were practical to automate some parts of the mobile testing process, the rapid rate of change in both devices and apps (and websites) means by the time you’ve invested in solid testing scripts for your software, a new version rolls out, your users have upgraded to new devices, and a new OS has been released. That’s not to say automation won’t play an important role in your mobility strategy, but you’ll probably find manual testing plays a much bigger one.
By now you’re probably getting the sense that jumping into the mobile space – or growing your current mobile presence – is a much bigger ask than you thought, and you’d be right. Navigating the mobile testing minefield can be a nightmare if you don’t have a solid, thought-out digital strategy that informs every decision you make based on the value of the investment to the business.
A good place to start would be finding a like-minded partner with the experience to guide you through the creation or refinement of your digital strategy, before giving you access to the resources you’ll need to make it the success you need it to be.
This article was published exclusively for ITWeb on 13 September 2016.