Chronicles Of A Software Tester Vol.1


In today’s world, we rely heavily on software in countless daily interactions. Some of those interactions are easy to identify, others not so much. One (hopefully) common factor in all software we use on a daily basis is reliability. By software I am not only talking about our social apps, games and other computer applications, but also about the integrated software we take for granted: from the airbags in our cars, to the highly complicated and automated avionics in the planes we board, to the machines building the medical implants patients use. All rely on software that simply cannot fail. There is no room for error. There are no acceptable failures.

How does one ensure such a feat? Can you imagine building something with zero or near-zero errors that millions of people will use and possibly rely on with their lives? It surely is a mind-boggling task. The good news is that it is possible. Did you ever stop to think about how the software in an ATM functions? What kind of testing did such software go through in order to be safely rolled out to the public? You wouldn’t be exactly thrilled if your bank transferred your funds to someone else by mistake (unfortunately, it does happen), or if your ATM decided that your PIN was wrong for no reason. Take a moment to think about the complicated software running on your computer to give you the luxury of reading this article. How is this software tested? How do tech giants like Apple and Microsoft decide that their software is ready for billions of people to use? What is an acceptable error variance, and what is not? When does one label a piece of software as “buggy”?

Working in the software industry, I am constantly humbled and amazed by the talents of software developers. So many different technologies, so many lines of code, functions, conditions, statements, classes, objects and interrelations that would drive anyone nuts. But there is a hidden champion behind each successful piece of software. An unsung hero that many do not know of. Introducing: the software tester. He or she is burdened with the task of testing software to make sure it conforms to the specifications it must meet before being launched and used by the intended public. This is no easy task. Assuring quality has to be one of the most demanding and mentally challenging aspects of developing and maintaining software. How does that even work? Facebook’s website is said to boast over 60 million lines of code. How do you even test that? And if Facebook has millions of lines of code, how many would the computer software of a Boeing commercial jet have? They say the devil is in the details; this has to be one of the mottos of software testing and quality assurance.

As with most complicated tasks, the one-million (or, in Facebook’s case, 60-million) mile journey starts with the first step. There are hundreds, if not thousands, of standards, techniques and strategies for testing and assuring quality in the digital world. All of them share one common ground: a structured process. It would be suicidal to attempt to test a complicated piece of software in a random and unstructured manner. Throughout the years, the practice of software testing has evolved to become an integrated part of development itself. Today, writing good code goes hand-in-hand with testing.

This article is intended to be the first in a series of volumes discussing what software testing is, and how it evolved, leading up to the latest and greatest in this specific and largely underrated practice.

To give you a bird’s-eye view of testing, let’s briefly explore its fundamental approaches, namely black-box, white-box and grey-box testing. Black-box testing is a process by which a piece of code is tested without actually going into the code itself. The intended functions are tested from the outside, making sure that the functions the code is intended to serve actually work. The name is derived from the notion of the code sitting inside a black, inaccessible box. White-box testing, on the other hand, is different. It relies on technical, code-driven testing. Not only are the functions tested to make sure they deliver the needed results, but the code itself is also examined for correctness, performance and structure. Grey-box testing is, as the name suggests, a combination of the two. It relies on testing certain aspects of the code without diving too deep into the nuts and bolts of the software, while also making sure the functions deliver the needed results. Each approach is used in different contexts, and it is not uncommon to see all three being used to test the same software by different teams.
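To make the distinction concrete, here is a minimal sketch in Python. The function and tests below are invented purely for illustration: a black-box test only compares inputs to expected outputs, while a white-box test uses knowledge of the code’s branches to choose inputs that exercise every path.

```python
# Hypothetical function under test (not from any real project).
def classify_temperature(celsius: float) -> str:
    """Return a label for a temperature reading."""
    if celsius < 0:
        return "freezing"
    elif celsius < 25:
        return "moderate"
    return "hot"

# Black-box: check inputs against expected outputs,
# without looking at how the function is written.
def test_black_box():
    assert classify_temperature(30) == "hot"
    assert classify_temperature(10) == "moderate"

# White-box: knowing the code has three branches, deliberately
# pick inputs that hit every branch, including the boundaries.
def test_white_box_branches():
    assert classify_temperature(-1) == "freezing"    # first branch
    assert classify_temperature(0) == "moderate"     # boundary case
    assert classify_temperature(24.9) == "moderate"  # second branch
    assert classify_temperature(25) == "hot"         # third branch
```

A grey-box tester might know only that the boundaries 0 and 25 exist (from a specification, say) and target them deliberately, without ever reading the code itself.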

Now that you have a rough idea of the overarching types of testing, let’s talk about bugs. Bugs are a software tester’s mission in life. Bugs are flaws or defects that render part or all of a piece of software faulty or erroneous. Bugs are the nemesis of a software developer. Bugs (as with actual physical bugs) are unwanted, pesky, annoying little (or huge) mistakes or problems that arise for a number of reasons: from bad planning, to wrong requirements, to plain and simple mistakes made during development. It is the duty of a software tester (as well as the developer in various cases) to identify and report those bugs so they can be resolved.
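To illustrate just how small a bug can be, here is a hypothetical example of a classic off-by-one error, the kind of defect a tester’s report would flag:

```python
# Hypothetical buggy function: it is meant to sum the numbers
# 1..n, but range(1, n) stops at n - 1, so the last number
# is never added.
def sum_first_n(n: int) -> int:
    total = 0
    for i in range(1, n):  # BUG: should be range(1, n + 1)
        total += i
    return total

# A simple check exposes the defect:
print(sum_first_n(3))  # prints 3, but 1 + 2 + 3 should be 6
```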

How does a software tester even start to identify bugs? What tools do they use? How do they structure their work? In a nutshell, a software tester uses a series of so-called “test cases”. Test cases are detailed descriptions of how each function in the software should work and the steps needed to exercise it, along with the expected outcome. Those test cases are grouped into test sets: categories unifying similar or related test cases. Then come pre-conditions: conditions that must hold before a test case or set can be run (example: you need a debit, credit or ATM card to start using a certain type of ATM). Test sets are run, or executed, and a report is filed with the results, to be sent to the development team for review and action.
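As a rough sketch of how this could look in practice, here is one possible way to represent test cases, pre-conditions and test sets as data. The field names are illustrative and not taken from any specific test-management tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    preconditions: list[str]  # what must be true before the test runs
    steps: list[str]          # actions the tester performs
    expected_result: str      # what the software should do

@dataclass
class TestSet:
    category: str  # groups similar or related test cases
    cases: list[TestCase] = field(default_factory=list)

withdrawal = TestCase(
    name="Withdraw cash with a valid card",
    preconditions=["Tester holds a valid debit card",
                   "Account has sufficient funds"],
    steps=["Insert card", "Enter correct PIN", "Request 50 EUR"],
    expected_result="Machine dispenses 50 EUR and prints a receipt",
)

atm_tests = TestSet(category="ATM withdrawals", cases=[withdrawal])
```

A test run then walks through each set, executes the steps, compares the actual outcome to the expected result, and records a pass or fail in the report.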

A logical question comes to mind: when would a software tester execute or run test cases? Would they do this in parallel with development? Or would they rather start after the full software is done? Do they wait until the cake (the software) is baked before venturing to give it a taste? Or do they take part in deciding what ingredients will be used, and in what quantities, to ensure the best bug-free quality?

This question is part of a bigger struggle software houses face: the dreaded and ever-so-debated topic of the software development lifecycle (SDLC). While this topic will be addressed in greater detail in a separate article, a brief introduction is needed here to relate it to software testing and how testing is factored into the greater scheme of software development and release. Back in the ’90s and up until the early 2000s, the most common SDLC was the so-called “Waterfall” (or, in some cases, “V”) model. This entails treating planning, development and testing as independent blocks of processes in chronological order. A software development house plans the development cycle, including requirements and analysis. Then comes the development block, followed by testing, bug fixing and release. Makes sense, right? Wrong. This is typically a recipe for endless ping-pong between testers, developers and the client (or end user). The reason is that testing comes only after the full software (the cake) is ready. This leaves little to no room for pre-emptive resolution. Instead of being proactive, the organisation developing the software retreats to firefighting and patching to fix whatever was reported by the software testers. This is a costly and, in many cases, inefficient process.

In comes the agile SDLC. This notion came to life to solve the above-mentioned inefficiencies. While it is no walk in the park, it does overcome various hurdles that the Waterfall or V model fails to address, namely by integrating testing and planning into each module of development. Instead of completing software development as one big chunk, the work is broken down into modules covering phases or functions of the full software, and test cases and sets are written and executed in parallel with development. This way, each module developed is immediately tested and fixed, while new modules are being developed and tested in parallel. This enables the client or end user to see and try out bits and pieces of the software, as opposed to waiting until the very end (sometimes when it is too late) to see the final product.
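As a rough sketch of what testing in parallel with development looks like (the module and its functions are hypothetical), each small module ships together with its own tests in the same iteration, so defects surface immediately instead of at the very end:

```python
# Iteration 1 delivers one small module together with its tests,
# rather than waiting for the whole application to be finished.

# module: pin_validation.py (hypothetical)
def is_valid_pin(pin: str) -> bool:
    """A valid PIN is exactly four digits."""
    return len(pin) == 4 and pin.isdigit()

# test_pin_validation.py -- written and executed in the same
# iteration, e.g. with pytest, before iteration 2 begins.
def test_accepts_four_digits():
    assert is_valid_pin("1234")

def test_rejects_short_or_non_numeric():
    assert not is_valid_pin("123")
    assert not is_valid_pin("12a4")
```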

There are tens, if not hundreds, of hybrid and mixed SDLCs between Waterfall and Agile. In each of them, software testing takes a very different shape and form. However, the aim remains the same: making sure that the software developed and released is reliable.

The next volume of this series will explore the ever-growing literature and opinions on what is (if anything) an acceptable error variance (bugs) in software, and how the public perceives bugs. What would you consider reliable software? What was the most annoying let-down you ever faced because of a software bug or failure? Do you believe the software industry has reached maturity in terms of total quality management? Or is there room for improvement? Let us know your opinion in the comments section or get in touch with me directly at yasser@b-c-studio.com

At breadcrumbs studio, we always aspire to continuously improve our methodologies and techniques, to better serve our partners and clients, while offering a uniquely attractive working environment. Only by achieving this can we claim to be a successful software development studio.

Yasser AbdelGawad

Yasser is a product developer & growth hacker, with a sincere passion for the outdoors. Managing digital products since 2009, he has experience in diverse industries ranging from browser-based games to adventure travel portals. When he is not at breadcrumbs studio, he can be found diving, kitesurfing (at least trying to) or camping in the desert.
