Testers are the first users of new software, and they are essential in making it usable. In the end, everyone has the same goal of delivering the best product they can, so letting testers explore and discover new bugs is always good — the more bugs found, the better! Encouraging exploratory testing at the beginning stages of the software development lifecycle shifts bug-finding activities earlier, when they are easier and cheaper to fix.
Of course, many of the bugs I find aren’t related to functional requirements. Performance issues are a common example. In most cases, requirements don’t say how long something should take, but it’s easy for a tester to tell when something isn’t right. If I get impatient waiting for our software, our customers will, too. And wouldn’t you rather hear that from me when we can still fix it, rather than later from our customers?
It’s 8:30 a.m., and our product manager walks into our office and asks, “Where is the project lead?”
“He just stepped out,” the principal developer says. “How can we help you?”
“What is the status of the user story for migrating the database from MySQL to MariaDB?”
“We’re running behind because some key elements of the MySQL primary tables aren’t easy to migrate to MariaDB,” the principal developer answers.
The product manager’s tone of voice immediately becomes sharper. “How far behind? Days? Weeks?”
Our principal developer answers truthfully: “At least four more days.”
There is silence in the room. Finally, the product manager says, “Can you tell the project lead to come by my office? I need to talk to him.” He turns around and leaves.
It’s clear the product manager is not happy with our user story progress, and all the developers and testers now feel stressed out.
During our planning meeting later that day, the team considers all possible paths: the happy path, the unhappy path, and corner and edge cases. Afterward, I am sitting in my cubicle testing the user story, and even though most of the tasks are still in progress, I decide to do some negative testing. Driven by curiosity, I start to navigate to areas not related to the database changes, and I find a critical defect.
At this point, the project lead gets back from the product manager’s office, and he doesn’t look happy. I go over and inform him that I found a critical bug in the login page while performing negative testing.
“You are testing something other than the user story?” he replies. “Please don’t try funny, negative stuff just to break the application. We are running behind, and I don’t think a normal user will run into that defect.”
“Okay,” I say, “I will file the bug and move on.”
Privately, though, I wonder: Who or what is a “normal user”?
The misconception that a software quality engineer breaks the product still exists. Testers themselves will exclaim, “See? I broke the software — it breaks when you click here!”
Of course, they didn’t really do that. Software does not break; it simply does what it has been designed and coded to do, for better or for worse.
Speaking of design, another common myth is that all bugs are coding errors and programming mishaps when, in fact, a majority are introduced during requirements and design. Software quality engineers investigate systems: they look at what the system does, discover and report where and how the software is broken, identify when the system will fail under load or stress, and poke around as any user would.
So it’s a tester’s obligation to go beyond the positive happy path and reveal the not-so-happy.
Positive testing is clicking in the right place at the right time. It’s unlikely that a user will do only that. Users click what they want, when they want. We can’t automate a user to do the same thing all the time in the same way, so we can’t rely on our automated testing to cover human interaction.
That’s why I don’t like the term negative testing — it’s not negative!
I prefer “real-world testing.” Every user uses the product in a unique way, and we can’t compare users to each other or expect them to navigate around the application using the same path. Users don’t follow the happy path. Users don’t follow directions or, honestly, usually even read the documentation. Users challenge the product.
So, as testers, it’s crucial for us to challenge the product, too. We must vary our tests to find out how the product responds. Excellent testing is not limited to showing that the product can produce an expected result; it means learning what the product does when users do something no one predicted.
Our duty as software quality engineers is to act and think like real users. We need to test outside our test plan and go off script. Developers and stakeholders might call that negative testing, but the result is a better product, and that’s all positive.
Any software risks not performing as expected, so it’s crucial to validate, at the very minimum, that the software won’t crash when someone logs in. I wasn’t performing negative testing when I found the bug in the login page; I was investigating the software.
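To make that idea concrete, here is a minimal sketch in Python of what “investigating” a login flow can look like. The `validate_login` function and its rules are entirely hypothetical, invented for this illustration; the point is that the checks deliberately go beyond the happy path, and that no input, however strange, should ever crash the login page.

```python
# Hypothetical login validator, invented for illustration only.
# A real product would check credentials against a backend; here the
# point is the behavior under inputs no one "predicted."
def validate_login(username, password):
    """Return 'ok', 'invalid', or 'error' -- never raise an exception."""
    try:
        if not isinstance(username, str) or not isinstance(password, str):
            return "error"
        if not username.strip() or not password:
            return "invalid"
        if len(username) > 64 or len(password) > 128:
            return "invalid"
        # Placeholder credential check (hypothetical test account):
        return "ok" if (username, password) == ("alice", "s3cret") else "invalid"
    except Exception:
        return "error"  # an unexpected input must not take the page down

# The happy path...
assert validate_login("alice", "s3cret") == "ok"

# ...plus inputs a "normal user" supposedly never sends:
odd_inputs = ["", "   ", "a" * 10_000, "'; DROP TABLE users; --", "💥\x00"]
results = [validate_login(value, value) for value in odd_inputs]

# Investigating means confirming the product degrades gracefully:
assert all(r in {"ok", "invalid", "error"} for r in results)
```

None of this is “negative”: each case documents how the product responds when a real user does something unscripted, which is exactly the information developers need before a customer finds it.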
So it’s up to me to communicate this in a positive way. Our words have a high impact on how others perceive and understand our job.
When I told my project lead that I had found a bug while performing negative testing, it’s understandable that his reaction was defensive. If I had instead said, “While I was testing the login page, I discovered a critical bug,” his reaction probably would have been, “Go and file the bug, and we will look at it later.”
So I think we should stop using positive vs. negative terminology. Instead, let’s talk about “discovery” and “investigation.” It’s less confusing, more explicit, and avoids the potential problem of developers and managers saying something cringeworthy like, “Oh, you’re just being negative.”
Shifting my vocabulary has helped me improve my communication with stakeholders and developers. I can now see the conversation from their side, and I’ve been able to talk to developers without friction. The team sees my work as positively improving the product instead of negatively trying to break the software.
Try changing your vocabulary from “positive” and “negative” toward more descriptive verbs that explain your exploration. The team will be more receptive in conversations, and they might even value your work more.
Jessica is a Software Quality Assurance Engineer at Parasoft, where she enjoys testing new and preexisting features to satisfy users.