The Testing Show: The Deeper Things of Testing, Part 1

November 23, 2022
/
Transcript

In this first of a two-part series, Matthew Heusser and Michael Larsen welcome Perze Ababa, Jon Bach, and Jen Crichlow to discuss the broader ideas of testing, and to ask, “Are there areas of software testing that deserve greater attention than what we are currently giving? If so, what are those areas?”


Panelists: Matthew Heusser, Michael Larsen, Perze Ababa, Jon Bach, and Jen Crichlow

Transcript:

Michael Larsen (INTRO):

Hello, and welcome to The Testing Show.

Episode 128.

The Deeper Things of Testing, Part 1

This episode was recorded Sunday, October 16th, 2022.

In this first of a two-part series, Matthew Heusser and Michael Larsen welcome Perze Ababa, Jon Bach, and Jen Crichlow to talk about the broader ideas of testing, and ask, “Are there areas of software testing that deserve greater attention than what we are currently giving? If so, what are those areas?”

And with that, on with the show.

Matthew Heusser (00:00):
Well, thanks as always, Michael, for the show introduction. And this week we have something a little bit different. If you listened to the show I did with Jon Bach, we talked about this problem where you reduce the job to its most visible outputs. It reminded me of islands of the South Pacific in the Second World War. The Americans came and built airstrips and built radio towers and built refueling centers and gave things to the local population. They gave them food, they gave them clothing, they made sure they were successful. And then the war ended and the planes left and the runways lay fallow. And on some of those islands, the indigenous population, who had never had that kind of exposure to technology and thought it was more like magic, built their own runways with torches to light the way. They built their own radio towers.

(00:51):
They built straw people to be radio operators, waiting for the supplies to drop from the air. Of course, they didn’t come. This is not the way it works. And the phrase “cargo cult” was born sociologically. I think that’s a valid story. It’s happened. I think it’s a reasonable comparison to think about what happens in software, where we just understand the most visible parts of the job. Testers, they read documents, they click buttons, and then they report test results. And organizations that do that kind of miss the point; they get bad testing. It’s probably not even cheap. They pay too much for testing that isn’t very good. The code goes to production and there’s all kinds of bugs. And we say testing isn’t very good. So I wanted today to assemble a special panel of people I respect to talk about, “Let’s break testing down into its component parts. What does good testing look like? How do we recognize it? How do we do it? How is it differentiated from that cargo cult idea of testing?” And to do that, we’ve got sort of an all-star cast. Welcoming back to the show, for the first time after a long break, we have Perze Ababa. Welcome, Perze.

Perze Ababa (02:03):
Yes. Hey Matt. Thank you very much. It’s always a pleasure for me to be a part of the show.

Matthew Heusser (02:08):
And Perze is, I think, still a test manager at Johnson and Johnson?

Perze Ababa (02:13):
I’m currently a senior manager for our commercial technologies group, primarily responsible for a ton of the user-facing technology that we have. So that’s varying degrees of responsibilities within websites, APIs, as well as mobile applications across the board.

Matthew Heusser (02:31):
And before that, it was the New York Times, and before that…

Perze Ababa (02:35):
And before that, I was head of test engineering for Viacom Media Networks. And before that, I was a director of quality assurance for a division of NBC Universal. And then prior to that, I was the head of testing for invite.com.

Matthew Heusser (02:48):
So we’re so excited to have you back. A lot of practical test management, test leadership experience there. Speaking of practical test management, test leadership, we’ve got Jon Bach, too. Welcome back, Jon.

Jon Bach (03:00):
Thanks Matt.

Matthew Heusser (03:02):
And I think Jon and Jen were both on very recent episodes. Jon is a program manager for eBay; prior to that, an Agile coach. Prior to that, a quality architect, I think?

Jon Bach (03:14):
Before eBay, which was 2011, I was a manager of corporate intellect for Quardev, which is a testing services vendor here in the Seattle area.

Matthew Heusser (03:27):
And Jon I first met at STAR East around 2004, 2005, where he was… I don’t know if you were doing tutorials, but you were one of the more recognized speakers at that time and I was just starting to come up. So it’s so great to have you here.

Jon Bach (03:40):
Yeah, thank you. It’s great to be here.

Matthew Heusser (03:42):
Of course, we have Michael, your ubiquitous show producer.

Michael Larsen (03:47):
I think a lot of times people just accept the fact that I am here and that I produce the show. I do have a day job and that day job is, I am an active senior quality assurance tester/engineer, whatever title you want to give me. And I work with Learning Technologies Group/PeopleFluent. The reason I say both companies is, depending upon what project I’m working on, I work for one or the other, primarily with PeopleFluent, which is located in Raleigh, North Carolina. I’m actually calling from my house in San Fran… well, just south of San Francisco. I have been at this game for, now, 31 years. I first started off in the testing world, officially I guess, in 1994, when I got moved over to the test group proper at Cisco Systems. I was a network lab administrator prior to that, but I would argue that as a network lab administrator I did plenty of testing.

Matthew Heusser (04:42):
<laugh>. Thanks Michael. And last but certainly not least, we have Jen Crichlow, who’s vice president of program delivery at Savvi, was on the show very recently. Welcome back, Jen.

Jen Crichlow (04:53):
Thank you. Thank you, Matt.

Matthew Heusser (04:55):
So excited to have the panel here. I know that someone is gonna say, “We’ve already done this work. Testing is a technical investigation into software with the intention to discover quality-related information.” Some really intelligent people… Dr. Cem Kaner wrote that 20 years ago. But let’s just start over again from the beginning. What do testers do? How do we know that it’s good? We’ll probably end up with the same definition, but I think there’s a lot of room for nuance in there. Who would like to talk about it?

Jon Bach (05:27):
I think testing is a discovery and I’ve been listening and watching a lot of court cases on YouTube for some reason. I was off this week from eBay and just been fascinated with various court proceedings, both criminal and civil. And as I watch witnesses take the stand and the prosecuting attorneys and defense attorneys do their job, it seems to me that software is innocent until proven guilty. And you have to make a case of where it might be guilty. That means asking questions, which means testing. A test is just a question in disguise. You have an expectation or a scenario or a conjecture in your head of malfeasance, you know of a crime that either has been committed or will be committed and you have to prove it with evidence. And just like in a court case, there are rules of evidence. To me, testing is discovery.

(06:25):
It’s a process through which you lay a foundation in a case for when, how and if software will not meet implicit or explicit expectations by its users, by its customers, or its intended audience. That’s the oracle or the method or principle by which things are supposed to function to manifest expectations in the public or in customers. Testing is a process of applying those oracles in a way that can demonstrate whether or not the software is meeting expectations. I hope that’s a fair analogy. It really seems to me that this skill of an attorney is very akin to the skill of testing; making a case, laying a foundation, asking questions, submitting evidence, following procedure, making a report, all that stuff is the pursuit of truth. What is actually the state of this thing that we’re creating?

Matthew Heusser (07:21):
I like the idea of that pursuit of truth. I think that analogies can be helpful if they are more familiar to us than the thing that we’re talking about. So to say the role of the tester is like the role of the lawyer to the independent fact finder, which is the judge. There’s another one that was, I think it was from James, “Testers are the headlights of the project. We let you know what’s coming and make what is in front of you clear when it might be dark and raining and you can’t see.” Those are good metaphors, I think. Does anyone wanna build on that?

Michael Larsen (07:53):
Yeah, I would like to take a shot at this, agreeing with everything that Jon has said. Interestingly, I’m just coming off of speaking at the Pacific Northwest Software Quality Conference. One of the things that I’ve noticed, at least from the talks that I have been giving, there’s the testing part, which is important, and that’s the nuts and bolts: going through and making sure that you have done due diligence on what you need to do. But there’s another level of testing that I think needs to be there, if not more so than the actual steps of testing. And the best word that I can use for that is advocacy. So when we are actually testing, we have to have a framework, we have to have a goal in mind. Why are we testing? Are we just testing to make sure that a product works? Because that can be done fairly straightforwardly.

(08:51):
A plus B equals C. Okay, that’s fine. A plus B does equal C, based on what you defined there. It’s tested, right? I would argue that no, it’s not tested. All that you’ve done is you’ve confirmed something. Now if you want to test, and I’m gonna borrow one of Jon’s phrases that I love in the sense that if we have requirements, is it enough for us just to look at them and say we’re going to follow them, or do we wish to provoke the requirements? And I love that phrase because that’s where you get the advocacy angle in. Are we in agreement that that’s the right thing that we should be doing? What if in the process of making this function work we’re depriving a number of our users of the ability to work with that? Granted, you know that accessibility and inclusive design is going to come up anytime I’m in a conversation about testing. But the point is that is specifically advocacy related. I am looking at an audience and I want to make sure that that audience is being represented. So it’s important for me to say, not just I’m going to test the product, but I’m going to also make sure that these requirements make sense.
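To put Michael’s “confirming versus provoking” distinction in concrete terms, here is a minimal sketch, not from the episode, using a hypothetical add() function as a stand-in for whatever behavior is under test. The confirming check only restates the requirement; the provoking probes ask what the requirement actually promised.

```python
# Minimal sketch of "confirming" vs. "provoking" a requirement.
# add() is a hypothetical stand-in for the behavior under test.
def add(a, b):
    return a + b

# Confirmation: A plus B equals C, exactly as the requirement states.
print(add(2, 3) == 5)      # True: the happy path is "tested"... or merely confirmed

# Provoking the requirement: what did it actually promise about these cases?
print(add(0.1, 0.2))       # 0.30000000000000004 -- is exact equality even a fair check?
print(add("2", "3"))       # '23' -- strings concatenate; is that a bug or a feature?
print(add([1], [2]))       # [1, 2] -- lists "add" too; did anyone intend that?

try:
    add(2, None)           # mixed types: what should the error even look like?
except TypeError as exc:
    print("TypeError:", exc)
```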

Jon Bach (10:08):
I would agree with that. I think, Michael, that evidence can be circumstantial. This happened to me yesterday. So I’m driving my truck, I just bought an A&W root beer and I’m drinking and driving. I am drinking the root beer while I’m driving. It is a brown container so it could look to a bystander like I’m drinking a beer but it’s a root beer. Someone could have called 911 and said there’s a guy drinking a beer as he is driving down the road. So that is true to that degree. During a test, you could find evidence that explains something else, not what you intended. In other words, how can A and B equal D? And that’s as I watched these court cases, even though there’s evidence, and there’s video, and it’s clear. The defendant did this. How could it explain another circumstance? It looks like he or she is guilty, dead to rights, right on the video. But is there another explanation that explains the evidence in a different way? That’s tricky. When it comes to advocacy, I think that’s what you’re saying too, is there could be circumstances where A and B could equal D and that’s a good thing and A and B could equal D and that’s a bad thing cause we want it to equal C, but what if it doesn’t?

Jen Crichlow (11:25):
I’ve been thinking as well. Initially, testing does start off with what were the net new requirements for this particular feature or for this particular release overarching, right? And there’s a desired user experience, and navigation, and everything else, and you’ve ideated on it with your team I assume, right? <laugh> and crafting those requirements and hopefully refining them along the way. So I’m talking about the product management team, perhaps some stakeholders, design team, back-end development, whatever that looks like. Maybe front-end development, whatever your team’s makeup looks like. And so you’ve collected all that information and so now when you’re trying to validate it, it’s like from all of these joint perspectives of what you had assumed would be the case. But I think advocacy is interesting, especially when you start to think about what AI is doing. So I’ll take the initial example, expected requirements, and maybe your users actually end up doing it differently than you were expecting and it still benefits them.

(12:22):
So you couldn’t have tested for that cuz you didn’t know that they were gonna take that different journey. I think AI has the same kind of challenge, where it can produce patterns or identify patterns that you had not been anticipating. And I think the advocacy lens is helpful because then you’re approaching the problem slightly differently and saying, “Okay, if A and B equals D… Why? How? What informed that?” It almost forces you to pull out a little bit and ask different questions about maybe what’s in the landscape that the software hadn’t accounted for initially, or at least a pattern your AI had begun to identify but that you need to better attribute in some way. So I think both are helpful. What I’m realizing as we’re having this conversation is that a lot of testing started off as a binary: is this right or wrong? And what we’re getting to when we start talking about advocacy is what’s beyond that binary?

Matthew Heusser (13:18):
Yeah, I think it’s about levels. There’s a couple of different ways to think about the levels of testing. Lee Copeland told me once that a junior tester is gonna take a couple things and say, “Looks good to me!” and a mid-level tester will be able to break it and make you cry and wrap you up in red tape and we’re never gonna release this thing. And it’s at the senior level of tester where you really start to say, “Okay, what is the best use of my time to find the most important bugs that actually matter to our customers?” And I would layer over that with my thought that testing is empirical feedback to process, where we say, “Hey, did you know that if I click the checkout button it’s really, really slow? And that’s just for one user, cuz we haven’t released it yet. And if I get so bored waiting that I click it again, it restarts the process.

(14:11):
And when I eventually wait for the thing to load, three books got dropped onto the request queue to go to the warehouse?” So personally I see myself doing advocacy in a very different way, which is just providing information and then the customer gets to own that decision. Eventually, if I’m right 15, 20, 30 times in a row, I think we gotta fix this. I have worked in organizations that have said, “You understand the commander’s intent so well that you can just set the priorities on the bugs and we’ll just go fix ’em, and we don’t have to have this loop where we go up to the product manager to decide.” And at that point, I think, your tester is becoming a little bit of a product manager cuz they’re saying things like, “This is what the error message should be. The error message is terrible so here’s what you should say instead.” I think that’s a relatively mature role. I think that’s more valuable than your button clicker. Should we be aspiring to do that?

Jen Crichlow (15:11):
I like that question a lot because I do feel like even in my own career path, that tracks <laugh>. I don’t know if that’s the progression of everyone that’s testing or validating always. I think there’s probably a couple different avenues that you could take. I wanna give another example. I have a colleague who was initially in vendor management, pivoted into kind of like a quality assurance role to some extent, and then pivoted into customer success, which I think when we talk about advocacy is dead on the head right there. That was their transition. That was their advocacy point, where they can then be in a role ultimately and continue to skyrocket within that path. But having that technological awareness and experience, I know how to validate the quality of this product. I can be part of conversations with other members of a software development team, thus advocating from that lens. But I don’t think it just ends with testing. Perhaps when we really start to think of our collective teams. Maybe I’ve just been lucky cause I’m on small enough teams that I can have those kinds of conversations. But looking at that other department, that other workstream, as having some stake in the quality of the system and then figuring out a way to develop a quality assurance process that also includes that lens.

Matthew Heusser (16:25):
I think you’re onto something there in that it might look different. If you’re in a quality assurance team in a team of teams at scale in the enterprise and there’s 75 QAs, it might be very difficult for you to get product management to trust you with those things, because maybe you’re that good but the rest of the team isn’t. And that’s when you get into leadership and test architecture and having a different role involving the requirements or all of the different dance-around quality activities. Because what I just described I think really probably only works easily with a small team. What do you think, Perze?

Perze Ababa (17:00):
There’s a lot of stuff that’s been brought up. It reminds me of the flood of information that you get when you take the first three weeks of the Black Box Software Testing Foundations class. Cause now we’re talking about not just the advocacy part, but it’s very difficult to be an advocate of something if you don’t have very clear information objectives. And then in relation to that, I know there’s a lot of tangential references towards the people that we’re working with, the people that we’re partnering with. I know Michael pretty much mentioned it from the get-go that, “Are we in agreement that this is what we’re gonna do?” The “we” part is definitely key there. I know Jon also mentioned, you know, some folks there, which leads us back to Jerry Weinberg’s definition of quality, which is, “It’s value to some person who matters.” Matt, I appreciate that you mentioned that there’s some very particular contexts where the actual growth of a member of the testing team can be from,

(17:56):
“I’m just gonna look at conformance to requirements” towards “I’m gonna get deeper into my domain, understand what is more risky, and be able to advocate for what those risks are by talking to the right people in very specific teams”. We’re trying to map something in a linear manner when we know there’s a 3D representation to advancements within our career. I do wanna bring up the fact that, since I’ve been working in a heavily regulated environment, the idea of being able to provide value to a person, whether it’s to the regulatory auditor, or to the product owner, or even to the testers on our team who need a specific technology so that we can make the system a little bit more observable, can be pretty far away from what our customers actually need. And then there are the activities that you perform leading towards making your customers happy. When you have multiple product squads that also have layers underneath them, I think that’s really where the challenge is: defining where our advocacy will come in, and the timing for that as well, so that we can still deliver on time (hopefully on budget) moving forward.

Jon Bach (19:07):
Perze, you triggered something really interesting that struck me and I wrote it down. It seems to me that you’re saying that a part of advocacy is proving something has value. Cause I, too, am acquainted with the Weinberg quote about “Quality is value to somebody who matters”, but value changes and a customer’s perception is something we may not know and maybe can’t know before we release. So in that sense, we’re an advocate because we’re the customer’s proxy. Are you saying that as testers we can prove that the software has value in that role as their advocate?

Perze Ababa (19:51):
It’s a tricky “yes”. If there’s a way for me to measure that particular outcome of how our customer actually uses that piece of software, that would be one of the ways that we can really confirm it; we can hypothesize as much as we can, but only until it solves the customer’s problem or they discover the serendipitous value of what you’ve just put out there. And that’s pretty strong evidence on record, whether you’re gathering it through behavioral analysis of analytics data or through an actual customer survey.

Matthew Heusser (20:27):
So I may be misunderstanding. What you’re saying is, “Get closer to the customer, figure out what they’re using, what they value, and then provide information more on those topics” than what I think is the standard default in North American software testing. We really don’t know anything about what the customer does. I mean, there’s exceptions, but for a lot of us, how the customer uses the software doesn’t influence our testing. We just kind of test everything about the same. Unless it’s hard, which by the way, when the software is hard to test, it’s gonna be buggy cuz it was hard to program, and then we’d probably test it less. That’s the common default. And what you are saying is that as a value-added activity for testing, get closer to what’s valuable to the real customer and provide information on that. And if you don’t have the information on what is valuable to the customer, go get it. Cuz that’ll be valuable to your management team.

Perze Ababa (21:27):
Yes, definitely. Because there’s gonna be a time where the number of tests that you have, or at least the opportunities that you have for exploratory testing, are gonna be so huge that it’s gonna be very difficult to prioritize even what is more important. So having that layer of prioritization helps, whether for your test selection or just deciding which area of the application I need to dig deeper into. Having very specific signals on which areas in your application need to be looked at at this point in time is definitely important. So for example, we’re dealing with multiple applications at scale that are using pretty common microservices in the back end. If we ask our product owners which microservice we should test first, because we only have X amount of people available, it’s just the same as giving everybody knives and whoever comes out first gets to win. The challenge there now is if you don’t have that extra layer saying, “Okay, 80% of our microservice calls are actually just for this particular endpoint. Let’s make sure that that particular endpoint is up and running at four nines (99.99%), just to make sure that we’re not gonna have an impact on our image because something that’s very important and used a lot is down.”
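To make that concrete, here is a minimal sketch, not from the episode, of the kind of prioritization Perze describes: rank endpoints by observed traffic so that test effort follows usage. The endpoint names and call counts are hypothetical placeholders; in practice they would come from your analytics or API gateway logs.

```python
# Minimal sketch: rank endpoints by observed call volume and flag the ones
# that together cover ~80% of traffic as the first candidates for testing.
from collections import Counter

# Hypothetical call counts; real numbers would come from analytics/gateway logs.
call_counts = Counter({
    "/checkout": 812_000,
    "/search": 95_000,
    "/profile": 41_000,
    "/recommendations": 30_000,
    "/admin/reports": 2_000,
})

total_calls = sum(call_counts.values())
covered = 0.0
test_first = []

# Walk endpoints from busiest to quietest until we've covered ~80% of traffic.
for endpoint, count in call_counts.most_common():
    covered += count / total_calls
    test_first.append(endpoint)
    if covered >= 0.80:
        break

print("Test these first:", test_first)
print(f"They account for {covered:.0%} of observed calls")
```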

Matthew Heusser (22:50):
Wow. So riffing on that for just a second, there’s something that one could infer from what you said. And I really would love to get your perspective on this. I’ve done this I think once in my career. You say, “Go get the data. We happen to know that 80% of our customers travel through this one endpoint, this API gateway thing, which collects these five other APIs. We’re gonna test these six. That’s where we’re gonna start cuz we have the data.” Is it ever appropriate for a software tester who is talking to a product manager, a general manager, some kind of executive, someone in marketing, a dev lead to say, “Thank you for your input into how I should test the software. I’ll keep it in mind when I make my decision how I should test the software.” Is that ever appropriate or ever inappropriate?

Perze Ababa (23:38):
That goes back to the definition. It’s value to some person. And if the CEO of the company taps me on my shoulder and says, “You should focus on this,” then maybe I should focus on it. Of course, it’s not gonna be as simplified as that, but there’s gotta be some weight to it. It’s not just gonna be my decision, it’s gonna be a team decision on where I’m gonna put my time in.

Matthew Heusser (24:00):
I think that’s fair. I think there’s the what and then there’s the how and the how is much more “whoaaah”. I think that it’s fair to receive direction on the what. Michael?

Michael Larsen (24:09):
So one thing I want to throw in here, and this could be a controversial element or it could be a tangent, but I wanna roll with this because a lot of time when we talk about how we go about the testing that we do and what we end up doing, I think a lot of the time, at least this has been my experience, testing is not as granular and finite as software development is. When you write code, you have a very specific existing item. You’ve made a function, you’ve made a method, you’ve created something, you’ve created a wireframe or whatever it is that people can interact with. It’s tangible. That is something that in our engineer-y type of environment, we love to look at stuff like that. Testing unfortunately isn’t really that cut and dry. Now, people could say, “Oh well you can automate tests.”

(25:06):
Yeah, but that’s a development effort. I don’t care who does it. It is a development effort. We are going through and we are creating tangible test scripts and test algorithms, but let’s face it, those test scripts and test algorithms, except under very rare circumstances, do not go out to the paying customers. They don’t see that. They don’t know that it happens. They only know when it fails. To borrow (paraphrase) a Bill Gates quote, “Testing is kind of like the plumbing inside of your house. You don’t recognize your plumbing when everything is working. You only recognize it when something goes terribly wrong.” And that’s when testing actually comes into the forefront for this. So a lot of what we do as testers is really hard to quantify. We understand it. We’re able to show the value and demonstrate what the purpose of what we’re doing is and why we want to recommend A, B or C.

(26:03):
But for others, they might not actually get that that’s important, because everything’s working right, or they may not even be able to see what it is that we’re doing. Exploratory testing, sure, I can show you my charters, I can show you the things that I was working on. I can even tell you what I learned along the way and some of the areas that we tested, and if I happen to find something that was, “Wiz! Bang! Oh, my goodness, thank goodness that didn’t go out!” then yes, we’re definitely paid attention to at that point. But if we go through everything and everything looks basically okay, it’s sort of easy to think, “Well, are they really doing all that much?” We’re not noticed when things are great, but they definitely notice when things go badly, and if we for some reason didn’t ring the alarm bell about that particular issue, then we are put under a microscope as to, “Okay, what are you guys doing?” Thoughts?

Jon Bach (26:57):
I got a few thoughts about that. You could look at your smoke alarm and say that it’s there to monitor something, a particular condition, and it will go off when it detects that condition. If it doesn’t go off when it detects that condition, that’s a problem, right? I would say in many respects we’re kind of like that. Not that we’re always passive, but we can build monitors and alerts to aid our testing. We can also act like the police and pull over speeders, but we will miss other speeders as they speed by us when we’ve got someone pulled over. In other words, not all of our time, to Perze’s point, is equal. I think we have to focus on the serious risks and serious offenders or pathologies or potential problems first, because our time is finite, and to the degree that we can, we should focus on new code, for example, or changed code, cause those are inherently more risky. So there’s a couple of paradigms of when testing is “working”, which is meeting requirements to some degree. We should be invisible to the customer. I don’t know if there’s times when we’re… when testers are visible to the customer in a corporation, but certainly when we hear bug reports in production, we can act on those, and should, in some cases with higher priority than maybe even our existing tests.
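As a side note on the monitors and alerts Jon mentions, here is a minimal sketch, not from the episode, of the simplest version of that idea: poll an endpoint on a schedule and raise an alert when it stops answering. The URL, interval, and print-based alert are hypothetical placeholders for whatever health check and paging mechanism a team actually uses.

```python
# Minimal sketch of a monitor that aids testing: poll a (hypothetical) health
# endpoint and alert when it stops responding successfully.
import time
import urllib.request

ENDPOINT = "https://example.com/health"   # hypothetical health-check URL
CHECK_INTERVAL_SECONDS = 60

def endpoint_is_up(url: str) -> bool:
    """Return True if the endpoint answers with a 2xx status within 5 seconds."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return 200 <= response.status < 300
    except OSError:   # covers URLError, HTTPError, timeouts, connection failures
        return False

while True:
    if not endpoint_is_up(ENDPOINT):
        # In a real setup this would page someone or post to a chat channel.
        print("ALERT: health check failed for", ENDPOINT)
    time.sleep(CHECK_INTERVAL_SECONDS)
```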

Michael Larsen (OUTRO):
That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts, Google Podcasts, and we are also available on Spotify. Those ratings and reviews, as well as word of mouth and sharing, help raise the visibility of the show and let more people find us. Also, we want to invite you to come join us on The Testing Show Slack channel, as a way to communicate about the show. Talk to us about what you like and what you’d like to hear, and also to help us shape future shows. Please email us at thetestingshow (at) qualitestgroup (dot) com and we will send you an invite to join the group. The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen. Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at thetestingshow (at) qualitestgroup (dot) com.
