The Testing Show: The Deeper Things of Testing, Part 2

December 7, 2022
/
Transcript

In this second part of a two-part series, Matthew Heusser and Michael Larsen continue their conversation with Perze Ababa, Jon Bach, and Jen Crichlow to discuss the broader ideas of testing, specifically the areas of advocacy and addressing situations that are important to people who matter.

Panelists:
Matthew Heusser
Michael Larsen
Perze Ababa
Jon Bach
Jen Crichlow

References:

Transcript:

Michael Larsen (INTRO):

Hello and welcome to The Testing Show.

Episode 129

The Deeper Things of Testing, Part 2.

This episode was recorded Sunday, October 16th, 2022.

In this second part of a two-part series, Matthew Heusser and Michael Larsen continue their conversation with Perze Ababa, Jon Bach, and Jen Crichlow to discuss the broader ideas of testing, specifically the areas of advocacy and addressing situations that are important to people who matter.

And with that, we join our show already in progress.

Matthew Heusser (00:00):
I think that the customer (the person writing the check) shouldn’t see testing as… I’m gonna tell a story that’s outside of testing. Years ago, a buddy of mine had a new driveway put in, and he had a special, I don’t know what to call it, enamel put down. It was gonna look like the outline of leaves, like there were colored leaves in the driveway, but it was just an outline. And I think maybe they were actually gonna look like footprints pressed into the cement, kind of a thing, so they were gonna have some texture. And there was some kind of mistake. One of the workers was like, “Oh, they screwed everything up. It’s terrible. This is gonna be awful. I don’t know how they’re gonna fix it.” And the owner came out and talked to him, and the worker told him about how screwed up everything was and how awful it was.

(00:44):
And the owner was like, “When are you gonna fix it?” “I don’t know.” “What’s it gonna cost?” “I don’t know.” When the worker left, the owner just felt hopeless, stuck. And I think that if testing goes to the end customer and says, “Everything’s broken, it’s a big old mess, it’s terrible,” that’s not the best we can do. So the argument is for being part of the solution: having the system work in a feedback loop provides options early, provides information early in the process. Then either a decision can be made that this is okay, or decisions can be made that these things need to be fixed, or ideally the feedback loops back into the process to prevent general-cause defects from recurring. I think those are all better.

Jon Bach (01:28):
Yeah, you just helped develop a thought. You see it on social media sometimes when there are bugs, on LinkedIn or Twitter, whatever: “Who tested this, anyway?” You’ll hear that. “Who tested this?!” To Perze’s point, when testing fails, to some degree it is noticed. But remember, we also get overruled. Bugs get deferred despite our best advocacy, <laugh> the business or the product owner decides, “Eh, we can live with it,” and it winds up being worse than previously thought. So to some degree it isn’t just testing that gets exposed, it’s the whole decision process of whether or not to release given other circumstances: what are acceptable risks, and which bugs will go to production? Because there’s an opportunity cost if we don’t release.

Matthew Heusser (02:21):
Oh yeah, you mean like the actual user of whatever the software is, the website, saying, “Who tested this?” Maybe they put the bugs in and the decision was made that expediency was more important.

Jon Bach (02:33):
That could be the case, yeah. Or, of all things, maybe there were 20 other bugs that were more important to fix, or they decided that the mitigation was good enough until it wasn’t. PPE, personal protective equipment, was cheap and plentiful before February of 2020. <laugh> Right? Things changed. A mask that was considered high quality in February 2020 was considered low quality in March of that same year. So circumstances change; people’s idea of value changes. They think the air is good enough to breathe until they realize, “Hey, there’s a virus out there. The air is not good enough to breathe anymore.” We’re having that now: it’s Sunday at 12:45 p.m. here in the Pacific Northwest and the air quality index is 150 at my house cuz there’s forest fires. It was fine yesterday; it’s not today. So the air quality changes on the hour, and that causes us to reevaluate. The word “value” is right there. Testers are evaluators, and that notion of value is a moving target. But we gotta do the best we can based on how we populate our heads with reasonable expectations that customers might have.

Perze Ababa (03:48):
I mean, Jon, you’ve been referring to things that we can’t control: the random event of a forest fire happening and impacting things within the area. Even the things that we can control are very difficult to address and adhere to. I know we talk a lot about issues that we encounter on social media, LinkedIn, things like that, “Who tested this?” But we can never test every number or combination of plugins that you can use with your product. Especially now, with the technology to deliver a copy of your website on an edge network, where I tested it is not necessarily your edge network. You might have multiple hops leading into the edge network that I have zero control over and no way to test. That’s the big challenge of testing against an infinite space: which points am I gonna pick out, and hopefully those give a reliable test result.
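As a rough illustration of the combinatorial problem Perze describes (not something worked through in the episode), here is a minimal Python sketch of one common way to pick points out of that space: cover every pair of parameter values with a small, greedily chosen set of configurations, rather than running the full cross product. The parameter names and values are hypothetical.

```python
# Hypothetical parameters; in practice the lists are much longer and the
# exhaustive product is effectively unbounded, which is Perze's point.
from itertools import combinations, product

params = {
    "browser": ["chrome", "firefox", "safari"],
    "plugin": ["none", "adblock", "translator"],
    "edge_pop": ["us-east", "eu-west", "ap-south"],
}

every_combo = list(product(*params.values()))
print(f"exhaustive configurations: {len(every_combo)}")  # 27 even in this toy case

# Greedy pairwise cover: keep adding the configuration that covers the most
# still-uncovered value pairs until every pair appears in at least one test.
names = list(params)
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

def pairs_of(config):
    named = dict(zip(names, config))
    return {((a, named[a]), (b, named[b])) for a, b in combinations(names, 2)}

suite = []
while uncovered:
    best = max(every_combo, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"pairwise suite size: {len(suite)}")  # far fewer than the 27 exhaustive combos
```

This only guarantees pairwise coverage, not coverage of every three-way interaction, which is exactly the kind of risk trade-off the panel is describing.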

Matthew Heusser (04:42):
Yeah, there’s a good point in there. There’s something that we talk about with software, like a zero-bug policy. I guess it depends on your definition of bugs. I like, “A bug is something that bugs someone that matters.” I’ve been in the room where the VP of marketing has walked in and said, “Did you know blah blah blah happens on the website?” “Yep, it’s in the spec.” “I know blah blah blah blah.” “Yep. Let me show you the email where I pointed that out and said it was a problem, and the director of customer experience responded that he knows, that’s expected, and not to change it. Let me bring it up on the screen.” I’m exaggerating the story a little bit, but this kind of thing happened once, and the response was, “Yeah, I know. The director of customer experience is fired. He worked for me. Now he doesn’t. Fix it!”

(05:31):
That was a while ago and I may have the story a little bit wrong. I’m certainly exaggerating a little bit, but something like that happened, and what wasn’t a bug became a bug in 15 seconds due to the opinion of someone that mattered. So when we talk about how we’re gonna uncover bugs, well, whether it’s a bug depends on who’s looking at it. A more expansive view would be to uncover quality-related information. Like, “I’m gonna tell you things that are gonna help you decide what you like or don’t like.” And I think if we do that, the role of the tester, or the testing activity, is much more expansive and is going to do much better, away from the button-pushing, script-following, bug-filing stuff.

Jen Crichlow (06:12):
That definitely resonates. I think we’ve all maybe had a moment in our careers where, from the vantage point of being a tester, we said to somebody who was not immediately in testing but was maybe very close to the end user, and I’ll say “end user” broadly instead of “customer”, “It’s got bugs and we’re gonna release it anyway.” And, you know, their face just turns red. It’s been helpful to me in my career to articulate that back, especially to the product team, as three or four buckets. By that I mean: absolutely, if there are particular bug issues, we issue them back to the development team and work with them to define how we might fix them. Cuz sometimes, even from QA, it can be kind of hard to ascertain exactly what the root cause was.

(06:55):
But then there’s this other bucket, too, of potential enhancements, things that we are already foreseeing as possibly beneficial to whoever those end users are. There’s another bucket, though, that has emerged in our own workflow that I tend to flag as “PIA:” and then whatever that particular thing is. And I mean PIA in the literal sense, this isn’t a weird acronym <laugh>. It’s just the little things that, in the process of testing, keep being kind of a friction point as you’re trying to validate the feature, where you already have a hunch that if it were refined in some way, it would ultimately be beneficial. Which is to say, even though this would help us in testing, you don’t necessarily have to engineer it, because we’re testing it in a way that might not actually be the use case in the wild.

(07:44):
And by that I mean just production. So I think when we try to reflect on what QA delivers, if we are thinking of the value that gets assigned to everything, it’s important that we articulate more specifically what we mean, even to those stakeholders. And Matt, your example was a perfect example of that: “Yep, that’s a known issue. Yep, we can live with it.” It was something you had already arrived at. Having those initial conversations behind the scenes, before anything is in production, helps give everyone the reassurance that as you’re releasing the product, each release truly is getting better and better over time.

Matthew Heusser (08:21):
I may have missed something. What does PIA mean?

Jen Crichlow (08:24):
Pain in the …

Matthew Heusser (08:25):
Oh, okay. <laugh> That’s interesting. Another term we haven’t used in a while is “mipping”. I think Michael Bolton coined that one; it’s “mentioned in passing”, which is a weird acronym, but that is, “You might care, I don’t know, it’s not in the requirements, but, uh, I clicked this button and it took 15 seconds to vote.”

Jen Crichlow (08:46):
Yeah,

Matthew Heusser (08:47):
We don’t have an SLA for that, but that seems like too long.

Jen Crichlow (08:49):
Yeah. Or, you know, “I know I’m supposed to go to this next page, but it was actually tricky to find that button,” just doing regular exploratory testing.

Matthew Heusser (08:58):
Michael might say, “You turn the contrast all the way down and the button disappears” kind of stuff. Although that’s probably in a checklist for accessibility somewhere.

Michael Larsen (09:06):
Yeah, there are certain things that you can look at. And the key, since you’re bringing up accessibility, and I made mention of this during the talks that I gave, is the fact that even with something like accessibility, you don’t necessarily have a clear-cut benchmark. Yes, you can strive to make things work, you can put things into high-contrast modes, and that will help with a number of things. But remember, what works for one user can be absolutely detrimental to another. And that’s the fine balance we often play with when we’re looking at things from an accessibility standpoint. I can give you a site that is a hundred percent compliant, hits every single item on the list, and yes, you would pass an audit with flying colors, and it would be a bear to use for the majority of people, even the people that you were really seeking to help out.
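For readers curious about the contrast check Matt and Michael allude to, here is a small Python sketch of the WCAG 2.x contrast-ratio formula that accessibility checklists typically rely on. It is not code from the show, and the example colors are made up.

```python
# WCAG 2.x contrast ratio: relative luminance of each color, then
# (lighter + 0.05) / (darker + 0.05). AA requires >= 4.5:1 for normal text.
def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as 0-255 channel values."""
    def linearize(channel):
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 (identical) up to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Example: light-grey text on a white background fails the 4.5:1 AA threshold,
# which is the "turn the contrast down and the button disappears" situation.
ratio = contrast_ratio((200, 200, 200), (255, 255, 255))
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'} (WCAG AA, normal text)")
```

As Michael notes, passing this kind of check is necessary but not sufficient; a fully compliant page can still be hard to use for the very people it is meant to help.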

Matthew Heusser (10:03):
Yeah, I did an interview once for CIO.com with someone from Liquidnet. He was a designer, and he said that a lot of user interface design is actually trading off between power and discoverability. So if you can find a way to give all the little fiddly bits to your power users while also having a really super clean, beautiful interface… Or maybe you don’t have many power users and you just need a super clean, beautiful interface; maybe it’s the iPod 1.0 and you just need a click wheel, and that’s fine. But making that trade-off well, making that trade-off with good taste, will lead to a better system. So what should testers do? I’ve seen this with government sites all the time: it meets all the functional requirements, but, oh, it’s a PIA. What are we gonna do? Do we put a PIA note in Jira? What do you do?

Jen Crichlow (10:57):
Yeah, I can say, at least in our case, we have a couple of different avenues for handling that, or I’ll say, broadly, triaging it. When it comes to bugs and other issues, we assign that priority directly in our existing task management system. But then there are things that we don’t need to immediately execute on in any way, shape, or form; it doesn’t affect the current release that we’re engineering, nor the subsequent one that we’re starting to ticket. It’s just something to be aware of, and maybe we wanna prioritize it later, or maybe we wanna wait for user feedback to increase its priority, or maybe it could help influence another feature that we’re already starting to ideate. What I’m flagging underneath all of this is that the QA team is in conversation largely with the product management team, and when you’re thinking of those things that are not immediate ticket issuance, it doesn’t immediately affect the development of the product right now or in the near term, but it’s something to keep in mind.

(11:54):
There needs to be a conversation of, “Well, what does the tooling look like for product roadmapping?” Sometimes that lives outside of the immediate ticketing system, so how can we best capture it in that tool set? In our case, we’ve made sure that all of our tools integrate, so it’s very easy for the QA team and the dev team as they’re ideating on things like that. Cuz sometimes the dev team already foresees it: “Okay, we need to change S3 in X, Y, and Z ways.” That’s a DevOps flag, and we can put it in Slack so that other members of the team can see it; that’s our primary communication system. We also have an integration where we can push that as what’s called an insight into Productboard. Anyone can send it; we can even send it from customer service tools. So again, what I’m flagging is that we’ve already started to have the conversation internally, broadly across the team that builds the software, of what does your work stream look like and how can I help support you in that endeavor?
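As a hedged sketch of the kind of integration Jen describes, the snippet below pushes a “PIA:” note into a team Slack channel via an incoming webhook, so a low-priority friction point is visible without becoming a ticket. The webhook URL, function name, and message wording are placeholders invented for illustration, not the team’s actual tooling.

```python
# Slack incoming webhooks accept a JSON body with a "text" field.
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

def post_pia_note(summary: str, ticket: str = "") -> None:
    """Send a low-priority 'pain in the ...' observation to the team channel."""
    text = f"PIA: {summary}"
    if ticket:
        text += f" (related ticket: {ticket})"
    body = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # Slack replies with "ok" on success

post_pia_note("Resetting test data takes six manual steps before every run")
```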

Matthew Heusser (12:51):
Cool. I think, unfortunately, we could talk about this for a very long time, and I would like to give our audience key takeaways. So I’m gonna suggest we do final thoughts and then where to go to learn more about people and what they’re up to. One key thought that I have with “hey, this is a total pain!” is something you might wanna look up. It’s called the comparable product “heusseristic”. Some people pronounce it heuristic. It’s where you would say, “You know what? If we were on YouTube and we’re making a video site… or if we were in Microsoft Word and we’re making a text editor, or whatever. It can be metaphorical, like we’re doing something social-media-ee: if you were in Facebook and you tried to set this up, it would be these five clicks, but if I do it in our app, it’s 150 clicks. Is that okay?” Maybe we’re working with customer service reps on an internal project that only has five users and they only have to do it once a year. But I think that’s a good way of phrasing it to force people to make decisions about design. That is another way of adding value. And I’m gonna let that be my final thought. Perze?

Perze Ababa (13:56):
There are definitely a lot of areas where we can dig deeper when it comes to testing. Of course it’s expected for us to be able to perform a technical investigation, but the ability to simplify those tests according to the most important things that matter at this point in time is really key. For me, there are multiple ways for us to do that. One, we talk to the product owners, we talk to the people on the business side who are paying for our time to perform these tests, and then ultimately we wanna layer that with how our users are actually using it: data, system telemetry, among other things. These are key elements so that we can better position the time that we have for performing the tests that we need to do.
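To make Perze’s telemetry point concrete, here is a tiny, assumed example of ranking planned test charters by how often real users exercise each feature, so test time goes where usage actually is. The feature names and event data are made up for illustration.

```python
# Rank planned test areas by observed usage so the most-exercised features
# get tested first; a real pipeline would read telemetry, not a hard-coded list.
from collections import Counter

# Imagine each telemetry event records which feature a user touched.
events = ["checkout", "search", "search", "profile", "checkout", "search", "export"]

usage = Counter(events)
planned_tests = ["export", "profile", "checkout", "search"]

# Order the planned test charters so the most-used features are tested first.
by_usage = sorted(planned_tests, key=lambda feature: usage[feature], reverse=True)
print(by_usage)  # ['search', 'checkout', 'export', 'profile']
```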

Matthew Heusser (14:47):
Thanks. Jon?

Jon Bach (14:50):
My takeaway for testers who may be listening to this, in terms of the deeper side of testing, is this really fascinating thread on advocacy, which I want to restate as “voice”. You are the voice of the customer, or of those consumers who you think will be exercising the set of features and the platforms that you’re putting under test. I’m a program manager now; most of my career, like Michael’s (I started in 1995), has been in testing, until the last four years, when I switched into program management, which is very similar to testing. I want to get to the truth of where we’re at, of who is doing what by when, and instead of finding bugs in the product, I find risks in the project, or projects plural, which comprise a program. And it occurred to me just now that although the testers are in service to me, somewhat, <laugh> giving me some information on what they’re finding, I’m in service to people like Jen, who is a delivery manager at scale and who has to make decisions based not only on information that testers give her, but on how the suite of problems, or patterns of problems, materialize across a program.

(16:10):
Perhaps many projects or many products in a suite. So Jen may rely on someone like me in program management. I see on LinkedIn, Jen, that you have program management experience too, so you’ve been here before, and so I want to be the advocate for internal customers like you, Jen, who have to make a business decision. I really love this story of advocacy; I’ll probably make a LinkedIn post on it. But I want testers to take away that your voice is the most important value that you bring to testing. And a voice could be a bug report, or a test report, or a concern about a risk. Without your voice, we don’t know; me, people like Jen, other vice presidents, and product owners would have very little idea of where we are and how things are supposed to work to meet expectations better.

Matthew Heusser (17:05):
Awesome. Thanks Jon. And Jen, anything you wanna add? Or has Michael gone yet?

Jen Crichlow (17:10):
I was gonna say, Michael, do you wanna go first?

Michael Larsen (17:12):
<laugh> Oh, sure. In part, Perze and Jon have already kind of summed up a lot of what I would add to this, so I’ll take a slightly different avenue here. When it comes to the advocacy position, or when it comes to what we bring to the table, I love the idea that we are the voice of those who can’t speak for themselves and who can’t comment on what we’re doing. It’s also important to realize that there are multiple different customers. So when you’re looking at a product from one perspective, take social tools for example, fill in the blank for which one you want to use. The use of those products by people is one component; that’s one customer. Having them be happy is absolutely important, and you want to be able to advocate for them and make their experience a good one. But let’s not kid ourselves: in many ways, the more important aspect, the other important customer for those tools, is the advertisers that pay the real bills for those organizations to exist. Mind you, those are not necessarily cross-purpose goals, but they are two very separate goals, and they’re very important ones. So it’s important to remember who your customer is, and who that customer is when you have one framework of testing can change when you address another customer. That’s my takeaway.

Matthew Heusser (18:50):
Good point. Thanks Michael. And Jen?

Jen Crichlow (18:52):
Yeah, I think my takeaway, again, is that I’m thinking a lot about time and how much testing and validation and all these things have changed over time, and I feel like we’re almost capturing the state of testing right now in terms of the role and the importance and the value. I think that will just continue to be exponentially true in a business. I do foresee more “chief of X” roles beyond just programming, beyond just quality assurance. I feel like the terminology will change over time. We have SREs, site reliability engineers, which is still in that quality assurance, performance enhancement kind of realm. I guess my takeaway is that I want to make sure, as we continue these conversations, we’re looking at other literature out there, we’re having more conversations, and we get to meet in person. What are those roles? What are those duties? How does the industry of quality assurance continue to evolve? And what language are we using? What are we agreeing upon? We’re just capturing a moment right now, and obviously it’ll keep going.

Matthew Heusser (19:55):
Thanks, Jen. So now that we’ve done our final words, is there anything new or exciting anybody wants to talk about, or a website or something, either their own or something they just saw that was really neat, before we let everybody go?

Jon Bach (20:08):
As for me, I’m on LinkedIn every day. It’s my favorite platform for talking about issues that matter to me, so find me there. I love meeting people like Jen; I just met her this week by virtue of this, and now she’s a connection. I don’t make connections lightly. I want them to have meaning. So if you reach out to me for a connection, just tell me why. Tell me what we have in common, what makes you unique, what important parts of testing or technology or program management matter to you. I encourage that. I just don’t want to have connections that don’t mean anything. If somebody reaches out saying, “Hey, I see you know xyz,” I don’t want to say, “Yeah, they’re just a connection. I have no idea who they are.” I want to say, “Oh yeah! Let me tell you about this person!” So nothing new or special other than what you see in LinkedIn posts from me.

Matthew Heusser (20:57):
Thanks, Jon. I think Perze talked about the Black Box Software Testing courses. Do you wanna talk about that or something else?

Perze Ababa (21:03):
Sure. Well, I’m a lifelong member of the Association for Software Testing, although I haven’t been as active as I want to be. But the group is still there, they’re still pretty active, and they’re still running training. So if you’re interested in a practical, mentally challenging course in software testing, it is the recommended training, not just for beginners, but for the ones who want to rediscover their love of software testing. So check it out at associationforsoftwaretesting.org; you can search for the Black Box Software Testing courses. Of course, you can also find me on LinkedIn as well as Twitter, although I haven’t been as public as I was before, because most of my activities have been focused internal to the company. I do facilitate and run a couple of communities of practice within the organization, but I’m open for conversations; just reach out and I will make time.

Matthew Heusser (22:03):
Go ahead, Jen.

Jen Crichlow (22:04):
I was just gonna say, if you’re looking for me, I’m available on LinkedIn; I think that’s probably the best bet to connect with me. I’m of course interested in learning more about whatever projects you’re working on; how you’re approaching your work is especially interesting to me. Or if you just have questions about software development, AI, machine learning, any of that stuff, I’m open to conversations.

Matthew Heusser (22:27):
Okay. Michael?

Michael Larsen (22:29):
My turn. Since Perze has already mentioned AST, I will go a different tack, since I’ve just been associated there and just spoke there: the Pacific Northwest Software Quality Conference has just celebrated its 40th year. One of the cool things about PNSQC is the fact that they have proceedings that go back for those 40 years, so there’s a lot of great information that has been covered, some of it written by some very active and influential testers. Jon, I believe, is included in those proceedings. Matt is included in those proceedings. So am I. So I want to encourage anybody, if you are interested in picking something up or trying to find a new angle on something, with the 40 years that they have been actively involved, it’s a good bet you’re probably gonna find something thought-provoking and actionable. So there’s my plug. Also, Twitter, LinkedIn, the general places.

Matthew Heusser (23:25):
All right, thanks. Well, two things I’d mention. The Association for Software Testing builds on, and I think it’s in their mission statement, the definition of context-driven testing, which is at [https://context-driven-testing.com/]. Those are fundamental principles that define a way of thinking about software testing, where the tester is in the driver’s seat and responsible for the outcome of their work, instead of being, for lack of a better term, a fast-food burger flipper who’s told how to do the work, and then if people get sick because the food is undercooked, the system is responsible, not the individual. So that is worth looking into. Speaking of which, we can put this up only after the paperwork is signed, but I expect that by the time you hear this, Michael and I will be co-authoring a book on software testing for Packt Publishing, which will be released in the fall of 2023. Super excited. We are looking for people to add a couple of small bits. Qualitest has been super supportive of it, and I wanna keep it that way. If you want to be involved, let us know. What’s the email address, Michael? We talk about it every time, but what’s the email address for the show?

Michael Larsen (24:38):
For the show here? It’s thetestingshow (at) qualitestgroup (dot) com. And it is in every tail that we put up <laugh>.

Matthew Heusser (24:46):
And we do have a Slack that we want to be active, frankly. So if you want to learn more about the book, contribute to the book, or be on the Slack, drop us an email; we’ll have some conversations going there and we’ll get things going. That’s all I’ve got, so I think it’s time to say, “Thanks, everybody.” We’ll see you soon.

Michael Larsen (25:06):
All right. Thanks very much.

Perze Ababa (25:08):
Thank you.

Jon Bach (25:08):
Thank You.

Jen Crichlow (25:08):
Thank you.

Michael Larsen (OUTRO):
That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts or Google Podcasts; we are also available on Spotify. Those ratings and reviews, as well as word of mouth and sharing, help raise the visibility of the show and let more people find us. Also, we want to invite you to come join us on The Testing Show Slack channel as a way to communicate about the show, talk to us about what you like and what you’d like to hear, and help us shape future shows. Please email us at thetestingshow (at) qualitestgroup (dot) com and we will send you an invite to join the group. The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen. Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at thetestingshow (at) qualitestgroup (dot) com.
