The Testing Show: Live From Agile and Beyond

August 15, 2023

Panelists

Matthew Heusser
Michael Larsen
Holly Bielawa
Damian Synadinos
Jeff MacBane
Transcript

Recently, our show host Matt Heusser headed out to Detroit to participate in the Agile and Beyond Conference being held there. While there, he gathered together several contributors and testing experts, including Holly Bielawa, Damian Synadinos, and Jeff MacBane, to discuss areas such as emotions in testing, product coaching and why it's difficult, and how we can just be excited about doing testing well in this day and age.

Michael Larsen (INTRO):
Hello, and Welcome to The Testing Show.

Episode 139.

Live From Agile and Beyond

This show is a little different in that Matt headed out to Detroit to participate in the Agile and Beyond Conference, held June 13 and 14. While there, he sat down with Holly Bielawa, Damian Synadinos, and Jeff MacBane to talk about emotions in testing, product coaching and what makes it difficult, and why we should be excited about doing testing and doing it well.

And with that, on with the show.

Matthew Heusser (00:00):
Welcome back to The Testing Show, and we are live at Huntington Place, formerly known as Cobo Hall, in Detroit, Michigan, across the [Detroit] river from Windsor, Ontario. The conference is Agile and Beyond. I brought a few speakers to talk about what they’re learning, what they’re doing, and how that intersects with what’s new in test and quality. We’ll start with Holly Bielawa, who is a product management consultant for Jeff Patton and Associates.

Holly Bielawa (00:31):
Yes, I head up the coaching and consulting with Jeff Patton and Associates. We work with leadership when they realize that product needs to be involved a lot more in quality and all of that. Jeff wrote the User Story Mapping book in 2014, and it's still pretty relevant to folks.

Matthew Heusser (00:53):
So, a long-time friend of the show; you've been on the show before. Welcome back… Michigan. You're a Michigan person.

Holly Bielawa (00:59):
Yeah, I am.

Matthew Heusser (00:59):
Most of us are. It's a regional conference, but Damian is here from Columbus, Ohio. Welcome, Damian.

Damian Synadinos (01:07):
Thank you. That’s me.

Matthew Heusser (01:09):
Tell us a little bit about Damian.

Damian Synadinos (01:11):
Damien is from Columbus, Ohio. Well, I did 25 years in software testing from the early nineties to about 2014 or so. In 2016 I started my company, Ineffable Solutions, to become a full-time public speaker at conferences and corporate trainings focused on people skills, human skills, and fundamental concepts. That’s the, uh, very short version.

Matthew Heusser (01:34):
And you were talking about human emotions and requirements and the placebo effect.

Damian Synadinos (01:39):
That's right. The talk I just gave was called "The Hidden Requirements: Exploring Emotions with Placebos". It's all about the emotional considerations of your users and customers: why they're so important, and why they should be considered when building software.

Matthew Heusser (01:54):
So I'll go to Jeff MacBane next, who's been a longtime friend of ours. A software tester for… decades now.

Jeff MacBane (02:01):
Almost. Yes, correct.

Matthew Heusser (02:02):
Based in Lansing, Michigan. Let me ask you, would every manager you ever worked with agree that the emotional state of your users is important?

Jeff MacBane (02:11):
No, I don’t think,

Matthew Heusser (02:12):
no?

Jeff MacBane (02:13):
I, I don’t think that many managers would even know what emotional requirements are. They’re more focused on what the stakeholders want and it’s not emotionally based.

Matthew Heusser (02:25):
I think I’m more fortunate than most then. Most of the people I’ve worked with would admit that emotions were important but then take no action ever of any kind to do anything about it.

Damian Synadinos (02:34):
I pointed that out I think in my talk. I said people give it lip service, but no action.

Matthew Heusser (02:38):
So what have you done to think about the emotions we want to evoke? How has that impacted the way we’ve developed the software? Crickets.

Damian Synadinos (02:47):
Exactly.

Matthew Heusser (02:48):
But at least most of 'em would admit it. And it's good that yours probably would admit that. Since they don't do anything, it must not be that important.

Jeff MacBane (02:55):
Right. Although there have been some things as of late that want to induce that happiness emotion after completing something. That's about the only time I've seen provoking an emotion come up.

Matthew Heusser (03:09):
And that's, I mean, can we talk about your job just a little? [Sure.] Because I know you work in consumer-facing credit. [Yep.] Finance. [Correct.] That would be like, when we say you got the credit card, we want them to be happy,

Jeff MacBane (03:22):
Right. Correct.

Matthew Heusser (03:23):
Well, that’s better than nothing.

Jeff MacBane (03:25):
But then, the actual requirement wasn’t making them happy. It’s, they see a confirmation message saying their credit card….

Matthew Heusser (03:33):
Does it, does it ever appear anywhere in the documentation?

Jeff MacBane (03:36):
That, I'm not sure of.

Matthew Heusser (03:37):
Oh, not so much.

Jeff MacBane (03:38):
I, I don't, unfortunately.

Matthew Heusser (03:39):
Room for improvement. Yes,

Jeff MacBane (03:40):
Definitely.

Holly Bielawa (03:42):
Well, I would talk to a lot of people who would say, “Oh, well that’s a UX thing”, and not think of it as anything other than that. Like a design problem rather than how do we measure customer happiness, customer satisfaction.

Jeff MacBane (03:57):
We typically look at NPS scores,

Matthew Heusser (04:01):
Net promoter,

Holly Bielawa (04:02):
Net promoter score. But it’s hard to map that back to requirements.

Jeff MacBane (04:05):
I think a lot of the reason why we don't talk about emotional scores, from what I've seen, is that we default to: it's UX, or design, or something like that, like you mentioned. So it's on them to actually look at it and try to figure that out.

Matthew Heusser (04:20):
And net promoter score is where we basically look at the balance of our wild likers against our wild haters, and we throw out the middle. What you could do with that is, maybe NPS scores keep going down, we gotta do something different; not much else, right? Is that a quality thing? I think that speaks to quality.

Jeff MacBane (04:39):
It could be, but your scores could be going down maybe because of performance issues, which would invoke emotional issues. But they would look at it as, okay, maybe our software’s taking a long time to load. They would look at performance and not really think about emotion. There would be some underlying thing that would be covering the emotion.

Matthew Heusser (04:59):
Yeah, I'm just speaking of NPS as a way to have this external quality metric, a measure of our customer happiness, that we can then use to drive our decisions.
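
For readers who want the arithmetic Matt is gesturing at, here's a minimal sketch of the standard NPS calculation, where 9–10 responses count as promoters, 0–6 as detractors, and the 7–8 middle is thrown out. The function and sample scores are just for illustration:

```typescript
// Minimal sketch of the standard NPS arithmetic: percentage of
// promoters (scores 9-10) minus percentage of detractors (scores 0-6),
// with the middle (7-8) excluded from the numerator.
function netPromoterScore(scores: number[]): number {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// Example: 3 promoters, 2 detractors, 1 passive out of 6 responses.
console.log(netPromoterScore([10, 9, 9, 3, 5, 7])); // 17
```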

Holly Bielawa (05:09):
I was just thinking about an experience I had when I booked a cruise on Carnival for my dad and me and my kids. I booked a cabana, and the customer service rep couldn't switch the cabana. I needed to switch it because there were stairs to get to the one that I booked.

Matthew Heusser (05:25):
Oh, you need something.

Holly Bielawa (05:26):
And he couldn't switch it. I was on the phone with him for two hours while he was trying to figure it out, Googling the same stuff. Now, being in the profession that I'm in, I was like, well, this is an awfully interesting customer experience I'm having right now. Okay, well, I can't do it. "So you just need to cancel the one that you…"

Matthew Heusser (05:47):
Make a new one

Holly Bielawa (05:48):
"And make a new one, and it'll take us seven to 10 days to refund the first one." Basically double paying for these things. And that's something, when you think about it, how are they understanding… They're not gathering the data. NPS? Okay, great. But how do you gather…

Matthew Heusser (06:06):
It would really be how you would… not just the score, but how you would process the comments. The comments are where the gold is.

Holly Bielawa (06:11):
Like if I would go make a comment online about it?

Matthew Heusser (06:15):
Yeah, like if they did an NPS survey and you'd score 'em a two, as in "I would not use them again"; that doesn't really help them. But if you put in a comment, "I was unable to do this or that," they could mine that data and look for commonalities, or look for extremely low scores, and they could take corrective action.

Holly Bielawa (06:32):
Yes. So that implies that there's been a requirement for the customer service rep… not a software requirement. The requirement is to make it so that the rep can put in that information, and then the rep has to be trained: every conversation you have,

Matthew Heusser (06:48):
Yeah.

Holly Bielawa (06:48):
you'll need to record what happened and rate the user's satisfaction for us.

Matthew Heusser (06:52):
You need a process to capture that, and how many times it happened.

Holly Bielawa (06:55):
That's kinda what we're talking about, right? If you're not thinking about that, you're not going to try to gather that data. You know, how do you get an NPS score? It's not gonna help.

Matthew Heusser (07:05):
But the other thing in your talk: you mentioned that when we do root-cause wants-and-needs analysis, things usually end up at an emotion.

Damian Synadinos (07:16):
That, or some philosophical unanswerable base kind of concept, but…

Matthew Heusser (07:21):
Right. Very often an emotion, I need to do this so I could be a good person. I need to do this so I can…

Damian Synadinos (07:26):
But often it comes up because I want to feel a certain way. I want to feel happy. I wanna feel safe. I want to feel secure. If you dig deep enough, it often bottoms out there as an emotion.

Holly Bielawa (07:36):
That is very interesting. I wish I’d seen your talk.

Damian Synadinos (07:39):
Thank you.

Holly Bielawa (07:40):
That's very, very…

Matthew Heusser (07:40):
The other thing that I missed that maybe you could talk about was placebos. Most of us are familiar with the sugar pill placebo. How’s that gonna help us in our delivery?

Damian Synadinos (07:50):
There are a few ways placebos are used, but the way I focus on them in my talk is around conditioning. Placebo comes from the same root as placate, which is to appease someone. A reasonable definition of a placebo is something that is primarily intended to affect a person's emotions without having any other function. Now, mysteriously, sometimes they do have other functions, but that's not the focus of this talk. It's something that is solely or primarily used to affect emotion. And the analog examples I give are door-close buttons or crosswalk buttons,

Matthew Heusser (08:24):
Oh, that aren't… don't do anything.

Damian Synadinos (08:26):
They don't do anything. But they give people a sense of control, and people want to have control over their life. I talk about how you don't wanna be the floating feather at the beginning of Forrest Gump. You wanna feel like you have control over your fate and your destiny. And pushing a door-close button or a cross-the-street button makes you feel like you're controlling your destiny to some degree. It's interesting, because many times those buttons are not hooked up to anything. They don't actually have a function. There are no wires behind the button. So what are they? They're placebos, analog placebos that are intended just to make people feel a certain way. And you can do that in software as well. A fake save button is an example I use, one of many, many examples. There was an online website called Sketch where users could draw, and after they did some drawing, they'd click the save button, draw some more, and click save again, constantly updating, always online.

(09:15):
Version two came out and they abandoned the save button because every single stroke…

Matthew Heusser (09:19):
Everything is saved.

Damian Synadinos (09:20):
was automatically saved, and users went crazy. They said, where's the save button? How do I know that my stuff is being saved? They lacked trust, they lacked confidence. And they were complaining that it was a bug. It bugged them! The makers of the product tried to explain that it's always saved. Didn't matter. People wanted their save button. So in version three, they put back a save button, but it wasn't hooked to anything; there were no wires behind it. The end result was a fake save button, and users were happy again because they felt they had control.
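
To make the mechanics concrete, here's a minimal sketch of that kind of placebo save button, assuming a browser app where autosave already persists every stroke. The element IDs and the 400ms delay are hypothetical, not taken from the actual product:

```typescript
// Sketch of a placebo save button: the app already autosaves every
// stroke, so the button persists nothing. It exists only to reassure.
const saveButton = document.getElementById("save")!;
const saveStatus = document.getElementById("save-status")!;

saveButton.addEventListener("click", () => {
  // No wires behind the button: no network call, no write, nothing.
  saveStatus.textContent = "Saving…";
  // A brief artificial delay makes the "work" feel real.
  setTimeout(() => {
    saveStatus.textContent = "All changes saved.";
  }, 400);
});
```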

Matthew Heusser (09:51):
Awesome. So I wonder…

Holly Bielawa (09:52):
I wonder if they do that in the airports. You know how the bathrooms will have the "smiley/frowny" buttons? Whether that actually collects data, or if it just creates that same effect on the people who work there.

Damian Synadinos (10:01):
It wouldn't surprise me. There are so many examples: fake wait times at theme parks when you're waiting for the roller coaster, there's a Coinstar example…

Matthew Heusser (10:10):
But wait… fake wait times?!

Damian Synadinos (10:11):
To make people feel like they’re moving faster than they actually are. When you’re waiting in line, it says from this point, it’s an hour and 45-minute wait.

Matthew Heusser (10:19):
How long is it, actually? They don't…

Damian Synadinos (10:20):
It might actually be two hours.

Matthew Heusser (10:21):
They don’t measure it. That’s just…

Damian Synadinos (10:22):
But if it says two hours, people feel miserable. If it's an hour and 45, an hour and 30? Okay, I have hope. And then if they get there after two hours, they figure, well, maybe it just took a little longer today.

Matthew Heusser (10:31):
It ran slow today.

Damian Synadinos (10:32):
But it affects their emotions. It's purposeful; I talk about this in the talk: purposely, intentionally deceiving users for their benefit. Progress bars are a wonderful technology example. It's often very difficult to estimate how long a computer process will take. But people want to know that something's happening. If not, if they show nothing on the screen…

Matthew Heusser (10:51):
They wanna see the screen go. Yeah.

Damian Synadinos (10:53):
So the next best thing… if there's nothing shown, people hate it; you gotta show 'em something's happening. So sometimes they'll have a spinner, which is better than nothing, but people don't like spinners because there's no sense of progress, just that something's happening. So next after that is progress bars. And there's been research and studies on what types of progress bars to use and how fast to make them move, even if they're not directly connected to the actual process. It makes people feel a certain way. So that's deception for the benefit of the user.
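
A minimal sketch of that kind of decoupled progress bar, purely illustrative: the bar creeps forward on a timer regardless of the real work, and only reports 100% when the task actually finishes. The names here (withFakeProgress, saveToServer) are hypothetical:

```typescript
// Sketch of a progress bar deliberately decoupled from the real task:
// it creeps toward 90% on a timer so the user always sees movement,
// then snaps to 100% only when the work genuinely completes.
async function withFakeProgress(
  task: Promise<void>,
  render: (percent: number) => void,
): Promise<void> {
  let percent = 0;
  const timer = setInterval(() => {
    percent = Math.min(percent + Math.random() * 5, 90); // never claims "done"
    render(Math.floor(percent));
  }, 200);
  try {
    await task; // the actual work, however long it takes
  } finally {
    clearInterval(timer);
  }
  render(100);
}

// Usage: withFakeProgress(saveToServer(), (p) => console.log(`${p}%`));
```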

Matthew Heusser (11:20):
I can see that. There was a time in testing history when we used to intentionally put bugs in the software and give it to test, because if they don't find the obvious, ridiculous stuff we're throwing in there, something's wrong; they're not looking at the full menus of the dropdowns. And we used to throw in Easter eggs for a while. I think that would be a good experiment to run, putting in a couple of placebos. Where can we put placebos in our software? That might be a good conversation to have.

Damian Synadinos (11:47):
I also talk about, if you ask, how placebos help us think about software with regard to bugs. If you have a functional requirement, the product should do this, and you test it and the product does not do that, it's pretty obvious that's a problem. It's a bug. But it gets more gray if it's an emotional requirement. If you say the product should induce a feeling of happiness,

Matthew Heusser (12:11):
That’s a tough one.

Damian Synadinos (12:11):
If it does induce happiness, then it's not a bug. If it induces sadness or anger, that's a bug. But what if it induces ecstasy? What if repeated use of this thing goes from happiness to an adequate feeling to anger?

Matthew Heusser (12:24):
How do we measure that though? I mean, do you put 10 people in front of an emotions wheel and say, use the software?

Damian Synadinos (12:29):
I talk about that. There's a whole section about how to measure emotions. Most of the ways we have available to us are self-reported; they're not objective. There are technologies coming out that attempt to measure emotions, such as measuring the dilation of someone's pupil. If they're excited, that gets wider; if they're nervous…

Matthew Heusser (12:46):
Microexpressions. There's some science on this, which is not terrible,

Damian Synadinos (12:51):
Correct. No, this is not terrible science. These are attempts to measure emotional responses. Galvanic skin response is how much electricity your skin will conduct; if you're sweaty, it will conduct more.

Matthew Heusser (13:01):
I say not terrible because it’s based on repeatable experiments and it’s fallible.

Damian Synadinos (13:05):
Correct.

Matthew Heusser (13:05):
So if you, if someone is a psychopath, they might not have the reaction.

Damian Synadinos (13:09):
Mm-hmm.

Matthew Heusser (13:09):
They might actually show contempt but not do the reaction.

Damian Synadinos (13:12):
Correct. And there are all sorts of brainwaves and voice tone and inflection; there are technologies that attempt to measure them. However, most of those technologies are not available to the average person. So then what do you do? The number one way to measure emotions: ask. Face-to-face, ask them. Interviews, discussions. Say, we wanted you to feel safe and secure at this point in our application, in our software, in the video game, whatever you're doing. Did you feel safe? Did you feel secure? It's self-reported, so you hope they tell the truth. So, asking and discussions. If you don't have direct contact, you can use surveys and questionnaires and polls. If you can't do that, you can use role-play. And if you can't do that, you can lean on the social sciences, which have different surveys and questionnaires that are purpose-built for measuring emotions. They're actually constructed to do that, so you don't have to come up with your own and research how to make a correct poll or questionnaire.

Matthew Heusser (14:01):
But if I wanted to do this tomorrow and I had a formal user acceptance test process, with internal customers or customer proxies… I mean, one product owner probably isn't gonna do it, but if there were a team: put 'em in front of the wheel, have them point at it, average it. That would be better than nothing.

Damian Synadinos (14:19):
Correct.
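
A quick sketch of what that bare-minimum measurement could look like. Averaging categorical picks doesn't quite work, so this sketch tallies them instead, counting which emotion each UAT participant points to on the wheel; the emotion names and sample picks are hypothetical:

```typescript
// Sketch of the bare-minimum emotion measurement Matt describes:
// tally which emotion each UAT participant points to on the wheel,
// then report the counts so the team can compare against the target.
type Emotion = "happy" | "frustrated" | "confused" | "bored" | "delighted";

function tallyEmotions(picks: Emotion[]): Map<Emotion, number> {
  const counts = new Map<Emotion, number>();
  for (const pick of picks) {
    counts.set(pick, (counts.get(pick) ?? 0) + 1);
  }
  return counts;
}

// Ten customer proxies try the feature; the target emotion was "happy".
const picks: Emotion[] = [
  "happy", "happy", "frustrated", "happy", "confused",
  "happy", "delighted", "happy", "frustrated", "happy",
];
console.log(tallyEmotions(picks));
// Map(4) { 'happy' => 6, 'frustrated' => 2, 'confused' => 1, 'delighted' => 1 }
```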

Holly Bielawa (14:20):
It reminds me of an episode of Silicon Valley with a focus group. "How did everybody feel?" Right? Remember? And it goes across: angry. Angry? Okay. Angry. All right. So that was maybe helpful for their product. They have a focus group or they…

Matthew Heusser (14:41):
It was a cell phone or something. Or a cell phone app. I remember the…

Holly Bielawa (14:44):
Yeah! Something like that. Yeah, it did not work out. It's just very interesting, because, back to what we were saying before: businesses will assume that this is a UX thing and not a matter of quality, or quality of product, like how it makes somebody feel. It's strategic to organizations.

Damian Synadinos (15:06):
During the support part of the claim… my claim in the talk is that the way we feel about software is important and therefore should be considered. I spent a significant amount of time trying to support that claim through various means. One of those is the research of people smarter and wiser than me, who have done far deeper research into the relationship between emotions and software development. Many or all of them have found that emotional considerations are super important, if not the most important consideration when making software, above all other requirements, and that they directly influence whether your customers embrace or reject your product.

Matthew Heusser (15:41):
Well, you think about Google and Yahoo; I don't think it's too much of a stretch. There was a time when those companies were at war, and Google had this incredibly clean interface: a blank page with a text field and a button. Type it in, click submit, and you get your results. But Yahoo was all over the place. It was so busy that you'd feel overwhelmed using it. I would submit that might have led to the decline of Yahoo and the success of Google. Fundamentally, that was the feature monster: one more feature, one more button we're gonna add, and eventually you end up with a whole that is less than the sum of its parts. Holly, you had a talk this week, right?

Holly Bielawa (16:24):
We did, yeah. The talk that we did this week was about product coaching and why it's difficult. This is an interesting conversation, because product coaching is difficult because of the lack of consideration of, you know, what are we really trying to do, and of being very intentional about these types of things that we're talking about. And the organizational styles do not help us in this way, because part of the organization, like UX, may be thinking about this, but then there's the collaboration, the quality of the collaboration, to have everybody understand what we're trying to achieve, whether it be an emotion, or money, or efficiency. I even had somebody ask me today, "We have an internal product; do users really matter? Should we really be using user stories?" And I thought, oh gee. So there's a real lack of understanding across organizational silos of the relative importance of all of this type of thinking.

Matthew Heusser (17:22):
Steve Jobs said that's why enterprise products are not so great: you don't sell to the people who are going to use them. Jobs said that enterprise products are sold to the CIO, who does it by a checklist. "It's got a bigger checklist than that product; we should use it." But the people who have to live with the consequences of that decision don't get a vote. I've worked with many companies where every user story is "as a user," "as a call center rep, I want to…", hundreds of stories like that. And then maybe, if you're lucky, there are 10 or 20 that are "as an admin panel user, I want to…". Maybe in those cases they could whack the first 40 characters and just talk about the feature. Maybe that'd be okay.

Holly Bielawa (18:05):
Yeah. I haven't actually really seen a good user story in an actual backlog. I think right now we're going straight from: we named the project, we funded the project, go make that happen.

Matthew Heusser (18:18):
Name a feature. Name a feature. Name a feature.

Holly Bielawa (18:19):
Yeah. Or, I mean, it's just, project X, Y, Z got approved, so go do project X, Y, Z, and you look in Jira and it's a to-do list

Matthew Heusser (18:29):
Really?

Holly Bielawa (18:29):
of things. Well, because most of the time now, you know, you're not just dealing with a new product; you're dealing with, we're trying to improve our product. Normally when you're in a startup, when we talk about a new product, you would do a bunch of research, right? You've got limited runway. Well, now you've got a bunch of legacy code and you have an existing product. So, knowing your current state: if you have a product that's been around a while and you never thought about these things before, and now you wanna think about them and maybe implement some metrics or something like that, that gets really difficult. The code is not set up to do that. People aren't set up to do it, they don't know what their products are, and they don't know what their current state is. And so then it's really hard to say, okay, how do we get to a good future state?

Matthew Heusser (19:15):
I'd love to hear from the audience… we don't do this enough. I'd love to hear from the audience: you don't have to tell me the name of your company, I don't want any details, and don't violate any NDAs, but what's the quality of your backlog? Do you get things that actually make sense, where you can go look at the software and what it's supposed to do and say, "I think it actually does what it's supposed to do"? Are those sliced to reasonably similar sizes? Or is it just sort of janky: figure out if it works, have an argument with the dev about what it should do, take that to the product owner and argue with that person until maybe you convince them, and then you write a bug, and then you wait three weeks, and then someone has to go fix the bug and that person doesn't think it's a bug, and then you have to explain that it's a bug, and then explain how to reproduce it, and then they reproduce it, and then you get a fix a week after that.

Damian Synadinos (20:03):
Sounds like you’ve experienced this.

Matthew Heusser (20:04):
So that would be sort of not the best case for the companies that we've worked with, but it's not unheard of.

Holly Bielawa (20:12):
I've worked with companies where QA does not know which team created the code that created the bug. They find a bug and they guess: who even created the code? You know, back when we were doing this, you knew who created the code and you had the right environments. Sometimes companies don't even have that. It was a really weird experience for me: a bug showed up in a backlog for a team that I was working with, and they're like, "Oh yeah, this isn't us." And they had to go find out who would actually fix this bug. So when you're dealing with table-stakes issues, some of those types of issues in companies are big distractions, because they're having so many problems, writing so much code that maybe makes people mad, that maybe doesn't get used; but they're still having to worry about the bugs and fix them, when maybe we shouldn't have written the code in the first place.

Matthew Heusser (21:08):
I think, and I'm not quite sure, I don't wanna speak for you, Holly, but it sounds like when you say table stakes: if we've got that level of dysfunction, just talking about the emotional feeling of the cust… I guess you could do both at the same time. Maybe one would lead to improving the other.

Holly Bielawa (21:22):
That's why we do what we do in product land. The whole thing we're trying to do is make sure we're building the right things before we build them right. Yeah, maybe. But we wanna make sure that if you do have these problems implementing anything, at least we're not asking for the wrong things to be built, and we're collaborating to make sure that we're building only what needs to be built to get the emotion, or to get the value, that we want.

Matthew Heusser (21:50):
That project focus, my experience has always been: yeah, yeah, yeah, we got these problems, we'll fix 'em when we get this project done, and there's always another project. Whereas the product focus at least has a hope of being more holistic, looking at the long-term viability of the product for the customer.

Jeff MacBane (22:05):
Do you think that thin slicing, trying to go for that MVP, would actually help in something like that, though?

Matthew Heusser (22:13):
For a new product.

Jeff MacBane (22:14):
Well, even for legacy products. If you’re trying to enhance it or… Well, if you’re fixing a defect, that’s kind of cut and dry. But even for legacy products,

Matthew Heusser (22:23):
Generally speaking, trying to make the distribution of our story sizes a relatively thin bell curve improves flow. It's kind of orthogonal, perpendicular; we can do both at the same time. But yeah, sure. And then if the work comes in in these little slices, that can lead to faster flow across the board, which would lead to more… we know who did the work.

Jeff MacBane (22:47):
Well, it would also provide more feedback. So you’re not developing a huge thing or huge enhancement on a legacy product, then you’re able to get that feedback.

Matthew Heusser (22:55):
I don't see a ton of the giant projects anymore, outside of, like, telecom, or people building physical systems. I don't know about you all.

Holly Bielawa (23:02):
Oh yeah.

Matthew Heusser (23:02):
Yeah?

Holly Bielawa (23:03):
Oh yeah. We're working with a client right now that has three-year projects. Yep. They're software projects, still. So it still…

Matthew Heusser (23:13):
Is it automotive or…

Holly Bielawa (23:14):
No, it's manufacturing; a legacy Japanese manufacturing company. So they know Lean and continuous improvement. But in product management, a lot of product managers don't know software at all. They don't know what software delivery is. So when you talk about trying to slice things… we talk to people who were subject matter experts in finance, maybe, or past clients, or somebody who was a salesperson who wanted to get off the road, so they became a product manager so they could get off the road. The basic understanding of the things we're talking about… a lot of product managers have never come across real software processes. They don't necessarily know what QA is. They're just like, can we make this stuff happen, and how fast? And they'll argue to leadership that things should be built, and then they get okayed, and then it's just "Go!" Maybe somebody does some research, or not, or they're good at making the argument and they have a nice PowerPoint presentation. A lot of the speakers here are beyond that, 'cause they've been doing it for a while. But those are real issues, especially when you wanna have really good user experiences and have people come out being happy, or have some emotion related to you. 'Cuz it speaks to retention as well.

Matthew Heusser (24:35):
To some extent. I would think that a three-year project has gotta have some waterfall components to it: ideation and design planning, and then eventually you drop down into prototyping. But tell me that they at least are building it in phases. Like, we're not talking about requirements for nine months, nine months of coding, nine months of testing, nine months of operationalization. Right?

Holly Bielawa (24:58):
We shouldn't be. We should be talking about… you're familiar with the cone of uncertainty?

Matthew Heusser (25:04):
Yeah.

Holly Bielawa (25:04):
Or the innovation funnel. Having big pictures. And I know we had the planning onion; every once in a while it resurfaces and somebody's like, oh my God, this is amazing. Breaking those bigger things into smaller things. And as you're making those bigger things into smaller things that are actually executable, you figure out the things that you should not do along the way. And that's continuous discovery. We should be doing continuous discovery to figure out, all the way down, what are we actually building. So some of the stuff that we talk about is that discovery, which you would need to do to figure out what emotions people are having. Who does that work in organizations? That role has sort of been gutted. So you may have some UX researchers who come in and dabble a little bit in it, or throw on a little discovery sauce.

Matthew Heusser (25:54):
If you’re lucky you have like a BA or something who has just got some skills and experience.

Holly Bielawa (25:58):
Yeah. And that's really if you're lucky, because a lot of them became Scrum masters, and that's not their job anymore. Or they become team-level product owners, and so they're scrambling in the backlog.

Matthew Heusser (26:08):
This is making me sad.

Holly Bielawa (26:10):
Sorry.

Matthew Heusser (26:11):
What does, what is a tester to do?

Holly Bielawa (26:13):
Yeah.

Matthew Heusser (26:14):
How can we help?

Holly Bielawa (26:15):
Yeah. You know, I spoke at that QA conference and I'm like, oh, there's no light in anybody's eyes over here. Because it is very difficult in QA; nobody's really thinking of quality at the front end like they should be, in a lot of different lenses of QA. And one of the solutions is things like product operations: knowing your product landscape and being organized around those things, which is not how it has been. Now we've got more Scrum teams that are functionally siloed. It's harder to get everybody cross-functional, working on the same product. A lot of dependencies built in.

Matthew Heusser (26:54):
Scrum teams that are functionally siloed? Did I hear that right?

Holly Bielawa (26:58):
Yes. Yes, Matt. Functionally siloed. A lot of companies adopted Scrum where they're like, oh, you're an iOS team? You're about seven to 10 people? You're a team. Oh, you're a backend team? Okay, well, we'll have two Scrum teams over there. And so to get anything of value delivered to a customer, it could be eight teams, or ten teams. I think that really was not very helpful. And then the business product managers just got left out of that whole journey. And now companies are going, why are we having such problems?

Matthew Heusser (27:39):
I guess I see that that's true. I see front-end teams and API teams, front-end teams and back-end teams, and so they have to coordinate.

Holly Bielawa (27:46):
Yeah. We used to say, oh, project managers, program managers, they don't really understand Agile, they don't really understand how to do this thing. You have to have them now, 'cuz you gotta have somebody who actually holds all that mess together. That's a hard thing, to hold ten teams trying to work on one thing. And most companies don't even know that's what they're doing. And all of them have task-level backlogs, and there are no user stories anymore. Right?

Jeff MacBane (28:14):
Well, like, like you were saying, Matt, with the front end and back end: there may be different developers working on it, but at a lot of companies, that's the same tester having to test both the front end and the back end.

Matthew Heusser (28:24):
Right. So, yeah, yeah, yeah. And the test team, right? Yeah. You got the front end, the back end, and the test team, and we're doing "Scrum," in air quotes.

Holly Bielawa (28:31):
Offshore. Yes. Or some such thing.

Matthew Heusser (28:34):
That's pretty common now. Yeah, yeah, yeah. I mean, if you're lucky, the offshore folks work pretty close to USA hours, which is not great for them.

Holly Bielawa (28:42):
So yeah, there are problems, which is why companies need us to come in and help them figure out how to solve some of these.

Matthew Heusser (28:50):
Do you break up the organization? Do you restructure or do you make it work?

Holly Bielawa (28:54):
I wouldn't say I do it, because I just come in and try to help. But I would say I have worked with companies who are making their head of engineering the chief product officer; it's a trend now.

Matthew Heusser (29:07):
Oh okay.

Holly Bielawa (29:08):
I have people say, oh my god, no, that's so bad; they'll just "solution." But I'm saying, a lot of people who have managed engineering and had to collaborate with their peers and leadership understand these issues a lot better than somebody who used to head sales. That's a positive move in the industry. And it's change management, so you're talking about people's emotions, right? People leave when reorgs happen, a lot of times, when they're scared and it's not communicated correctly. So that's another lens on emotion: what people feel when they find out they're gonna be reorged, and who's their new boss, and what it's gonna mean for them. So we've done a lot of change management thinking and work now, 'cause it takes that. How are you going to help leadership communicate, and be confident that they're gonna make a change that will help with these things?

Matthew Heusser (30:03):
Thanks. Before we go, I wanted to get to Jeff for a minute. Now, when we started this conference, I was asking people what's new and exciting in test and quality, and then, if they had an answer, setting up the podcast. And you said something that really surprised me; you were like, "actually doing testing well." How would you put it?

Jeff MacBane (30:25):
Right. One of the trends I've seen is that the basics of testing have been forgotten. The focus has been more on tools, or on trying to implement a huge change. But testers can't really test an input box. An example that I'm gonna use, from one of the talks I gave years ago: how would you test a car stereo? People use a car stereo, but how would you test it?

Matthew Heusser (30:50):
So, to try to put that in my language, today's testing would be like: "Write a tool that turns the stereo on to a preset station, record to make sure that I can hear some sound, and turn it off. And if you're really lucky, record to make sure there's no sound after I turn it off. Done." And we call it test automation and move on to the next thing. Any of us that have been around for more than a week or two know there's so much more to it.

Damian Synadinos (31:19):
I was asked the same question a week or so ago, about what's exciting to me, and my answer was very similar to yours, Jeff. The way I stated it was: there's something that's exciting and something that's wildly depressing to me at the same time. Two different things. The exciting thing is that software engineering is relatively young compared to civil or mechanical or other forms of engineering. So it's a younger field, and within that, testing is a somewhat recent addition. Someone figured out that, hey, the people who make this thing have blind spots, and maybe we should separate out the job of checking the thing, to make sure that the thing we wanted to do and the thing we did are in line and congruent. That's how software testing was kind of born. And then it developed through the years with different methodologies and approaches and techniques.

(32:05):
And then it kind of became, in my opinion, taken for granted. Like, everybody tests; obviously you have a QA department, whether it's siloed or an umbrella across teams, whatever. And then more recently it started to become less important. It started to be more "let the users test it," or "let's just cut the testing completely and let our developers test it." And the whole idea is that two heads are better than one. The only reason that phrase is true is if the second head is sufficiently different from the first head. If the second head is identical to the first head, then it's effectively one head, and the whole benefit and advantage is gone. So the benefit…

Matthew Heusser (32:39):
Wait, but, but you hit your deadlines. Yeah, you laugh.

Damian Synadinos (32:46):
So I think it's sad that testing has become relegated; not just commonplace and taken for granted, but in recent times actually pushed back on, to where people don't see the value in it. But what's exciting to me is that in even more recent times, I see people pushing back, saying, "You know what? Testing is important." Some of that's coming from the testing and QA community, but there are also companies saying, "You know what? Bad things are happening when we don't test properly, when we don't test adequately." So that's encouraging to me, that testing is starting to be seen as an important aspect of making software. The depressing part is that even though testing is once again being seen as important, the testers are, as you said, frequently ill-prepared to do their job. They're relying too much on tools and tech, and they're forgetting the fundamental concepts: relativism, epistemology, causality, linguistics, semantics, all these really fundamental concepts that would help them do their job.

Matthew Heusser (33:46):
Potato, po-tah-to.

Damian Synadinos (33:48):
Heuristics, algorithms, all these types of things. And you get a bunch of shallow folks who are toolsmiths but don't really know how to test. So that's depressing to me: although testing once again seems to be seen as more important and focused on, the people doing it aren't as good as they used to be, or should be. That's my opinion.

Matthew Heusser (34:06):
Yeah, thanks, Damian. I'd like to tell a quick funny story; well, a historical note that might be interesting. QA versus tester: big silly arguments that have happened over and over and over again, and that I want to explore some other time. But where that term came from… one of my old bosses, Ken Pier, worked at Xerox PARC, and he was a coder on the Dorado, the second personal computer that Xerox made. He said that if you were a tester, it meant that you had, like, dirt and soot on your fingers, and you would physically take computer equipment, put it on a bench, and work it. And that was a blue-collar job, and it wasn't salaried, it was hourly. So they didn't want to call people testers. The term quality assurance? They didn't even know what it was.

(34:55):
It was some kind of vague term they knew people used in manufacturing. "We want to get people that are more educated and have a more thoughtful job, and pay them more. But Xerox won't let us, because they've got this tester job description. So we're going to invent quality assurance." It didn't have anything to do with the idea that the testers were gonna magically prevent quality problems. There was no semantic anything. It was just a vaguely similar-sounding title so they could pay people more. And the connection to the test architect is an exercise for the reader. But, final thoughts?

Jeff MacBane (35:30):
I'm really interested in trying to learn more about emotional requirements. I figure even if it's not something that the project managers or BAs can take on, it's yet another thing I can throw into the many different things I'll be taking a look at while testing.

Holly Bielawa (35:48):
From my perspective, quality is sort of making sure things work the way they're intended to work. The only way to do that is really to collaborate, from beginning to end, on what it is we're intending to do, so we know how to test to make sure that it does work.

Matthew Heusser (36:05):
I have been in the room where someone has said, "It checks all the boxes," and the tester says, "It checks all the boxes," and someone from outside, who we didn't know was the stakeholder, comes in and says, "But it doesn't let me do the reason that I funded it." And I would say, you don't ever want to be in that room.

Holly Bielawa (36:23):
No.

Damian Synadinos (36:28):
I think everyone tests since birth. Children are some of the greatest testers and little scientists, you can give a child a toy that they never previously encountered and they’ll twist it and poke it and prod it and turn it and make observations about the things that they’re doing to it. And those observations will lead to discoveries. And sometimes they discover that this poke, if you do it twice, something different happens. So those are all little experiments, little tests. And if you’re doing it as a child, you can do it as an adult. Additionally, I say things to my kids and they challenge my ideas or my questions. I’ll say something and they say, “Well, is that true?” And they start questioning. Those are forms of tests. And if you can test a toy, a physical object, and you can test an idea, an abstract concept, you can certainly test a process. Testing a process to me is what some people call QA.

Matthew Heusser (37:17):
Okay, thanks, everybody. Appreciate it. Talk to you soon.

Holly Bielawa (37:19):
Thanks, Matt.

Jeff MacBane (37:20):
Bye.

Damian Synadinos (37:20):
Thank you.

Michael Larsen (OUTRO):
That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts. Those ratings and reviews help raise the visibility of the show and let more people find us. Also, we want to invite you to come join us on The Testing Show Slack channel as a way to communicate about the show, talk to us about what you like and what you'd like to hear, and help us shape future shows. Please email us at [email protected] and we will send you an invite to join the group. The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen. Additionally, if you have questions you'd like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at [email protected].
