The Testing Show: The ROI on Testing

July 12, 2023

Panelists

Matthew Heusser
Michael Larsen
Curtis Stuehrenberg
Nabhanshu Bambi
Transcript

Michael Larsen (INTRO):

Hello, and welcome to The Testing Show.

Episode 137.

The ROI on Testing.

This episode was recorded on Tuesday, May 2nd, 2023.

In this episode, Matthew Heusser and Michael Larsen invite Nabhanshu Bambi and Curtis Stuehrenberg to join them and discuss the true costs and the return on investment that testing can provide, both in the short and long term, for projects and teams.

And with that, on with the show.

Matthew Heusser (00:00):
So welcome back to The Testing Show. This time we want to talk about a topic… it’s a little different. Sometimes our topics are soft, they’re about people; sometimes our topics are hard, and by that I mean they can have a defined outcome, a correct answer. This is kind of both, because we’re talking about the ROI, the return on investment, of software testing. To do that, we have two experts in the field. First, I think it’s his first time on this show, but he actually contributed a chapter to “The Cost of Testing” book 10 years ago. Curtis Stuehrenberg, welcome to the show, Curtis.

Curtis Stuehrenberg (00:39):
Thank you. And ouch, hearing that that was 10 years ago.

Matthew Heusser (00:42):
You know, I didn’t mean that. I mean, like, you’ve stuck around. It’s a good thing. I understand.

Curtis Stuehrenberg (00:47):
True, true.

Matthew Heusser (00:48):
We have an industry where it’s like, ooh, oh my gosh… there was a joke on Silicon Valley once: “You’re how old? 24? 26? Ooh.” Like, no, we’ve got experience. We’ve been seasoned, and some of us have kept up with the times. I realize some people don’t.

Curtis Stuehrenberg (01:03):
Yes.

Matthew Heusser (01:04):
You know, in that time you’ve done a lot of impressive things. You’re a senior program manager at Facebook, now Meta.

Curtis Stuehrenberg (01:09):
Mm-hmm.

Matthew Heusser (01:10):
You worked for Deloitte Digital, you worked for Applause doing some of the crowdsourced testing work, and you’ve done a lot of software development going back. And it was actually your idea: on LinkedIn you posted a topic on the ROI of testing that started the ball rolling.

Curtis Stuehrenberg (01:26):
Yeah, I mean, I don’t wanna take up too much time, but that’s a subject I’ve been sort of struggling with for a few years now because, as you and I think Michael know, for a while I had my own company running, but it was mostly like a skunkworks project where I was trying to consolidate all of my theoretical ideas and put them together into something that was more practical. And then I joined Deloitte in a quasi-consulting role. And one of the things I noticed is that when I began sitting in the room with your CTOs and your CFOs and your CEOs to sort of pitch them on this idea of testing and exploratory testing and automation and all of these things, none of them really cared about improving the product, the quality, things like that. What they wanted to hear was numbers.

(02:13):
They said, “Okay, if I write you a check for 1.5 million, what can I see back from that?” And I realized that I had no answer for that. So that’s when I began trying to put some of those numbers together to say, “All right, how can I answer that question?” and not just pull out the standard response of, “Well, customer experience and retention and things”. They wanted some sort of at least vague numbers to say, “If we do this type of automation, what can we expect to see from that? Or what are some of the things we’re gonna look at? If we’re gonna do exploratory testing, what’s that gonna look like?” That’s what began this process for me, and that’s why I’ve been kind of on this journey ever since.

Matthew Heusser (02:51):
Yeah. And that pairs well with our other guest, Nabhanshu Bambi. Can I call you Nabs? Would that be all right?

Nabhanshu Bambi (03:00):
Yes, absolutely. Nabs is totally fine.

Matthew Heusser (03:02):
Okay. The reason I think that pairs well is because not only are you Qualitest’s head of transformation, but you also have a deep background in analytics and strategy at a couple of different companies: Dun & Bradstreet, Capgemini. It’s interesting in our field that people do ask for numbers. They do say, “How can you make that relevant to me?” And I think you’ve got the experience putting those numbers in context in several different places. I look forward to hearing your contributions on this. If it’s all right, I’ll ask Nabs the first question, which is: is testing an investment, and how do you calculate return?

Nabhanshu Bambi (03:43):
Thanks, Matt. I totally agree with Curtis. When you go to a client, they basically ask about that: “Well, tell me, if I’m putting a million dollars over here, or half a million dollars over here, as an investment to bring up X, Y, Z tools or automate a certain aspect of a customer journey, and so on and so forth, what do I get?” I have always been a numbers guy. Primarily I go with just numbers, coming from my background where I started my career in the BI domain, then moved into analytics, and from the analytics side right now I am heading the advisory for Qualitest. When we go into a client, generally the client asks us the same question: “I understand these five things or two things are broken, and you are going to fix them. As a consultant, what is the ROI that I’m going to get?”

(04:40):
What is the value that I will get? Apart from, yes, the customer experience, as Curtis said, those are all definitely things that are non-tangible, but what is the value? What can I tell my management, my leadership: this half a million dollars that I’m asking you to invest over here, how much can we get back? Maybe in six months, a year, two years, or five years. Coming from a numbers and analytical background, I always go with giving hard numbers: what is the investment going to be, and what are the cost savings going to be in a certain span of time? Whenever we are venturing in with a new client and we are giving our opinions and our findings on what things are broken and what can be fixed, and we are creating roadmaps for them, it is always intentional that we are talking just in numbers, because that’s primarily what the clients can take to their leadership and talk about. Numbers are the exact language that people understand, and it’s easier to get the dollars into play once you start talking about that.

(05:50):
We will be increasing operational efficiency by close to about 35%, or we will be able to cut your cost by about 27%, or we’ll probably be able to bring back your investment dollars, let’s say half a million dollars or so, in the span of a year. So yes, absolutely, to your question about “Is testing an investment?” You might want to call it insurance, or you might even want to call it an experience that you are providing to your customer. I’ll give a small example. When we go to buy a car, there are certain segments of cars, the Mercedes-Benzes or the BMWs, the luxury brands. Of course they have plush seats and all those kinds of things, but the reason most consumers actually want to buy these products is because the first thing they say is that the quality is good, it’s dependable. That’s primarily the return on the investment that the brand or the company is making in quality assurance: making sure that the customer experience is not just grade A on day one, but stays grade A as the years go by. The same goes for software products too.

Matthew Heusser (07:13):
Well, thanks. Curtis, what did you think of that answer?

Curtis Stuehrenberg (07:16):
There are a couple of things I completely agree with, Nabhanshu. One of the things I really agree with you on is that everybody has a leader. Even if you’re talking to, say, a founder or a CEO, they still have people they report to. If it’s a publicly traded company, there’s going to be a board of directors, there are going to be the stockholders. In many cases, as we’re seeing now, there are going to be completely independent stock analysts who are going to react to whatever you put in your financial statements, and they will comb through it looking for all of that stuff. Typically, testing or quality things are put into one of two buckets: either R&D or operational costs. They’re going to comb that over to say, “Okay, what’s this? How are they investing in R&D?

(08:00):
How are they investing in their operational maintenance and flow-through, and how is that impacting what’s going on?” And if they see what they think is a giant hole, that’s going to impact their understanding of the company. My next point is, I think sometimes we focus too much on testing. Testing is something we do, so it’s an activity. I kind of liken it to a scientist running experiments in a lab. For me, tests are those lab experiments that are meant to collect data, which we can then use to analyze a hypothesis, which we’re using to try to learn about a theory. And the theory, for me, goes into understanding the risk that is important to the client. A lot of times there’s an assumption that what we’re doing when we say we’re testing is looking at product risk, and sometimes the financial risk from that product, but there are all kinds of risks that customers can be concerned about, which we can analyze and then create some experiments or tests around to better understand what the current risk exposure is, so the CEOs and others can make informed decisions.

(09:11):
I think if we change our perspective just a little bit, we can start to see we’re not just looking at operational metrics to say, “We caught certain issues or certain bugs before we released them, and so we’re reducing the cost to produce things.” We can also look to say, “Where can we start investigating to understand where we can push forward, so they can produce more products and react better to their customers?” Generally, profit is based upon two things: revenue versus cost. As quality assurance people, I think we can have an impact on both of those, whereas a lot of times we just focus on cost, but we can also impact revenue. Automation, I think, is a great tool for impacting revenue in multiple ways; we can talk about that later. I’m really passionate about this topic and I could go on and on and talk forever about it, so I’m going to say that’s my view on this. I don’t disagree with you, Nabhanshu, or Nabs. I’m just expanding on what you said, and I remembered it was something I wanted to talk about in this podcast.

Michael Larsen (10:16):
So here’s something that I often come back to. There are a couple of quotes that I remember intimately whenever I think about what software testing is and why we do it. One of them… actually, it was amusing because it was directed at the company I was working for at the time, Cisco Systems. It was something Bill Gates said about something John Chambers had said when talking about the nature of Cisco and where it was and how it all fit together… mind you, again, in the 90s. One of the comments Bill Gates made was, “Routing is a lot like the plumbing in your house. When it’s working, you don’t care about it, you don’t think about it, it’s not present. The only time you care about the plumbing in your house is when there’s a problem. If you spring a leak or there’s a blockage, your plumbing becomes very important and you need to make sure that it’s working. But once you get it working again, you go back to not really thinking about it.” In a way, you’re looking at software testing, to focus on what Nabs said, as an insurance policy, and, to focus on something Curtis said here, you don’t necessarily think about how much money you’re saving from testing, but you do think about how much money you’re losing when a problem comes to light. Again, testing is a cost until that cost is highlighted by a bigger cost of lost opportunity.

Curtis Stuehrenberg (11:51):
I wanna just agree with you, and sort of push it a little bit further. Everything that a company does to produce a product is a cost. Hiring developers, like Michael was hinting at, building the infrastructure, doing all of that stuff: it’s all a cost. Everyone in business understands that. The thing is, it’s all about what we’re talking about, the return on investment. If somebody says, “If I can spend $500,000 and make a million, I will spend that $500,000 as much as I can,” I’m perfectly fine with that. And I think when I look at people talking about testing, they seem to say it’s a cost and then they sort of assume, “Well, no one’s gonna wanna do it because it’s a cost.” I think there’s a philosophical switch that needs to be flipped. Everything is a cost; the question is what value we are producing for that cost. I just wanted to state that specifically, because I think Michael and you were both sort of hinting at it, and to a certain extent Nabs was hinting at it, but I wanted to say it outright. Everything is a cost, but if you can say you will make more than it costs, then people are perfectly happy to spend as much money as you want on that.

Nabhanshu Bambi (12:55):
I totally agree, Curtis. I’m totally on board with that, and I always go back to one of the quotes we lean on when we are trying to build a product, or whenever we are trying to build a model that can help us expedite the processes we have. I think it was Tom DeMarco who had a quote that went something like, “Quality is free, but only to people who are willing to pay heavily for it.” We did an exercise recently with one of our clients, and what we did was pretty much slice and dice the data they had: the number of defects, test runs, and so on and so forth. After massaging all the data for the past six or eight months or so, we found out what percentage of defects they were actually having in production.

Curtis Stuehrenberg (13:45):
Mm-hmm

Nabhanshu Bambi (13:46):
We understood what those defects would cost, and we came up with a formula; we’ve been using that kind of formula, with a percentage of upside and downside, and it pretty much applies to all the other clients too. What we found was that the cost of a defect in production is close to about $12,000, but the cost of the same defect, if you actually discover it in testing, is close to $145. It can definitely vary here and there depending upon the size of the company and the kind of customers they’re interacting with. What I’m trying to say is that the cost of catching a defect in the lower environments, in the testing phase, is much lower. That is primarily the reason why an investment in testing is one of the best investments a company can make. There’s value in the return not just from a tangible perspective but from an intangible one, where the brand carries the weight of having a quality product being churned out every time it hits the market.

(14:56):
Of course, as Curtis said, there are a lot of levers that can be brought to bear, like automation, or CI/CD, or introducing AI/ML, which we’ve started using heavily with our clients to improve their operational efficiency and reduce their cost by catching defects early on. For example, we have a product that we constantly leverage which looks at the requirements as the analyst or the product owner is writing them. It gives feedback to the product owner right at that moment, in less than half a second, and that trickles down into creating a better requirement, creating a better test case, and finally a better product, reducing the cost far below what it would have been if the requirement hadn’t been validated right at the start. So investing at that point, the starting phase of creating the product, and making sure you test a lot there, that’s what everybody talks about as shift left, and from my perspective we’ve seen a lot of improvements and cost savings with our clients across the spectrum.
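To make those figures concrete, here is a minimal back-of-the-envelope sketch of the kind of calculation Nabs describes. The per-defect costs ($12,000 in production, $145 in testing) are the figures quoted above; the annual defect count and the fraction that better testing could catch earlier are hypothetical illustration values, not data from the episode.

```python
# A rough sketch of the "cost of a defect by phase" math Nabs describes.
# The per-defect costs come from the episode; the defect count and the
# shift-left fraction below are made-up illustration values.

COST_PER_DEFECT = {
    "production": 12_000,  # figure quoted in the episode
    "testing": 145,        # figure quoted in the episode
}


def escaped_defect_cost(prod_defects_per_year: int) -> float:
    """Annual cost of defects that escape all the way to production."""
    return prod_defects_per_year * COST_PER_DEFECT["production"]


def shift_left_savings(prod_defects_per_year: int, fraction_caught_earlier: float) -> float:
    """Savings if some fraction of those escapes were instead caught in testing."""
    moved = prod_defects_per_year * fraction_caught_earlier
    return moved * (COST_PER_DEFECT["production"] - COST_PER_DEFECT["testing"])


if __name__ == "__main__":
    # Hypothetical team: 40 production escapes a year, half of which earlier
    # testing could plausibly have caught.
    print(f"Cost of escapes today: ${escaped_defect_cost(40):,.0f}")      # $480,000
    print(f"Estimated savings:     ${shift_left_savings(40, 0.5):,.0f}")  # $237,100
```

Even with deliberately modest inputs, the gap between the two per-defect costs dominates the result, which is the point both guests are making about catching defects early.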

Curtis Stuehrenberg (16:15):
I completely agree with you, Nabs, but I think we could have a whole podcast on that topic alone. I’ve got some strong thoughts, from both research and practice, on what’s going on with that observed behavior. I don’t disagree with the observed behavior, but I think there are some deeper things there. What I’ve noticed, through the research I’ve done, is that if you look at things like time to resolve, and at priority movement, such as things getting deprioritized, where you see a lot of this impact is not necessarily on the high-level bugs or the showstoppers. It’s when you start moving down into bugs that get downplayed, or that start out as a medium or maybe even a sort-of-high bug. What I’ve seen is that it’s more about how something is detected and how long it takes to understand the root cause.

(17:11):
If a bug gets filed from, say, a user, or even a tester punching away at buttons who says, “Oh, I found this,” that’s gonna take some effort to understand what was really going on to cause the bug. The worst possible scenario is when a user reports the bug, and there are a few other factors there, like you now have to engage the customer service or maintenance teams to sort it out, and that’s gonna increase your costs. But I think the reason these bugs are so difficult is that it takes people forever to understand what’s going on, because they’re in a production environment and the user doesn’t have any logs they can point you to. I’ve noticed that when testers do a bunch of their own work and file the debug logs and whatever else, then the cost goes down, even if they catch it late in testing, because the developers can then say, “Oh, I know exactly what that is,” and just fix it.

Matthew Heusser (18:01):
Right? So, and…

Curtis Stuehrenberg (18:02):
Roll it out. But again, going back, Nabs: when you look at something, if you catch the bug in the design phase, it’s easy to fix, because people know exactly where it is and they don’t really have to do anything to fix it. If you catch it with a unit test, people can say, “Oh yep! I know right where that is. I can fix it.” Unfortunately, most bugs don’t get caught in unit testing, but the easier it is to understand what the root cause is and fix it, the lower the cost of the bug is going to be, no matter when you catch it.

Nabhanshu Bambi (18:28):
I totally agree with you, Curtis. There is absolutely no doubt about it, and I can fall back on the example I was giving about the cost of a defect that we calculated across the spectrum of all the environments. You’re absolutely spot on: catching a defect in the design phase is the cheapest, the most inexpensive. Moving on to unit testing, when the developers are unit testing the code, the cost definitely goes up a little bit, but not as much as when it moves to the next environment. Testing, yes, that’s primarily where you get the bang for the buck, because those folks are actually trained to catch all these defects. But definitely, there’s no doubt about it, I totally agree with you that catching a defect in the early phases of the product, when the requirement is being created, is the least expensive way for any company to deal with those defects.

Curtis Stuehrenberg (19:27):
There’s not as much impact, it’s almost null, if the testers who caught the bug are just reporting it the way a customer would report it. But if they’re reporting it as someone who has technical acumen or actually understands the product, and can tell you, “Oh, this happened, and here are some, say, Charles logs,” or “Here’s what the service was, here’s the error I got back underneath the service level,” things like that, then you start to see some significant cost savings. I just wanna make sure we say it’s not just catching the bug, it’s how the bug is reported and then handled in the process.

Matthew Heusser (20:02):
I think what I hear us saying is: if we had access to data, we could say things like, “The average time to debug is this long. Averages are difficult because bugs are not created equal, but we tend to find more bugs here, which is late, and then it takes us a long time to fix them.” And we could even say, “Because we find them so late, no one knows it. The person that wrote it has left the company. The analyst that designed it has left the company. We don’t understand this system, it’s really complex, we should just rewrite it. It’s not really that bad. There’s a workaround”… and so the software itself slowly, incrementally becomes worse. And if we could say what the difference in quality would be if we had found and fixed things early, versus how it is now. There are a couple of problems there: getting the data, creating the model, and having a model that stands up to even the lightest amount of scrutiny.

Curtis Stuehrenberg (20:59):
Yeah, and I’ll just say that that was like the last two years of my job at Meta: doing that analysis, tilting at windmills, actually creating data because none existed. I had to go out and actually create the data ourselves to pull that in, and then run the analysis to try to pitch the idea. That was a big part of my job, because, again, they’re generating petabytes of data like every three to four hours.

Michael Larsen (21:24):
As I’m following this and listening, one of the things I love to do on this show is put myself in the position of our average everyday tester who’s listening to this, and we realize that everybody who’s listening could be… you’re gonna have everyone from really senior people to up-and-comers. I want to present this from the perspective of the up-and-comers who want to get into a position to have these conversations. I wanna say: as an up-and-coming tester who wants to be more effective, maybe I don’t have the vocabulary or the experience to do so yet. What’s a concrete way that I, as your garden-variety tester, might be able to start participating and say, “How can I help shape the conversation on return on investment?” Or, how can I help indicate that I am a good return on investment for this?

Curtis Stuehrenberg (22:17):
I’ll just jump in and respond by saying what I wish I had done when I began doing this in 1998. I think the sooner you can start understanding the why of what you’re doing, the better. What’s the intention of the test cases you’re running? Be curious, dig in. What are you actually doing? What’s the risk that you’re trying to uncover? Uncover what’s important about what you’re doing. I think if you do that, it puts you in the mindset to then start talking about: okay, great, I now know why what I’m doing is important. How do I put a value on that? Because if you don’t understand what value you’re contributing to the people who control the purse strings, you’re gonna have a hard time having that conversation.

Nabhanshu Bambi (23:02):
That’s awesome, Curtis. What I would say to the folks who are just starting in testing right now: if they are doing a certain job in a certain span of time, whether that’s writing a test case, running a test case, or maybe automating it, they need to ask, after a week or a month or a sprint or a cycle, is it the same amount of time they dedicated in the previous cycle? Has it gotten smaller? Have they been able to bring some sort of efficiency to the system? And I’m not talking about cost efficiencies and all that. Doing the same task they did last time, have they been able to do it faster and easier? Why? Because if I’m able to do something in 30 minutes, or an hour, or two hours, and I’m able to shave off about 10% of the time on the first go, that basically gives me some time to think.

(24:06):
If I’m thinking, my mind is relaxed and I’m actually starting to plan for the next stage. In the next cycle I’ll probably be able to shave off another 10 or 20%, and that keeps adding up. That 10 or 20%, compounding every time, also amounts to increased efficiency for that specific tester or individual: they can look at other tasks, put their mind at rest, and actually think about other things, maybe trying to find other ways to bring in efficiency. That also reduces your time to market and, of course, cuts down the cost of creating that particular product. If 10 or 20% can be shaved off, that is the mindset I would advise the new people who are going to join the workforce, or are at the start of their careers, to be looking toward.
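A quick way to see why those small percentages matter: here is a minimal sketch of the compounding Nabs describes. The starting effort and the 10% per-cycle improvement are made-up illustration values, not figures from the episode.

```python
# A small sketch of the compounding-efficiency idea: shaving roughly 10%
# off a repeated task every cycle. All inputs are hypothetical.

def effort_per_cycle(initial_hours: float, improvement: float, cycles: int) -> list[float]:
    """Hours the task takes in each successive cycle if a fixed fraction is shaved off."""
    return [initial_hours * (1 - improvement) ** n for n in range(cycles)]


if __name__ == "__main__":
    for n, hours in enumerate(effort_per_cycle(2.0, 0.10, 6), start=1):
        print(f"cycle {n}: {hours:.2f} hours")
    # By cycle 6 the task takes about 1.18 hours; the roughly 0.8 hours freed
    # up each cycle is the thinking time Nabs is talking about.
```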

Michael Larsen (25:05):
So one of the things we’ve already talked about, and we’ve hinted at this, is that anybody in the C-suite cares about metrics. We’ve already talked about this from the perspective of: okay, it’s important to be aware, and I absolutely agree with Curtis and Nabs both, if you understand the why, that is critical. From my own experience with accessibility, by being able to communicate the why and take an advocacy bent, I can actually be effective. But that also comes down to understanding the what, and where we’re actually losing opportunities. Accessibility is a little bit of a cheat, because I can always point to the simple answer: “Why is this really important? Oh, it’s not, unless you would like to avoid getting sued.” That’s an easy thing to discuss, because at that point it’s like, “Oh, well, then maybe this is something we need to get into.” Some other areas are a little more tricky and squishy and hard to explain. So metrics do matter. The famous quote, of course, is that anything that can be measured is something that can be improved. But we also have to say that anything that can be measured is also something that can be gamed. If we really want to focus on metrics, what metrics matter in this regard?

Curtis Stuehrenberg (26:22):
Oh geez. Um,

(26:26):
For me there isn’t a one-size-fits-all answer. It all depends upon your theory, or what you’re concerned about. To give context with what you brought up about accessibility: accessibility can mean a couple of different things when we talk about it. The one you were mentioning is accessibility from a compliance standpoint. The typical quality assurance person outside of the software community would qualify that as a legal risk. So what risk are we exposed to legally if we do not comply with the Americans with Disabilities Act, if you’re in the United States? I think the European Union has one that’s similar to that; I just don’t know the name of it off the top of my head. So if we are not compliant with the ADA guidelines, and the Section 508 requirements in anything if you’re working with government agencies, what is the risk of that? And is that a risk? And if it is a risk that’s important to people, then you can build out: okay, let’s measure how exposed we are to that risk, and then come up with a mitigation plan to understand how we can make that risk a lot more tolerable.

(27:25):
I think you can do the same thing with, say, customer experience or satisfaction or usage. A lot of marketing teams these days in the digital space put a lot of effort into understanding how customers use a product and where they’re getting value out of it. Say you’re a retail agency; they’ll ask, how long does it take to go from getting onto the site to spending money? What actions are our customers taking that are generating more revenue for us, either through ads or other things like that? If you look at that, you can say, “Great. Now how can I measure risk to that? How can I measure that we’re not impeding customers from getting to that point?” I think in that case you would want to work with the marketing team or the research team to understand how they’re doing it and then basically steal from them.

(28:10):
The way musicians will steal notes or guitar riffs. Use that data, build it out, and say, “We’re validating these things are happening. They’re noticing that customers get stuck on this one page, or they go back and forth between these three pages a lot before they buy something. How can we look to see what’s going on and see if there’s something in the product that’s causing that?” And if you do that, you’re moving into a revenue-generation role. “Why are customers doing X?” Then go experiment with your exploratory testing. “Why are they doing that? Are we covering that? Is this something we knew about or didn’t know about?” Yada yada. To give some ideas around how we can impact revenue going forward as well: one of the biggest areas where teams see costs is through churn and through the “big giant fuzzy gorilla” of legacy code.

(28:58):
The engineers don’t really understand what’s going on in it or how to touch it. So they either refuse to touch it, or they touch it with trepidation and then ship, and then things break. I’ve noticed that automation can act both as documentation and as a blanket that gives them comfort touching those things. A good set of automation, and not just unit test cases and end-to-end test cases, but also integration tests, service-level tests, things like that, can be living documentation, so that the people who need to touch the legacy code can understand what it’s doing at a very fundamental level. They can understand, “Okay, this is what we need to do and how to touch it,” so they’re more comfortable and confident building products on top of it, and the time it takes to get somebody onboarded shrinks significantly. I’ve noticed a trend that when a team has good automation, they can usually have somebody who’s new to the team up and running in about 30 days, a month to a month and a half.

(29:49):
It can take anywhere up to three to six months to get somebody up to speed, depending upon the amount of legacy code, if they don’t have test cases or if their documentation’s not up to date. Since developers don’t like documenting, have them write test cases; you now suddenly have documentation that is up to date because it passes. That’s an example of where testing can actually improve revenue and profitability. The biggest message I want to get out of this entire podcast is that I really want testers and QA people to break out of this idea that we’re there to be a prophylactic, that we’re there to catch bugs and have bugs fixed so that customers have a better experience. We can have so much impact all around the company simply by changing our own expectations for ourselves and not doing anything different, just applying it in new ways around the organization. So anyway…

Matthew Heusser (30:40):
We like to end on a final thought, and I think I would call that your final thought. But would you agree: if we were retail and we had a path to purchase

Curtis Stuehrenberg (30:50):
mm-hmm.

Matthew Heusser (30:50):
and we knew that when people fill up the shopping cart, there’s some amount of abandonment where they just don’t click the final button. We could look at the incidents we’ve had over the past year, the outages they caused, and the just-not-really-wonderful customer experiences that we defer as “fix later” or “fix never” or “works as designed”. And we say, if we really invested in testing such that, and I know it’s proving a negative, I know it’s not possible to make these kinds of promises, but if we

Curtis Stuehrenberg (31:23):
mm-hmm

Matthew Heusser (31:24):
drove a significant reduction in the number of outages, the things that cause people to not click through. When we take that number and divide what’s possible by half, if we think testing could get us there, we could say testing would cost this much and would increase revenue by that much. Is that a realistic, responsible argument to make?

Curtis Stuehrenberg (31:44):
Yes, in a way. I think, again, you’re focusing on bugs and outages. I think where exploratory testing and having experienced testers can really shine is this: if you talk to the people who are doing this analysis, they don’t understand why the customers are not following through and actually purchasing what’s in their cart, because all they have are logs that say, “People do this, they do that, they do this, and then suddenly they just leave and they don’t actually buy anything.” I think exploratory testing and QA can help answer those questions by being the explorers who go in, look at the process, and say, “Okay, we know how they got here and we know what they were doing. Let’s explore why they might not be converting, because that is lost revenue, and that is really important to all of the people sitting in that big giant board of directors’ room.”
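As a companion to that exchange, here is a minimal sketch of the arithmetic Matt proposes. Every input, the cart count, the order value, the share of abandonment attributable to defects, and the cost of the testing effort, is a hypothetical placeholder; only the “divide what’s possible by half” assumption comes from the conversation above.

```python
# A sketch of the ROI argument Matt outlines: if testing removed some share
# of the incidents that cause shoppers to abandon their carts, compare the
# recovered revenue against the cost of the testing investment. Every number
# below is a hypothetical placeholder, not data from the episode.

def testing_roi(abandoned_carts: int,
                avg_order_value: float,
                share_attributable_to_defects: float,
                share_testing_could_prevent: float,
                testing_cost: float) -> float:
    """Return (recovered revenue - testing cost) / testing cost."""
    lost_revenue = abandoned_carts * avg_order_value * share_attributable_to_defects
    recovered = lost_revenue * share_testing_could_prevent
    return (recovered - testing_cost) / testing_cost


if __name__ == "__main__":
    roi = testing_roi(
        abandoned_carts=100_000,             # per year, hypothetical
        avg_order_value=60.0,
        share_attributable_to_defects=0.15,  # portion tied to bugs and outages
        share_testing_could_prevent=0.5,     # Matt's "divide by half"
        testing_cost=250_000.0,
    )
    print(f"Estimated ROI on the testing spend: {roi:.0%}")  # 80%
```

The model is only as good as the assumptions behind each share, which is exactly the caveat Matt raises about making a model that stands up to scrutiny.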

Matthew Heusser (32:34):
It’s kind of a more expansive view of quality.

Curtis Stuehrenberg (32:37):
Yeah.

Matthew Heusser (32:37):
Over the years, with Qualitest and this show, we’ve swung the pendulum back and forth: we were really focused on testing. Now, again, we have a more expansive view of quality and the customer experience. And that seems like a good time for Nabs to give his final thought.

Nabhanshu Bambi (32:52):
I’ve been listening to Curtis and yourself, and I totally agree. There needs to be exploratory testing where it’s not just about why that particular bug or that particular defect came in, but also about what would happen if there were a change of behavior. What could influence that particular customer to not abandon that cart and actually go forward with that particular sale? From an ROI perspective, which is probably the crux of what we are talking about here, testing or quality assurance brings a lot of value to the table: from a brand recognition perspective, from a brand reliability perspective, it decreases cost, it improves time to market. From my perspective, the biggest ROI that any company stands to generate from a testing organization or a quality assurance organization is the customer being able to bank on the product and say, “This is a quality product and I would definitely like to buy it.” That’s the biggest value that quality assurance can actually build into a product in any organization.

Matthew Heusser (34:02):
Okay. Well, I think we should let our hosts… I think we should let our guests have the last word. Michael and I say a lot, a lot of the time, and I think that was a great way to end it. So thank you all for coming. It sounds like we’ve got a lot more to talk about. Thank you.

Curtis Stuehrenberg (34:19):
Thank you very much. I really love talking in this sort of format; great minds always inspire me to do things after we leave, and I got that from everybody here, so thank you.

Nabhanshu Bambi (34:28):
Thank you. It was definitely a great conversation.

Michael Larsen (34:31):
Thanks for joining us, everybody. Much appreciated.

Michael Larsen (OUTRO):
That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts. Those ratings and reviews help raise the visibility of the show and let more people find us. Also, we want to invite you to come join us on The Testing Show Slack channel as a way to communicate about the show: talk to us about what you like, what you’d like to hear, and help us shape future shows. Please email us at [email protected] and we will send you an invite to join the group. The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen. Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at [email protected].
