Interview with Ben Fellows: Life Running a Top Quality Assurance Agency

Ben Fellows joined me for a chat about his work with Loop, a QA testing agency. We covered a range of topics including hiring QA talent, testing strategy, Playwright, load testing and the impact of AI on QA.

Read on to check it out.

Joel: Thanks for chatting with us. How about you start by telling us a bit about yourself and your work?

Ben: Thanks for having me, Joel!

I’m the founder and CEO of Loop QA, which I started about five years ago. Or at least we’ve had our five-year anniversary of incorporation; for the first year and a half I didn’t exactly know how it all worked, so it took me a little while to get off the ground.

Loop was inspired by my experience running operations for a startup, where we suffered from death by a thousand cuts. We never really got on top of our QA funnel.

It wasn’t for a lack of trying, but we had plenty of 2am testing sessions and digging through Excel sheets. I think it’s because we were learning to build the plane while flying it.

So I very much started Loop with the theory that, when it comes to QA, critical thinking and analytical ability are the foundation for a world-class understanding of terminology and methodology.

Fast forward five years, and about half of my company focuses on the manual side of the work. We often help with thinking through better acceptance criteria and really trying to break away from the utopian world that product people can live in.

Then the other half is the test automation that’s relevant to you guys, these days mostly with Playwright. I’ve enjoyed working with it, and I know you guys interact with Playwright too.

Joel: I’m curious to hear more about the service. Are you going in and writing these tests for people, or is it more of a consulting relationship where you give them best practices?

Ben: So we tend to present ourselves as an alternative to hiring a full-time QA. We focus mainly on three categories.

One is startups that are growing quickly. There’s lots of great QA talent out there, but to hire really good people you have to go through a proper recruitment process to find them.

The way I describe the right QA candidate comes down to three core things: critical thinking, emotional intelligence and ambition.

I think emotional intelligence is the most undervalued QA skill, because you’re not just a developer writing code all day long. You’re in the middle of an organization, often getting pulled in a lot of directions, and emotional intelligence is what lets you handle that.

What I mean by ambition is that a lot of people have accidentally ended up in QA, and they don’t quite understand where they’re going in the industry. I like to find people and work with them on understanding how being in QA or software testing really benefits your career. If someone can picture where they’re going with this, then there’s just so much to learn and ingest.

That’s all to say that we work with companies that recognize how challenging it is to hire. We’ll basically do their QA for them instead of them having to hire.

Of course, every engagement is slightly different, so there’s always some consultative aspect around strategy and stuff like that, but it’s mostly boots on the ground.

Joel: That’s cool, I can imagine it’s challenging in lots of ways. Is it typically greenfield development where there’s nothing already there, or are you trying to take over or migrate existing test suites?

Ben: It really varies; we have three personas.

First are startups that are scaling quickly. We’re often helping them to build QA from the ground up.

You then have legacy companies that have been around for a while, but are still doing things like waterfall at the top level and need QA.

Then the third persona is large, successful organizations that traditionally aren’t software companies, but have now decided to build software. In those companies you often have project managers or other roles, but they’re not super familiar with development practices.

We don’t often plug into an existing team. When we do, we tend to be opinionated, so it works well if that team sees us as an asset; sometimes they feel threatened instead, which is totally fair.

One of the things I’m actively doing as a CEO is thinking about whether we should standardize more, maybe create more of a box. We focus on being bespoke, which is both a good thing and a challenging thing.

Joel: I’ve been in companies where one QA team wrote an entire library that sat on top of another library, intending the whole company to use it. But then another engineering team said no as it wouldn’t work for their use case, so they used something else. It was a mess.

What types of tests are you guys mostly doing and familiar with? Is it mostly UI synthetic tests or do you delve deep into the land of things like API server tests as well?

Ben: Yeah, I’d say mostly API and above. We’re not necessarily doing integration tests, because we often treat API tests almost as black-box tests.

We tend to write end-to-end tests, even at the API level.
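
To make that concrete, here’s a minimal sketch of a black-box API test written with Playwright’s built-in request fixture. The /api/orders endpoint and payload are hypothetical, and a baseURL is assumed in playwright.config.ts:

```typescript
import { test, expect } from '@playwright/test';

// Black-box API test: we only exercise the public endpoint,
// with no knowledge of the service’s internals.
// Assumes baseURL is configured in playwright.config.ts.
test('creating an order makes it appear in the order list', async ({ request }) => {
  // Hypothetical endpoint and payload, for illustration only.
  const createRes = await request.post('/api/orders', {
    data: { sku: 'WIDGET-1', quantity: 2 },
  });
  expect(createRes.ok()).toBeTruthy();
  const { id } = await createRes.json();

  // Verify end to end that the new order shows up in the list.
  const listRes = await request.get('/api/orders');
  expect(listRes.ok()).toBeTruthy();
  const orders = await listRes.json();
  expect(orders.some((order: { id: string }) => order.id === id)).toBeTruthy();
});
```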

Ultimately, I think the gold standard in the industry is continuous delivery, and the idea that people like Facebook can release 1,000 times a day, right?

There’s this hype around speed, which is great. But unless you started on that development style, it’s difficult to retroactively shift into it.

What we try to do is create the maximum efficiency with the least amount of organizational friction.

To get true continuous delivery, you have to fully adopt something like TDD or BDD. That means role changes, process changes, architecture changes, all these different things. It takes a couple of years of work to make that shift.

We fit in with those organizations that aren’t interested in making that change. They have other priorities, which is totally fair.

Our proposition is that we can get rid of 80% to 90% of the friction with just end-to-end tests and API tests, with a QA team running them.

You’re not going to get 100% continuous delivery, but you can go from a 30-hour manual regression down to a 20-minute automation suite.

Does that all make sense?

Joel: Oh yeah, definitely. At Browserless, we have revenue targets and it’s really hard to put a price on good tests and delivery competence.

And if you have a small team and want to write automated tests, now you’ve got to figure out how to run them, what libraries to use, metrics you care about…lots of decisions that take a long time. Having a consultancy come in with those answers sounds really helpful.

Ben: Exactly, and then there’s the risk of hiring a one-off person, right? If they leave after six months, you’re left with an abandoned test project.

Almost every company I talk to about test automation says, “oh yeah, we tried it for six months, but it never provided us value”. And that’s because it wasn’t really maintained, among a bunch of other reasons.

Joel: I find that testing is like a seat belt. It’s just a minor nuisance until you need it, and then it’s priceless. But even more so, as you won’t even know what catastrophes your test suite prevented.

Ben: Definitely. In lots of situations it is faster to write 500 end-to-end tests. But lots of little hurdles stack up until people wonder why you’re not just doing continuous delivery.

Joel: I’m sure we could talk about testing philosophies for ages, but let's move on. Could you tell me more about what drew you to Playwright?

Ben: I actually found it through Robot Framework, using a library that was extending Playwright. 

So I figured I’d try Playwright out. The thing that struck me was the ability to use a locator for pretty much anything in the DOM.

We work with so many companies that don’t have patterns they necessarily follow, so we needed the flexibility to interact with different classes, CSS attributes or levels.

Playwright was one of the first tools that, other than some closed shadow DOM stuff, could interact with anything in the DOM.

That one piece alone is so core to test automation, especially for us, where we have very little control over the front end, so we need a tool that works in almost every situation.
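
To give a flavor of that flexibility, here’s a minimal sketch; the page, selectors and text are hypothetical, but every locator style shown is part of Playwright’s standard API:

```typescript
import { test, expect } from '@playwright/test';

test('locators can target almost anything in the DOM', async ({ page }) => {
  await page.goto('https://example.com/app'); // hypothetical app

  // By ARIA role and accessible name
  await page.getByRole('button', { name: 'Save' }).click();

  // By visible text
  await expect(page.getByText('Saved successfully')).toBeVisible();

  // By CSS class plus an arbitrary attribute
  await page.locator('.card[data-priority="high"]').hover();

  // Playwright’s selectors pierce open shadow DOM by default, so this
  // works even if #status lives inside a web component.
  await expect(page.locator('#status')).toHaveText('Done');
});
```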

Then there’s other great details, like the ability to have two users in different browsers logged in at the same time.

I remember writing a Cypress test where you’d log in as a student, then you’d have to log out, then log in as a parent, then log out and log back in as a student again. With Playwright, you just spin up two different browser contexts that don’t know each other exist, and they can play with each other.
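
Here’s a minimal sketch of that two-context pattern, with a hypothetical login flow and selectors:

```typescript
import { test, expect } from '@playwright/test';

test('student and parent can be logged in at the same time', async ({ browser }) => {
  // Each context is an isolated session with its own cookies and storage.
  const studentContext = await browser.newContext();
  const parentContext = await browser.newContext();
  const studentPage = await studentContext.newPage();
  const parentPage = await parentContext.newPage();

  // Hypothetical login flow, repeated per role.
  for (const [page, email] of [
    [studentPage, 'student@example.com'],
    [parentPage, 'parent@example.com'],
  ] as const) {
    await page.goto('https://example.com/login');
    await page.getByLabel('Email').fill(email);
    await page.getByLabel('Password').fill('secret');
    await page.getByRole('button', { name: 'Log in' }).click();
  }

  // Both sessions are live at once: no logging out and back in.
  await expect(parentPage.getByText('Student activity')).toBeVisible();

  await studentContext.close();
  await parentContext.close();
});
```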

Really it’s just a super usable framework.

I host a happy hour on the Playwright Discord, so I’ve enjoyed getting to know the development team and the community.

I’ll admit I’m not that familiar with Browserless, could you tell me a bit about it?

Joel: Yeah, of course. Put simply, it’s a management layer for headless Chrome. It’s got all the things you need, like token authorization, IP firewalling, dealing with memory leaks and file size limits.

It’s designed to abstract headless Chrome for use with Playwright or Puppeteer when performing tasks like E2E testing.
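
In practice, that means pointing Playwright at a remote WebSocket endpoint instead of launching a browser locally. Here’s a minimal sketch; the exact endpoint URL and token parameter are illustrative, so check the current Browserless docs:

```typescript
import { chromium } from 'playwright';

async function main() {
  // Connect to a remote, managed Chrome over CDP rather than
  // launching one locally. The endpoint shape is illustrative.
  const browser = await chromium.connectOverCDP(
    'wss://chrome.browserless.io?token=YOUR_API_TOKEN'
  );

  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());

  await browser.close();
}

main();
```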

Ben: And have you played with load testing as a use case?

Joel: It’s definitely a great topic, as almost all the load testing tools I’ve seen don’t run JavaScript. They can simulate lots of HTTP calls, but they can’t parse the responses and run everything else a real page needs. We have a nice case study about using Browserless for stress testing.

Ben: The closest we’ve done is using Playwright with Artillery. It runs on Lambdas, which worked well enough. We did around 50,000 users over the course of a day.

I was pretty stunned that while the infrastructure is still challenging, the tooling between Playwright and Artillery was really straightforward.
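
For a rough idea of how that pairing fits together, here’s a sketch of a scenario function for Artillery’s Playwright engine; the site and flow are hypothetical, and the matching artillery.yml wiring is summarized in the comments:

```typescript
// load-flows.ts — scenario functions for Artillery’s Playwright engine.
// A matching artillery.yml would enable the engine
// (engines: { playwright: {} }) and point a scenario’s
// testFunction at "checkoutFlow"; see Artillery’s docs for details.
import type { Page } from 'playwright';

// Artillery calls this once per virtual user with a live page.
export async function checkoutFlow(page: Page): Promise<void> {
  // Hypothetical storefront journey.
  await page.goto('https://staging.example.com');
  await page.getByRole('link', { name: 'Shop' }).click();
  await page.getByRole('button', { name: 'Add to cart' }).first().click();
  await page.getByRole('link', { name: 'Checkout' }).click();
}
```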

Back 10 years ago, I was at a ticketing agency. Our engineer didn’t know what load we could handle, and a load test from an agency was quoted at $10k.

We couldn’t afford that, so we skipped the test. Sure enough, the servers failed.

Infrastructure is something I want to get more into because so many test automation limitations are infrastructure based.

A top reason why companies can’t run tests in parallel is that their staging environment isn’t taken care of. I’ve accidentally taken down staging environments by running just ten concurrent tests, because staging often doesn’t have any kind of auto-scaling.
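
A simple mitigation, sketched here with hypothetical values, is to cap parallelism in playwright.config.ts so a fragile staging environment isn’t overwhelmed:

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Cap concurrent workers so staging isn’t flooded;
  // raise this once the environment can absorb more load.
  workers: process.env.CI ? 4 : 2,
  use: {
    baseURL: 'https://staging.example.com', // hypothetical
  },
});
```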

Joel: Load balancers are a fun one; we had to write an edge load balancer pretty much from the ground up. We have tests for it, but it’s the hardest thing to test.

Security is also a fun one. I almost think security will get to a place where you’ll have security regression suites. 

Ben: I think security is a fascinating subject as the industry evolves. I was thinking about how phishing is so slick these days. I really want to retrain my company a little bit more.

I was listening to a podcast where they pointed out how the easiest way to get into a company’s infrastructure is to drop a load of USBs in front of their office and wait for a curious person to plug one in.

Joel: Last question then. It’s January, so is there anything you’re looking forward to this year that you think is going to make a big difference?

Ben: The biggest thing is using ChatGPT’s image recognition.

I’ve had a play with feeding it two images from an app, an expected state and an observed state. From there, it can call out the differences pretty easily, along with writing up bug tickets.

I tested it on an app where I changed one of the tags on a card from high priority to low. Within seconds, it was saying that the card in the observed state has the wrong tag.

Image recognition isn’t part of their API yet, but I’m sure someone will build test automations around that.

And if a test fails earlier and never even reaches that state, ChatGPT might take a guess at why it failed.

I also think more and more unit tests will be written by Copilot. I’m guessing it’s using vector search, because it can now look through a whole codebase.

There’s a massive existential fight in QA right now, with a fear that AI will take people’s jobs.

I think we’re a long way from that. What it will do for now is empower people to be smarter and faster. Image recognition is fascinating because that’s when you can avoid the DOM and just interpret the image instead.

Using the mouse and images means you don’t care whether something is in an iframe or a shadow DOM. It’ll make testing a lot less technically challenging. I’m sure it’ll have downsides, but it’s fascinating.

Joel: Great answer, those all sound like really interesting developments. But that’s all we’ve got time for, so thank you so much for chatting with me!

Want to find out more?

You can find Ben on LinkedIn or at workwithloop.com.

And of course, if you need any hosted browsers for your testing, grab a Browserless trial or check out our guide on Using Browserless for Test Automation.
