Thoughts on Software Quality Assurance 💭
- QA exists to protect the business and advocate for the end user. It's about spotting risks like UX friction, gaps in requirements, security holes, and compliance blind spots before they become bigger problems.
- If it all "looks good," that's your cue to look deeper.
- QA isn't just about clicking through features and reporting obvious bugs. It's about digging deeper even when things seem fine, asking whether things actually make sense, and not assuming what's given to you is correct. See why "It works" isn't enough in QA.
- I see QA as an observer and communicator, not as a gatekeeper or the 'quality police.' My job is to provide insights so the team can make informed decisions, not create roadblocks.
- Quality kicks in way before code shows up. Upstream affects downstream. Bugs don't only originate in the code; they also come from rushed choices, fuzzy requirements, and blind spots. Testing can catch some of it later, but it's a lot harder to fix the foundation after the house is already built.
- If QA collaborates early, they can help the team spot where cracks might form before the concrete is even placed.
- QA Bottleneck? Here's How to Fix It.
- Just because your tests pass doesn't mean your product does. It just means the code did what the tests expected. If the tests are weak, then those green check marks are worthless. It's like a car passing an inspection that never checked the brakes.
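To make that concrete, here's a minimal sketch in plain JavaScript (the `cartTotal` function and its discount bug are invented for illustration): a weak assertion stays green even though the feature is broken.

```javascript
// Hypothetical checkout total with a real bug: the discount is never applied.
function cartTotal(items, discount = 0) {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  return subtotal; // bug: `discount` is silently ignored
}

// A weak test: it only checks that *something* numeric comes back,
// so the suite shows a green check mark despite the bug.
const total = cartTotal([{ price: 40 }, { price: 60 }], 10);
console.assert(typeof total === "number", "weak check: returns a number");

// A meaningful test would have "checked the brakes" and failed:
// console.assert(cartTotal([{ price: 100 }], 10) === 90); // fails: got 100
```

The green check mark here says nothing about the discount; only an assertion on the actual business rule would catch the bug.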
- You're shaping the product with a focus on user experience and business impact. Say you're testing a new checkout flow. The goal is more sales. If the process feels cumbersome, you flag it, even if there are no functional bugs, because it might cause people to abandon their cart. You're not just testing the features; you're testing the experience.
- When you're checking your own work, you're too close to it. Everyone has blind spots.
- Not all bugs are worth fixing. In the real world, teams don't have unlimited time or dev resources to fix every bug and make everything perfect. You prioritize. Some bugs just don't matter enough to fix, so you mark them as "won't do" and move on.
- Not all parts of an application are equally important or carry the same level of risk.
- Look at the pull request and diff, even if you can't read the code. How big of a change is it (one file or ten?), and in which areas? Check the PR convo too. If it says, "Wait, are we sure this is the right behavior?" or "Quick patch before launch," then you should probably test more thoroughly. I'm not reviewing code; I'm looking at PRs to understand scope, risk, and the story behind the change.
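For the size-and-areas question, two git commands cover most of the triage even if you never open the files. The snippet below builds a throwaway demo repo so it runs anywhere; the file names and commit messages are made up, and in real life you'd run only the last two commands against the actual PR branch.

```shell
set -eu
# Build a tiny throwaway repo so the demo is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "qa@example.com"
git config user.name "QA"
echo "base" > app.js
git add . && git commit -qm "base"
base=$(git rev-parse HEAD)

# Simulate the PR: one file touched, one file added.
echo "fix" >> app.js
echo "new" > checkout.js
git add . && git commit -qm "quick patch before launch"

# How big is the change, and in which areas? (one file or ten?)
git diff --stat "$base"
# The story behind the change: commit messages hint at rush and risk.
git log --oneline "$base"..HEAD
```

`--stat` gives you files touched and churn per file at a glance; a commit message like "quick patch before launch" is exactly the kind of signal worth testing around.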
- Offshoring QA with a big time zone difference often leaves people just following instructions, because they have limited real-time collaboration and little integration into the team.
- Goodhart's Law says: "When a measure becomes a target, it ceases to be a good measure." Look at the VW emissions scandal. If metrics become the goal, teams end up protecting numbers instead of actually improving software.
- Tight deadlines usually come from poor planning or pressure to ship fast before teams are ready. Constant pressure leads to rushing, cutting corners, and quick fixes. That's when quality drops. Focus moves from doing it right to just getting it done.
- If a team pushes fast, takes risks, and things go sideways, QA shouldn't be the only one held responsible.
------------------------------------------------------------------------
Thoughts on Manual Testing 💭
- I'm an exploratory tester who first started in UX research, and I've seen firsthand that users don't follow expected behaviors or assumptions about workflows. You know those preconceived ideas or fixed paths people expect users to take, like assuming everyone uses the app the same way or follows the "happy path" perfectly? Real users often do things differently or unexpectedly. I may jot down some areas or scenarios that I want to explore, but I don't follow a strict script.
- Some people think manual testing doesn't take much skill. Sure, anyone can do it, but anyone can cook a meal, too; it doesn't mean it's any good. Some follow the recipe without knowing why, miss the details, and forget about the user experience. That surface-level testing means real users end up finding the frustrating bugs in production.
- Really good manual testing is definitely not less valuable or easier than automation. It's just a different, creative kind of skill that not everyone has.
- If written test cases are needed, that time's better spent writing out the documentation as code in an automation script.
------------------------------------------------------------------------
Thoughts on Automation Testing 💭
- Automation is a tool; it's not the whole toolkit. It doesn't replace manual testing. Manual testing is needed for exploring new features, identifying edge cases, and understanding the user experience. Automation can't replace human intuition or creativity, and it will miss anything that requires thinking outside the script.
- Automation passed, experience failed. Even if the brakes work, it doesn't mean the car is comfortable, the controls are intuitive, or the dashboard is easy to read. Automation won't uncover any of this. It just checks if the car moves.
- Not every test is a good candidate for automation. Some tests are too complex, too flaky, or depend on things outside your control.
- The automation suite can grow too big and become unmanageable.
- Focus on the critical paths and high-risk areas of the application.
- Don't forget to test your tests. You want to avoid the flaky ones.
- You'll have flaky tests no matter what, like assertions failing due to time zones or environment quirks, so be ready when the devs say, "Hey, your test broke my PR for no reason."
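Here's one way that kind of flake shows up, sketched in plain JavaScript (`localDay` and `utcDay` are invented names): the same timestamp yields a different calendar day depending on the runner's time zone.

```javascript
// This instant is 2024-01-01T00:30 UTC. On a machine west of Greenwich
// (e.g. America/New_York) it is still Dec 31 locally, so any assertion
// built on the local date flips between CI runners and laptops.
const ts = Date.UTC(2024, 0, 1, 0, 30);

function localDay(msSinceEpoch) {
  return new Date(msSinceEpoch).getDate(); // flaky: 1 or 31, depends on TZ
}

function utcDay(msSinceEpoch) {
  return new Date(msSinceEpoch).getUTCDate(); // stable: always 1
}

// An assertion like `localDay(ts) === 1` passes on a UTC CI runner and
// fails on a developer machine -- "your test broke my PR for no reason."
console.log(localDay(ts), utcDay(ts));
```

Pinning the assertion to UTC (or freezing the clock and time zone in the test setup) removes that class of flake.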
- And brittle tests? They break with small changes, like targeting the nth element; then a front-end dev adds one above it, and boom, your test fails. Sometimes you can't avoid those selectors because of how the code's built, and if you can't add a custom HTML attribute like data-testid, you'll just learn to live with it.
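A toy sketch of that brittleness, with plain JS objects standing in for DOM nodes (`findNth` and `findByTestId` are invented helpers, not a real testing library's API):

```javascript
// The page before the change: our target button is the first one.
const before = [
  { tag: "button", testId: "submit-order", text: "Place order" },
];

// A front-end dev adds a promo button above it.
const after = [
  { tag: "button", testId: "promo-banner", text: "Claim coupon" },
  { tag: "button", testId: "submit-order", text: "Place order" },
];

const findNth = (nodes, n) => nodes[n];              // brittle: positional
const findByTestId = (nodes, id) =>
  nodes.find((node) => node.testId === id);          // resilient: by attribute

console.log(findNth(before, 0).text);                  // "Place order"
console.log(findNth(after, 0).text);                   // "Claim coupon" -- boom
console.log(findByTestId(after, "submit-order").text); // "Place order"
```

The positional lookup silently starts clicking the wrong button after a layout change; the test-attribute lookup survives it.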
- Looking ahead is important. If the product roadmap says a part of our site is getting a facelift in the next couple of months, maybe we shouldn't go all-in on automation scripts just yet. The last thing we want is to spend more time maintaining automation scripts than we save by having them.
- Also, we need to keep in mind that automated scripts can be sensitive, behaving differently between environments. What works perfectly in your local environment might throw a fit in the GitHub Actions runner.
------------------------------------------------------------------------
You made it all the way down here, so we'll give you a joke.
Joke API source file: jokeApi.js
------------------------------------------------------------------------