Software Testing is one of those things we usually learn in detail during our undergraduate years and then throw most of it out of the window once we start working in a fast-paced environment. Unless, of course, the work demands that you do it. Sometimes the reason is obvious - scalability & reliability - and other times it's a senior who wants to see you write those test cases. It's not that the code is not tested; there's a QA, a PM (and sometimes internal users) to do all kinds of tests and tell you what the bugs are, and then you fix them.
One of the long-running jokes about writing test cases is that if you can't get the tests to pass, you delete the tests. Sound point.
Just like documentation at times, writing test cases feels like a chore. There's always more work: you write the code for a feature, the feature gets tested by someone else, you fix the bugs that someone else calls out, and you are pulled in to build another feature. Repeat that a few times and you forget there's even such a thing as writing test cases.
At times there's a tiny gap between the requirements a product owner shares and the code the software engineers write, and that gap leads to what we all call edge cases or misses. If the product owner has fully thought through the user journeys, then those edge cases are just bug fixes waiting to be done. Nothing major, business as usual.
Bugs, edge cases, random misses and miscommunication about what's to be done are common when the expectation is to move fast. And they are not that bad. It's just that either our short-sighted nature keeps us from doing the due diligence of testing, or our offloading nature makes us feel it's someone else's job to tell us what we coded wrong. With basic checks, a few new habits & thinking user-out, you - the software engineer - can make the first release (even on staging) a lot more robust.
If this resonates with you, here are a few things to bring in a change.
fwiw, breaking testing into three parts:
basic / fundamental - does the code do what it's intended to do, does it work well with the rest of the codebase
scalability & reliability - does it scale, is it reliable at scale
UAT - user acceptance testing, testing for the end user
Basic Testing
Basic here includes writing unit tests (tests for individual modules of your code, in isolation) & integration tests (tests to see if the pieces of your code interact well with each other).
Python has `pytest`, Java has `JUnit`.
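Here's a minimal `pytest` sketch of both flavours; the `apply_discount` function and the cart logic are hypothetical stand-ins for whatever module you actually ship.

```python
# test_pricing.py - a minimal pytest sketch; the function and the cart
# logic below are hypothetical, purely for illustration.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Toy function standing in for your real module."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# unit tests: one module, tested in isolation
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 10) == 90.0


def test_apply_discount_rejects_bad_input():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)


# integration-style test: two pieces of your code working together
def test_cart_total_uses_discount():
    cart = [("book", 20.0), ("pen", 5.0)]
    total = sum(apply_discount(price, 10) for _, price in cart)
    assert total == 22.5
```

Run it with `pytest -q`; if this runs in CI on every push, the "basic" part is largely taken care of.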
Scalability & Reliability Testing
This round of testing tells you what the performance of your code is: how well it can handle scale, what its QPM (queries per minute), P99 latency and similar metrics look like, and whether you are using resources efficiently.
You can use Locust (Python) or JMeter for such tests. JMeter is my favorite; it has a slick, old-school UI that helps you generate all kinds of tests and loads, and you can save the tests as a `.jmx` plan and then run that on a big VM with enough cores to bombard your service till you are satisfied.
The "loads" you bombard your service with - that is, the inputs to your code / service - deserve some hard thinking; add all kinds of inputs so the test covers all the cases well.
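For the Locust route, a minimal sketch looks like the one below; the endpoints, payloads and weights are hypothetical, swap in your own service's routes and inputs.

```python
# locustfile.py - a minimal Locust sketch; endpoints and payloads are
# hypothetical placeholders for your own service.
from locust import HttpUser, task, between


class ApiUser(HttpUser):
    # simulate users pausing 1-3 seconds between requests
    wait_time = between(1, 3)

    @task(3)  # weight: runs ~3x more often than the task below
    def read_item(self):
        self.client.get("/items/42")

    @task(1)
    def create_item(self):
        # vary the inputs so the test covers more than the happy path
        self.client.post("/items", json={"name": "widget", "qty": 1})
```

Run it with something like `locust -f locustfile.py --host http://localhost:8000`, ramp up the number of users, and watch the requests-per-second and latency percentiles in the web UI.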
User Acceptance Testing
Typically done by QA or a Product Manager, it's aimed at testing your code the way actual users would use it. It validates how well your code meets expectations in real-world scenarios, and checks that the code behaves in the exact manner that the people who set out the expectations and requirements expect it to.
IMO great engineers think user-out. They simulate how the user of their tech is going to behave and test the code for those flows. If the requirements are not clear enough to cover all the situations possible in the real world, they push for clarity. A first cut that is tested (and iterated on) by them before it reaches QA / PM surely results in fewer cycles of feedback and fewer bugs being raised.
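One way to make that habit stick is to codify a user journey as a test before handing the build over. A hedged sketch is below; every URL, endpoint and field is hypothetical, the point is simply to walk through the feature the way a real user would (sign up, add to cart, check out).

```python
# test_user_journey.py - a sketch of one end-to-end user journey run
# against a staging build; all endpoints and payloads are hypothetical.
import requests

BASE_URL = "http://localhost:8000"  # assumed staging URL


def test_signup_to_checkout_journey():
    session = requests.Session()

    # step 1: the user signs up
    r = session.post(f"{BASE_URL}/signup",
                     json={"email": "a@b.com", "password": "hunter2"})
    assert r.status_code == 201

    # step 2: the user adds an item to their cart
    r = session.post(f"{BASE_URL}/cart", json={"item_id": 42, "qty": 1})
    assert r.status_code == 200

    # step 3: the user checks out and expects an order id back
    r = session.post(f"{BASE_URL}/checkout")
    assert r.status_code == 200
    assert "order_id" in r.json()
```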
Like the title of this piece reads - `common sense testing`