Based on job postings and articles, there seems to be an unstoppable trend toward 100% automated testing in software development. This comes, seemingly, at the expense of other methods like black-box testing or, the heresy, manual testing.
This is surprising, because there are many issues with fully automated testing.
The rise of TDD and BDD
But really, the focus of most teams and organizations is on TDD.
I can see the allure: "Let developers test their own code automatically. Have machines confirm everything is fine, with continuous integration multiple times a day... no more bugs... ever..."
The intent seems so logical that it is hard to argue with. And many drank the Kool-Aid.
This article is not about the flame wars between TDD proponents and others. Some argue it is backwards to write a failing test first, and that it is inefficient to work so hard to keep tests in sync while the code evolves over time.
Others, of course, claim this is the only way to ensure coverage and automation.
But whether TDD, BDD, or even ATDD, the idea is still fundamentally about 100% test automation, set up and wired by developers and run by machines, automatically.
Now, if you do not practice TDD, or at least pretend to, it is going to be very difficult to get a job.
Testing the wrong things at the wrong time
Too much time is spent writing automated tests for everything.
Every team is different, and some will insist that every function in every module has a test. Why wouldn't you want to target 100% test coverage?
Because it is a massive waste of energy.
You should write tests you intend to run many times and keep forever, at a level where you want to avoid any break in functionality. Selecting the right level is the difficult decision.
The trend toward decoupling modules and services should help, because you can write automated tests at the API level. You can then even treat these tests as black-box tests, which is even better.
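Testing at the API level can be sketched like this, a minimal and hypothetical example: the Cart class and its methods are invented here for illustration. The test exercises only the public API, so the internal data structure can be refactored freely without breaking the test.

```python
# A hypothetical module with a small public API (sketch, not a real library).

class Cart:
    """Public API of an imaginary shopping-cart service."""

    def __init__(self):
        self._items = {}  # internal detail: never touched by the test

    def add(self, sku, qty=1):
        self._items[sku] = self._items.get(sku, 0) + qty

    def remove(self, sku):
        self._items.pop(sku, None)

    def total_items(self):
        return sum(self._items.values())


# Black-box style test: only public methods are called, so this test
# survives any internal refactoring that preserves the API's behavior.
def test_cart_api():
    cart = Cart()
    cart.add("book", 2)
    cart.add("pen")
    cart.remove("pen")
    assert cart.total_items() == 2


test_cart_api()
```

The point of the sketch is the boundary, not the class: one test at the API level replaces several per-function tests that would all break on the next refactoring.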
Not enough integration and system testing
If your software is very simple, unit testing could be enough. You could automate UI testing with a good tool and increase your confidence.
But when your software is more complex, for example when it relies on a full database with many relations (SQL or NoSQL), real testing becomes very difficult.
You can mock database connections, and you can try to use other abstractions, but you are no longer actually testing the software. You are testing an idealized version of it. That is another very common problem.
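Here is a minimal sketch of that problem using Python's `unittest.mock`; the `UserRepo` class and its query are hypothetical. The mock happily returns whatever we tell it to, so the test passes without the SQL ever being parsed or any real constraint being checked.

```python
from unittest import mock


class UserRepo:
    """Hypothetical repository wrapping a DB-API style connection."""

    def __init__(self, conn):
        self.conn = conn

    def count_active(self):
        cur = self.conn.cursor()
        cur.execute("SELECT COUNT(*) FROM users WHERE active = 1")
        return cur.fetchone()[0]


# The mocked connection short-circuits the database entirely:
# the query string is never validated, relations never exercised.
fake_conn = mock.Mock()
fake_conn.cursor.return_value.fetchone.return_value = (42,)

repo = UserRepo(fake_conn)
assert repo.count_active() == 42  # passes even if the SQL were nonsense
```

Nothing here is wrong as a unit test; the trap is mistaking it for evidence that the software works against a real database.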
If your application requires creating and manipulating a lot of data, faking it, automatically, is a huge challenge.
I would rather have experienced "manual testers" vetting all new functionality.
In QA, the Q is for Quality
The focus should be on new features. This is where most bugs will come in.
Sure, if architectural changes were needed, the new feature may easily break other functionality and code. This is why regression testing is still important.
Still, focusing on the new feature and working hard to achieve a very high level of quality will help the most.
First, because you will find more bugs; second, because this higher quality will be more resilient to breakage in the future, when new features come in.
When to use Selenium?
Also, when test automation is badly done, it has a huge impact on efficiency and maintenance costs.
It is easy to lose sight of the big picture. If you use Selenium or a similar tool, do not use it to test everything. Instead, focus on important scenarios: those you want to make sure still work before a new release.
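A focused Selenium suite might look like the sketch below. The URLs, scenario names, and the "no error in the page title" check are all hypothetical; the point is that the list of automated scenarios stays short and release-critical, with everything else left to exploratory or manual testing.

```python
# A short, deliberate list of scenarios worth automating before a release.
# Paths and names are made up for illustration.
CRITICAL_SCENARIOS = [
    ("login",    "/login"),
    ("checkout", "/checkout"),
    ("search",   "/search?q=book"),
]


def run_smoke_suite(base_url):
    """Drive each critical scenario with Selenium.

    Imported lazily so this sketch can be read/loaded without a
    browser driver installed.
    """
    from selenium import webdriver

    driver = webdriver.Chrome()
    try:
        for name, path in CRITICAL_SCENARIOS:
            driver.get(base_url + path)
            # Crude but fast sanity check; real suites would assert
            # on specific page elements.
            assert "error" not in driver.title.lower(), f"{name} scenario failed"
    finally:
        driver.quit()
```

A suite like this runs in a minute or two, which keeps it out of the way of commits while still guarding the flows that matter most.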
There is an important place for automation testing. But not when any change requires 30 minutes of automated tests before a commit.
And this should never come at the expense of black-box testing and manual testing, including system testing and acceptance testing at the highest level (the UI, with realistic and important scenarios). You should always try to avoid these mistakes.