I agree, debugging is a very important skill, but unfortunately it is expensive. That is why I suggest avoiding debugging as much as possible.
Automated testing cannot replace things like exploratory testing; a machine is not a human. And of course, having tests will never guarantee a bug-free system.
Unfortunately, tests are often of poor quality, especially because people tend to write Waterfall Unit Tests (tests that target an isolated chunk of code) instead of Agile Unit Tests (tests that target one behavior of the program). Agile Unit Tests imitate how we humans test the application, so they are quite similar to what we would do manually, and we can rely on them. The problem with Waterfall Unit Tests is that they do not allow things like safe refactoring, or simply being confident that the program works as expected. They only test units.
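To make the difference concrete, here is a minimal sketch in Python; the ShoppingCart class and the pytest-style tests are hypothetical, just for illustration:

```python
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


# "Waterfall" style: pins the internal list, so any refactor of _items
# breaks the test even when the observable behavior is unchanged.
def test_add_appends_tuple_to_internal_list():
    cart = ShoppingCart()
    cart.add("book", 10)
    assert cart._items == [("book", 10)]


# "Agile" style: checks one behavior through the public interface,
# the way a human would verify it manually.
def test_total_sums_the_prices_of_added_items():
    cart = ShoppingCart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12
```

The first test has to change whenever the internal storage changes, even if nothing a user cares about is broken; the second one only fails when the behavior itself breaks, which is why it is the kind of test you can rely on.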
The recommendation to avoid running the application manually is not only about avoiding debugging, but also about increasing the quality of the tests: the less you run the application, the better the tests have to be. Because you try to mimic what you would otherwise check manually, it guides you toward writing Agile Unit Tests.
The most incredible thing is that you can transfer your testing skills to debugging: you can debug really fast when you apply the TDD mindset. With TDD, we repeatedly break the code to see whether everything still works. To debug, we can break the program in a controlled way to find the origin of the failure sooner and rule out false assumptions. This is something I discovered in the last year, and it is truly effective.
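As a rough illustration of what I mean by breaking the program in a controlled way, here is a hypothetical, deliberately buggy apply_discount function and two narrow pytest-style tests; everything here is made up for the example, and each test exists only to rule out one assumption about where the failure starts:

```python
def apply_discount(total, percent):
    # Buggy on purpose for the illustration: the discount is subtracted twice.
    discounted = total - total * percent / 100
    return discounted - total * percent / 100


def test_discount_is_applied_once():
    # Rules out "the input data is wrong": with a known-good input the function
    # itself produces the wrong result, so the bug is here, not upstream.
    assert apply_discount(100, 30) == 70


def test_zero_discount_changes_nothing():
    # Passes, so the plumbing around the percentage is fine; together with the
    # failure above, this isolates the double subtraction as the origin.
    assert apply_discount(100, 0) == 100
```

The failing test pinpoints the faulty function and the passing one rules out the surrounding code, without ever starting the application.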
It is so effective that on one occasion a team chasing an unknown bug asked for my help and told me: "We have spent two days looking for the bug, please do not fix it in less than an hour." It took two minutes to help them find and isolate the bug. We did not need to run the application manually.
Of course, there will always be parts of your software that cannot be covered, and there the tests will not be reliable. But if your testing is good, that should happen only once in a blue moon.