Musing on feedback from tests when using AI

Brian Kotos

June 10, 2025

Just a random thought: one benefit you historically got from writing tests is feedback on the design of your code. That is, if your tests are hard to write, that’s a signal that the design of your code could likely be improved.
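To make that concrete, here’s a minimal, hypothetical TypeScript sketch (all the function names and paths are invented for illustration). The first version is painful to test because it reaches for the clock and the filesystem on its own; the second takes those values as parameters, and the test falls out for free.

```typescript
import { readFileSync } from "node:fs";

// Hard to test: hidden dependencies on the system clock and the filesystem
// mean any test has to mock both before it can even call the function.
function summarizeLogHard(): string {
  const today = new Date().toISOString().slice(0, 10);
  const log = readFileSync(`/var/log/app-${today}.log`, "utf8");
  return `${log.split("\n").length} lines logged on ${today}`;
}

// Easy to test: the same logic with its dependencies passed in.
function summarizeLog(log: string, today: string): string {
  return `${log.split("\n").length} lines logged on ${today}`;
}

// A test for the second version needs no mocks at all:
// expect(summarizeLog("a\nb\nc", "2025-06-10")).toBe("3 lines logged on 2025-06-10");
```

None of this is new advice, of course; passing dependencies in has long been the standard fix for this kind of coupling. The point is that the pain of writing the test used to be what pushed you toward it.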

Now with generative AI, we are more removed from that friction, since you can just have an LLM brute-force tests for even awkwardly designed code, glossing over the bad design instead of feeling it.

I’m not saying this is good or bad; it probably depends on the context. But it’s important to be cognizant of it, especially if the “product” you build is inherently technical. If what you deliver is an SDK, an NPM package, or a REST API, you might not want to ignore the feedback from your tests.

I use LLM tooling like Cursor and GitHub Copilot heavily, both at work and for personal projects. But I think it’s important for all of us to be mindful of the blind spots these tools create, not just the ways they increase our leverage.