It’s lookin’ good, kids.
“There’s one use case that I find very well suited for Copilot, which spares me tons of tedious work – unit testing.”
Yanis, “Using Github Copilot for unit testing”
I’ve seen the same thing in my own test writing. GitHub Copilot has surely seen so many unit tests that it’s gotten quite smart about helping complete them. Sometimes kicking it off with a descriptive comment is all it needs.
But taking it another step further, GitHub Copilot Labs now has a straight up button to generate tests.
Tabnine is in on this exact same game, where you just click a link and get tests.
This isn’t directly related to AI, but I would add that snapshot testing is a test I’ve been enjoying, whether a built-in feature of a testing library or just conceptually how you’ve constructed your tests.
For example, at CodePen we have tests for checking the code that our processors spit out. In a recent setup, the input for them lives in a src folder, and the test processes that and checks it against expected output in a dist folder. But the clutch part is that when the test runs, it writes its output to a dist_tmp folder. So if a test fails, you can diff the dist and dist_tmp folders and see exactly what differences caused the failure. If the change is expected or unimportant, you can just trash the dist folder and rename dist_tmp to dist, and the test passes again. This is like hitting an “Accept Updated Snapshot” button that a framework might provide.
Snapshots also tend to capture the full output of a function, rather than asserting against one small bit. That is helpful for more comprehensive coverage, which I also like.
I was thinking about all this after reading What if writing tests was a joyful experience? which calls them “expect tests” and has the same kinda vibe.