Go Testing? All You Need to Know Here
Explore key Go testing elements such as parallel tests, subtests, teardown functions, and test helpers to improve your testing skills…
Testing is a crucial step in the software development process, and it’s straightforward in Go.
You usually follow these simple steps:
Make a file ending in *_test.go
Create a function that follows the TestXxx(t *testing.T) signature
Execute it with the go test command (or simply hit the button in your IDE).
But have you ever considered that you can do more than that?
In this article, we’ll delve into additional techniques for writing tests in Go that move past the basic steps. These methods aim to make your tests not just functional, but also more maintainable and efficient.
1. Marking a Test as Failed
First, let’s talk about marking a test as failed in Go. It’s not just about using t.Fail(); here’s a quick rundown of your options:
t.Fail(): simply marks the test as failed but doesn’t stop the test from running further.
t.FailNow(): not only marks the test as failed but also halts any further execution right away with runtime.Goexit().
t.Errorf(): a two-in-one, it logs an error message with t.Logf() while also marking the test as failed using t.Fail().
t.Fatalf(): another combo, it logs an error message with t.Logf() and halts the test immediately with t.FailNow().
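To make the difference between the first two concrete, here is a minimal sketch (the TestFailVsFailNow name is just for illustration, separate from the Add example used later) showing how t.Fail() lets execution continue while t.FailNow() stops the test function on the spot:

// A minimal sketch contrasting t.Fail and t.FailNow; the log calls
// only exist to show which lines are actually reached.
func TestFailVsFailNow(t *testing.T) {
    t.Log("before Fail")
    t.Fail()            // marks the test as failed, execution continues
    t.Log("after Fail") // still printed

    t.FailNow()            // marks the test as failed and stops it via runtime.Goexit()
    t.Log("after FailNow") // never printed
}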
If you have multiple independent test cases within a single function and you want to continue running the test even when one case fails, t.Errorf() would be suitable:
func TestAdd(t *testing.T) {
    // case 1
    if res := Add(1, 2, 3); res != 6 {
        t.Errorf("Expected %d instead of %d", 6, res)
    }

    // case 2
    if res := Add(-1, -2); res != -3 {
        t.Errorf("Expected %d instead of %d", -3, res)
    }
}
But if your test cases are interdependent and you need to stop everything if one fails, then you might want to think about using t.Fatalf().
func TestAdd(t *testing.T) {
    // case 1
    case1 := Add(1, 2, 3)
    if case1 != 6 {
        t.Fatalf("Expected %d instead of %d", 6, case1) // <- stop the execution
    }

    // case 2
    case2 := Add(case1, -2)
    if case2 != 4 {
        t.Errorf("Expected %d instead of %d", 4, case2)
    }
}
2. Table-Driven Tests
To keep things straightforward, let’s look at unit testing a basic Add function that takes a list of integers as input.
In our TestAdd function, we’re currently testing two cases:
func Add(a ...int) int {
    total := 0
    for i := range a {
        total += a[i]
    }
    return total
}

func TestAdd(t *testing.T) {
    // case 1
    if res := Add(1, 2, 3); res != 1+2+3 {
        t.Errorf("Expected %d instead of %d", 1+2+3, res)
    }

    // case 2
    if res := Add(-1, -2); res != -1-2 {
        t.Errorf("Expected %d instead of %d", -1-2, res)
    }
}
Imagine you want to test more than just these two scenarios, say ten of them. Continuing with this style would result in redundant and hard-to-maintain code.
A more efficient alternative is the table-driven test method, which is also easier to read and update:
func TestAdd(t *testing.T) {
    testCases := []struct {
        args []int
        want int
    }{
        {args: []int{1, 2, 3}, want: 6},
        {args: []int{-1, -2}, want: -3},
        {args: []int{0}, want: 0},
        {args: []int{-1, 2}, want: 1},
    }

    for _, tc := range testCases {
        if res := Add(tc.args...); res != tc.want {
            t.Errorf("Add(%v) = %d; want %d", tc.args, res, tc.want)
        }
    }
}
With the table-driven method, adding new test scenarios becomes a breeze. Just update the testCases slice, specifically the anonymous struct literal, and you’ll prevent code duplication. This keeps your test code neat and lets you review all test scenarios more easily.
Hmm… there’s a hiccup with the current approach. Running the test using the go test -v command for verbose output looks like this:
$ go test -v medium/write_test.go
=== RUN TestAdd
--- PASS: TestAdd (0.00s)
PASS
ok command-line-arguments 0.349s
This doesn’t tell us much beyond the fact that the test passed; it doesn’t indicate the number of test cases or point out which specific one might have failed.
To illustrate, let’s pretend one test case doesn’t pass:
$ go test -v medium/write_test.go
--- FAIL: TestAdd (0.00s)
write_test.go:30: Add([0]) = 0; want 1
FAIL
The output isn’t that helpful when a test case fails, so we can’t determine which case went wrong or what its inputs were.
This is where subtests prove valuable: incorporating them elevates the quality of our test reports, making them more informative and useful when issues arise.
3. Subtests: Running Multiple Test Cases
To take advantage of subtests in Go’s testing package, let’s get acquainted with a new method: t.Run(name string, f func(t *testing.T)) (isSuccess bool).
The t.Run() function spawns a subtest with the provided name and runs the function f in a separate goroutine. Even though each subtest is executed in its own goroutine, they’re run one after another (we’ll explore parallel execution in a bit).
Here’s how we can modify our existing test to include subtests:
func TestAdd(t *testing.T) {
    testCases := []struct {
        name string
        args []int
        want int
    }{
        {name: "case 1 2 3", args: []int{1, 2, 3}, want: 6},
        {name: "case -1 -2", args: []int{-1, -2}, want: -3},
        {name: "case 0", args: []int{0}, want: 0},
        {name: "case -1 2", args: []int{-1, 2}, want: 1},
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            if res := Add(tc.args...); res != tc.want {
                t.Errorf("Add(%v) = %d; want %d", tc.args, res, tc.want)
            }
        })
    }
}
I’ve introduced an extra field named ‘name’ in our testCases anonymous struct, and this serves as the label for each subtest. To see how this alters our test output, let’s rerun the test using the go test -v command:
$ go test -v medium/write_test.go
=== RUN TestAdd
=== RUN TestAdd/case_1_2_3
=== RUN TestAdd/case_-1_-2
=== RUN TestAdd/case_0
=== RUN TestAdd/case_-1_2
--- PASS: TestAdd (0.00s)
--- PASS: TestAdd/case_1_2_3 (0.00s)
--- PASS: TestAdd/case_-1_-2 (0.00s)
--- PASS: TestAdd/case_0 (0.00s)
--- PASS: TestAdd/case_-1_2 (0.00s)
PASS
ok command-line-arguments 0.523s
Perfect!
Notice how each subtest has its own distinct name? This makes it much easier to understand what’s going on, especially when a test case doesn’t go as planned.
Moreover, there’s a way to run just a specific subtest rather than the entire test suite, and this can be quite the time-saver when you’re only interested in checking certain functionalities.
Just use this command line instruction:
$ go test -v -run=TestAdd/case_1_2_3 medium/write_test.go
=== RUN TestAdd
=== RUN TestAdd/case_1_2_3
--- PASS: TestAdd (0.00s)
--- PASS: TestAdd/case_1_2_3 (0.00s)
You might have noticed, as in our earlier example on marking a test as failed, that our individual test cases don’t rely on one another.
To make the tests run faster, we can turn on parallel execution using the t.Parallel() function. This lets the test cases run at the same time.
4. Running Subtests Concurrently
To run subtests in parallel, use t.Parallel() to turn on parallel mode.
This becomes particularly useful when you have a bunch of independent test cases: by running them in parallel, you can speed up the whole testing process.
Check out how it’s done:
for _, tc := range testCases {
    tc := tc
    t.Run(tc.name, func(t *testing.T) {
        t.Parallel() // <---- mark this line
        if res := Add(tc.args...); res != tc.want {
            t.Errorf("Add(%v) = %d; want %d", tc.args, res, tc.want)
        }
    })
}
“Okay, hold on. Why on Earth is there a tc := tc in the loop?”
Well, that’s to handle a common gotcha in Go related to closures.
This extra line makes sure that each iteration of the loop has its own, separate copy of tc, preventing conflicts with the original loop variable.
Confused? Here’s an example to clear things up:
package main

import (
    "fmt"
    "time"
)

func main() {
    for i := 0; i < 3; i++ {
        go func() {
            fmt.Println(i)
        }()
    }
    time.Sleep(5 * time.Second)
}
Now, what are you expecting this to output? You’d think ‘0, 1, 2’, right?
Wrong.
You’ll typically see ‘3 3 3’ because all the goroutines refer to the same variable i, whose final value is 3 after the loop ends. To fix this, you’d add i := i at the top of the loop body.
Good news: The LoopVar issue will be addressed in Go 1.22. You can read more in this article.
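For reference, here is a sketch of what the parallel loop can look like once per-iteration loop variables apply (assuming your module declares go 1.22 or newer); the extra copy becomes unnecessary:

// Assuming Go 1.22+ loop-variable semantics: tc is a fresh variable
// on every iteration, so the `tc := tc` copy can be dropped.
for _, tc := range testCases {
    t.Run(tc.name, func(t *testing.T) {
        t.Parallel()
        if res := Add(tc.args...); res != tc.want {
            t.Errorf("Add(%v) = %d; want %d", tc.args, res, tc.want)
        }
    })
}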
5. Other Unit Test Concepts
We’ve gone through the basics, but unit testing in Go offers even more. If you have advanced needs, you’ll find tools similar to what’s available in other unit testing frameworks.
Helper
Firstly, let’s discuss the t.Helper() function. According to the official description: “Helper marks the calling function as a test helper function. When printing file and line information, that function will be skipped.”
At first, I didn’t quite get the point. Why would I want to hide function details? But as I started creating my own test utilities, like assertions, the role of t.Helper() became clear.
For instance, let’s say you’re using an AssertNil helper function to make sure that another function returns a ‘nil’ error. Here’s what that would look like:
// AssertNil is our helper here
func AssertNil(t *testing.T, a any) {
    if a != nil {
        t.Errorf("expected nil but receive %v", a)
    }
}

// ReturnNil is supposed to return nil but actually returns an error (not nil)
func ReturnNil() error {
    return errors.New("error: fake nil")
}

func TestF(t *testing.T) {
    AssertNil(t, ReturnNil())
}
When you run the go test -v command as we’ve done before, you would see:
go test -v medium/write_test.go
=== RUN TestF
write_test.go:10: expected nil but receive error: fake nil
--- FAIL: TestF (0.00s)
FAIL
FAIL command-line-arguments 0.518s
FAIL
“So what’s the big deal, huh? Looks like the report is working just fine.”
Well, here’s the catch.
The report is too precise in a way that’s not so helpful. See, the error message points to line 10, which is where the t.Errorf line sits in the helper function.
But what I really want is for the error to point out where I actually called the helper function, namely the "AssertNil(t, ReturnNil())" line.
Good news is, t.Helper() solves this problem, so just slap that function call at the beginning of your helper function like so:
func AssertNil(t *testing.T, a any) {
    t.Helper()
    if a != nil {
        t.Errorf("expected nil but receive %v", a)
    }
}
Now, any error messages will point to the specific line where the helper function is invoked in the test. In this example, that would be line 20 with “AssertNil(t, ReturnNil())”.
go test -v medium/write_test.go
=== RUN TestF
write_test.go:20: expected nil but receive error: fake nil
--- FAIL: TestF (0.00s)
FAIL
FAIL command-line-arguments 0.498s
FAIL
Now the error report is way more useful, showing exactly where the problem happened in our test function.
Cleanup (teardown)
The t.Cleanup() function, introduced in Go 1.14, might stump you at first. It lets you register a cleanup function that will run after your test is over.
“Defer does the same thing, doesn’t it? What’s the big deal?”
Let’s take a closer look.
// Using t.Cleanup(f)...
func TestAdd(t *testing.T) {
    t.Cleanup(func() {
        fmt.Println("CleanUp called")
    })

    if res := Add(1, 2); res != 3 {
        t.Errorf("1 + 2 = 3 but receive %d", res)
    }
    fmt.Println("Done")
}

// using defer
func TestAddWithDefer(t *testing.T) {
    defer func() {
        fmt.Println("CleanUp called")
    }()

    if res := Add(1, 2); res != 3 {
        t.Errorf("1 + 2 = 3 but receive %d", res)
    }
    fmt.Println("Done")
}

// ------- THEY ARE THE SAME
Now, what makes t.Cleanup() any better than using defer for tidying up post-test? Here’s the scoop: t.Cleanup() really shines when you’re passing around that *testing.T parameter between different functions.
What’s the point?
To make it clear, let’s look at an example that involves a function named ConnectDB(t *testing.T). This function establishes a database connection and then returns that connection:
type Connection struct{}

func ConnectDB(t *testing.T) Connection {
    fmt.Println("ConnectDB")
    t.Cleanup(func() {
        fmt.Println("CloseDB")
    })
    return Connection{}
}

func TestDB(t *testing.T) {
    connection := ConnectDB(t)

    // ...
    fmt.Println("do something with connection...", connection)
}
Now, let’s run the test to actually see how t.Cleanup() behaves in real life:
go test -v medium/write_test.go
=== RUN TestDB
ConnectDB
do something with connection... {}
CloseDB
--- PASS: TestDB (0.00s)
PASS
We send t *testing.T over to ConnectDB and set up a cleanup function there, instead of just sticking with defer conn.Close().
The 'CloseDB' message only shows its face after the complete test has wrapped up, not as soon as ConnectDB() has done its part.
This makes sure we shut down the database connection exactly when we mean to, not prematurely. If you go the defer route, you'd end up closing the connection right as ConnectDB() exits, and that might not be what you're after.
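For comparison, here is a rough sketch of that defer-based variant (ConnectDBWithDefer is just a hypothetical name for illustration), where the close message fires as soon as the setup function returns rather than after the test finishes:

// Hypothetical defer-based variant: "CloseDB" prints as soon as
// ConnectDBWithDefer returns, i.e. before TestDB has used the connection.
func ConnectDBWithDefer(t *testing.T) Connection {
    fmt.Println("ConnectDB")
    defer fmt.Println("CloseDB") // runs when this function returns, which is too early
    return Connection{}
}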
If you’re aware of any new testing techniques or updates from the Go team, leave a comment below so I know to revise the article.