Devtrovert

Go Performance Boosters: The Top 5 Tips and Tricks You Need to Know

blog.devtrovert.com

The good news is you don't have to master complex theoretical optimizations to get meaningful speed boosts…

Phuong Le
Aug 25, 2023
Feeling like your Go code could use a performance pick-me-up? I’ve been there too.

Photo by israel palacio on Unsplash

Don’t worry, you don’t need to delve deep into complex algorithms to make your code run faster. In this article, I’ll share five straightforward tips to enhance Go’s performance that have served me well in real-world applications.

Note: while the following practices are generally beneficial, there may be instances where maximum performance isn’t a top priority. In such cases, it’s okay to make exceptions.


1. Avoid string concatenation

Creating strings in Go is a common task, right? You might think that using the + operator for string concatenation is the easy way to go, but it’s actually not that efficient.

Why? Each time you use a +, a new string is allocated behind the scenes.

Take a look at this code snippet:

s := ""
for i := 0; i < 100000; i++ {
  s += "x"
}
fmt.Println(s)

In this example, each loop iteration allocates a brand-new string to hold the result of s + “x”, copying over everything built so far. If you’re dealing with large loops, this is far from ideal.

A smarter way to solve this is by using a bytes.Buffer. This method builds the string in a more efficient manner:

var buffer bytes.Buffer

for i := 0; i < 100000; i++ {
  buffer.WriteString("x")
}

s := buffer.String()
fmt.Println(s)

This approach avoids the overhead of a new string allocation on each iteration. Thanks to my reader Fede, who also recommended strings.Builder: it works much like bytes.Buffer but is tailored specifically for string building, and might offer even better performance.

var builder strings.Builder

for i := 0; i < 100000; i++ {
  builder.WriteString("x")
}

s := builder.String()
fmt.Println(s)

I did some tests to figure this out, and what I experienced was pretty interesting:

  • If you use bytes.Buffer, it’s a lot faster than using the + sign to join strings together. In some tests, it was even more than 250 times faster, which is a big deal.

  • But if you’re focused on making strings, strings.Builder is about 1.5 times faster than bytes.Buffer.

The exact numbers will change based on your machine and setup, but the main idea is simple: using + to join strings is slow, strings.Builder is the fastest for building strings, and bytes.Buffer gives you more flexibility.

“Why is strings.Builder quicker specifically for string creation?”

Well, strings.Builder is made to do one thing really well: create strings quickly.

bytes.Buffer, on the other hand, is more all-around and can handle different kinds of data (raw bytes, runes, and so on). So when it comes to just making strings, strings.Builder is better because that’s all it’s designed to do.
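If you want to reproduce the comparison yourself, here’s a rough sketch in the same timing style used throughout this article. The helper names (buildWithBuffer, buildWithBuilder) are just for this example, and the exact ratios will vary by machine:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"time"
)

// buildWithBuffer builds a string of n "x" characters using bytes.Buffer.
func buildWithBuffer(n int) string {
	var buf bytes.Buffer
	for i := 0; i < n; i++ {
		buf.WriteString("x")
	}
	return buf.String()
}

// buildWithBuilder does the same using strings.Builder.
func buildWithBuilder(n int) string {
	var b strings.Builder
	for i := 0; i < n; i++ {
		b.WriteString("x")
	}
	return b.String()
}

func main() {
	const n = 100000

	start := time.Now()
	buildWithBuffer(n)
	fmt.Println("bytes.Buffer:   ", time.Since(start))

	start = time.Now()
	buildWithBuilder(n)
	fmt.Println("strings.Builder:", time.Since(start))
}
```

Both produce identical output, so the only difference you’re measuring is the builder itself.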


2. Pre-Allocating Slices and Maps

Allocating space in advance for slices and maps in Go can speed things up. Let me break it down:

When you keep adding elements to a slice or map and it fills up, it has to resize, and that resizing takes extra time. By pre-allocating the size at the beginning, you avoid the resizing, making everything faster as you add more data.

To illustrate this, I ran a simple test.

First, I created a slice with a small initial capacity and kept adding elements to it. It took about 1.165 milliseconds to add 100,000 elements.

func main() {
  // Allocate a slice with a small capacity
  start := time.Now()
  s := make([]int, 0, 10)
  for i := 0; i < 100000; i++ {
    s = append(s, i)
  }
  elapsed := time.Since(start)
  fmt.Printf("Allocating slice with small capacity: %v\n", elapsed) // 1.165208ms
}

Next, I created a slice pre-allocated with enough space for all the elements. This version took only about 361 microseconds for the same 100,000 elements, more than three times faster.

// Allocate a slice with a larger capacity
start = time.Now()
s = make([]int, 0, 100000)
for i := 0; i < 100000; i++ {
  s = append(s, i)
}
elapsed = time.Since(start)
fmt.Printf("Allocating slice with larger capacity: %v\n", elapsed) // 361.333µs

The speedup happens because the pre-allocated slice doesn’t have to resize each time you append an element, the space is already set aside.
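The same trick works for maps: make accepts a size hint as its second argument, which lets the runtime set aside enough buckets up front. Here’s a sketch in the same style as the slice test above (fillMap is a made-up helper for this example):

```go
package main

import (
	"fmt"
	"time"
)

// fillMap inserts n entries into a map created with the given size hint.
// A hint of 0 means the map grows (and rehashes) as needed.
func fillMap(n, hint int) map[int]int {
	m := make(map[int]int, hint)
	for i := 0; i < n; i++ {
		m[i] = i
	}
	return m
}

func main() {
	const n = 100000

	start := time.Now()
	fillMap(n, 0) // no pre-allocation: repeated bucket growth
	fmt.Println("without size hint:", time.Since(start))

	start = time.Now()
	fillMap(n, n) // pre-allocated: buckets sized once up front
	fmt.Println("with size hint:   ", time.Since(start))
}
```

Unlike slices, a map’s hint is only a hint, there’s no cap to inspect afterwards, but it still saves the incremental growth work.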

If you want the full details, I have another article Go Secret — Slice: A Deep Dive into Slice that goes into this topic more. The main point? If you have an idea of how much space you’ll need, pre-allocating that space can give your performance a noticeable boost.

3. strconv Over fmt for Number-to-String Conversion

You know, I’ve seen a good number of developers, myself included, opt for the fmt package when they need to turn a number into a string. It often looks something like this:

i := 10
iString := fmt.Sprint(i) // <---

But here’s a thought: have you ever wondered about the speed difference between strconv and fmt when it comes to these simple conversions?

Turns out, using strconv can actually be a quicker option. The reason is that fmt needs to perform some extra operations under the hood to determine the type and format, which makes the conversion a tad slower.

So, first on my list, let’s see how fmt fares:

// Using fmt
var x string
start := time.Now()

for i := 0; i < 10000000; i++ {
  x = fmt.Sprint(42)
}
elapsed := time.Since(start)
_ = x // read x once so the compiler doesn't reject it as declared and not used
fmt.Printf("Time taken using fmt: %s\n", elapsed)

This operation takes about 904.7 milliseconds. Now, let’s run the same test using strconv, still looping 10,000,000 times:

// Using strconv
var x string
start := time.Now()

for i := 0; i < 10000000; i++ {
  x = strconv.Itoa(42)
}
elapsed := time.Since(start)
_ = x // read x once so the compiler doesn't reject it as declared and not used
fmt.Printf("Time taken using strconv: %s\n", elapsed)

Here’s a question for you: if strconv is faster like I mentioned earlier, do you have any guesses on by how much?

Well, the strconv route only takes 70.63 milliseconds, making it almost 13 times faster compared to using fmt.Sprint.

4. String to []byte conversion

You know, when we’re building features quickly, we often turn a string into a byte slice using the straightforward syntax, something like this:

s := "Hello World"
sBytes := []byte(s)

However, it’s important to note that strings and byte slices have different internal structures in Go, so converting between them isn’t as smooth as one might think. It actually involves new memory allocation.

Here’s a quick look at the different internal setups:

type StringHeader struct {
  Data uintptr
  Len  int
}

type SliceHeader struct {
  Data uintptr
  Len  int
  Cap  int
}

So, how can we be more efficient?

Good news!

Starting with Go 1.20, you can combine unsafe.StringData and unsafe.Slice to perform the conversion without allocating or copying; it looks like this:

unsafe.Slice(unsafe.StringData(s), len(s))

Curious about how much time this could save? Let’s see:

s := "Hello World"

start := time.Now()
var sBytes []byte
for i := 0; i < 10000000; i++ {
  sBytes = []byte(s)
}
fmt.Println("Using []byte(s):", time.Since(start))

start = time.Now()
for i := 0; i < 10000000; i++ {
  sBytes = unsafe.Slice(unsafe.StringData(s), len(s))
}
fmt.Println("Using unsafe.Slice:", time.Since(start))
_ = sBytes // read sBytes once so the compiler doesn't flag it as unused

Again, how about a guess? Do you think using unsafe.Slice will be more efficient? If yes, by how much?

Well, it turns out that using []byte(s) takes about 244.6 milliseconds, whereas unsafe.Slice shaves it down to just 21.2 milliseconds. That's an improvement by a factor of nearly 11.5 times.
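One important caveat before you reach for this: the slice returned by unsafe.Slice shares memory with the original string, so writing to it is undefined behavior, treat it as read-only. Go 1.20 also added unsafe.String and unsafe.SliceData for the reverse, zero-copy []byte-to-string direction, with the same aliasing caveat. A sketch of both helpers (the function names are my own, not from the standard library):

```go
package main

import (
	"fmt"
	"unsafe"
)

// stringToBytes converts a string to []byte without copying.
// The result MUST be treated as read-only: it aliases the string's memory.
func stringToBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

// bytesToString converts a []byte to string without copying.
// The caller must not modify b afterwards, or the "immutable" string changes.
func bytesToString(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

func main() {
	s := "Hello World"
	b := stringToBytes(s)
	fmt.Println(len(b)) // 11

	s2 := bytesToString([]byte{'G', 'o'})
	fmt.Println(s2) // "Go"
}
```

If the bytes might be mutated later, stick with the plain copying conversion; correctness beats the saved allocation.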

5. Restrict Reflection Usage

Reflection in Go is quite impressive: it allows you to examine and even alter your program’s structure and behavior at runtime.

Interesting, isn’t it?

You can use reflection to identify variable types, read a struct’s fields, and invoke methods at runtime, for example:

package main

import (
  "fmt"
  "reflect"
)

func main() {
  x := 100
  v := reflect.ValueOf(x)
  t := v.Type()
  fmt.Println("Type:", t) // "Type: int"
}

But!

This flexibility comes with trade-offs.

Reflection shifts work to runtime: the program has to resolve types and relationships while it’s running, instead of having them settled at compile time. That extra effort can slow your code compared to statically checked alternatives.

Think of it as the difference between looking a value up at runtime and using a constant that’s already fixed at compile time.
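When you only need to branch on a handful of known types, a type switch gives you similar flexibility without reflect’s runtime cost. A small sketch (describe is a made-up helper for illustration):

```go
package main

import "fmt"

// describe reports a value's type using a compile-time-checked type switch
// instead of reflect.TypeOf, avoiding reflection's runtime overhead.
func describe(v interface{}) string {
	switch v.(type) {
	case int:
		return "int"
	case string:
		return "string"
	case bool:
		return "bool"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(100))     // "int"
	fmt.Println(describe("hello")) // "string"
}
```

Reflection still earns its keep for truly generic code (encoders, ORMs, and the like); the point is simply not to pay its price when a type switch, an interface, or generics would do.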



And there you have it, these tips are just the beginning. There are tons of other tricks to explore if you want to become a true Go performance expert. But don’t stress about mastering them all at once.
