Channels are Go’s preferred way to coordinate goroutines. But sometimes the natural way to model your problem is shared state — multiple goroutines reading and writing the same variable. The sync package gives you the primitives for doing that safely.
The race condition problem
First, the problem. Here’s a buggy program — multiple goroutines incrementing a shared counter:
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var counter int
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++
		}()
	}
	wg.Wait()
	fmt.Println("counter:", counter)
}
```
Run it several times:
```
counter: 967
counter: 988
counter: 942
```
We expected 1000. We get something less, and the result changes every run. This is a race condition — two goroutines reading and modifying counter at the same time, stepping on each other’s updates.
Run with go run -race main.go and Go’s race detector will tell you exactly where the conflict happened. Use -race while developing concurrent code — it’s invaluable.
You also met sync.WaitGroup in this example — it’s the simple way to wait for a known number of goroutines to finish. Add(n) says “expect n more goroutines”; each goroutine calls Done() when finished; Wait() blocks until everyone’s done.
sync.Mutex — the basic lock
A mutex (“mutual exclusion”) guarantees that only one goroutine at a time can be inside a critical section.
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		counter int
		mu      sync.Mutex
		wg      sync.WaitGroup
	)
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("counter:", counter)
}
```
```
counter: 1000
```
Every time, every run. No more race.
The pattern:
- mu.Lock() — wait until no one else holds the lock, then claim it
- Do the protected operation
- mu.Unlock() — release the lock so someone else can claim it
The most common pattern uses defer to guarantee the lock is released even if the function panics or returns early:
```go
mu.Lock()
defer mu.Unlock()
// safely use the protected state
```
Mutex with a struct
In real code, the mutex usually lives next to the data it protects:
```go
package main

import (
	"fmt"
	"sync"
)

type SafeCounter struct {
	mu    sync.Mutex
	count int
}

func (c *SafeCounter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.count++
}

func (c *SafeCounter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.count
}

func main() {
	c := &SafeCounter{}
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	fmt.Println("Final count:", c.Value())
}
```
```
Final count: 1000
```
This is the right shape: the protected data and its mutex are bundled into a type. Methods do the locking; callers don’t need to know.
sync.RWMutex — when reads outnumber writes
A regular Mutex allows only one holder at a time, even when all goroutines are just reading (which is safe to do concurrently). For workloads with many readers and few writers, this is wasteful.
A sync.RWMutex allows either:
- Many readers at the same time (call RLock() / RUnlock())
- One writer, exclusive (call Lock() / Unlock())
```go
package main

import (
	"fmt"
	"sync"
)

type Cache struct {
	mu   sync.RWMutex
	data map[string]string
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

func (c *Cache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = value
}

func main() {
	cache := &Cache{data: make(map[string]string)}
	cache.Set("country", "India")
	cache.Set("city", "Chennai")
	if v, ok := cache.Get("city"); ok {
		fmt.Println("city:", v)
	}
}
```
```
city: Chennai
```
Use RWMutex when you can prove that read-heavy workloads benefit. For mostly-write or balanced patterns, regular Mutex is fine and simpler.
sync.Once — run something exactly once
Sometimes you need a piece of code to run exactly once, even when called from many goroutines simultaneously. Initializing a database connection, loading a config file, registering a metric — all good candidates.
```go
package main

import (
	"fmt"
	"sync"
)

var (
	config map[string]string
	once   sync.Once
)

func loadConfig() {
	fmt.Println("loading config...")
	config = map[string]string{
		"version": "1.0.0",
		"env":     "production",
	}
}

func GetConfig() map[string]string {
	once.Do(loadConfig)
	return config
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			cfg := GetConfig()
			fmt.Println("got version:", cfg["version"])
		}()
	}
	wg.Wait()
}
```
```
loading config...
got version: 1.0.0
got version: 1.0.0
got version: 1.0.0
got version: 1.0.0
got version: 1.0.0
```
Notice “loading config…” prints once, no matter how many goroutines call GetConfig. sync.Once guarantees this. The first goroutine into Do() runs the function; everyone else waits for it to finish, then proceeds.
Channels vs mutexes — which to use?
A common question. The Go community has a saying:
Don’t communicate by sharing memory; share memory by communicating.
Translation: prefer channels (passing values between goroutines) over mutexes (shared state with locks). Channels make the data flow explicit.
But this isn’t a religious rule. Use a mutex when:
- The state genuinely is shared and needs to be read/written from multiple goroutines
- The state is simple — a counter, a cache, a flag
- A channel-based design would be more complex without being clearer
Use a channel when:
- You’re passing data through stages of a pipeline
- Goroutines need to coordinate (signal completion, request work, etc.)
- You’re building producer-consumer patterns
In real codebases, you’ll see both — often in the same program.
Summary
You now know the full Go concurrency toolkit:
- Goroutines — start with go funcCall()
- Channels — typed pipes between goroutines
- sync.WaitGroup — wait for a group of goroutines to finish
- sync.Mutex — exclusive lock for shared state
- sync.RWMutex — multi-reader / single-writer lock
- sync.Once — run code exactly once across all goroutines
- -race — Go’s race detector, your best friend during development
Concurrency is a skill that takes years to master, but you now have every tool you need to start.
What’s next
You have the core of the language. The final section covers three powerful features that round out the picture: testing, reflection, and generics. They’re not used in every program, but every Go developer should know them.