golang sync.Cond
Exploring sync.Cond from the Golang sync Package
A colleague shared a video with me about the Go programming language, reviewing the use of Go generics and the Cond type from the standard library sync package. The code being reviewed can be found on GitHub here. I decided to take a closer look at the usefulness of sync.Cond in managing synchronization in concurrent programs. Before watching the video I had never used sync.Cond, but I had experience using the other types in the sync package. Intrigued, I started to research more about it and discovered how beneficial it is for preventing race conditions and deadlocks.
For Golang developers, the sync package is a popular and helpful tool for synchronizing concurrent operations. The sync.Cond type is specifically designed to help developers synchronize access to shared resources and is particularly useful when dealing with heavily loaded concurrent systems. It does so by allowing goroutines to pause their execution until a particular condition is met, enabling other goroutines to carry on uninterrupted until the condition is satisfied.
Using sync.Cond is surprisingly easy. It works in a very similar manner to sync.Mutex, which is more common in Go projects. For me, a particularly intriguing use case is concurrently accessing and appending items in a slice. For example, say you need a certain number of items from a slice before performing an operation, or all the items that arrive within a particular window, say the last 5 seconds; sync.Cond is very useful for this kind of work.
Firstly, to create a new sync.Cond we can use the sync.NewCond function, which takes a parameter that implements the Locker interface. If we want to use a type from the sync package to satisfy the Locker interface, then we'll either want sync.Mutex or sync.RWMutex.
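As a quick sketch, either of the following works, since both types satisfy Locker (assuming the sync package is imported):

c := sync.NewCond(&sync.Mutex{})  // backed by a Mutex
c = sync.NewCond(&sync.RWMutex{}) // backed by the write lock of an RWMutex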
Let’s now look at how we can implement this for our example.
// List stores items of type T in a slice and provides a way to wait for and retrieve a given number of items
type List[T any] struct {
    items []T
    cond  *sync.Cond
}

// NewList creates a new list of T
func NewList[T any]() *List[T] {
    return &List[T]{
        items: []T{},
        cond:  sync.NewCond(&sync.Mutex{}),
    }
}
In this generic example, we create a List that can store any type.
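For instance, if we were collecting temperature readings (a hypothetical use, hinted at by the readings naming in the original code), creating a list would look like:

readings := NewList[float64]()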
Next, we need to look at how we can add new items to the list. To lock around the mutation we use the underlying Locker that was passed into the NewCond() call, which the Cond exposes as its L field.
// Add appends a new item to the list and wakes any waiting goroutines
func (l *List[T]) Add(item T) {
    l.cond.L.Lock()
    defer l.cond.L.Unlock()
    l.items = append(l.items, item)
    l.cond.Broadcast()
}
Lock() is called immediately when we enter the Add() function, followed by a deferred call to Unlock(). This is still essential, as we want to modify the items slice safely. What is different in this Add() function is the call to Broadcast(), which signals all waiting goroutines that they can now wake.
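Broadcast() wakes every waiting goroutine, whereas Signal() would wake only one; broadcasting is used here so that each waiter can recheck whether enough items have arrived for its own request. As a rough sketch, using the hypothetical readings list from above, several goroutines can append concurrently without any extra locking on the caller's side:

readings := NewList[float64]()
var wg sync.WaitGroup
for i := 0; i < 3; i++ {
    wg.Add(1)
    go func(v float64) {
        defer wg.Done()
        readings.Add(v) // each call locks, appends, then broadcasts
    }(float64(i))
}
wg.Wait()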
Preventing race conditions and deadlocks is essential for developing reliable concurrent programs. A race condition occurs when two or more goroutines access and modify the same shared resource without proper synchronization, leading to unpredictable behaviour and data corruption. Because sync.Cond guards the shared slice with a lock and can broadcast to waiting goroutines when it changes, it helps us avoid exactly this.
On the other hand, a deadlock happens when two or more goroutines wait for each other to release a resource, resulting in a situation where none of them can proceed. Both issues can cause programs to crash or behave incorrectly, making them difficult to debug and maintain, so guarding against them is critical for building robust concurrent programs.
We can now look at how we might implement a Get call that retrieves a given number of items from the list.
// Get waits until at least count items are available, then removes and returns them
func (l *List[T]) Get(ctx context.Context, count int) ([]T, error) {
    l.cond.L.Lock()
    defer l.cond.L.Unlock()

    var cancel func()
    ctx, cancel = context.WithCancel(ctx)
    defer cancel()

    // Wake the Wait() below if the caller's context is cancelled.
    go func() {
        <-ctx.Done()
        l.cond.Broadcast()
    }()

    for len(l.items) < count {
        l.cond.Wait()
        if err := ctx.Err(); err != nil {
            return nil, err
        }
    }

    // Take the first count items; the full slice expression caps the capacity
    // so appending to the returned slice cannot touch the items we keep.
    temps := l.items[:count:count]
    l.items = l.items[count:]
    return temps, nil
}
The most interesting part of this function is the for loop. It stops the function from going any further while there are not enough items in the list to satisfy the caller; in that case the call to Wait() blocks until another goroutine wakes it by calling either Signal() or Broadcast(). Wait() releases the lock while it is blocked and reacquires it before returning, which is why the length check lives in a loop and is re-evaluated after every wakeup.
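Stripped of the specifics of our List, the general pattern (as described in the sync package documentation) looks like this sketch, where condition() stands in for whatever check the caller cares about:

c.L.Lock()
for !condition() {
    c.Wait() // releases c.L while blocked, reacquires it before returning
}
// condition() holds and c.L is still locked
c.L.Unlock()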
Another interesting addition in the Get example above is the cancellable context, which both gives callers a way to stop waiting and prevents goroutines from deadlocking if enough items never arrive.
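To tie it all together, here is a minimal sketch of how the List might be used, assuming it lives in the same package as main: a producer adds a reading every half second while a consumer waits up to five seconds for a batch of three. The values and timings are illustrative assumptions, not part of the original example.

package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    readings := NewList[float64]()

    // Producer: append a new reading every 500ms.
    go func() {
        for i := 0; ; i++ {
            readings.Add(float64(i))
            time.Sleep(500 * time.Millisecond)
        }
    }()

    // Consumer: wait up to 5 seconds for 3 readings.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    batch, err := readings.Get(ctx, 3)
    if err != nil {
        fmt.Println("gave up waiting:", err)
        return
    }
    fmt.Println("got batch:", batch)
}

If the producer stalls, the context timeout wakes the waiting Get via the Broadcast in its cancellation goroutine, and the call returns an error instead of blocking forever.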
In conclusion, sync.Cond is an incredibly useful tool for synchronising access to shared resources in highly loaded concurrent systems.
A full working copy of the example above can be found here.