Channels and mutexes in Go

Channels and mutexes are both tools for managing concurrency in Go, but they suit different scenarios. Choosing between them depends on what you aim to achieve in terms of synchronization and communication, and on the architectural patterns you prefer in your concurrent applications.

When to Use Channels

Channels in Go are used for communication between goroutines. They embody Go’s design philosophy: “Don’t communicate by sharing memory; share memory by communicating.” Here are scenarios where channels are typically the preferred choice:

  1. Producer-Consumer Problems: Channels are ideal for cases where one goroutine needs to send data to another goroutine to be processed. This is the classic producer-consumer scenario.
package main

import (
	"fmt"
)

// producer
func gen(numbers ...int) <-chan int {
	out := make(chan int, len(numbers))
	go func() {
		for _, num := range numbers {
			out <- num
		}
		close(out)
	}()
	return out
}
// transformer
func sq(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for num := range in {
			out <- num * num
		}
		close(out)
	}()
	return out
}

func main() {
	numbers := []int{23, 12, 3, 5, 6}
	in := gen(numbers...)
	out := sq(sq(in))
	// consumer
	for num := range out {
		fmt.Println(num)
	}
}
  2. Coordination and Signaling: Channels are useful for coordinating the execution of goroutines, for instance, signaling a goroutine to start or stop processing, as in the sketch below.
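
As a minimal sketch of channel-based signaling (a hypothetical worker, not code from the post), closing a done channel broadcasts a stop request to every goroutine that selects on it:

package main

import (
	"fmt"
	"time"
)

// worker does periodic work until the done channel is closed.
func worker(done <-chan struct{}) {
	for {
		select {
		case <-done:
			fmt.Println("worker: stop signal received")
			return
		default:
			fmt.Println("worker: processing")
			time.Sleep(100 * time.Millisecond)
		}
	}
}

func main() {
	done := make(chan struct{})
	go worker(done)

	time.Sleep(300 * time.Millisecond)
	close(done) // signal the worker to stop; a closed channel is readable by all receivers
	time.Sleep(100 * time.Millisecond) // give the worker a moment to exit
}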

  3. Distributing Work: When distributing tasks among multiple workers (goroutines), channels can serve as queues. Each worker can receive tasks sent over the channel.

// bounded parallelism
package main

import (
	"crypto/md5"
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"sync"
)

func main() {
	results, err := MD5All(".")
	if err != nil {
		fmt.Println("got an error: ", err)
	}
	for k, v := range results {
		fmt.Println(k, ": ", v)
	}
}

type result struct {
	path string
	sum  [md5.Size]byte
	err  error
}

// walkFiles: first stage, emits paths of regular files in the tree
func walkFiles(done <-chan struct{}, root string) (<-chan string,
	<-chan error) {
	paths := make(chan string)
	errc := make(chan error, 1)
	go func() {
		defer close(paths)
		defer close(errc)
		errc <- filepath.Walk(root,
			func(path string, info os.FileInfo, err error) error {
				if err != nil {
					return err
				}
				if !info.Mode().IsRegular() {
					return nil
				}
				select {
				case paths <- path:
				case <-done:
					return errors.New("files walk canceled")
				}

				return nil
			})
	}()
	return paths, errc
}

// digester: middle stage, receives filenames from paths and sends digest results on c
func digester(done <-chan struct{}, paths <-chan string,
	c chan<- result) {
	for path := range paths {
		data, err := os.ReadFile(path)
		select {
		case c <- result{path, md5.Sum(data), err}:
		case <-done:
			return
		}
	}
}

// MD5All starts a fixed number of goroutines to read and digest files
func MD5All(root string) (map[string][md5.Size]byte, error) {
	c := make(chan result)
	done := make(chan struct{})
	defer close(done)
	paths, errc := walkFiles(done, root)
	var wg sync.WaitGroup
	maxCount := 20
	wg.Add(maxCount)
	for i := 0; i < maxCount; i++ {
		go func() {
			defer wg.Done()
			digester(done, paths, c)
		}()
	}
	go func() {
		wg.Wait()
		close(c)
	}()
	m := make(map[string][md5.Size]byte)
	for r := range c {
		if r.err != nil {
			return nil, r.err
		}
		m[r.path] = r.sum
	}
	if err := <-errc; err != nil {
		return nil, err
	}
	return m, nil
}
  4. Implementing Patterns Like Fan-in and Fan-out: Channels make it easier to implement concurrency patterns where multiple inputs need to be consolidated or a single input needs to be distributed among multiple workers.
package main

import (
	"fmt"
	"sync"
)

func main() {
	data := []int{13, 4, 5, 12, 34, 7, 9, 10}
	done := make(chan struct{})
	defer close(done)

	in := gen(done, data...)
	// fan-out
	out1 := sq(done, in)
	out2 := sq(done, in)
	out3 := sq(done, in)
	// fan-in
	out := merge(done, out1, out2, out3)
	for n := range out {
		fmt.Println(n)
	}
}

func gen(done <-chan struct{}, nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, num := range nums {
			select {
			case <-done:
				return
			case out <- num:
			}
		}
	}()
	return out
}

func sq(done <-chan struct{}, ch <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for num := range ch {
			select {
			case <-done:
				return
			case out <- num * num:
			}
		}
	}()
	return out
}

func merge(done <-chan struct{}, chs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	output := func(ch <-chan int) {
		defer wg.Done()
		for num := range ch {
			select {
			case <-done:
				return
			case out <- num:
			}
		}
	}
	wg.Add(len(chs))
	for _, ch := range chs {
		go output(ch)
	}
	go func() {
		wg.Wait()
		close(out)
	}()

	return out
}
  5. Timeouts and Cancellation: Channels work well with Go’s built-in select statement, which makes implementing timeouts and cancellation straightforward, as in the sketch below.
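
As a minimal sketch of a timeout (slowWork is a hypothetical helper, not from the post), select races the worker’s result against a timer from time.After:

package main

import (
	"fmt"
	"time"
)

// slowWork simulates a slow operation and reports its result on a channel.
// The channel is buffered so the goroutine does not leak if we time out.
func slowWork() <-chan string {
	out := make(chan string, 1)
	go func() {
		time.Sleep(2 * time.Second)
		out <- "done"
	}()
	return out
}

func main() {
	select {
	case res := <-slowWork():
		fmt.Println("result:", res)
	case <-time.After(1 * time.Second):
		fmt.Println("timed out")
	}
}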

When to Use Mutexes

Mutexes (short for mutual exclusion locks) are used to protect shared resources or critical sections where concurrent access must be controlled to prevent data races. They are the more traditional approach seen in many programming languages. Use mutexes when:

  1. Protecting Shared State: If multiple goroutines need access to the same shared state and you need to ensure that only one goroutine can access it at a time, a mutex is appropriate (see the sketch after this list).

  2. Simplicity in Low-Concurrency Situations: In situations where concurrency is limited or where the critical sections are very short and infrequent, mutexes might introduce less overhead than channel-based solutions.

  3. Avoiding Overhead of Channel Communication: Mutexes can be more efficient than channels when the task involves simple, quick actions on shared data. This can be because channels involve copying data and more complex synchronization protocols.

  4. Non-communicating Parallelism: If goroutines are largely independent and just need occasional access to shared resources (without needing to communicate), mutexes can be simpler and more direct.
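
As a minimal sketch of protecting shared state with sync.Mutex (a hypothetical Counter type, not from the post), the lock ensures that only one goroutine touches the map at a time:

package main

import (
	"fmt"
	"sync"
)

// Counter guards its map with a mutex so concurrent goroutines
// can update it without data races.
type Counter struct {
	mu     sync.Mutex
	counts map[string]int
}

func (c *Counter) Inc(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.counts[key]++
}

func (c *Counter) Get(key string) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.counts[key]
}

func main() {
	c := &Counter{counts: make(map[string]int)}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc("hits")
		}()
	}
	wg.Wait()
	fmt.Println("hits:", c.Get("hits")) // always 100
}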

Comparing Channels and Mutexes

  • Design Philosophy: Channels align with Go’s philosophy of encouraging patterns where data flows between independent actors (goroutines). Mutexes are more about controlling access to shared state.
  • Ease of Use: Channels can lead to more readable and maintainable code by structuring the flow of data and signals explicitly. However, incorrect use of channels can also lead to complex and subtle bugs such as deadlocks or goroutine leaks (see the sketch after this list).
  • Performance Considerations: Mutexes typically carry less memory and runtime overhead than channels, especially for simple, quick access to shared data.
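
To make the goroutine-leak point concrete, here is a minimal sketch (hypothetical code, not from the post): the sender blocks forever on an unbuffered channel once the receiver stops reading early.

package main

import "fmt"

// leakyGen sends values on an unbuffered channel with no way to stop.
func leakyGen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n // blocks forever once the receiver stops reading
		}
		close(out)
	}()
	return out
}

func main() {
	ch := leakyGen(1, 2, 3, 4, 5)
	fmt.Println(<-ch) // read one value, then stop
	// The leakyGen goroutine is now stuck on `out <- n` for the rest
	// of the program’s life: a goroutine leak. The gen/sq functions
	// above avoid this with a done channel and select.
}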

Conclusion

In general, prefer channels when the problem is primarily about communication or when using patterns that naturally fit channel semantics. Opt for mutexes when dealing with shared state that needs protection during brief, simple accesses. Both tools are powerful, and their appropriate use depends on the specifics of the problem you’re trying to solve in your Go application.
