- Use ETS for read-heavy shared data: `:ets.new(:cache, [:named_table, :public, read_concurrency: true])` — ETS reads are O(1) and don't serialize through a single GenServer process, which can yield an order of magnitude more throughput for lookup-intensive workloads.
- Use `Stream` instead of `Enum` when chaining multiple transformations over large datasets: `rows |> Stream.map(&parse/1) |> Stream.filter(&valid?/1) |> Enum.take(100)` — the pipeline stays lazy and computes only enough elements to produce the first 100 results.
- Use binary pattern matching for string/byte processing: `<<first::8, rest::binary>> = data` — matching a `binary` tail creates a sub-binary that references the original data instead of copying it (and binaries over 64 bytes are reference-counted and shared between processes).
- Batch external calls with `ids |> Task.async_stream(&fetch/1, max_concurrency: 20) |> Enum.to_list()` instead of sequential calls — runs up to 20 fetches concurrently with backpressure; note the stream is lazy, so an `Enum` call is needed to actually run it.
- Profile before optimizing: `:observer.start()` for a live view of process memory and message queues; `:fprof.trace(:start) ... :fprof.trace(:stop); :fprof.profile(); :fprof.analyse(totals: true)` for function-level timing; `Benchee.run(%{"fn" => fn -> ... end})` for micro-benchmarks (note the key must be a string or atom, not a charlist).
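The ETS tip above can be sketched as a minimal cache. The table name `:cache` and the stored value are illustrative, not from the original:

```elixir
# Create a named, public table optimized for concurrent reads.
# :named_table lets any process refer to it by the atom :cache;
# read_concurrency: true trades some write speed for parallel reads.
:ets.new(:cache, [:named_table, :public, read_concurrency: true])

# Any process can write and read directly -- no GenServer round-trip.
:ets.insert(:cache, {:user_42, %{name: "Ada"}})

value =
  case :ets.lookup(:cache, :user_42) do
    [{_key, v}] -> v
    [] -> nil
  end
```

One design note: the creating process owns the table, and the table is destroyed when that process exits, so in a real application the table is usually created inside a supervised process.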
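A self-contained version of the lazy `Stream` pipeline above. `parse/1` and `valid?/1` aren't shown in the original, so doubling and a divisibility check stand in for them:

```elixir
# Stream.map/Stream.filter build a composed lazy enumerable; nothing
# runs until Enum.take/2 demands elements, and it stops after 100.
result =
  1..1_000_000
  |> Stream.map(fn n -> n * 2 end)             # stand-in for &parse/1
  |> Stream.filter(fn n -> rem(n, 3) == 0 end) # stand-in for &valid?/1
  |> Enum.take(100)

# Only the first 300 of the million source elements are ever touched:
# the 100th doubled multiple of 3 comes from n = 300.
```

An `Enum`-only version of the same pipeline would materialize two intermediate million-element lists before taking the first 100.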
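The `Task.async_stream` tip, sketched with a fake `fetch` (a short sleep plus a computed value) since the real function isn't shown:

```elixir
# Hypothetical fetch: pretends to do I/O, then returns a value.
fetch = fn id ->
  Process.sleep(10)
  {id, id * 100}
end

# Up to 20 fetches run at once; the stream is lazy, so Enum.map/2 is
# what actually drives it. Results come back in input order (ordered:
# true is the default), each wrapped in an {:ok, value} tuple.
results =
  1..20
  |> Task.async_stream(fetch, max_concurrency: 20, timeout: 5_000)
  |> Enum.map(fn {:ok, value} -> value end)
```

With `max_concurrency: 20` the whole batch takes roughly one fetch's latency instead of twenty, while still capping how many tasks hit the external service at once.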