Common Caveats

This section lists a few constructs to watch out for.

Operator ++

The ++ operator copies its left-hand side operand. That is clearly seen if we do our own implementation in Erlang:

my_plus_plus([H|T], Tail) ->
    [H|my_plus_plus(T, Tail)];
my_plus_plus([], Tail) ->
    Tail.

We must be careful how we use ++ in a loop. First, here is how not to use it:


naive_reverse([H|T]) ->
    naive_reverse(T) ++ [H];
naive_reverse([]) ->
    [].

As the ++ operator copies its left-hand side operand, the growing result is copied repeatedly, leading to quadratic complexity.

On the other hand, using ++ in a loop like this is perfectly fine:


naive_but_ok_reverse(List) ->
    naive_but_ok_reverse(List, []).

naive_but_ok_reverse([H|T], Acc) ->
    naive_but_ok_reverse(T, [H] ++ Acc);
naive_but_ok_reverse([], Acc) ->
    Acc.

Each list element is copied only once. The growing result Acc is the right-hand side operand, which is not copied.

Experienced Erlang programmers would probably write it as follows:


vanilla_reverse([H|T], Acc) ->
    vanilla_reverse(T, [H|Acc]);
vanilla_reverse([], Acc) ->
    Acc.

In principle, this is slightly more efficient because the list element [H] is not built before being copied and discarded. In practice, the compiler rewrites [H] ++ Acc to [H|Acc].

Timer Module

Creating timers using erlang:send_after/3 and erlang:start_timer/3 is more efficient than using the timers provided by the timer module in STDLIB.

The timer module uses a separate process to manage the timers. Before Erlang/OTP 25, this management overhead was substantial and increasing with the number of timers, especially when they were short-lived, so the timer server process could easily become overloaded and unresponsive. In Erlang/OTP 25, the timer module was improved by removing most of the management overhead and the resulting performance penalty. Still, the timer server remains a single process, and it may at some point become a bottleneck of an application.
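For a one-shot timer, the built-in functions can be called directly without going through the timer server. A minimal sketch (the module name `direct_timer` is ours):

```erlang
-module(direct_timer).
-export([demo/0]).

%% Send the message `ping` to ourselves after 50 ms using the
%% built-in timer, bypassing the timer server process entirely.
demo() ->
    Ref = erlang:send_after(50, self(), ping),
    receive
        ping -> got_ping
    after 1000 ->
        %% Cancel the timer if it did not fire in time.
        erlang:cancel_timer(Ref),
        timeout
    end.
```

erlang:start_timer/3 works the same way, but wraps the message in a {timeout, Ref, Msg} tuple, which makes stale timer messages easy to recognize.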

The functions in the timer module that do not manage timers (such as timer:tc/3 or timer:sleep/1) do not call the timer-server process and are therefore harmless.

Accidental Copying and Loss of Sharing

When spawning a new process using a fun, one can accidentally copy more data to the process than intended. For example:


accidental1(State) ->
    spawn(fun() ->
                  io:format("~p\n", [State#state.info])
          end).

The code in the fun will extract one element from the record and print it. The rest of the state record is not used. However, when the spawn/1 function is executed, the entire record is copied to the newly created process.

The same kind of problem can happen with a map:


accidental2(State) ->
    spawn(fun() ->
                  io:format("~p\n", [map_get(info, State)])
          end).

In the following example (part of a module implementing the gen_server behavior) the created fun is sent to another process:


handle_call(give_me_a_fun, _From, State) ->
    Fun = fun() -> State#state.size =:= 42 end,
    {reply, Fun, State}.

How bad that unnecessary copy is depends on the contents of the record or the map.

For example, if the state record is initialized like this:

init1() ->
    #state{data=lists:seq(1, 10000)}.

a list with 10000 elements (or about 20000 heap words) will be copied to the newly created process.

An unnecessary copy of a 10000-element list can be bad enough, but it can get even worse if the state record contains shared subterms. Here is a simple example of a term with a shared subterm:

{SubTerm, SubTerm}

When a term is copied to another process, sharing of subterms will be lost and the copied term can be many times larger than the original term. For example:

init2() ->
    SharedSubTerms = lists:foldl(fun(_, A) -> [A|A] end, [0], lists:seq(1, 15)),
    #state{data=SharedSubTerms}.

In the process that calls init2/0, the size of the data field in the state record will be 32 heap words. When the record is copied to the newly created process, sharing will be lost and the size of the copied data field will be 131070 heap words. More details about loss of sharing are found in a later section.
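The two sizes can be inspected with erts_debug:size/1, which counts each shared subterm once, and erts_debug:flat_size/1, which reports the size the term will have after being copied. A minimal sketch building the same term as init2/0 (the module name is ours; the erts_debug module is intended for debugging only):

```erlang
-module(sharing).
-export([demo/0]).

%% Build the deeply shared term from init2/0 and measure it twice:
%% erts_debug:size/1 counts each shared subterm once, while
%% erts_debug:flat_size/1 sizes the term as it would be after
%% being copied to another process (sharing lost).
demo() ->
    Shared = lists:foldl(fun(_, A) -> [A|A] end, [0], lists:seq(1, 15)),
    {erts_debug:size(Shared), erts_debug:flat_size(Shared)}.
```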

To avoid the problem, outside of the fun extract only the fields of the record that are actually used:


fixed_accidental1(State) ->
    Info = State#state.info,
    spawn(fun() ->
                  io:format("~p\n", [Info])
          end).

Similarly, outside of the fun extract only the map elements that are actually used:


fixed_accidental2(State) ->
    Info = map_get(info, State),
    spawn(fun() ->
                  io:format("~p\n", [Info])
          end).


list_to_atom/1

Atoms are not garbage-collected. Once an atom is created, it is never removed. The emulator terminates if the limit for the number of atoms (1,048,576 by default) is reached.

Therefore, converting arbitrary input strings to atoms can be dangerous in a system that runs continuously. If only certain well-defined atoms are allowed as input, list_to_existing_atom/1 or binary_to_existing_atom/1 can be used to guard against a denial-of-service attack. (All atoms that are allowed must have been created earlier, for example, by using all of them in a module and loading that module.)
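A sketch of such a guard, assuming a hypothetical set of allowed command atoms; listing them in the module guarantees that they exist once the module is loaded:

```erlang
-module(safe_atom).
-export([parse_command/1]).

%% Hypothetical set of allowed commands. Mentioning the atoms here
%% creates them when the module is loaded, so
%% list_to_existing_atom/1 can find them.
-define(COMMANDS, [start, stop, status]).

parse_command(String) ->
    try list_to_existing_atom(String) of
        Atom ->
            case lists:member(Atom, ?COMMANDS) of
                true  -> {ok, Atom};
                false -> {error, unknown_command}
            end
    catch
        error:badarg ->
            %% The string does not name any existing atom;
            %% crucially, no new atom is created.
            {error, unknown_command}
    end.
```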

Using list_to_atom/1 to construct an atom that is passed to apply/3 is quite expensive.


apply(list_to_atom("some_prefix"++Var), foo, Args)


length/1

The time for calculating the length of a list is proportional to the length of the list, as opposed to tuple_size/1, byte_size/1, and bit_size/1, which all execute in constant time.

Normally, there is no need to worry about the speed of length/1, because it is efficiently implemented in C. In time-critical code, you might want to avoid it if the input list could potentially be very long.

Some uses of length/1 can be replaced by matching. For example, the following code:

foo(L) when length(L) >= 3 ->

can be rewritten to:

foo([_,_,_|_]=L) ->

One slight difference is that length(L) fails if L is an improper list, while the pattern in the second code fragment accepts an improper list.
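To illustrate the difference, here is a minimal sketch (module and function names are ours); the pattern version matches a list whose tail is not a proper list, where length/1 would raise badarg:

```erlang
-module(improper).
-export([check/1]).

%% The pattern accepts any list with at least three cells, even if
%% the final tail is not []; length/1 on such a list raises badarg.
check([_, _, _|_] = L) -> {ok, L};
check(_)               -> no_match.
```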


setelement/3

setelement/3 copies the tuple it modifies. Therefore, updating a tuple in a loop using setelement/3 creates a new copy of the tuple every time.

There is one exception to the rule that the tuple is copied. If the compiler clearly can see that destructively updating the tuple would give the same result as if the tuple was copied, the call to setelement/3 is replaced with a special destructive setelement instruction. In the following code sequence, the first setelement/3 call copies the tuple and modifies the ninth element:

multiple_setelement(T0) when tuple_size(T0) =:= 9 ->
    T1 = setelement(9, T0, bar),
    T2 = setelement(7, T1, foobar),
    setelement(5, T2, new_value).

The two following setelement/3 calls modify the tuple in place.

For the optimization to be applied, all the following conditions must be true:

  • The tuple argument must be known to be a tuple of a known size.
  • The indices must be integer literals, not variables or expressions.
  • The indices must be given in descending order.
  • There must be no calls to another function in between the calls to setelement/3.
  • The tuple returned from one setelement/3 call must only be used in the subsequent call to setelement/3.

If the code cannot be structured as in the multiple_setelement/1 example, the best way to modify multiple elements in a large tuple is to convert the tuple to a list, modify the list, and convert it back to a tuple.
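A sketch of that approach, with hypothetical helper names: each {Index, Value} pair replaces one element, and the tuple is copied only twice (to a list and back) regardless of how many elements change:

```erlang
-module(tuple_update).
-export([update_many/2]).

%% Apply a list of {Index, Value} updates to a tuple by converting
%% it to a list, rebuilding the list in one pass, and converting
%% back. Indices are 1-based, as for setelement/3.
update_many(Tuple, Updates) when is_tuple(Tuple) ->
    Map = maps:from_list(Updates),
    {NewList, _} =
        lists:mapfoldl(
          fun(E, I) -> {maps:get(I, Map, E), I + 1} end,
          1,
          tuple_to_list(Tuple)),
    list_to_tuple(NewList).
```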


size/1

size/1 returns the size for both tuples and binaries.

Using the BIFs tuple_size/1 and byte_size/1 gives the compiler and the runtime system more opportunities for optimization. Another advantage is that those BIFs give Dialyzer more type information.
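For example, tuple_size/1 and byte_size/1 each succeed for exactly one type, so a guard using them doubles as a type test, whereas size/1 would accept both tuples and binaries. A minimal sketch (module and function names are ours):

```erlang
-module(sizes).
-export([describe/1]).

%% tuple_size/1 fails (falls through) for anything but a tuple, and
%% byte_size/1 for anything but a bitstring, so each guard also
%% narrows the type for the compiler and Dialyzer.
describe(T) when tuple_size(T) =:= 2 -> pair;
describe(B) when byte_size(B) > 0    -> nonempty_binary;
describe(_)                          -> other.
```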

Using NIFs

Rewriting Erlang code to a NIF to make it faster should be seen as a last resort.

Doing too much work in each NIF call will degrade the responsiveness of the VM. Doing too little work can mean that the gain from the faster processing in the NIF is eaten up by the overhead of calling the NIF and checking the arguments.

Be sure to read about Long-running NIFs before writing a NIF.