Reference Manual
Version 4.2
Erlang Networking Kernel


The net kernel is a system process, registered as net_kernel, which must be running for distributed Erlang to work. The purpose of this process is to implement parts of the BIFs spawn/4 and spawn_link/4, and to provide monitoring of the network.

An Erlang node is started using the command line flag -name or -sname:

$ erl -sname foobar

It is also possible to call net_kernel:start/1 directly from the normal Erlang shell prompt:

1> net_kernel:start([foobar, shortnames]).

If the node is started with the command line flag -sname, the node name will be foobar@Host, where Host is the short name of the host (not the fully qualified domain name). If started with the -name flag, Host is the fully qualified domain name. See erl(1).

Normally, connections are established automatically when another node is referenced. This functionality can be disabled by setting the Kernel configuration parameter dist_auto_connect to false, see kernel(6). In this case, connections must be established explicitly by calling net_kernel:connect_node/1.
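For example, a node started with automatic connections disabled can set up a connection explicitly. The remote node name 'baz@hostb' below is hypothetical:

$ erl -sname foo -kernel dist_auto_connect false
1> net_kernel:connect_node('baz@hostb').
true

Here true assumes the remote node is up, reachable, and has a matching cookie; otherwise false is returned.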

Which nodes are allowed to communicate with each other is handled by the magic cookie system, see Distributed Erlang in the Erlang Reference Manual.


allow(Nodes) -> ok | error


Nodes = [node()]

Permits access to the specified set of nodes.

Before the first call to allow/1, any node with the correct cookie can be connected. When allow/1 is called, a list of allowed nodes is established. Any access attempts made from (or to) nodes not in that list will be rejected.

Subsequent calls to allow/1 will add the specified nodes to the list of allowed nodes. It is not possible to remove nodes from the list.

Returns error if any element in Nodes is not an atom.
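A sketch of restricting connections to a known set of nodes (the node names 'alice@hosta' and 'bob@hostb' are hypothetical):

1> net_kernel:allow(['alice@hosta', 'bob@hostb']).
ok

From this point on, connection attempts involving any other node are rejected, and later allow/1 calls can only extend the list.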

connect_node(Node) -> boolean() | ignored


Node = node()

Establishes a connection to Node. Returns true if successful, false if not, and ignored if the local node is not alive.
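For example, on a node that is not alive (started without -name or -sname), the call is ignored; the node name 'baz@hostb' is hypothetical:

1> net_kernel:connect_node('baz@hostb').
ignored

The same call on a live node returns true or false.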

monitor_nodes(Flag) -> ok | Error
monitor_nodes(Flag, Options) -> ok | Error


Flag = boolean()
Options = [Option]
Option = {node_type, NodeType} | nodedown_reason
NodeType = visible | hidden | all
Error = error | {error, term()}

The calling process subscribes or unsubscribes to node status change messages. A nodeup message is delivered to all subscribing processes when a new node is connected, and a nodedown message is delivered when a node is disconnected.

If Flag is true, a new subscription is started. If Flag is false, all previous subscriptions started with the same Options are stopped. Two option lists are considered the same if they contain the same set of options.

As of kernel version 2.11.4, and erts version 5.5.4, the following is guaranteed:

  • nodeup messages will be delivered before delivery of any message from the remote node passed through the newly established connection.
  • nodedown messages will not be delivered until all messages from the remote node that have been passed through the connection have been delivered.

Note that this is not guaranteed for kernel versions before 2.11.4.

As of kernel version 2.11.4, subscriptions can also be made before the net_kernel server has been started, that is, net_kernel:monitor_nodes/[1,2] does not return ignored.

As of kernel version 2.13, and erts version 5.7, the following is guaranteed:

  • nodeup messages will be delivered after the corresponding node appears in results from erlang:nodes/X.
  • nodedown messages will be delivered after the corresponding node has disappeared from results of erlang:nodes/X.

Note that this is not guaranteed for kernel versions before 2.13.

The format of the node status change messages depends on Options. If Options is [], which is the default, the format is:

{nodeup, Node} | {nodedown, Node}
  Node = node()

If Options /= [], the format is:

{nodeup, Node, InfoList} | {nodedown, Node, InfoList}
  Node = node()
  InfoList = [{Tag, Val}]

InfoList is a list of tuples. Its contents depend on Options; see below.

Also, when Options == [], only visible nodes, that is, nodes that appear in the result of nodes/0, are monitored.

Option can be any of the following:

{node_type, NodeType}

Currently valid values for NodeType are:

visible
  Subscribe to node status change messages for visible nodes only. The tuple {node_type, visible} is included in InfoList.

hidden
  Subscribe to node status change messages for hidden nodes only. The tuple {node_type, hidden} is included in InfoList.

all
  Subscribe to node status change messages for both visible and hidden nodes. The tuple {node_type, visible | hidden} is included in InfoList.

nodedown_reason

The tuple {nodedown_reason, Reason} is included in InfoList in nodedown messages. Reason can be any of the following:

connection_setup_failed
  The connection setup failed (after nodeup messages had been sent).

no_network
  No network available.

net_kernel_terminated
  The net_kernel process terminated.

shutdown
  Unspecified connection shutdown.

connection_closed
  The connection was closed.

disconnect
  The connection was disconnected (forced from the current node).

net_tick_timeout
  Net tick time-out.

send_net_tick_failed
  Failed to send net tick over the connection.

get_status_failed
  Status information retrieval from the Port holding the connection failed.
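Putting the above together, a process can subscribe with the nodedown_reason option and inspect the resulting messages; the node name 'baz@hostb' and the particular reason shown are illustrative:

1> net_kernel:monitor_nodes(true, [nodedown_reason]).
ok
2> flush().
Shell got {nodeup,'baz@hostb',[]}
Shell got {nodedown,'baz@hostb',[{nodedown_reason,connection_closed}]}
ok

Note that the nodeup message carries an empty InfoList here, since nodedown_reason only contributes to nodedown messages.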

get_net_ticktime() -> Res


Res = NetTicktime | {ongoing_change_to, NetTicktime} | ignored
NetTicktime = integer() >= 1

Gets net_ticktime (see kernel(6)).

Currently defined return values (Res):

NetTicktime
  net_ticktime is NetTicktime seconds.

{ongoing_change_to, NetTicktime}
  net_kernel is currently changing net_ticktime to NetTicktime seconds.

ignored
  The local node is not alive.

set_net_ticktime(NetTicktime) -> Res
set_net_ticktime(NetTicktime, TransitionPeriod) -> Res


NetTicktime = integer() >= 1
TransitionPeriod = integer() >= 0
Res =
    unchanged |
    change_initiated |
    {ongoing_change_to, NewNetTicktime}
NewNetTicktime = integer() >= 1

Sets net_ticktime (see kernel(6)) to NetTicktime seconds. TransitionPeriod defaults to 60.

Some definitions:

The minimum transition traffic interval (MTTI)

minimum(NetTicktime, PreviousNetTicktime)*1000 div 4 milliseconds.

The transition period

The time of the least number of consecutive MTTIs to cover TransitionPeriod seconds following the call to set_net_ticktime/2 (i.e. ((TransitionPeriod*1000 - 1) div MTTI + 1)*MTTI milliseconds).

If NetTicktime < PreviousNetTicktime, the actual net_ticktime change will be done at the end of the transition period; otherwise, at the beginning. During the transition period, net_kernel will ensure that there is outgoing traffic on all connections at least every MTTI milliseconds.
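As a worked example, assume net_ticktime is lowered from 60 to 20 seconds with set_net_ticktime(20, 60). The MTTI is then minimum(20, 60)*1000 div 4 = 5000 milliseconds, and the transition period is ((60*1000 - 1) div 5000 + 1)*5000 = 60000 milliseconds. Since 20 < 60, the new value takes effect at the end of those 60 seconds.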


The net_ticktime changes have to be initiated on all nodes in the network (with the same NetTicktime) before the end of any transition period on any node; otherwise, connections may erroneously be disconnected.

Returns one of the following:

unchanged
  net_ticktime already had the value of NetTicktime and was left unchanged.

change_initiated
  net_kernel has initiated the change of net_ticktime to NetTicktime seconds.

{ongoing_change_to, NewNetTicktime}
  The request was ignored because net_kernel was already busy changing net_ticktime to NewNetTicktime seconds.
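A sketch of changing net_ticktime at run time, assuming the current value differs from the requested one:

1> net_kernel:set_net_ticktime(30).
change_initiated
2> net_kernel:get_net_ticktime().
{ongoing_change_to,30}

Once the transition period has passed, get_net_ticktime/0 returns 30. Remember that the same change must be initiated on every node in the network.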

start([Name]) -> {ok, pid()} | {error, Reason}
start([Name, NameType]) -> {ok, pid()} | {error, Reason}
start([Name, NameType, Ticktime]) -> {ok, pid()} | {error, Reason}


Name = atom()
NameType = shortnames | longnames
Reason = {already_started, pid()} | term()

Note that the argument is a list with exactly one, two, or three elements. NameType defaults to longnames and Ticktime to 15000.

Turns a non-distributed node into a distributed node by starting net_kernel and other necessary processes.

stop() -> ok | {error, Reason}


Reason = not_allowed | not_found

Turns a distributed node into a non-distributed node. For other nodes in the network, this is the same as the node going down. Only possible when the net kernel was started using start/1, otherwise returns {error, not_allowed}. Returns {error, not_found} if the local node is not alive.
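A complete round trip, turning a non-distributed node into a distributed one and back (the returned pid is illustrative):

1> net_kernel:start([foobar, shortnames]).
{ok,<0.86.0>}
2> net_kernel:stop().
ok
3> net_kernel:stop().
{error,not_found}

The second stop/0 returns {error, not_found} because the node is no longer alive.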