Kernel
Reference Manual
Version 4.1.1

MODULE

net_kernel

MODULE SUMMARY

Erlang Networking Kernel

DESCRIPTION

The net kernel is a system process, registered as net_kernel, which must be running for distributed Erlang to work. The purpose of this process is to implement parts of the BIFs spawn/4 and spawn_link/4, and to provide monitoring of the network.

An Erlang node is started using the command line flag -name or -sname:

$ erl -sname foobar

It is also possible to call net_kernel:start([foobar, shortnames]) directly from the normal Erlang shell prompt:

1> net_kernel:start([foobar, shortnames]).
{ok,<0.64.0>}
(foobar@gringotts)2>

If the node is started with the command line flag -sname, the node name will be foobar@Host, where Host is the short name of the host (not the fully qualified domain name). If started with the -name flag, Host is the fully qualified domain name. See erl(1).

Normally, connections are established automatically when another node is referenced. This functionality can be disabled by setting the Kernel configuration parameter dist_auto_connect to false, see kernel(6). In this case, connections must be established explicitly by calling net_kernel:connect_node/1.
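
For example, after starting the node with erl -sname foobar -kernel dist_auto_connect false, a connection can be set up explicitly from the shell ('foo@otherhost' is a made-up remote node name, used only for illustration):

(foobar@gringotts)1> net_kernel:connect_node('foo@otherhost'). % illustrative node name
true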

Which nodes are allowed to communicate with each other is handled by the magic cookie system, see Distributed Erlang in the Erlang Reference Manual.

EXPORTS

allow(Nodes) -> ok | error

Types:

Nodes = [node()]

Limits access to the specified set of nodes. Any access attempts made from (or to) nodes not in Nodes will be rejected.

Returns error if any element in Nodes is not an atom.
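
A minimal sketch of restricting access to a fixed set of nodes (the node names are illustrative):

%% Allow connections only to/from these two nodes (example names);
%% attempts involving any other node will be rejected.
ok = net_kernel:allow(['node1@host1', 'node2@host2']).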

connect_node(Node) -> boolean() | ignored

Types:

Node = node()

Establishes a connection to Node. Returns true if successful, false if not, and ignored if the local node is not alive.
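
A sketch of handling all three possible return values ('db@backend' is a hypothetical node name):

%% 'db@backend' is a hypothetical remote node.
case net_kernel:connect_node('db@backend') of
    true    -> ok;                            % connection established
    false   -> {error, connection_failed};
    ignored -> {error, local_node_not_alive}  % local node is not distributed
end.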

monitor_nodes(Flag) -> ok | Error
monitor_nodes(Flag, Options) -> ok | Error

Types:

Flag = boolean()
Options = [Option]
Option = {node_type, NodeType} | nodedown_reason
NodeType = visible | hidden | all
Error = error | {error, term()}

The calling process subscribes or unsubscribes to node status change messages. A nodeup message is delivered to all subscribing processes when a new node is connected, and a nodedown message is delivered when a node is disconnected.

If Flag is true, a new subscription is started. If Flag is false, all previous subscriptions started with the same Options are stopped. Two option lists are considered the same if they contain the same set of options.

As of kernel version 2.11.4 and erts version 5.5.4, the following is guaranteed:

  • nodeup messages will be delivered before delivery of any message from the remote node passed through the newly established connection.
  • nodedown messages will not be delivered until all messages from the remote node that have been passed through the connection have been delivered.

Note that this is not guaranteed for kernel versions before 2.11.4.

As of kernel version 2.11.4, subscriptions can also be made before the net_kernel server has been started, i.e., net_kernel:monitor_nodes/[1,2] does not return ignored.

As of kernel version 2.13 and erts version 5.7, the following is guaranteed:

  • nodeup messages will be delivered after the corresponding node appears in results from erlang:nodes/X.
  • nodedown messages will be delivered after the corresponding node has disappeared from results of erlang:nodes/X.

Note that this is not guaranteed for kernel versions before 2.13.

The format of the node status change messages depends on Options. If Options is [], which is the default, the format is:

{nodeup, Node} | {nodedown, Node}
  Node = node()

If Options /= [], the format is:

{nodeup, Node, InfoList} | {nodedown, Node, InfoList}
  Node = node()
  InfoList = [{Tag, Val}]

InfoList is a list of tuples. Its contents depend on Options; see below.

Also, when Options == [], only visible nodes, that is, nodes that appear in the result of nodes/0, are monitored.

Option can be any of the following:

{node_type, NodeType}

Currently valid values for NodeType are:

visible
Subscribe to node status change messages for visible nodes only. The tuple {node_type, visible} is included in InfoList.
hidden
Subscribe to node status change messages for hidden nodes only. The tuple {node_type, hidden} is included in InfoList.
all
Subscribe to node status change messages for both visible and hidden nodes. The tuple {node_type, visible | hidden} is included in InfoList.

nodedown_reason

The tuple {nodedown_reason, Reason} is included in InfoList in nodedown messages. Reason can be:

connection_setup_failed
The connection setup failed (after nodeup messages had been sent).
no_network
No network available.
net_kernel_terminated
The net_kernel process terminated.
shutdown
Unspecified connection shutdown.
connection_closed
The connection was closed.
disconnect
The connection was disconnected (forced from the current node).
net_tick_timeout
Net tick timeout.
send_net_tick_failed
Failed to send net tick over the connection.
get_status_failed
Status information retrieval from the Port holding the connection failed.
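
A sketch of a process that subscribes with both options and prints the status change messages it receives (the message formats are as described above):

%% Subscribe to status changes for all node types and include the
%% nodedown reason in nodedown messages.
ok = net_kernel:monitor_nodes(true, [{node_type, all}, nodedown_reason]),
receive
    {nodeup, Node, InfoList} ->
        io:format("~p came up: ~p~n", [Node, InfoList]);
    {nodedown, Node, InfoList} ->
        io:format("~p went down: ~p~n", [Node, InfoList])
end.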

get_net_ticktime() -> Res

Types:

Res = NetTicktime | {ongoing_change_to, NetTicktime} | ignored
NetTicktime = integer() >= 1

Gets net_ticktime (see kernel(6)).

Currently defined return values (Res):

NetTicktime

net_ticktime is NetTicktime seconds.

{ongoing_change_to, NetTicktime}

net_kernel is currently changing net_ticktime to NetTicktime seconds.

ignored

The local node is not alive.
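
For example, continuing the shell session from DESCRIPTION on a node that uses the default net_ticktime of 60 seconds:

(foobar@gringotts)3> net_kernel:get_net_ticktime().
60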

set_net_ticktime(NetTicktime) -> Res
set_net_ticktime(NetTicktime, TransitionPeriod) -> Res

Types:

NetTicktime = integer() >= 1
TransitionPeriod = integer() >= 0
Res =
    unchanged |
    change_initiated |
    {ongoing_change_to, NewNetTicktime}
NewNetTicktime = integer() >= 1

Sets net_ticktime (see kernel(6)) to NetTicktime seconds. TransitionPeriod defaults to 60.

Some definitions:

The minimum transition traffic interval (MTTI)

minimum(NetTicktime, PreviousNetTicktime)*1000 div 4 milliseconds.

The transition period

The shortest time covering TransitionPeriod seconds following the call to set_net_ticktime/2 that consists of a whole number of consecutive MTTIs, i.e. ((TransitionPeriod*1000 - 1) div MTTI + 1)*MTTI milliseconds.
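
For example (an illustrative calculation), lowering net_ticktime from 60 to 30 seconds with the default TransitionPeriod of 60 gives MTTI = minimum(30, 60)*1000 div 4 = 7500 milliseconds and a transition period of ((60*1000 - 1) div 7500 + 1)*7500 = 60000 milliseconds.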

If NetTicktime < PreviousNetTicktime, the actual net_ticktime change will be done at the end of the transition period; otherwise, at the beginning. During the transition period, net_kernel will ensure that there is outgoing traffic on all connections at least every MTTI milliseconds.

Note

The net_ticktime changes have to be initiated on all nodes in the network (with the same NetTicktime) before the end of any transition period on any node; otherwise, connections may erroneously be disconnected.

Returns one of the following:

unchanged

net_ticktime already had the value of NetTicktime and was left unchanged.

change_initiated

net_kernel has initiated the change of net_ticktime to NetTicktime seconds.

{ongoing_change_to, NewNetTicktime}

The request was ignored because net_kernel was busy changing net_ticktime to NewNetTicktime seconds.
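
A sketch of initiating the same change on all connected nodes as well as locally (30 is an example value; see the note above about using the same NetTicktime everywhere):

%% 30 is an example value; every node in the network must be given the same one.
NewTicktime = 30,
[rpc:call(Node, net_kernel, set_net_ticktime, [NewTicktime]) || Node <- nodes()],
net_kernel:set_net_ticktime(NewTicktime).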

start([Name]) -> {ok, pid()} | {error, Reason}
start([Name, NameType]) -> {ok, pid()} | {error, Reason}
start([Name, NameType, Ticktime]) -> {ok, pid()} | {error, Reason}

Types:

Name = atom()
NameType = shortnames | longnames
Reason = {already_started, pid()} | term()

Note that the argument is a list with exactly one, two, or three elements. NameType defaults to longnames and Ticktime to 15000.

Turns a non-distributed node into a distributed node by starting net_kernel and other necessary processes.
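
For example, naming the node and giving NameType and Ticktime explicitly (the printed pid and host name are illustrative):

1> net_kernel:start([foo, shortnames, 15000]).
{ok,<0.65.0>}
(foo@gringotts)2>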

stop() -> ok | {error, Reason}

Types:

Reason = not_allowed | not_found

Turns a distributed node into a non-distributed node. For other nodes in the network, this is the same as the node going down. Only possible when the net kernel was started using start/1, otherwise returns {error, not_allowed}. Returns {error, not_found} if the local node is not alive.
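
Continuing the shell session from DESCRIPTION above (a sketch; prompts and output will vary), the node can be made non-distributed again:

(foobar@gringotts)2> net_kernel:stop().
ok
3>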