[Erlang Systems]

17 Mnesia Release Notes

This document describes the changes made to the Mnesia system from version to version. The intention of this document is to list all incompatibilities as well as all enhancements and bugfixes for every release of Mnesia. Each release of Mnesia thus constitutes one section in this document. The title of each section is the version number of Mnesia.

17.1 Mnesia 3.4.1

17.1.1 Improvements and new features

17.1.2 Fixed Bugs and malfunctions

17.1.3 Incompatibilities

None.

17.1.4 Known bugs and problems

None.

17.2 Mnesia 3.4

17.2.1 Improvements and new features

17.2.1.1 Record name may differ from table name

From this release onwards, the record name of records stored in Mnesia may differ from the name of the table that they are stored in. In order to use this new feature the table property {record_name, Name} has been introduced. If this property is omitted when the table is created, the table name will be used as record name. For example, if two tables are created like this:

          TabDef = [{record_name, subscriber}],
          mnesia:create_table(my_subscriber, TabDef),
          mnesia:create_table(your_subscriber, TabDef)
        

it would be possible to store subscriber records in both of them:

          mnesia:write(my_subscriber, #subscriber{}, sticky_write)
          mnesia:write(your_subscriber, #subscriber{}, write)
        

To support tables whose record name differs from the table name, new functions have been added to the Mnesia API:

          mnesia:dirty_write(Tab, Record) 
          mnesia:dirty_delete(Tab, Key) 
          mnesia:dirty_delete_object(Tab, Record) 
          mnesia:dirty_update_counter(Tab, Key, Incr) 
          mnesia:dirty_read(Tab, Key)
          mnesia:dirty_match_object(Tab, Pattern)
          mnesia:dirty_index_match_object(Tab, Pattern, Attr) 
        
          mnesia:write(Tab, Record, LockKind) 
          mnesia:delete(Tab, Key, LockKind) 
          mnesia:delete_object(Tab, Record, LockKind) 
          mnesia:read(Tab, Key, LockKind) 
          mnesia:match_object(Tab, Pattern, LockKind) 
          mnesia:all_keys(Tab)
          mnesia:index_match_object(Tab, Pattern, Attr, LockKind)
          mnesia:index_read(Tab, SecondaryKey, Attr)

          LockKind ::= read | write | sticky_write | ...
        

The old corresponding functions still exist, but are now merely syntactic sugar for the new ones:

          mnesia:dirty_write(Record) ->
            Tab = element(1, Record),
            mnesia:dirty_write(Tab, Record).
  
          mnesia:dirty_delete({Tab, Key}) ->
            mnesia:dirty_delete(Tab, Key).
  
          mnesia:dirty_delete_object(Record) ->
            Tab = element(1, Record),
            mnesia:dirty_delete_object(Tab, Record).
  
          mnesia:dirty_update_counter({Tab, Key}, Incr) ->
            mnesia:dirty_update_counter(Tab, Key, Incr).
  
          mnesia:dirty_read({Tab, Key}) ->
            mnesia:dirty_read(Tab, Key).
  
          mnesia:dirty_match_object(Pattern) ->
            Tab = element(1, Pattern),
            mnesia:dirty_match_object(Tab, Pattern).
  
          mnesia:dirty_index_match_object(Pattern, Attr) ->
            Tab = element(1, Pattern),
            mnesia:dirty_index_match_object(Tab, Pattern, Attr).
          
          mnesia:write(Record) ->
            Tab = element(1, Record),
            mnesia:write(Tab, Record, write).
  
          mnesia:s_write(Record) ->
            Tab = element(1, Record),
            mnesia:write(Tab, Record, sticky_write).
  
          mnesia:delete({Tab, Key}) ->
            mnesia:delete(Tab, Key, write).
  
          mnesia:s_delete({Tab, Key}) ->
            mnesia:delete(Tab, Key, sticky_write).
  
          mnesia:delete_object(Record) ->
            Tab = element(1, Record),
            mnesia:delete_object(Tab, Record, write).
  
          mnesia:s_delete_object(Record) ->
            Tab = element(1, Record),
            mnesia:delete_object(Tab, Record, sticky_write).
  
          mnesia:read({Tab, Key}) ->
            mnesia:read(Tab, Key, read).
  
          mnesia:wread({Tab, Key}) ->
            mnesia:read(Tab, Key, write).
  
          mnesia:match_object(Pattern) ->
            Tab = element(1, Pattern),
            mnesia:match_object(Tab, Pattern, read).
  
          mnesia:index_match_object(Pattern, Attr) ->
            Tab = element(1, Pattern),
            mnesia:index_match_object(Tab, Pattern, Attr, read).
          

The earlier function semantics remain unchanged.

Use the function mnesia:table_info(Tab, record_name) to determine the record name of a table.
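Putting these pieces together, the feature can be sketched as follows. The module name, record definition and table name below are illustrative assumptions, not part of the release:

```erlang
%% Sketch: a table whose name differs from the record name it hosts.
%% Assumes Mnesia is started and a schema exists on this node.
-module(subscriber_demo).
-export([init/0, store/2]).

-record(subscriber, {key, value}).

init() ->
    %% record_name decouples the stored record tag from the table name
    TabDef = [{record_name, subscriber},
              {attributes, record_info(fields, subscriber)}],
    {atomic, ok} = mnesia:create_table(my_subscriber, TabDef),
    %% The record name can be queried afterwards:
    subscriber = mnesia:table_info(my_subscriber, record_name),
    ok.

store(Key, Value) ->
    Rec = #subscriber{key = Key, value = Value},
    mnesia:transaction(fun() ->
        mnesia:write(my_subscriber, Rec, write)
    end).
```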

As long as the name of every table equals the name of the records it hosts, everything is backward compatible. If the new record_name feature is used, however, existing applications that rely on that convention may be affected.

17.2.1.2 New function mnesia:lock/2

A new locking function has been introduced:

          mnesia:lock(LockItem, LockKind)

          LockItem ::= {table, Tab} | {global, Item, Nodes} | ...
          LockKind ::= read | write | ...
          

The old table locking functions still exist, but are now merely syntactic sugar for the new function:

          mnesia:read_lock_table(Tab) ->
            mnesia:lock({table, Tab}, read).

          mnesia:write_lock_table(Tab) ->
            mnesia:lock({table, Tab}, write).
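As a sketch of how the new lock function might be used (the table name subscriber and the function name are illustrative assumptions):

```erlang
%% Sketch: take one whole-table write lock up front instead of
%% acquiring many per-record locks during a bulk delete.
bulk_clear() ->
    mnesia:transaction(fun() ->
        mnesia:lock({table, subscriber}, write),
        lists:foreach(fun(Key) ->
                          mnesia:delete({subscriber, Key})
                      end,
                      mnesia:all_keys(subscriber))
    end).
```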
        
17.2.1.3 New function mnesia:activity/2,3,4

In the Mnesia API there are some functions whose semantics depend on the execution context:

          mnesia:lock(LockItem, LockKind)
          mnesia:write(Tab, Rec, LockKind)
          mnesia:delete(Tab, Key, LockKind)
          mnesia:delete_object(Tab, Rec, LockKind)
          mnesia:read(Tab, Key, LockKind)
          mnesia:match_object(Tab, Pat, LockKind)
          mnesia:all_keys(Tab)
          mnesia:index_match_object(Tab, Pat, Attr, LockKind)
          mnesia:index_read(Tab, SecondaryKey, Attr)
          mnesia:table_info(Tab, InfoItem)
        

If these functions are executed within mnesia:transaction/1,2,3, locks are acquired, atomic commit is ensured, and so on. If the same functions are executed within the context of mnesia:async_dirty/1,2, mnesia:sync_dirty/1,2 or mnesia:ets/1,2, their semantics differ. Although this context sensitivity is not entirely new, new functions have been introduced to make it explicit:

          mnesia:activity(ActivityKind, Fun)
          mnesia:activity(ActivityKind, Fun, Args)
          mnesia:activity(ActivityKind, Fun, Module)
          mnesia:activity(ActivityKind, Fun, Args, Module)

          ActivityKind ::= transaction | 
                           {transaction, Retries} |
                           async_dirty |
                           sync_dirty |
                           ets
          

Depending on the ActivityKind argument, the evaluation context will be the same as with the functions mnesia:transaction, mnesia:async_dirty, mnesia:sync_dirty and mnesia:ets respectively. The Module argument provides the name of a callback module that implements the mnesia_access behavior. It must export the functions:

          lock(ActivityId, Opaque, LockItem, LockKind)
          write(ActivityId, Opaque, Tab, Rec, LockKind)
          delete(ActivityId, Opaque, Tab, Key, LockKind)
          delete_object(ActivityId, Opaque, Tab, Rec, LockKind)
          read(ActivityId, Opaque, Tab, Key, LockKind)
          match_object(ActivityId, Opaque, Tab, Pat, LockKind)
          all_keys(ActivityId, Opaque, Tab, LockKind)
          index_match_object(ActivityId, Opaque, Tab, Pat, Attr, LockKind)
          index_read(ActivityId, Opaque, Tab, SecondaryKey, Attr, LockKind)
          table_info(ActivityId, Opaque, Tab, InfoItem)
          
          ActivityId ::=

            A record which represents the identity of the enclosing Mnesia
            activity. The first field (obtained with element(1, ActivityId))
            contains an atom which may be interpreted as the type of the
            activity: 'ets', 'async_dirty', 'sync_dirty' or 'tid'. 'tid'
            means that the activity is a transaction.

          Opaque ::=
        
            An opaque data structure which is internal to Mnesia.
        

mnesia and mnesia_frag are examples of callback modules. By default, the mnesia module is used as callback module for accesses within "Mnesia activities".
For example, when mnesia:read(Tab, Key, LockKind) is invoked within an activity, the corresponding Module:read(ActivityId, Opaque, Tab, Key, LockKind) is called to perform the job (or to pass it on to mnesia:read(ActivityId, Opaque, Tab, Key, LockKind)).

A customized callback module may be used for several purposes, such as providing triggers, integrity constraints, run-time statistics, or virtual tables. The callback module does not have to access real Mnesia tables; it is a free agent as long as the callback interface is fulfilled.
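A minimal callback module might look like the following sketch. The module name log_access is hypothetical, and only two of the required callbacks are shown; a real mnesia_access module must export the complete set listed above.

```erlang
%% Sketch of a mnesia_access callback module that logs reads before
%% delegating to the default mnesia implementation.
-module(log_access).
-export([read/5, write/5]).

read(ActivityId, Opaque, Tab, Key, LockKind) ->
    %% Trigger/statistics hook: note the access, then delegate.
    io:format("read on ~p, key ~p~n", [Tab, Key]),
    mnesia:read(ActivityId, Opaque, Tab, Key, LockKind).

write(ActivityId, Opaque, Tab, Rec, LockKind) ->
    mnesia:write(ActivityId, Opaque, Tab, Rec, LockKind).
```

An activity would then be run as mnesia:activity(transaction, Fun, [], log_access).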

The context sensitive function mnesia:table_info/2 may be used to provide virtual information about a table. This function enables the user to perform Mnemosyne queries within an activity context with a customized callback module. By providing table indices and other information that Mnemosyne requires, Mnemosyne can be used as an efficient generic query language for accessing virtual tables.

Please, read the "mnesia_access callback behavior" in Appendix C for a code example from the mnesia_frag module.

17.2.1.4 New configuration parameter access_module

The new configuration parameter access_module has been added. It defaults to the atom mnesia, but may be set to any module that fulfills the callback interface with mnesia_access behavior.

The mnesia:activity functions will use the access_module as callback module if it is not explicitly overridden by the Module argument.

Use mnesia:system_info(access_module) to determine the actual access_module setting.
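For example, the parameter could be set on the command line when the node is started (the module name my_access is an illustrative assumption):

```shell
erl -mnesia access_module my_access
```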

17.2.2 Fixed Bugs and malfunctions

17.2.3 Incompatibilities

None, as long as all tables only host records with the same name as the table. Please, read the chapter Improvements and new features about the potential incompatibilities.

17.2.4 Known bugs and problems

None.

17.3 Mnesia 3.3

17.3.1 Improvements and new features

17.3.2 Fixed Bugs and malfunctions

17.3.3 Incompatibilities

None.

17.3.4 Known bugs and problems

No new problems or bugs. See previous release notes.

17.4 Mnesia 3.2

17.4.1 Improvements and new features

17.4.2 Fixed Bugs and malfunctions

17.4.3 Incompatibilities

None.

17.4.4 Known bugs and problems

No new problems or bugs. See previous release notes.

17.5 Mnesia 3.1.1

This release is a minor release, and these release notes describe the difference between version 3.1.1 and version 3.1 of Mnesia.

17.5.1 Improvements and new features

None.

17.5.2 Fixed Bugs and malfunctions

17.5.3 Incompatibilities

None.

17.5.4 Known bugs and problems

No new problems or bugs. See previous release notes.

17.6 Mnesia 3.1

17.6.1 Improvements and new features

17.6.2 Fixed Bugs and malfunctions

17.6.3 Incompatibilities

None.

17.6.4 Known bugs and problems

No new problems or bugs. See previous release notes.

17.7 Mnesia 3.0

This release is a major release, and these release notes describe the difference between version 3.0 and version 2.3. 3.0 is classified as a major release due to the issues described in the chapter about incompatibilities below.

17.7.1 Improvements and new features

17.7.2 Fixed Bugs and malfunctions

No serious bugs or malfunctions.

17.7.3 Incompatibilities

Mnesia 3.0 is primarily developed for OTP R4, but is still backward compatible with the OTP R3 platform.

The internal database format on disc has been made more future safe. It has also been altered in order to cope with the newly introduced features. This implies a special upgrade procedure.

Mnemosyne has been made into a separate application of its own, called mnemosyne. Please, read its release notes. This application split implies a few incompatibilities.

17.7.4 Known bugs and problems

None of these are newly introduced.

17.8 Mnesia 2.3

17.8.1 Improvements and new features

None.

17.8.2 Fixed Bugs and malfunctions

17.8.3 Incompatibilities

None.

17.8.4 Known bugs and problems

No new ones. See previous release notes.

17.9 Mnesia 2.2

17.9.1 Improvements and new features

None.

17.9.2 Fixed Bugs and malfunctions

17.9.3 Incompatibilities

None.

17.9.4 Known bugs and problems

17.10 Mnesia 2.1.2

17.10.1 Improvements and new features

None.

17.10.2 Fixed Bugs and malfunctions

17.10.3 Incompatibilities

None.

17.10.4 Known bugs and problems

See previous release notes.

17.11 Mnesia 2.1.1

17.11.1 Improvements and new features

None.

17.11.2 Fixed Bugs and malfunctions

17.11.3 Incompatibilities

None.

17.11.4 Known bugs and problems

See previous release notes.

17.12 Mnesia 2.1

17.12.1 Improvements and new features

17.12.2 Fixed Bugs and malfunctions

17.12.3 Incompatibilities

17.12.4 Known bugs and problems

17.13 Mnesia 2.0.2

17.13.1 Improvements and new features

The performance of the Mnemosyne catalog is improved. There are now some parameters per table available for tuning. Two functions are introduced for this:

        mnemosyne_catalog:set_parameter(Table, Name, Value)
        mnemosyne_catalog:get_parameter(Table, Name)
      

Both return the present value. They may change in future releases! The possible Names are:

        Name              Values              Default  Description
        ----              ------              -------  -----------
        do_local_upd      yes | no            yes      Collect statistics for this table
                                                       on this node
        min_upd_interval  integer (seconds)   10       Minimum allowed interval between
                                                       two updates
        upd_limit         percent             10       New statistics are collected when
                                                       more than upd_limit % of the
                                                       table is updated
        max_wait          integer (millisec)  1000     Maximum time to wait for the
                                                       initial call from the optimizer
                                                       to the catalog server

Table: Mnemosyne configuration parameters

17.13.2 Fixed Bugs and malfunctions

17.13.3 Incompatibilities

17.13.4 Known bugs and problems

17.14 Mnesia 2.0.1

17.14.1 Improvements and new features

None.

17.14.2 Fixed Bugs and malfunctions

17.14.3 Incompatibilities

None.

17.14.4 Known bugs and problems

17.15 Mnesia 2.0

These release notes describe the difference between version 2.0 and version 1.3.2 of Mnesia. 2.0 is classified as a major release of Mnesia due to changes in the internal database format on disc; see the chapter about incompatibilities for further details.

17.15.1 Improvements and new features

17.15.2 Fixed Bugs and malfunctions

17.15.3 Incompatibilities

Mnesia 2.0 is primarily developed for OTP R3, but is still backward compatible with the OTP R1D and OTP R2D platforms.

The internal database format on disc has been changed in order to cope with the new features that have been introduced. This implies a special upgrade procedure.

17.15.4 Known bugs and problems

17.16 Mnesia 1.3.2

17.16.1 Improvements and new features

None.

17.16.2 Fixed Bugs and malfunctions

17.17 Mnesia 1.3.1

17.17.1 Improvements and new features

17.17.2 Fixed Bugs and malfunctions

17.17.3 Incompatibilities

None.

17.17.4 Known bugs and problems

See notes about release 1.3.

17.18 Mnesia 1.3

This release is a minor bugfix release, and these release notes describe the difference between version 1.3 and version 1.2.3 of Mnesia.

17.18.1 Improvements and new features

None.

17.18.2 Fixed Bugs and malfunctions

17.18.3 Incompatibilities

None.

17.18.4 Known bugs and problems

See notes about release 1.2.3

17.19 Mnesia 1.2.3

17.19.1 Improvements and new features

None.

17.19.2 Fixed Bugs and malfunctions

17.19.3 Incompatibilities

None.

17.19.4 Known bugs and problems

See notes about release 1.2.2.

17.20 Mnesia 1.2.2

17.20.1 Improvements and new features

None.

17.20.2 Fixed Bugs and malfunctions

17.20.3 Incompatibilities

None.

17.20.4 Known bugs and problems

See notes about release 1.2.1.

17.21 Mnesia 1.2.1

17.21.1 Improvements and new features

None.

17.21.2 Fixed Bugs and malfunctions

17.21.3 Incompatibilities

None.

17.21.4 Known bugs and problems

See notes about release 1.2.

17.22 Mnesia 1.2

17.22.1 Improvements and new features

None.

17.22.2 Fixed Bugs and malfunctions

17.22.3 Incompatibilities

None.

17.22.4 Known bugs and problems

See notes about release 1.1.1.

17.23 Mnesia 1.1.1

This section describes the changes made to Mnesia in the 1.1.1 version of Mnesia. This release is a minor upgrade from 1.1.

17.23.1 Improvements and new features

17.23.2 Fixed Bugs and malfunctions

17.23.3 Incompatibilities

None.

17.23.4 Known bugs and problems

See notes about release 1.1.

17.25 Mnesia 1.1

This release is a normal release for general use and it comes with full documentation. The release notes describe the difference between version 1.1 and version 1.0 of Mnesia. 1.1 is a minor release, but the storage format on disc has been changed. In order to use databases created with older versions of Mnesia, a full backup file must be created with the old version of Mnesia and installed as a fallback with the new version of Mnesia. Then the new version of Mnesia may be started.

17.25.1 Improvements and new features

17.25.2 Fixed Bugs and malfunctions

17.25.3 Incompatibilities

As mentioned above, the storage format on disc has been changed.

17.25.4 Known bugs and problems

See notes about release 1.0.

17.26 Mnesia 1.0

This special version of Mnesia is intended to be used by the GPRS project only. It is released without any new documentation besides these release notes, which describe the difference between version 1.0 and version 0.80 of Mnesia. 1.0 is a major release; in order to use databases created with older versions of Mnesia, a full backup file must be created with the old version of Mnesia and installed as a fallback with the new version of Mnesia. Then the new version of Mnesia may be started.

17.26.1 Improvements and new features

17.26.1.1 Enhanced concept of schema and db_nodes

The notion of db_nodes has been extended. Now Mnesia is able to run on disc-less nodes as well as regular nodes that utilize the disc.

The schema table may, as other tables, reside on one or more nodes. The storage type of the schema table may be either disc_copies or ram_copies (not disc_only_copies). At startup Mnesia uses its schema to determine which other nodes it should try to establish contact with. If any of the other nodes are already started, the starting node merges its table definitions with those brought from the other nodes. This also applies to the definition of the schema table itself. The application parameter extra_db_nodes contains a list of additional nodes which Mnesia should establish contact with (besides the ones found in the schema). The default value is the empty list [].

The application parameter schema_location controls where Mnesia looks for its schema. The parameter may be one of the following atoms:

disc
Mandatory disc. The schema is assumed to be located in the Mnesia directory, and if it cannot be found, Mnesia refuses to start. This was the old behavior.
ram
Mandatory ram. The schema resides in ram only. At startup a tiny new schema is generated. This default schema contains just the definition of the schema table and only resides on the local node. Since no other nodes are found in the default schema, the configuration parameter extra_db_nodes must be used in order to let the node share its table definitions with other nodes. (The extra_db_nodes parameter may also be used on disc-full nodes.)
opt_disc
Optional disc. The schema may reside on either disc or ram. If the schema is found on disc, Mnesia starts as a disc-full node (the storage type of the schema table is disc_copies). If no schema is found on disc, Mnesia starts as a disc-less node (the storage type of the schema table is ram_copies). When the schema_location is set to opt_disc the function mnesia:change_table_copy_type/3 may be used to change the storage type of the schema. The default is opt_disc.
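As an illustration, a disc-less node could be started roughly like this (the node name, host name and flag values are assumptions; each application parameter is given with its own -mnesia flag):

```shell
# Start a RAM-only node that fetches its table definitions from a@myhost
erl -sname b -mnesia schema_location ram -mnesia extra_db_nodes "['a@myhost']"
```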

The functions mnesia:add_table_copy/3 and mnesia:del_table_copy/2 can be used to add and delete replicas of the schema table. Adding a node to the list of nodes where the schema is replicated has two effects: first, it allows other tables to be replicated to that node; second, it causes Mnesia to try to contact the node at startup of disc-full nodes.

If the storage type of the schema is ram_copies, Mnesia will not use the disc on that particular node. The disc usage is enabled by changing the storage type of the table schema to disc_copies.

The schema table is not created with mnesia:create_table/2 as normal tables are. New schemas are created explicitly with mnesia:create_schema/1 or implicitly by starting Mnesia without a disc resident schema. Whenever a table (including the schema table) is created, it is assigned its own unique cookie. At startup, when Mnesia nodes connect to each other, they exchange table definitions and merge them.
During the merge procedure Mnesia performs a sanity test to ensure that the table definitions are compatible with each other. If a table exists on several nodes, the cookie must be the same; otherwise Mnesia will shut down one of the nodes. This unfortunate situation occurs if a table has been created on two nodes independently of each other while they were disconnected. To solve the problem, one of the tables must be deleted (as the cookies differ, they are regarded as two different tables even if they happen to have the same name).

Merging different versions of the schema table does not always require the cookies to be the same. If the storage type of the schema table is disc_copies, the cookie is immutable and all other db_nodes must have the same cookie. But if the storage type of the schema is ram_copies, its cookie can be replaced with a cookie from another node (ram_copies or disc_copies). Cookie replacement during a merge of the schema table definition is performed each time a RAM node connects to another node.

The functions mnesia:add_db_node/1 and mnesia:del_db_node/3 have been removed from the API. Adding and deleting db_nodes are performed as described above.

mnesia:system_info(schema_location) and mnesia:system_info(extra_db_nodes) may be used to determine the actual values of schema_location and extra_db_nodes respectively. mnesia:system_info(use_dir) may be used to determine whether Mnesia is actually using the Mnesia directory. use_dir may be determined even before Mnesia is started. The function mnesia:info/0 may now be used to print out some system information even before Mnesia is started; when Mnesia is started, the function prints out more information.

Transactions which update the definition of a table require that Mnesia is started on all nodes where the storage type of the schema is disc_copies. All replicas of the table on these nodes must also be loaded.

There are a few exceptions to these availability rules. Tables may be created and new replicas may be added without all disc-full nodes being started. New replicas may be added without all other replicas of the table being loaded; one other replica is sufficient.

The internal representation of the schema cookie, schema version and db_nodes has been changed, as has their representation in backup files. This affects the function mnesia:traverse_backup/4,6 slightly. The definition of the schema table is now represented in the same manner as the definitions of other tables. In the backup this means a single tuple {schema, schema, TableDef} instead of {schema, cookie, Cookie}, {schema, version, Version} and {schema, db_nodes, DbNodes}. Now all tables (including the schema table) have their own cookie and version. The db_nodes are found in the lists of ram_copies and disc_copies nodes in the tuple containing the definition of the schema table.

17.26.1.2 New concept of handling Mnesia events

As Mnesia has evolved to conform to the application concept, the mnesia_user process has been replaced with a gen_event server.

In various situations Mnesia generates events. There are several categories of events. First, there are system events: important events that serious Mnesia applications should take an interest in. The system events are currently:

{mnesia_up, Node}
This means that Mnesia has been started on a node. Node is the name of the node. By default this event is ignored.
{mnesia_down, Node}
Mnesia has been stopped on a node. Node is the name of the node. By default this event is ignored.
{mnesia_checkpoint_activated, Checkpoint}
A checkpoint with the name Checkpoint has been activated, and the current node is involved in the checkpoint. Checkpoints may be activated explicitly with mnesia:activate_checkpoint/1 or implicitly at backup, when adding table replicas, at internal transfer of data between nodes, etc. By default this event is ignored.
{mnesia_checkpoint_deactivated, Checkpoint}
A checkpoint with the name Checkpoint has been deactivated, and the current node was involved in the checkpoint. Checkpoints may be deactivated explicitly with mnesia:deactivate_checkpoint/1 or implicitly when the last replica of a table (involved in the checkpoint) becomes unavailable, e.g. at node down. By default this event is ignored.
{mnesia_overload, Details}
Mnesia on the current node is overloaded, and the application ought to do something about it.
One example of a typical overload situation is when the application performs more updates on disc resident tables than Mnesia is able to handle. Ignoring this kind of overload may lead to a situation where the disc space is exhausted (regardless of the size of the tables stored on disc). Each update is appended to the transaction log, which is occasionally dumped to the table files, depending on how this is configured. The table file storage is more compact than the transaction log storage, especially if the same record is updated many times. If the threshold for dumping the transaction log is reached before the previous dump has finished, an overload event is triggered.
Another typical overload situation is when the transaction manager cannot commit transactions at the same pace as the applications perform updates of disc resident tables. When this happens, the message queue of the transaction manager will keep growing until the memory is exhausted or the load decreases. The same problem may occur for dirty updates.
The overload is detected locally on the current node, but its cause may be on another node. Application processes may cause heavy loads on other nodes if any of the tables reside there (replicated or not). By default this event is reported to the error_logger.
{mnesia_fatal, Format, Args, BinaryCore}
Mnesia has encountered a fatal error and will be terminated imminently. The reason for the fatal error is explained in Format and Args, which may be given as input to io:format/2 or sent to the error_logger. By default it is sent to the error_logger. BinaryCore is a binary containing a summary of Mnesia's internal state when the fatal error was encountered. By default the binary is written to a file with a unique name in the current directory. On RAM nodes the core is ignored.
{mnesia_info, Format, Args}
This means that Mnesia has detected something that may be interesting when debugging the system. What is interesting is explained in Format and Args which may be given as input to io:format/2 or sent to the error_logger. By default this event is printed with io:format/2.
{mnesia_error, Format, Args}
This means that Mnesia has encountered an error. The reason for the error is explained in Format and Args, which may be given as input to io:format/2 or sent to the error_logger. By default this event is reported to the error_logger.
{mnesia_user, Event}
This means that some application has invoked the function mnesia:report_event(Event). Event may be any Erlang data structure. When tracing a system of Mnesia applications it is useful to be able to interleave Mnesia's own events with application-related events that give information about the application context. Whenever the application starts some new demanding Mnesia activity, or enters a new and interesting phase in its execution, it may be a good idea to use mnesia:report_event/1.

Another category of events are table events, which are related to table updates. Table events are tuples typically of the form {Oper, Record, TransId}, where Oper is the operation performed, Record is the record involved in the operation, and TransId is the identity of the transaction performing the operation. The various table related events that may occur are:

{write, NewRecord, TransId}
A new record has been written. NewRecord contains the new value of the record.
{delete_object, OldRecord, TransId}
A record has possibly been deleted with mnesia:delete_object/1. OldRecord contains the value of the old record as given as argument by the application. Note that other records with the same key may remain in the table if it is of type bag.
{delete, {Tab, Key}, TransId}
One or more records have possibly been deleted: all records with the key Key in the table Tab have been deleted.

The function mnesia:subscribe_config_change/0 has been replaced with the functions mnesia:subscribe(EventCategory) and mnesia:unsubscribe(EventCategory). EventCategory may be either the atom system or the tuple {table, Tab}. The subscribe function activates a subscription of events, which are delivered as messages to the process evaluating mnesia:subscribe/1. System events have the syntax {mnesia_system_event, Event} and table events {mnesia_table_event, Event}; what the system events and table events mean is described above.
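A subscribing process might look like the following sketch (the table name subscriber and the function names are illustrative assumptions):

```erlang
%% Sketch: subscribe to system events and to events for one table,
%% then print whatever arrives.
start_listener() ->
    mnesia:subscribe(system),
    mnesia:subscribe({table, subscriber}),
    listen().

listen() ->
    receive
        {mnesia_table_event, {write, NewRecord, _TransId}} ->
            io:format("written: ~p~n", [NewRecord]),
            listen();
        {mnesia_table_event, Event} ->
            io:format("table event: ~p~n", [Event]),
            listen();
        {mnesia_system_event, Event} ->
            io:format("system event: ~p~n", [Event]),
            listen()
    end.
```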

Mnesia's gen_event handler always subscribes to all system events. The default gen_event handler is mnesia_event, but it may be changed with the application parameter event_module. The value of this parameter must be the name of a module implementing a complete handler, as specified by the gen_event module in stdlib. mnesia:system_info(subscribers) and mnesia:table_info(Tab, subscribers) may be used to determine which processes subscribe to various events.

17.26.1.3 Enhanced debugging support

The new subscription mechanism enables building of powerful debugging and configuration tools (like the soon to be released Xmnesia).

mnesia:debug/0 and mnesia:verbose/0 have been replaced with mnesia:set_debug_level(Level). Level is an atom which regulates the debug level of Mnesia. The following debug levels are supported:

none
No trace output at all. This is the default.
verbose
Activates tracing of important debug events. These debug events generate {mnesia_info, Format, Args} system events. Processes may subscribe to these events with mnesia:subscribe/1. The events are always sent to Mnesia's event handler.
debug
Activates all events at the verbose level plus a full trace of all debug events. These debug events generate {mnesia_info, Format, Args} system events. Processes may subscribe to these events with mnesia:subscribe/1. The events are always sent to Mnesia's event handler. At this debug level Mnesia's event handler starts subscribing to updates of the schema table.
trace
Activates all events at the debug level. At this debug level Mnesia's event handler starts subscribing to updates on all Mnesia tables. This level is intended only for debugging small toy systems, since many large events may be generated.
false
is an alias for none.
true
is an alias for debug.
17.26.1.4 Enhanced error codes

Mnesia functions return {error, Reason} or {aborted, Reason} when they fail. This is still true, but Reason is now in many cases a tuple instead of a cryptic atom. The first field of the tuple tells what kind of error it is, and the rest of the tuple contains details about the context where the error occurred. For example, if a table does not exist, {no_exists, TableName} is returned instead of just the atom no_exists. The function mnesia:error_description/1 accepts both the old atom style and the new tuple style. {error, Reason} and {aborted, Reason} tuples are also accepted.
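A sketch of matching on the new tuple-style reasons (the table name no_such_tab is a hypothetical example):

```erlang
%% Sketch: handle the new tuple-style error reasons.
%% 'no_such_tab' is a hypothetical table name.
case mnesia:transaction(fun() -> mnesia:read({no_such_tab, some_key}) end) of
    {atomic, Records} ->
        Records;
    {aborted, {no_exists, Tab}} ->
        io:format("missing table ~p: ~p~n",
                  [Tab, mnesia:error_description({no_exists, Tab})]);
    {aborted, Reason} ->
        io:format("aborted: ~p~n", [Reason])
end.
```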

17.26.1.5 Conformance to the application concept

As Mnesia has evolved to conform with the OTP application concept, the process architecture of Mnesia has been restructured. For processes which are not performance critical, gen_server, gen_event and gen_fsm are now used. Supervisors are used to supervise Mnesia's internal long-lived processes. The startup procedure now conforms with the supervisor concept. A side effect is poorer error codes at startup. mnesia:start/0 will now return the cryptic tuple {error,{shutdown, {mnesia_sup,start,[normal,[]]}}} when Mnesia startup fails. Use -boot start_sasl as an argument to the erl script in order to get a little more information from start failures.

Mnesia now negotiates with Mnesia on other nodes at startup about which message protocol to use. This means that connecting a node with a future release of Mnesia with a node running this release of Mnesia will work fine, and not cause inconsistency because of protocol mismatches. Mnesia is also prepared for code change without stopping Mnesia. (The file representation on disc was already prepared for future format changes.)

17.26.1.6 The transaction concept has been extended

A functional object (Fun) performing operations like:

may be sent to the function mnesia:transaction/1,2,3 and will be performed in a transaction context involving mechanisms like locking, logging, replication, checkpoints, subscriptions, commit protocols etc. This is still true, but the same function may also be evaluated in other contexts.
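A fun of the kind described above might, as a sketch, look like this (the #subscriber{} record and the key 4711 are hypothetical placeholders):

```erlang
%% Sketch of a typical functional object; the #subscriber{} record
%% and the key 4711 are hypothetical placeholders.
F = fun() ->
        [S] = mnesia:read({subscriber, 4711}),
        mnesia:write(S#subscriber{status = active})
    end,
{atomic, ok} = mnesia:transaction(F).
```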

By sending the same "fun" to the function mnesia:async_dirty(Fun [, Args]) it will be performed in dirty context. The function calls will be mapped to the corresponding dirty functions. This still involves logging, replication and subscriptions, but there is no locking, local transaction storage or commit protocol involved. Checkpoint retainers will be updated, but dirtily. As with normal mnesia:dirty_* operations, the operations are performed semi-asynchronously: the functions wait for the operation to be performed on one node, but not on the others. If the table resides locally, no waiting for other nodes is involved.

By sending the same "fun" to the function mnesia:sync_dirty(Fun [, Args]) it will be performed in almost the same context as with mnesia:async_dirty/1,2. The difference is that the operations are performed synchronously: the caller waits for the updates to be performed on all active replicas. Using sync_dirty is useful for applications executing on several nodes which want to be sure that an update has been performed on the remote nodes before a remote process is spawned or a message is sent to a remote process. It may also be useful if the application performs updates so frequently or voluminously that Mnesia becomes overloaded on other nodes.

By sending the same "fun" to the function mnesia:ets(Fun [, Args]) it will be performed in a very raw context. The operations will be performed directly on the local ets tables assuming that the local storage type is ram_copies and that the table is not replicated to other nodes. Subscriptions will not be triggered nor checkpoints updated, but it is blindingly fast.
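As a sketch, the same fun can be evaluated in each of the contexts described above (the #subscriber{} record is a placeholder, and the ets context assumes a non-replicated ram_copies table without indexes):

```erlang
%% Sketch: the same fun in different evaluation contexts.
%% #subscriber{} is a hypothetical record.
F = fun() -> mnesia:write(#subscriber{key = 1}) end,
{atomic, ok} = mnesia:transaction(F), % locking, logging, commit protocol
ok = mnesia:async_dirty(F),           % dirty, semi-asynchronous
ok = mnesia:sync_dirty(F),            % dirty, waits for all active replicas
ok = mnesia:ets(F).                   % raw access to the local ets table
```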

All these activities (transaction, async_dirty, sync_dirty and ets) may be nested. Yes, we do support nested transactions! A nested activity is always automatically upgraded to be of the same kind as the outer one. For example, a "fun" evaluated with async_dirty inside a transaction will be executed in transaction context.

Locks acquired by nested transactions will not be released until the outermost transaction has ended. The updates performed in a nested transaction will not be committed until the outermost transaction commits.
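A nested transaction might, as a sketch, look like this (the records and keys are placeholders):

```erlang
%% Sketch of nesting: the inner transaction's locks and updates
%% are held until the outermost transaction ends.
Inner = fun() -> mnesia:write(#subscriber{key = 2}) end,
Outer = fun() ->
            mnesia:write(#subscriber{key = 1}),
            mnesia:transaction(Inner)  % nested transaction
        end,
{atomic, {atomic, ok}} = mnesia:transaction(Outer).
```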

Mnemosyne queries may be performed in all these activity contexts (transaction, async_dirty, sync_dirty and ets). The ets activity will only work if the table has no indexes.

17.26.1.7 An alternate commit protocol has been added

Both the new and the old protocols are still used. Mnesia selects the most appropriate transaction commit protocol depending on which tables have been involved in the transaction.

The old commit protocol is a very fast protocol with a simple algorithm for recovery, but its drawback is that it only guarantees consistency after recovery for symmetrically replicated tables. If all tables involved in the transaction have the same replica pattern, they are regarded as symmetrically replicated. For example, if all tables involved in the transaction have a ram_copies replica on node A and a disc_copies replica on node B, they are symmetrically replicated.

If the tables are asymmetrically replicated, or if the schema table (containing the table definitions) is involved, the new heavyweight protocol is used to commit the transaction. The new protocol ensures consistency for all kinds of table updates. The protocol is able to recover the tables to a consistent state regardless of how applications behave and regardless of when a node crash occurs.

During commit, the new protocol causes more network messages and disc accesses than the old protocol, but it is safer. At startup after a crash, there may exist transactions in the log whose outcome is unknown. This should be very rare, since the protocol has been deliberately designed to make the period of uncertainty as short as possible. When this rare situation occurs, Mnesia on the recovering node will not be able to start without asking Mnesia on other nodes for the outcome of the transaction, in order to decide whether to commit or abort it.

With this new approach, several of the problems described in earlier release notes have disappeared. See below:

17.26.1.8 New startup procedure

When Mnesia starts on one node it may come to the conclusion that some of the tables can be loaded from the local disc, since no other node may hold a replica that is newer than the one on the starting node. The start function now returns without performing any loading of tables. The tables are loaded later in the background, which makes the transaction manager available early for those tables that have been loaded. The application does not have to wait for all tables to be loaded before it can start.

mnesia:start/0 now returns the atom ok or {error, Reason}. In embedded systems this function is not used. In such systems application:start(mnesia) is used.

A new function mnesia:start(Config) is introduced. The Config argument is a list of {Name, Val} tuples, where Name is a name of an application parameter and Val is its value. The effect of the new values will remain until the application Mnesia is reloaded. After the transient change of application parameters, Mnesia will be started with mnesia:start/0 and return its return value. For example mnesia:start([{extra_db_nodes, [svarte_bagge@switchboard]}]) would override any old setting of extra_db_nodes until the application Mnesia was reloaded.

When an application is started it must synchronize with Mnesia to ensure that all the tables it needs to access really have been loaded before it attempts access. The function mnesia:wait_for_tables(Tables, Timeout) should be used for this purpose. Note: It is even more important to do this now, since the start function returns earlier, without loading any tables, than in previous releases. Do not forget this, otherwise the application will be less robust.
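A sketch of a robust start sequence (the table names and the 30 second timeout are application-specific placeholders):

```erlang
%% Sketch: wait for the application's tables before using them.
%% The table names and the timeout value are placeholders.
ok = mnesia:start(),
case mnesia:wait_for_tables([subscriber, session], 30000) of
    ok ->
        ok;  % safe to access the tables now
    {timeout, BadTabs} ->
        exit({tables_not_loaded, BadTabs})
end.
```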

Each Mnesia table should be owned by only one module. This module is responsible for the life cycle of the table. When the application is installed for the first time in a network of nodes, this module must create the necessary tables. In each subsequent release of the application the module owning the table is responsible for performing changes of the table definition if required, (e.g. invoking mnesia:transform/3 to transform the table).

When the application ceases to exist, it must be uninstalled from the network and the module that owns the table must delete its tables. It might also be a good idea to let this module export functions that allow customized interfaces to some of the Mnesia functions (eg. a special wait_for_tables that only waits for certain hard coded tables).

Other applications that need direct access to tables owned by a module in another application must declare a dependency on that application in their .app file, in order to allow the code change, start and stop algorithms in the application_controller and supervisor modules to work.

17.26.1.9 mnesia:dump_tables/1

mnesia:dump_tables/1 is now performed as a transaction and returns {atomic, ok} or {aborted, Reason} as the other functions performed in transaction context.

17.26.1.10 mnesia:dirty_update_counter/2

mnesia:dirty_update_counter(Counter, Incr) now returns the new counter value instead of the atom ok.
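As a sketch (the counter table and key are hypothetical):

```erlang
%% Sketch: the new counter value is returned directly.
%% The table 'counters' and the key 'page_hits' are placeholders.
NewVal = mnesia:dirty_update_counter({counters, page_hits}, 1),
io:format("counter is now ~p~n", [NewVal]).
```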

17.26.1.11 New dirty functions have been introduced

They perform the same work as the corresponding functions without the 'dirty_' prefix, but in dirty context.

17.26.1.12 Easier use of indexes

Attribute names may now also be used to specify index positions. Index positions may still be given as field positions in the tuple corresponding to the record definition.

The functions mnesia:match_object/1 and mnesia:dirty_match_object/1 automatically make use of indexes if any exist. No heuristics are performed in order to select the best index. Use Mnemosyne if this is an issue.
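As a sketch, an index can now be declared by attribute name, and the match functions use it automatically (the subscriber record and its fields are placeholders):

```erlang
%% Sketch: declare an index by attribute name instead of by
%% tuple position. The 'subscriber' record is hypothetical.
-record(subscriber, {key, name, status}).

create() ->
    mnesia:create_table(subscriber,
                        [{attributes, record_info(fields, subscriber)},
                         {index, [status]}]).  % by name; was e.g. {index, [4]}

lookup_active() ->
    %% Uses the index on 'status' automatically, if one exists.
    mnesia:dirty_match_object(#subscriber{status = active, _ = '_'}).
```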

17.26.1.13 Enhanced control of thresholds for dump of transaction log

The operations found in the transaction log will occasionally be performed on the actual dets tables. The frequency of the dumps is configurable with two application parameters. One is the dump_log_time_threshold which is an integer that specifies the dump log interval in milliseconds (it defaults to 3 minutes). If a dump has not been performed within dump_log_time_threshold milliseconds, a new dump will be performed regardless of how many writes have been performed.

The other is dump_log_write_threshold, an integer specifying how many writes to the transaction log are allowed before a new dump of the log is performed. It defaults to 100 log writes.

Both thresholds may be configured independently of each other. When one of them is exceeded a dump will be performed.
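As a sketch, both thresholds can be overridden transiently at startup with mnesia:start/1 (the values below are illustrative only):

```erlang
%% Sketch: transiently override the dump thresholds at startup.
%% The values 1000 writes and 2 minutes are illustrative.
mnesia:start([{dump_log_write_threshold, 1000},
              {dump_log_time_threshold, timer:minutes(2)}]).
```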

As explained elsewhere, situations may occur where the application performs updates at a faster pace than Mnesia is able to propagate them from the log to the table files. When this occurs, overload events are generated. If availability is important, subscribe to these events and regulate the update intensity of your application! If you ignore this, you may exhaust the disc space.

The old application parameter -mnesia_dump_log_interval has been replaced by the two parameters mentioned above.

The function mnesia:change_dump_log_config/1 has been removed from the API.

17.26.1.14 Tables must have at least arity 3

Mnesia does not allow tables with arity 2. All tables must have at least one extra attribute besides the key. An extra check is now performed to disallow creation of tables with an arity less than 3. In earlier releases the table creation succeeded, but since Mnesia was not designed for such peculiar tables, strange things happened and records were lost.

Check your schema for such tables if you load an old backup file.

17.26.1.15 Non-blocking emulator

As mentioned in earlier release notes, match operations in ets tables block the emulator for a long time if the tables are large. A new BIF which performs partial matching of tables has now been introduced. Mnesia uses the new BIF in match operations in order to avoid blocking the emulator with time-consuming matches in large ets tables.

17.26.1.16 Configuration parameters

As Mnesia has evolved and conformed to the application concept, the style of the configuration parameters has been changed. Mnesia is now configured by arguments to the erl script, using the syntax stated by the application module in stdlib. Below is an example of parameters in the old style:

Parameters should now resemble:

Following is a brief summary of the new configuration parameters:
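As an illustrative sketch of the new style, application parameters are passed to the erl script with the -mnesia flag (the directory path and threshold value shown are placeholders):

```shell
# Sketch of new-style configuration; the values are illustrative.
erl -mnesia dir '"/usr/local/mnesia.company"' \
    -mnesia dump_log_write_threshold 1000
```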

17.26.1.17 Fixed Bugs and malfunctions

The following Own Identities and Aux Identities have been solved:

17.26.1.18 Incompatibilities

See the chapter regarding improvements.

17.26.1.19 Known bugs and problems

17.27 Mnesia 0.X

See the historical archives.


Copyright © 1991-97 Ericsson Telecom AB