The test_server module provides support for test suite authors.
For more information on how to write test cases, and for examples, please turn to the Test Server User's Guide.
To run test cases, use the Test Server Framework ts. This is also described in the user's guide and in the reference manual for the ts module.
The following functions are supposed to be used inside a test suite; quite a few of them are useful or even necessary for test suite developers.
os_type() -> OSType
OSType = term()
The same as the return value of os:type/0.
This function can be called on the controller or target node, and it will always return the OS type of the target node.
fail()
fail(Reason)
Reason = term()
This will make the test suite fail with the given reason, or with suite_failed if no reason was given. Use this function if you want to terminate a test case, as this will make the log and HTML files easier to read. Reason will appear in the comment field in the HTML log.
timetrap(Timeout) -> Handle
timetrap(Timeout, Pid) -> Handle
Timeout = integer()
Pid = pid() (self() by default)
Sets up a time trap for a process. By default self()
is used, but a different pid may be specified. An expired
timetrap kills the process it was set up for with reason
timetrap_timeout
. The returned handle is to be given
as argument to timetrap_cancel
before the timetrap
expires. The timeout should be given in milliseconds.
timetrap_cancel(Handle) -> ok
Handle = term()
Handle as returned from timetrap/1 or timetrap/2.
This function cancels a timetrap. This must be done before the timetrap expires.
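For illustration, a typical pattern is to set a timetrap at the start of a test case and cancel it just before the end. A minimal sketch (do_the_real_test/0 is a hypothetical helper, and the five-minute limit is an arbitrary choice):

Dog = test_server:timetrap(test_server:minutes(5)),
do_the_real_test(),                    %% hypothetical helper doing the actual work
test_server:timetrap_cancel(Dog),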
sleep(MSecs) -> ok
MSecs = integer() | float() | infinity
This function suspends the calling process for at least the supplied number of milliseconds. There are two major reasons to use this function instead of timer:sleep: first, the timer module may be unavailable at the time the test suite is run, and second, this function also accepts floating point numbers.
hours(N) -> MSecs
minutes(N) -> MSecs
seconds(N) -> MSecs
N = integer()
These functions convert N hours, minutes or seconds into milliseconds.
Use these functions when you want test_server:sleep/1 to sleep for a number of seconds, minutes or hours(!).
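For example, to sleep for five minutes:

test_server:sleep(test_server:minutes(5)),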
format(Format) -> ok
format(Format, Args)
format(Pri, Format)
format(Pri, Format, Args)
Format = string()
Format as described for io:format.
Args = list()
Pri = integer()
Formats output just like io:format but sends the formatted string to a logfile. If the urgency value, Pri, is lower than some threshold value, it will also be written to the test person's console. The default urgency is 50, and the default threshold for display on the console is 1.
Typically, the test person does not want to see everything a test suite outputs, but is merely interested in whether the test cases succeeded or not, which the test server reports. To see more, the threshold values can be changed manually with the test_server_ctrl:set_levels/3 function.
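For illustration, a sketch of both forms (Result and Reason are hypothetical variables bound earlier in the test case; the urgency 1 is chosen so the second message also reaches the console under the default threshold):

test_server:format("result: ~p", [Result]),
test_server:format(1, "urgent: ~p", [Reason]),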
capture_start() -> ok
capture_stop() -> ok
capture_get() -> list()
These functions make it possible to capture all output to stdout from a process started by the test suite. The list of characters captured can be retrieved (and the capture buffer purged) with capture_get.
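A minimal sketch of how the three functions could be combined, assuming the output of interest is written with io:format from the captured process:

test_server:capture_start(),
io:format("some output~n"),            %% this output is captured
Captured = test_server:capture_get(),  %% retrieve (and purge) captured characters
test_server:capture_stop(),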
messages_get() -> list()
This function will empty and return all the messages currently in the calling process' message queue.
timecall(M, F, A) -> {Time, Value}
M = atom()
F = atom()
A = list()
Time = integer()
Value = term()
This function measures the time (in seconds) it takes to call a certain function. The function call is not caught within a catch.
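For example, timing an arbitrary library call:

{Time, Value} = test_server:timecall(lists, seq, [1, 100000]),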
do_times(N, M, F, A) -> ok
do_times(N, Fun)
N = integer()
M = atom()
F = atom()
A = list()
Calls MFA or Fun N times. Useful for extensive testing of a sensitive function.
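For example (fragile_op/0 is a hypothetical function under test):

ok = test_server:do_times(1000, fun() -> fragile_op() end),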
m_out_of_n(M, N, Fun) -> ok | exit({m_out_of_n_failed, {R, left_to_do}})
N = integer()
M = integer()
Fun = fun()
Repeatedly evaluates the given function until it succeeds (does not crash) M times. If M successful attempts have not been accomplished after N evaluations, the process crashes with reason {m_out_of_n_failed, {R, left_to_do}}, where R indicates how many successful attempts were still left to do.
For example:
m_out_of_n(1,4,fun() -> tricky_test_case() end)
Tries to run tricky_test_case() up to 4 times, and is
happy if it succeeds once.
m_out_of_n(7,8,fun() -> clock_sanity_check() end)
Tries running clock_sanity_check() up to 8 times, and
allows the function to fail once. This might be useful if
clock_sanity_check/0 is known to fail if the clock crosses an
hour boundary during the test (and the up to 8 test runs could
never cross 2 boundaries).
call_crash(M, F, A) -> Result
call_crash(Time, M, F, A) -> Result
call_crash(Time, Crash, M, F, A) -> Result
Result = ok | exit(call_crash_timeout) | exit({wrong_crash_reason, Reason})
Crash = term()
Time = integer()
M = atom()
F = atom()
A = list()
Spawns a new process that calls MFA. The call is considered successful if the call crashes with the given reason (Crash), or with any reason if none was specified. The call must terminate within the given time (default infinity), or it is considered a failure.
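A minimal sketch, using erlang:exit/1 simply to produce a known crash reason within two seconds:

ok = test_server:call_crash(2000, byebye, erlang, exit, [byebye]),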
temp_name(Stem) -> string()
Stem = string()
Returns a unique filename starting with Stem, with enough extra characters appended to make up a unique filename. The filename returned is guaranteed not to exist in the filesystem at the time of the call.
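For example (the stem is an arbitrary choice):

Name = test_server:temp_name("/tmp/myapp_test."),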
start_node(Name, Type, Options) -> {ok, Node} | {error, Reason}
Name = atom() | string()
Type = slave | peer
Options = [{atom(), term()}]
This function starts a node, possibly on a remote machine,
and guarantees cross architecture transparency. Type is set to
either slave
or peer
.
slave
means that the new node will have a master,
i.e. the slave node will terminate if the master terminates,
TTY output produced on the slave will be sent back to the
master node and file I/O is done via the master. The master is
normally the target node unless the target is itself a slave
as is the case for OSE/Delta targets.
peer
means that the new node is an independent node
with no master.
Options is a list of tuples which can contain one or more of the following:
{remote, true} - starts the node on a remote host rather than on the local machine.
{args, Arguments} - command line arguments passed to the new node.
{wait, false} - does not wait for the node to come up.
{fail_on_error, false} - returns {error, Reason} rather than failing the test case. Note that slave nodes always act as if started with fail_on_error=false.
{erl, ReleaseList} - uses an Erlang emulator determined by ReleaseList when starting the node.
{cleanup, false} - the test server will not kill the node automatically after the test case is completed.
stop_node(NodeName) -> bool()
NodeName = term()
This function stops a node previously started with start_node/3. Use this function to stop any node you start, or the test server will produce a warning message in the test logs and kill the node automatically, unless it was started with the {cleanup, false} option.
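A minimal sketch of starting a peer node, running something on it, and stopping it (the node name and the command line arguments are arbitrary examples):

{ok, Node} = test_server:start_node(helper, peer, [{args, "-pa ../ebin"}]),
Remote = rpc:call(Node, erlang, node, []),   %% run something on the new node
test_server:stop_node(Node),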
is_native(Mod) -> bool()
Mod = atom()
Checks whether the module is natively compiled or not.
app_test(App) -> ok | test_server:fail()
app_test(App, Mode)
App = term()
Mode = pedantic | tolerant
Checks an application's .app file for obvious errors, among other things that all of the application's modules are listed (if Mode == tolerant this only produces a warning, as all modules do not have to be included).
This test is always skipped on OSE/Delta targets.
comment(Comment) -> ok
Comment = string()
The given string will occur in the comment field of the table on the HTML result page. If called several times, only the last comment is printed. comment/1 is also overwritten by the return value {comment, Comment} from a test case or by fail/1 (which prints Reason as a comment).
The following functions must be exported from a test suite module.
all(suite) -> TestSpec | {skip, Comment}
TestSpec = list()
Comment = string()
This function must return the test specification for the test suite module. The syntax of a test specification is described in the reference manual for the Test Server application.
init_per_testcase(Case,Config) -> Config
Case = atom()
Config = term()
This function is called before each test case. The
Case
argument is the name of the test case, and
Config
is the configuration which can be modified
here. Whatever is returned from this function is given as
Config
to the test case.
fin_per_testcase(Case,Config) -> void()
Case = atom()
Config = term()
This function is called after each test case, and can be used to clean up whatever the test case has done. The return value is ignored.
Case(doc) -> [Description]
Case(suite) -> [] | TestSpec | {skip, Comment}
Case(Config) -> {skip, Comment} | {comment, Comment} | Ok
Description = string()
TestSpec = list()
Comment = string()
Ok = term()
Config = term()
The documentation clause (argument doc
) can
be used for automatic generation of test documentation or test
descriptions.
The specification clause (argument suite)
shall return an empty list, the test specification for the
test case or {skip,Comment}
. The syntax of a test
specification is described in the reference manual for the
Test Server application.
Note that the specification clause is always executed on the controller host.
The execution clause (argument Config
) is
only called if the specification clause returns an empty list.
The execution clause is the real test case. Here you must call
the functions you want to test, and do whatever you need to
check the result. If something fails, make sure the process
crashes or call test_server:fail/0/1
(which also will
cause the process to crash).
You can return {skip,Comment}
if you decide not to
run the test case after all, e.g. if it is not applicable on
this platform.
You can return {comment,Comment}
if you wish to
print some information in the 'Comment' field on the HTML
result page.
If the execution clause returns anything else, it is considered a success.
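As an illustration, a minimal test case could look as follows (a sketch only; the module under test and the checked expression are arbitrary examples, and the ?line macro from test_server.hrl is described later in this manual):

reverse_test(doc) -> ["Checks that lists:reverse/1 reverses a list"];
reverse_test(suite) -> [];
reverse_test(Config) when is_list(Config) ->
    ?line [3,2,1] = lists:reverse([1,2,3]),   %% a crash here means test failure
    ok.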
A conf test case is a group of test cases with an
init and a cleanup function. The init and cleanup functions
are also test cases, but they have special rules:
They do not need a specification clause.
They must always have the execution clause.
They must return the Config
parameter, a modified
version of it or {skip,Comment}
from the execution
clause.
init_per_testcase
and fin_per_testcase
are
not called before and after these functions.
There are some macros defined in the test_server.hrl
that are quite useful for test suite programmers.
First of all, there is the line
macro, which is quite
essential when writing test cases. It tells the test server
exactly which line of code is being executed, so that it can
report this line back if the test case fails. Use this macro at
the beginning of every test case line of code.
Second, we have the config
macro, which is used to
retrieve information from the Config
variable sent to all
test cases. It is used with two arguments, where the first is the
name of the configuration variable you wish to retrieve, and the
second is the Config
variable supplied to the test case
from the test server.
Possible configuration variables include:
data_dir
- Data file directory.
priv_dir
- Scratch file directory.
nodes
- Nodes specified in the spec file.
nodenames
- Generated nodenames.
Whatever has been added to Config by the init_per_testcase/2 function.
Examples of the line
and config
macros can be
seen in the Examples chapter in the user's guide.
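A brief sketch of what this typically looks like inside a test case (the filename "my_input.txt" is a hypothetical example):

DataDir = ?config(data_dir, Config),
?line {ok, Bin} = file:read_file(filename:join(DataDir, "my_input.txt")),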
If the line_trace
macro is defined, you will get a
timestamp (erlang:now()
) in your minor log for each
line
macro in your suite. This way you can at any time see
which line is currently being executed, and when the line was
called.
The line_trace
macro can e.g. be defined as a compile
option, like this:
erlc -W -Dline_trace my_SUITE.erl