TCP Experiment Automation Controlled Using Python (TEACUP) -- Documentation
This page gives an overview of TEACUP's capabilities, traffic generators, information loggers, the configuration file, how to run experiments, and
how to analyse the results (based on v0.9). More comprehensive usage information for v1.0 can be found in the accompanying CAIA technical reports.
Note that on June 9th 2016 the source files for these three technical reports were released under a CC BY-SA 4.0 license and then committed to TEACUP's Sourceforge repository.
TEACUP is a
software tool for running TCP performance experiments in a controlled
physical testbed. The typical use case involves a
private testbed containing multiple hosts on either side of a Linux-based
bottleneck router in the classic dumbbell topology. TEACUP itself runs on a
separate control host, orchestrating the configuration of end hosts and
bottleneck router as required for particular experiments or ranges of
experiments. (The following technical report, CAIA Testbed for TEACUP Experiments Version 2, describes the specific testbed we used in-house while developing TEACUP.)
Experiments with multiple permutations
TEACUP utilises a configuration file
to define experiments
as combinations of parameters specifying network path and end host
conditions. When multiple values are provided (e.g. for the bottleneck
rate limit, or TCP congestion control algorithm), an experiment is made
up of multiple tests
-- consecutively run instances of the experiment, one for each parameter
combination. For each experiment and test, TEACUP collects a range of
data, such as tcpdump files of traffic seen at all network interfaces,
FreeBSD SIFTR logs and Linux Web10G
logs. TEACUP also collects a variety of metadata from the end hosts and
bottleneck router (such as the actual OS/kernel version(s) in use,
network interface configuration, NTP state and so forth).
Based on Python & Fabric
TEACUP is built on the Python Fabric toolkit.
Fabric is a Python (2.5 or higher) library and command line
tool for remote application deployment
or system administration tasks using SSH. Fabric
provides several basic operations for executing local
or remote shell commands and uploading/downloading
files, as well as auxiliary functions, such as prompting
the user for input, or aborting execution of the current task.
TEACUP is implemented as a number of Fabric tasks as well
as auxiliary functions.
TEACUP tasks can then be executed directly from the
command line using the Fabric tool fab. The entry point
is a file commonly named fabfile.py, typically located in a directory from which
we execute Fabric tasks. For TEACUP this is the directory
under which we will store the experiment results. The command 'fab -l'
lists all available TEACUP tasks. The behaviours of many TEACUP tasks
are modified by additional string parameters supplied on the command line.
TEACUP currently only supports a topology where we have one bottleneck router with two
testbed network interfaces (NICs) connected to two
testbed subnets. Hosts in either subnet act as
traffic sources and sinks. In the future we plan to extend TEACUP to support
multiple routers and more than two subnets.
The following list describes what TEACUP can control/select on an appropriately configured testbed:
- Host Operating Systems (OS): Currently TEACUP supports FreeBSD,
Linux, Windows 7 (with Cygwin), and Mac OS X
- OS-specific TCP algorithms: Depending on the OS in each host, TEACUP currently supports the selection and use of
NewReno and CUBIC under Linux or FreeBSD (representing classic loss-based algorithms), CompoundTCP
(Microsoft’s hybrid), CDG (CAIA’s hybrid under FreeBSD),
and HD (the Hamilton Institute's delay-based TCP under FreeBSD)
- Path characteristics: TEACUP allows the configuration of bottleneck bandwidth
limits to represent likely consumer experience (e.g.
ADSL), and some data centre scenarios. It also supports emulation of
constant path delay and loss in either direction
to simulate different conditions between traffic
sources and sinks. The emulation is implemented by the
bottleneck node (router)
- Bottleneck AQM: TEACUP can select from the Active Queue
Management (AQM) mechanisms supported by the Linux kernel (such as Tail-
Drop/FIFO, CoDel, FQ CoDel, PIE, RED) or the FreeBSD kernel (such as FIFO)
- ECN Support: TEACUP can enable/disable
Explicit Congestion Notification (ECN) on hosts and/or the router
- Traffic loads: Traffic generators exist to synthesise the following traffic types:
- Streaming media over HTTP/TCP (DASH-like)
- TCP bulk transfer (iperf)
- UDP flows (VoIP-like)
- Data centre query/response patterns (one query to N responders)
Please refer to the INSTALL file in the TEACUP distribution for instructions on how to install TEACUP.
TEACUP assumes that all hosts participating in an experiment are
time synchronised (for example, by using NTP) so we can align data in
the time domain that was collected across multiple hosts. At the start
of an experiment TEACUP checks every host clock, and aborts if any
clock differs by more than a user-definable threshold (specified in seconds).
Techniques like NTP may not fully synchronise a
host's clock soon after rebooting, so TEACUP also supports an optional
mechanism for correcting timestamps after the fact. When enabled,
TEACUP broadcasts or multicasts ICMP pings at regular intervals
throughout an experiment. These packets are sent over the testbed's
separate control network, and received by each host on their control
network NIC. Assuming the latency variance of the network (i.e. sender
NIC, switch, receiver NIC) is small these ICMP packets arrive at all
hosts essentially simultaneously. Their arrival times can be used to
estimate, and correct for, any clock offset that may still exist
between testbed hosts during each experiment.
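As an illustration of how such post-hoc correction can work, the following sketch (not TEACUP's actual code; host names and timestamps are made up) estimates per-host clock offsets from the logged arrival times of the same broadcast pings:

```python
# Illustrative sketch (not TEACUP code): estimating residual clock
# offsets from broadcast ping arrival times, as described above.
# Assumes each host logged the local arrival time of the same pings.

def estimate_offsets(arrivals, reference):
    """arrivals: {host: [t1, t2, ...]} local arrival times of the same
    broadcast pings; returns per-host offset relative to `reference`."""
    ref_times = arrivals[reference]
    offsets = {}
    for host, times in arrivals.items():
        # The same ping arrives everywhere almost simultaneously, so the
        # mean per-ping difference approximates the clock offset.
        diffs = [t - r for t, r in zip(times, ref_times)]
        offsets[host] = sum(diffs) / len(diffs)
    return offsets

arrivals = {
    "testhost1": [10.000, 20.000, 30.000],
    "testhost2": [10.052, 20.051, 30.053],  # clock ~52 ms ahead
}
offsets = estimate_offsets(arrivals, reference="testhost1")
# A timestamp t logged on testhost2 maps to t - offsets["testhost2"]
# on the reference host's timeline.
```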
Experiment Process Flow
For each series of experiments TEACUP will carry out the following steps:
1. Initialise and check the config file
2. Get the parameter combination for the next experiment
3. Start the experiment based on the config and parameter combination
4. If there is another parameter combination to run, go to step 2; otherwise finish
The main step is step 3, which can be separated into the following steps:
- Log experiment test ID in file experiments_started.txt
- Get host information: OS, NIC names, NIC MAC addresses
- Reboot hosts: reboot hosts as required given the configuration
- Topology configuration: put hosts in the configured subnets
- Get host information again: OS, NIC names, NIC MAC addresses
- Run sanity checks, e.g. check that tools to be used exist
- Initialise hosts, e.g. configure network interfaces
- Configure router queues based on configuration
- Log host state: log host information
- Start all logging processes, e.g. tcpdump
- Start all traffic generators
- Wait for experiment to finish
- Stop all running processes on hosts
- Collect all log files from logging and traffic generating processes
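The expansion of one experiment into multiple tests (step 3 run once per parameter combination) can be sketched as follows (illustrative only; the parameter names and test ID prefix are hypothetical, and TEACUP's real implementation differs):

```python
# Illustrative sketch (not TEACUP code): how a series of experiments is
# expanded into one test per parameter combination.
import itertools

# Hypothetical parameter ranges (in TEACUP these come from the config file)
parameters = {
    "delay": [25, 50],              # emulated path delay in ms
    "tcp": ["newreno", "cubic"],    # TCP congestion control algorithm
}

names = sorted(parameters)
test_ids = []
for combo in itertools.product(*(parameters[n] for n in names)):
    # Each combination becomes one test; its name encodes the
    # parameter names and values, separated by underscores.
    suffix = "_".join(f"{n}_{v}" for n, v in zip(names, combo))
    test_ids.append(f"20131206-102931_{suffix}")

# 2 x 2 parameter values -> 4 consecutively run tests
```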
TEACUP can generate the following types of traffic.
- TCP bulk transfer
- HTTP traffic
- DASH-like HTTP video streaming traffic
- One querier, N responders (incast) traffic
- Unidirectional UDP flows with fixed rate
- VoIP-like flows (constant bit rate)
The tool iperf is used to generate
TCP bulk transfer flows. Note that the iperf client pushes
data to the iperf server, so the data flows in the opposite
direction compared to httperf. iperf is also used
to generate unidirectional UDP flows with a specified
bandwidth and two iperfs can be combined to generate
bidirectional UDP flows.
A modified httperf is used to simulate
an HTTP client. It can generate simple request patterns,
such as accessing some .html file n times per second.
It can also generate complex workloads based on work
session log files (cf. the httperf man page).
httperf is also used to emulate a TCP video streaming session.
The httperf client emulates the behaviour of DASH
or other similar TCP streaming algorithms. In the
initial buffering phase the client will download the first
part of the content as fast as possible. Then the client will
fetch another block of content every t seconds.
The video rate and the cycle length are configurable (and the size of
the data blocks depends on these).
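The relationship between the configurable video rate, the cycle length and the resulting block size can be sketched as follows (an illustrative calculation; the function name and values are hypothetical):

```python
# Illustrative sketch (not TEACUP code): the block size the DASH-like
# client must fetch each cycle follows from the configured video rate
# and cycle length, as described above.

def block_size_bytes(video_rate_kbps, cycle_seconds):
    """Bytes the client must fetch each cycle to sustain the video rate."""
    return int(video_rate_kbps * 1000 / 8 * cycle_seconds)

# A 2000 kbps stream fetched every 5 seconds needs 1.25 MB blocks
size = block_size_bytes(2000, 5)
```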
httperf is also used to emulate the incast scenario.
The httperf client will request a block of content
from n servers every t seconds. The requests are sent
as close together as possible to make sure the servers
respond simultaneously. The size of the data blocks is configurable.
TEACUP uses lighttpd as the web server. TEACUP automatically sets up
fake content for DASH-like video
streaming and incast scenario traffic. However, for
specific experiments one may need to set up web server
content manually or create new scripts to do this.
The tool nttcp is used to emulate simple
constant bit rate UDP VoIP flows.
The fixed packet size and inter-packet time can be configured.
All traffic during an experiment is logged with tcpdump,
and TCP state information is logged with different OS-specific tools.
The tool tcpdump is used to capture the traffic
on all testbed NICs on all hosts including the router.
Different tools are used to log
TCP state information on all hosts except the router.
On FreeBSD SIFTR
is used. On Linux Web10G is used, which implements the TCP EStats MIB (RFC 4898)
inside the Linux kernel. We implemented our
own logging tool based on the Web10G library, which is now also in the official Web10G code distribution.
For Windows 7 we implemented our own logging tool, which
can access the TCP EStats MIB inside the Windows 7
kernel and logs to a Web10G-compatible format. For Mac OS X we implemented our own logging tool
based on DTRACE, which logs in SIFTR-compatible format.
The statistics collected by SIFTR are described
in the SIFTR README. The statistics collected by
our Web10G client and the Windows 7 EStats logger
are identical (based on the Web10G statistics) and are
described as part of the web100 (the predecessor of Web10G) documentation.
TEACUP also logs the output of traffic generators.
In addition it collects per-host information.
The following information is gathered before an experiment is started:
- Output of ifconfig (FreeBSD/Linux/Mac) or ipconfig (Windows)
- Output of uname -a
- Information about routing obtained with netstat -r
- Information about the NTP status based on ntpq -p
- List of all running processes (output of ps)
- Output of sysctl -a (FreeBSD/Linux/Mac) and various information for Windows
- Information about all of TEACUP's V_ parameters in config.py
- Information on the TCP congestion control algorithm used on each host, and any TCP parameter settings specified
- TCP congestion control kernel module parameter settings (Linux only)
- Network interface configuration information provided by ethtool (Linux only)
The following information is collected after an experiment has finished:
- Information about the router queue setup (including all queue discipline parameters) and router queue and filtering
statistics based on the output of tc (Linux) or ipfw (FreeBSD)
Log File Naming
All log files generated by TEACUP adhere to the following naming scheme:

<test_ID_pfx>_[<par_name>_<par_val>_]*<host>_[<traffgen_ID>_]<file_name>.<extension>[.gz]
The test ID prefix <test_ID_pfx> is the start of
the file name and either specified in the config file
(TPCONF_test_id) or on the command line.
The [<par_name>_<par_val>_]* part consists of zero to n
parameter names and parameter values (separated by
underscores). Parameter names (<par_name>) must not
contain underscores by definition, and all underscores
in parameter values (<par_val>) are changed to hyphens
(this allows later parsing of the names and values
using the underscores as separators). An experiment
may have zero parameter names and values. If an experiment was
started with run_experiment_multiple there are as
many parameters names and values as specified in
TPCONF_vary_parameters. We also refer to the part
of the file name before the <host> as the test ID.
The <host> part specifies the IP or name of the testbed
host a log file was collected from. This corresponds to
an entry in TPCONF_router or TPCONF_hosts.
If the log file is from a traffic generator specified
in TPCONF_traffic_gens, the traffic generator number
follows the host identifier ([<traffgen_ID>]). Otherwise,
<traffgen_ID> does not exist.
The <file_name> depends on the process which
logged the data; for example, it is set to ‘uname’ for the uname
information collected, it is set to ‘httperf_dash’ for
an httperf client emulating DASH, or it is set to ‘web10g’
for a Web10G log file. tcpdump files are special in that
they have an empty file name for tcpdumps collected on
hosts (assuming they only have one testbed NIC), or the
file name is <int_name>_router for tcpdumps collected
on the router (where <int_name> is the name of the NIC).
The <extension> is either ‘dmp’ indicating a tcpdump
file or ‘log’ for all other log files. All log files are usually
gzip’d, hence their file names end with ‘.gz’.
The following are example names for a tcpdump file
collected on host testhost2 for an experiment where two
parameters (dash, tcp) were varied, and for the output of one httperf traffic generator
(traffic generator number 3) executed on host testhost2
for the same experiment:

20131206-102931_dash_2000_tcp_newreno_testhost2.dmp.gz
20131206-102931_dash_2000_tcp_newreno_testhost2_3_httperf_dash.log.gz
All log files for one experiment or a series of experiments
are stored under a sub directory named test_ID_pfx created inside the
directory where fabfile.py is located.
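Since underscores act as separators and parameter values never contain them, log file names can be decomposed mechanically. The following sketch (illustrative only, not TEACUP code) parses a name according to the scheme above:

```python
# Illustrative sketch (not TEACUP code): parsing a TEACUP log file name
# using underscores as separators, as described above. Parameter names
# contain no underscores, and underscores in parameter values are
# changed to hyphens, so the split is unambiguous.

def parse_log_name(name, hosts):
    """Split a log file name into test ID, host and remainder.
    `hosts` is the set of host names from TPCONF_router/TPCONF_hosts."""
    if name.endswith(".gz"):
        name = name[:-3]
    stem, _, extension = name.rpartition(".")
    parts = stem.split("_")
    # The host name marks the end of the test ID
    for i, p in enumerate(parts):
        if p in hosts:
            return {
                "test_id": "_".join(parts[:i]),
                "host": p,
                "rest": "_".join(parts[i + 1:]),
                "extension": extension,
            }
    raise ValueError("no known host in file name")

info = parse_log_name(
    "20131206-102931_dash_2000_tcp_newreno_testhost2_3_httperf_dash.log.gz",
    hosts={"testhost1", "testhost2"},
)
```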
The setup of hosts other than the router is relatively
straight-forward. First each host is booted into the
selected OS. Then hardware offloading, such as TCP
segmentation offloading (TSO), is disabled on testbed
interfaces (all OS), the TCP host cache is disabled
(Linux) or configured with a very short timeout and
purged (FreeBSD), and TCP receive and send buffers
are set to 2MB or more (FreeBSD, Linux).
Next ECN is enabled or disabled depending on the configuration.
Then the TCP congestion control algorithm
is configured for FreeBSD and Linux (including loading
any necessary kernel modules). Then the parameters
for the current TCP congestion control algorithm are
configured if specified by the user (FreeBSD, Linux).
Finally, custom user-specified commands are executed on
hosts as specified in the configuration (these can overrule
the general setup).
The router setup differs between FreeBSD (where ipfw
and Dummynet is used) and Linux (where tc and netem
is used). Here we only describe the setup for Linux.
We use the term pipe to refer to the virtual
pipe an incoming packet traverses before it leaves the router
on the other interface. The pipe does the queuing, delay and loss emulation.
First, hardware offloading, such as TCP segmentation offloading
(TSO) is disabled on the two testbed interfaces.
Then the queuing is configured. We use the following approach. Shaping,
AQM and delay/loss emulation is done on the egress NIC
(as usual). The hierarchical token bucket (HTB) queuing
discipline is used for rate limiting with the desired AQM
queuing discipline (e.g. pfifo, codel) as leaf node.
After rate shaping and AQM, constant loss and delay are emulated with netem.
For each pipe we set up a new tc class on the two
testbed NICs of the router. If pipes are unidirectional a
class is only used on one of the two interfaces. Otherwise
it is used on both interfaces. In future work we could
optimise the unidirectional case and omit the creation of the unused class.
The traffic flow is as follows:
- Arriving packets are marked at the netfilter mangle
table’s POSTROUTING hook depending on source
and destination IP address, with a unique mark for each pipe.
- Marked packets are classified into the appropriate
class based on the mark (a one-to-one mapping
between marks and classes) and redirected to a
pseudo interface. By pseudo interface we refer to
the so-called intermediate function block (IFB) device.
- The traffic control rules on the pseudo interface do
the shaping with HTB (bandwidth as per config)
and the chosen AQM (as a leaf queuing discipline).
- Packets go back to actual outgoing interface.
- The traffic control rules on the actual interface
do network delay/loss emulation with netem. We
still need classes here to allow for pipe specific
delay/loss settings. Hence we use a HTB again, but
with the bandwidth set to the maximum possible
rate (so there is effectively no rate shaping or AQM
here) and netem plus pfifo are used as leaf queuing disciplines.
- Packets leave the router via the stack / network card
The main reason for this setup with pseudo interfaces is
to cleanly separate the rate limiting and AQM from the
netem delay/loss emulation. One could combine both on
one interface, but then there are certain limitations, such
as netem having to be placed before the AQM. Also, a big
advantage is that with our setup it is possible to emulate
different delay or loss for different flows that share the same bottleneck.
The following figure shows the packet flow assuming in the upstream
direction our outgoing interface is eth3 and in the downstream direction
our outgoing interface is eth2.
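The per-pipe traffic flow above can be outlined as the following command sequence (an illustrative sketch only, with hypothetical interface names, mark/class IDs and parameter values; the rules TEACUP actually generates are more involved):

```python
# Illustrative outline (not TEACUP code) of the Linux router setup
# described above. Interface names, IDs and values are hypothetical.
mark, rate, delay, aqm = 1, "10mbit", "50ms", "codel"

commands = [
    # 1. Mark packets for this pipe at the POSTROUTING hook
    f"iptables -t mangle -A POSTROUTING -o eth3 "
    f"-s 172.16.10.0/24 -d 172.16.11.0/24 -j MARK --set-mark {mark}",
    # 2. Redirect marked packets to the pseudo (IFB) interface
    "tc filter add dev eth3 parent 1: protocol ip handle 1 fw "
    "action mirred egress redirect dev ifb0",
    # 3. Shaping and AQM on the IFB interface: HTB class with the
    #    chosen AQM as leaf queuing discipline
    f"tc class add dev ifb0 parent 1: classid 1:{mark} htb rate {rate}",
    f"tc qdisc add dev ifb0 parent 1:{mark} {aqm}",
    # 4. Delay/loss emulation with netem on the actual egress interface
    #    (HTB rate set to the maximum, so no shaping happens here)
    f"tc class add dev eth3 parent 1: classid 1:{mark} htb rate 1000mbit",
    f"tc qdisc add dev eth3 parent 1:{mark} netem delay {delay}",
]
```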
The TEACUP configuration file is where one defines the parameters of
an experiment or a series of experiments. Configuration files are simply
Python files that define a number of TPCONF_ parameters, so their syntax
must comply with the syntax of Python files. On the other hand, this makes it
possible to use any Python commands or constructs in configuration files.
Here we will only discuss the main sections of a config file. Please refer
to the tech report
to see an explanation of all the parameters. We also discuss the most commonly
used parameters as part of the example usage scenarios.
To iterate over parameter settings for each experiment
TEACUP uses V_variables. These are identifiers
of the form V_<name>, where <name> must consist
of only letters, numbers, hyphens (-) or underscores
(_). V_variables can be used in router queue settings,
traffic generator settings, TCP algorithm settings or
host setup commands. The tech report
describes how to define new V_variables.
TEACUP configuration files have multiple sections:
- Fabric env options (see Fabric documentation);
- Testbed host definitions;
- General experiment settings;
- Router queue configuration;
- Traffic generator configuration;
- Parameter ranges;
- Parameters to vary in a series of experiments.
Unless public key authentication is configured one must define
the user name and password for the SSH sessions used to control the
remote hosts as Fabric options.
The testbed host definition part must specify the router host and the list
of hosts used as traffic sources and sinks. For each host it must also list
the IP addresses of the testbed network interfaces.
As part of the general settings one must define the prefix used for all log files,
which we also refer to as the test ID prefix. Furthermore, there are a variety of
options that control different behaviour, for example one can specify the directory
on the remote hosts where the log files are created.
The router queue configuration needs to specify all router queues, their bandwidth,
emulated delay and loss rate, AQM technique used, and queue size. Most importantly
one must specify the source and destination IP addresses or subnets of the traffic
that should traverse the queue.
In the traffic generator section one must define all the traffic generators used, the
times the traffic generators are started, and the parameters to be used for the generators.
To be able to iterate over different values of a parameter one must define a set
of parameters values for each variable and this set must be mapped to a V_variable.
Also one must define all constant parameters. While there are some default parameters,
for example for network delay, packet loss rate, bandwidth, TCP algorithm used and
so on, the experimenter can define new parameters and V_variables as required.
Finally, one must specify which parameters exactly are varied in a series of experiments
and which are kept constant.
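As an illustration, a minimal config.py might look like the following sketch (host names, IP addresses and values are hypothetical, and a real configuration requires further TPCONF_ settings; see the tech report for the full list and exact semantics):

```python
# Illustrative config.py sketch (not a complete TEACUP configuration;
# all host names, addresses and values here are hypothetical).

# Testbed host definitions
TPCONF_router = ['testhost1', ]
TPCONF_hosts = ['testhost2', 'testhost3', ]
TPCONF_host_internal_ip = {
    'testhost1': ['172.16.10.1', '172.16.11.1'],  # router, two subnets
    'testhost2': ['172.16.10.2'],
    'testhost3': ['172.16.11.2'],
}

# General experiment settings
TPCONF_test_id = '20131206-102931_experiment'  # default test ID prefix
TPCONF_duration = 30                           # experiment duration (s)

# Parameter ranges: each entry maps a name used in
# TPCONF_vary_parameters to a V_variable and its list of values
TPCONF_delays = [25, 50, 100]            # emulated delay (ms)
TPCONF_TCP_algos = ['newreno', 'cubic']  # congestion control algorithms
TPCONF_parameter_list = {
    'delays':   [['V_delay'], ['del'], TPCONF_delays, {}],
    'tcpalgos': [['V_tcp_cc_algo'], ['tcp'], TPCONF_TCP_algos, {}],
}
TPCONF_variable_defaults = {
    'V_delay': TPCONF_delays[0],
    'V_tcp_cc_algo': 'newreno',
}

# Parameters varied in this series of experiments (3 x 2 = 6 tests)
TPCONF_vary_parameters = ['delays', 'tcpalgos']
```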
First you should create a new directory for the experiment
or series of experiments. Copy the files fabfile.py
and run.sh (and run_resume.sh) from the TEACUP distribution
into that new directory. Then create a config.py
file in the directory (e.g. start with the provided example config.py as a basis
and modify it as necessary).
There are two TEACUP tasks to start experiments:
run_experiment_single and run_experiment_multiple.
To run a single experiment with the default test ID prefix TPCONF_test_id, type:
> fab run_experiment_single
To run a series of experiments based on the TPCONF_vary_parameters
settings with the default test ID
prefix TPCONF_test_id, type:
> fab run_experiment_multiple
In both cases the Fabric log output will be printed out
on the current terminal and it can be redirected with the
usual means. The default test ID prefix TPCONF_test_id
is specified in the config file. However, the test ID prefix
can also be specified on the command line (overruling
the config setting).
For convenience a shell script run.sh exists. The shell
script logs the Fabric output in a <test_ID_prefix>.log
file inside the <test_ID_prefix> sub directory.
The shell script generates a test ID prefix and then
executes the command:
> fab run_experiment_multiple:test_id=<test_ID_pfx> > <test_ID_pfx>.log 2>&1
The test ID prefix is set to `date +"%Y%m%d-%H%M%S"`_experiment. The output
is unbuffered, so one can use tail -f on the log file
and get timely output. The fabfile to be used can be
specified, i.e. to use the fabfile myfabfile.py instead
of fabfile.py run:
> run.sh myfabfile.py
The run_experiment_single and run_experiment_multiple tasks keep track of
experiments using two files in the current directory:
- The file experiments_started.txt logs the test
IDs of all experiments started.
- The file experiments_completed.txt logs the
test IDs of all experiments successfully completed.
Note that both of these files are never reset by TEACUP.
New test IDs are simply appended to the existing files if
they already exist. It is the user’s responsibility to delete
the files in order to reset the list of experiments.
It is possible to resume an interrupted series of
experiments started with run_experiment_multiple
with the resume parameter (see tech report). All experiments of
the series that were not completed (i.e. not logged
in experiments_completed.txt) are run again.
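The resume logic can be pictured as a simple set difference over these two files (an illustrative sketch, not TEACUP's actual implementation):

```python
# Illustrative sketch (not TEACUP code): determining which experiments
# a resumed run needs to repeat, based on the two files described above.

def to_resume(started_lines, completed_lines):
    """Return test IDs that were started but never completed,
    preserving the original start order."""
    completed = set(completed_lines)
    return [tid for tid in started_lines if tid not in completed]

started = [
    "20131206-102931_dash_2000_tcp_newreno",
    "20131206-102931_dash_2000_tcp_cubic",
]
completed = ["20131206-102931_dash_2000_tcp_newreno"]
# Only the interrupted cubic experiment is run again
redo = to_resume(started, completed)
```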
Analysing Experiment Data
TEACUP provides a number of tasks to analyse the data of an
experiment or a series of experiments. Here we only describe
the basic analysis functions for plotting time series.
Currently analysis functions exist for:
1. Plotting the throughput including all header bytes (based on tcpdump data);
2. Plotting the Round Trip Time (RTT) using SPP (based on tcpdump data);
3. Plotting the TCP congestion window size (CWND) (based on SIFTR and Web10G data);
4. Plotting the TCP RTT estimate (based on SIFTR and Web10G data). The function can
plot both the smoothed estimate and an unsmoothed estimate (for
SIFTR the unsmoothed estimate is the improved ERTT estimate);
5. Plotting an arbitrary TCP statistic from SIFTR and Web10G data.
A convenience function exists that plots graphs 1--4 listed
above. The easiest way to generate all graphs for all
experiments is to run the following command in the
directory containing the sub directories with experiment data:
> fab analyse_all
This command will generate results for all experiments
listed in the file experiments_completed.txt. By default
the TCP RTT graphs generated are for the smoothed
RTT estimates, which in the case of SIFTR are not the ERTT
estimates (if the smoothed parameter is set to ‘0’, unsmoothed
estimates are plotted, which in the case of SIFTR
are the ERTT estimates). The analysis can be run for a
single experiment only by specifying a test ID. The following
command generates all graphs for the experiment with the given test ID:
> fab analyse_all:test_id=20131206-102931_dash_2000_tcp_newreno
One can specify a list of test
IDs with the test_id parameter. The test IDs must be
separated by semicolons. (If only one test ID is specified,
no trailing semicolon is needed.) If multiple IDs are
specified the graphs will be created in the sub directory
of the test ID specified first. If multiple experiments
are plotted on the same graph(s) the file name(s) will
be the first test ID specified followed by the string
"_comparison" to distinguish from graphs where only
one experiment is plotted.
To generate a particular graph for a particular experiment
one can use the specific analysis function (analyse_throughput,
analyse_spp_rtt, analyse_cwnd, analyse_tcp_rtt) together with a (list of) test ID's) (specifying
the test ID(s) is mandatory in this case). For example,
the following command only generates the TCP RTT
graph for the non-smoothed estimates:
> fab analyse_tcp_rtt:test_id=20131206-102931_dash_2000_tcp_newreno,smoothed=0
Note, the smoothed parameter can also be used with
analyse_all. The following command only generates the throughput graph:
> fab analyse_throughput:test_id=20131206-102931_dash_2000_tcp_newreno
The analyse_tcp_stat function can be used to plot any
TCP statistic from SIFTR or Web10G logs. For example,
we can plot the number of kilobytes in the send buffer
at any given time with a command of the following form (the actual
column indexes must be chosen as described below):

> fab analyse_tcp_stat:test_id=<test_id>,siftr_index=<n>,web10g_index=<m>,ylabel="Snd buf (kbytes)",yscaler=0.001
The siftr_index defines the index of the column
of the statistic to plot for SIFTR log files. The
web10g_index defines the index of the column of the
statistic to plot for Web10G log files. If one has only
SIFTR or only Web10G log files the other index does
not need to be specified. But for experiments with SIFTR
and Web10G log files both indexes must be specified.
By default both indexes are set to plot CWND. The lists
of available statistics (including the column numbers)
are in the SIFTR README and the Web10G documentation.
Analysis tasks have a number of parameters that can control the
plot behaviour. These are described in the tech report.
The analysis functions also have a rudimentary filter mechanism.
Note, that this mechanism filters only what is
plotted, but not what data is extracted from the log files.
The source_filter parameter indicates the flows to be
used to generate data series for plotting. Flows may be
specified using combinations of patterns matching source
and/or destination IP address and port numbers. The filter
string is a semicolon-separated list of entries of the form
(S|D)_<ip>_<port>, where S matches on the flow source and
D on the flow destination.
The following command only plots data for flows from
host 172.16.10.2 port 80:
> fab analyse_all:source_filter="S_172.16.10.2_80"
Note, that the notion of flow here is unidirectional. Thus
in the above example flows from 172.16.10.2 port 80 are
shown, but flows to 172.16.10.2 port 80 are not shown.
We can only select flows to host 172.16.10.2 port
80 by specifying:
> fab analyse_all:source_filter="D_172.16.10.2_80"
As a side effect, the specified filter string also determines
the order of the flows in the graph(s). Flows are plotted
in the order of the filters specified. For example, if there
are two flows, one from host 172.16.10.2 port 80 and
another from host 172.16.10.2 port 81 by default the
port 80 flow would be the first data series and the port
81 flow would be the second data series. One can reverse
the two flows in the graphs by specifying:
> fab analyse_all:source_filter="S_172.16.10.2_81;S_172.16.10.2_80"
Instead of an actual port number one can specify the wildcard
character (’*’). This allows filtering on a specific
source or destination with any port number.
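The filter semantics above can be sketched as follows (illustrative only, not TEACUP code; addresses and ports are made up):

```python
# Illustrative sketch (not TEACUP code) of source_filter matching:
# flows are unidirectional, 'S_' matches on the flow source, 'D_' on
# the flow destination, and '*' is a port wildcard.

def flow_matches(filter_str, src_ip, src_port, dst_ip, dst_port):
    for f in filter_str.split(';'):
        direction, ip, port = f.split('_')
        if direction == 'S':
            fip, fport = src_ip, src_port
        else:  # 'D'
            fip, fport = dst_ip, dst_port
        if ip == fip and (port == '*' or port == str(fport)):
            return True
    return False

# A flow from 172.16.10.2:80 matches an S_ filter on that address/port...
m1 = flow_matches("S_172.16.10.2_80", "172.16.10.2", 80, "172.16.11.2", 12345)
# ...but the reverse direction does not
m2 = flow_matches("S_172.16.10.2_80", "172.16.11.2", 12345, "172.16.10.2", 80)
```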
TEACUP provides a number of utility functions
available as Fabric tasks. Here we only describe some of them; for
a full list please refer to the tech report.
The exec_cmd task can be used to execute one command
on multiple hosts. For example, the following
command executes uname -s on a number of hosts:
> fab -H testhost1,testhost2,testhost3 exec_cmd:cmd="uname -s"
If no hosts are specified on the command line, the
exec_cmd command is executed on all hosts listed in
the config file (the union set of TPCONF_router and
TPCONF_hosts). For example, the following command
is executed on all testbed hosts:
> fab exec_cmd:cmd="uname -s"
The copy_file task can be used to copy a local file to
a number of testbed hosts. For example, the following
command copies the web10g-logger executable to all
testbed hosts except the router (this assumes all the hosts
run Linux when the command is executed):
> fab -H testhost2,testhost3 copy_file:file_name=/usr/bin/web10g-logger,remote_path=/usr/bin
If no hosts are specified on the command line, the
command is executed for all hosts listed in the config file
(the union set of TPCONF_router and TPCONF_hosts).
For example, the following command copies the file to
all testbed hosts:
> fab copy_file:file_name=/usr/bin/web10g-logger,remote_path=/usr/bin
The parameter method controls the method used for
copying. By default (method=’put’) copy_file will use
the Fabric put function to copy the file. However,
the Fabric put function is slow. For large files setting
method=’scp’ provides much better performance using
the scp command. While scp is faster, it may prompt
for a password if public key authentication is not configured.
The init_os task can be used to reboot hosts into specific
operating systems (OSs). For example, the following
command reboots the hosts testhost1 and testhost2 into
the OSs Linux and FreeBSD respectively:
> fab -H testhost1,testhost2 init_os:os_list="Linux\,FreeBSD",force_reboot=1
Note that the commas in os_list need to be escaped
with backslashes (\), since otherwise Fabric interprets
the commas as parameter delimiters. By default
force_reboot is 0, which means hosts that are already
running the desired OS are not rebooted. Setting
force_reboot to 1 enforces a reboot. By default the
script waits 100 seconds for a host to reboot. If the
host is not responsive after this time, the script will give
up unless the do_power_cycle parameter is set to 1.
The check_host command can be used to check if the
required software is installed on the hosts. The task only
checks for the presence of necessary tools, but it does
not check if the tools actually work. For example, the
following command checks all testbed hosts:
> fab -H testhost1,testhost2,testhost3 check_host
The check_connectivity task can be used to check connectivity
between testbed hosts with ping. This task
only checks the connectivity of the internal testbed
network, not the reachability of hosts on their control
interface. For example, the following command checks
whether each host can reach each other host across the testbed network:
> fab -H testhost1,testhost2,testhost3 check_connectivity