
TCP Experiment Automation Controlled Using Python (TEACUP) -- Usage Examples

USAGE EXAMPLES

Here we provide a few example scenarios of how TEACUP can be used. Each scenario involves traffic senders and receivers distributed across two subnets connected by a bottleneck router (as TEACUP currently only supports a single router topology). All hosts and the router are time synchronised using NTP.

  • Scenario 1: We have two TCP flows with data flowing from a source in subnet A to a destination in subnet B. We emulate different delays on the router. In this scenario the rebooting functionality of TEACUP is not used, which means the experimenter has to boot all three machines into the desired OS before the experiment is started.
  • Scenario 2: As with scenario 1, but TEACUP also automatically boots the desired OS.
  • Scenario 3: As with scenario 2, but TEACUP uses a power controller to power cycle hosts if they do not respond after the reboot.
  • Scenario 4: As with scenario 1, but each TCP flow is between a different sender/receiver pair and a different network path delay is emulated for each flow (both flows still go through the same bottleneck).
  • Scenario 5: We have three staggered bulk transfer flows going through the router, each between a different sender/receiver pair. We use a different emulated delay for each sender/receiver pair and also vary the bandwidths. We now also vary the TCP congestion control algorithm.
  • Scenario 6: We have three hosts plus the router. One host in subnet A acts as a web server. The two other hosts in subnet B act as clients that both use DASH-like video streaming over HTTP. We emulate different network delays and AQM mechanisms.
  • Scenario 7: Investigating the incast problem. One host in subnet A queries 10 responders in subnet B. Again, we emulate different network delays, AQM mechanisms and TCP algorithms. We also vary the size of the responses.
(The technical report CAIA Testbed for TEACUP Experiments Version 2 describes the specific testbed we used in-house while developing TEACUP, and may provide additional context for the examples below.)

Scenario 1: Two TCP flows from data sender to receiver

Topology

In this scenario we have two hosts: newtcp20 connected to the 172.16.10.0/24 network and newtcp27 connected to the 172.16.11.0/24 network. The machine newtcprt3 connects the two experiment subnets. All three machines also have a second network interface that is used to control the experiment via TEACUP. newtcp20 and newtcp27 must run Linux, FreeBSD, Windows 7 or Mac OS X and newtcprt3 must run FreeBSD or Linux. newtcp20 and newtcp27 must have the traffic generator and logging tools installed as described in the tech report CAIA Testbed for TEACUP Experiments Version 2. However, PXE booting or a multi-OS installation is not needed for this scenario.

Test Traffic

Two TCP bulk transfer flows are created using iperf.

Variable Parameters

We emulate three different delays and two different bandwidth settings, giving six experiments in total. We also define a variable parameter for the AQM, but specify only one AQM (the default pfifo). This causes the AQM used to be logged as part of the experiment ID.

TEACUP Config File

You can download the configuration file from here. To use the configuration rename it to config.py. In the following we explain the configuration file.

Note that TEACUP configuration files can be split across multiple files (using Python's execfile()). This makes it possible to split a large config file into several smaller files for better readability. It also allows parts of a configuration to be included in multiple different config files, so that the parts that do not change across experiments can be reused. An example of this is this config file, which is functionally identical to the above config file, except that the main config file includes separate files specifying the testbed machines, the router setup and the traffic generated.
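
For instance, a minimal split layout could look like the following sketch (the include file names here are illustrative, not part of the TEACUP distribution):


# config.py -- top-level config that pulls in the shared parts
execfile('config-hosts.py')    # testbed machines (TPCONF_router, TPCONF_hosts, ...)
execfile('config-router.py')   # router queue setup (TPCONF_router_queues)
execfile('config-traffic.py')  # traffic generators (TPCONF_traffic_gens)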

At the top of the configuration file we need to import the Fabric env structure, as well as any other Python functionality used. First we configure the user and password used by Fabric via Fabric's env parameters. A password is not required if public key authentication is set up. We also need to tell Fabric how to execute commands on the remote machines; TEACUP uses /bin/sh by default.


# User and password
env.user = 'root'
env.password = 'rootpw'

# Set shell used to execute commands
env.shell = '/bin/sh -c'	

The next part of the config file defines the path to the TEACUP scripts and the testbed hosts. TPCONF_router is used to define the router and TPCONF_hosts is used to define the list of hosts. For each host and the router the testbed network interfaces need to be defined with TPCONF_host_internal_ip. The router obviously has two testbed network interfaces, whereas the hosts have only one.


# Path to teacup scripts
TPCONF_script_path = '/home/teacup/teacup-0.8'
# DO NOT remove the following line
sys.path.append(TPCONF_script_path)

# Set debugging level (0 = no debugging info output) 
TPCONF_debug_level = 0

# Host lists
TPCONF_router = ['newtcprt3', ]
TPCONF_hosts = [ 'newtcp20', 'newtcp27', ]

# Map external IPs to internal IPs
TPCONF_host_internal_ip = {
    'newtcprt3': ['172.16.10.1', '172.16.11.1'],
    'newtcp20':  ['172.16.10.60'],
    'newtcp27':  ['172.16.11.67'],
}

In the next part of the configuration file we need to define some general experiment settings. TPCONF_max_time_diff specifies the maximum clock offset in seconds allowed. This is a very coarse threshold as the offset estimation performed by TEACUP is not very accurate. Currently, TEACUP simply attempts to detect whether the synchronisation is very bad, and if so it aborts; it does not try to enforce high accuracy for the time synchronisation. TPCONF_test_id defines the name prefix for the output files for an experiment or a series of experiments. TPCONF_remote_dir specifies where on the machines the log files are stored until they are moved to the host running TEACUP after each experiment.


# Maximum allowed time difference between machines in seconds,
# otherwise the experiment will abort because of synchronisation problems
TPCONF_max_time_diff = 1

# Experiment name prefix used if not set on the command line
# The command line setting will overrule this config setting
now = datetime.datetime.today()
TPCONF_test_id = now.strftime("%Y%m%d-%H%M%S") + 'experiment'

# Directory to store log files on remote host
TPCONF_remote_dir = '/tmp/'

Then we define the router queues/pipes using TPCONF_router_queues. Each entry of this list is a tuple. The first value is the queue number and the second value is a comma-separated list of parameters. The queue numbers must be unique. Note that parameter values must be either constants or variable names defined by the experimenter. Variables are evaluated at runtime. Variable names must start with 'V_'. Parameter names can only contain numbers, letters (upper and lower case), underscores (_), and hyphens/minus (-). All V_ variables must be defined in TPCONF_parameter_list or TPCONF_variable_defaults (see below).


TPCONF_router_queues = [
    # Set same delay for every host
    ('1', " source='172.16.10.0/24', dest='172.16.11.0/24', delay=V_delay, "
     " loss=V_loss, rate=V_up_rate, queue_disc=V_aqm, queue_size=V_bsize "),
    ('2', " source='172.16.11.0/24', dest='172.16.10.0/24', delay=V_delay, "
     " loss=V_loss, rate=V_down_rate, queue_disc=V_aqm, queue_size=V_bsize "),
]

Next we need to define the traffic generated during the experiments. TEACUP implements a number of traffic generators; in this example we use iperf to generate TCP bulk transfer flows. TPCONF_traffic_gens defines the traffic generators, but here we use a temporary variable as well, which makes it easy to keep multiple traffic generator definitions in the file and switch between them by changing the variable assigned to TPCONF_traffic_gens.

Each entry in TPCONF_traffic_gens is a 3-tuple. The first value of the tuple must be a float and is the time, relative to the start of the experiment, at which the task is executed. If two tasks have the same start time their start order is arbitrary. The second value of the tuple is the task number and must be a unique integer (used as an ID for the process). The last value of the tuple is a comma-separated list of parameters; the first parameter of this list must be the task name. The TEACUP manual lists the task names and their possible parameters. Client and server can be specified using the external/control IP addresses or host names; in this case the actual interface used is the _first_ internal address (according to TPCONF_host_internal_ip). Alternatively, client and server can be specified as internal addresses, which allows any of the configured internal interfaces to be used.


traffic_iperf = [
    ('0.0', '1', " start_iperf, client='newtcp27', server='newtcp20', port=5000, "
     " duration=V_duration "),
    ('0.0', '2', " start_iperf, client='newtcp27', server='newtcp20', port=5001, "
     " duration=V_duration "),
    # Or using internal addresses
    #( '0.0', '1', " start_iperf, client='172.16.11.67', server='172.16.10.60', "
    #              " port=5000, duration=V_duration " ),
    #( '0.0', '2', " start_iperf, client='172.16.11.67', server='172.16.10.60', "
    #              " port=5001, duration=V_duration " ),
]

# THIS is the traffic generator setup we will use
TPCONF_traffic_gens = traffic_iperf

Next, we define all the parameter values used in the experiments. TPCONF_duration defines the duration of the traffic. TPCONF_runs specifies the number of runs carried out for each unique combination of parameters. TPCONF_TCP_algos specifies the congestion control algorithms; here we only use Newreno.

In this simple case all hosts use the same TCP congestion control algorithm, but TEACUP allows per-host algorithms to be specified with TPCONF_host_TCP_algos. Parameter settings for TCP congestion control algorithms can be specified with TPCONF_host_TCP_algo_params (assuming the parameters can be controlled with sysctl), but here we do not make use of that and simply use the default settings.
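
As an illustration only, a per-host setup might look like the following sketch. We assume here that TPCONF_host_TCP_algos maps a host name to a list of algorithms; consult the TEACUP manual for the authoritative syntax, as the host-to-algorithm assignments below are placeholders.


# Hypothetical sketch of per-host congestion control (not used in this
# scenario); the exact syntax is documented in the TEACUP manual
TPCONF_host_TCP_algos = {
    'newtcp20': ['newreno', ],
    'newtcp27': ['cubic', ],
}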

TPCONF_delays specifies the delay values (the delay in each direction), TPCONF_loss_rates specifies the possible packet loss rates and TPCONF_bandwidths specifies the emulated bandwidths (in the downstream and upstream directions). TPCONF_aqms specifies the AQM mechanism to use (here pfifo, which is the default). TPCONF_buffer_sizes specifies the size of the queue. This is normally specified in packets, but the unit depends on the type of AQM; for example, if bfifo were used the size would need to be specified in bytes.


# Duration in seconds of traffic
TPCONF_duration = 30

# Number of runs for each setting
TPCONF_runs = 1

# TCP congestion control algorithm used
TPCONF_TCP_algos = ['newreno', ]

# Emulated delays in ms
TPCONF_delays = [0, 25, 50]

# Emulated loss rates
TPCONF_loss_rates = [0]

# Emulated bandwidths (downstream, upstream)
TPCONF_bandwidths = [
    ('8mbit', '1mbit'),
    ('20mbit', '1.4mbit'),
]

# AQM
TPCONF_aqms = ['pfifo', ]

# Buffer size
TPCONF_buffer_sizes = [100]

Finally, we need to specify which parameters will be varied and which will be fixed for a series of experiments. We also need to define how our parameter ranges above map to the V_ variables used in the queue setup and traffic generators, and to the log file names generated by TEACUP.

TPCONF_parameter_list is a map of the parameters we potentially want to vary. The key of each item is the identifier that can be used in TPCONF_vary_parameters (see below). The value of each item is a 4-tuple containing the following. First, a list of variable names. Second, a list of short names used in the file names: for each parameter varied, a string '_<short_name>_<value>' is appended to the log file names (after the chosen prefix). Note that short names should only contain letters from a-z or A-Z; do not use underscores or hyphens! Third, the list of parameter values; if there is more than one variable this must be a list of tuples, each tuple having the same number of items as the number of variables. Fourth, an optional dictionary with additional variables, where the keys are the variable names and the values are the variable values.
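
For illustration, a hypothetical entry that varies two variables together and sets an additional fixed variable could look as follows (all names in this sketch are made up for the example):


TPCONF_parameter_list = {
    # Hypothetical entry: vary V_var1 and V_var2 together (short names 'va'
    # and 'vb'), and set V_extra to 'foo' whenever 'example' is varied
    'example' : ( ['V_var1', 'V_var2'], ['va', 'vb'],
                  [(1, 2), (3, 4)], {'V_extra': 'foo'} ),
}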

The parameters that are actually varied are specified with TPCONF_vary_parameters. Only parameters listed in TPCONF_vary_parameters will appear in the names of TEACUP output files. It can therefore make sense to list a parameter in TPCONF_vary_parameters even if it only has a single value, because only then will the parameter's short name and value be part of the file names.
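
For example, with the configuration below, the name of the experiment with a 25 ms delay, 8mbit/1mbit bandwidths, pfifo and the first run would look roughly like the following (a hedged illustration; the exact prefix depends on TPCONF_test_id):


20150617-120000experiment_del_25_down_8mbit_up_1mbit_aqm_pfifo_run_0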

TPCONF_variable_defaults specifies the default value for each variable, which is used for variables that are not varied. The key of each item is the parameter name. The value of each item is the default parameter value used if the variable is not varied.


TPCONF_parameter_list = {
#   Vary name		V_ variable	  file name	values			extra vars
    'delays' 	    :  (['V_delay'], 	  ['del'], 	TPCONF_delays, 		 {}),
    'loss'  	    :  (['V_loss'], 	  ['loss'], 	TPCONF_loss_rates, 	 {}),
    'tcpalgos' 	    :  (['V_tcp_cc_algo'],['tcp'], 	TPCONF_TCP_algos, 	 {}),
    'aqms'	    :  (['V_aqm'], 	  ['aqm'], 	TPCONF_aqms, 		 {}),
    'bsizes'	    :  (['V_bsize'], 	  ['bs'], 	TPCONF_buffer_sizes, 	 {}),
    'runs'	    :  (['V_runs'],       ['run'], 	range(TPCONF_runs), 	 {}),
    'bandwidths'    :  (['V_down_rate', 'V_up_rate'], ['down', 'up'], TPCONF_bandwidths, {}),
}

TPCONF_variable_defaults = {
#   V_ variable			value
    'V_duration'  	:	TPCONF_duration,
    'V_delay'  		:	TPCONF_delays[0],
    'V_loss'   		:	TPCONF_loss_rates[0],
    'V_tcp_cc_algo' 	:	TPCONF_TCP_algos[0],
    'V_down_rate'   	:	TPCONF_bandwidths[0][0],
    'V_up_rate'	    	:	TPCONF_bandwidths[0][1],
    'V_aqm'	    	:	TPCONF_aqms[0],
    'V_bsize'	    	:	TPCONF_buffer_sizes[0],
}

# Specify the parameters we vary through all values, all others will be fixed
# according to TPCONF_variable_defaults
TPCONF_vary_parameters = ['delays', 'bandwidths', 'aqms', 'runs',]

Running Experiment

Create a directory with the above config.py and copy fabfile.py and run.sh from the TEACUP code distribution into this directory. To run the series of experiments with all parameter combinations, go into the experiment directory (the one containing fabfile.py and run.sh) and execute:


./run.sh

This will create a subdirectory with the name of the test ID prefix and store all output files in this subdirectory.

Analysing Results

TEACUP will create a file named <test_id_prefix>.log in the experiment data subdirectory. It will also create the files experiments_started.txt and experiments_completed.txt in the parent directory. The file experiments_started.txt contains the names of all experiments started and experiments_completed.txt contains the names of all experiments completed successfully. If some experiments were not completed, check the log file for errors.
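
A quick way to spot failed experiments is to compare the two files, for example with a few lines of Python (a sketch that assumes one experiment name per line in each file):


# List experiments that were started but did not complete
started = set(open('experiments_started.txt').read().split())
completed = set(open('experiments_completed.txt').read().split())
for name in sorted(started - completed):
    print(name)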

Assuming all experiments of a series completed successfully, we can now start to analyse the data. To create time series of the CWND, the RTT as estimated by TCP, the RTT as estimated using SPP, and the throughput over time for each experiment of the series, use the following command (it must be executed in the directory containing fabfile.py). It creates intermediate files and the .pdf plot files in a subdirectory named results inside the test data directory.


fab analyse_all:out_dir="./results"

The figures produced by TEACUP show the throughput (left) and TCP's smoothed RTT estimates (right) for the experiment with an emulated RTT of 50 ms, a bandwidth of 8 mbit/s downstream and 1 mbit/s upstream, and a buffer size of 100 packets. The hosts and the router ran Linux kernel 3.17.4. The throughput graph shows that both flows share the downstream link equally, while the upstream link carrying only ACKs is not fully utilised. The RTT graph shows that the total RTT reaches almost 100 ms due to the buffering (the plotted RTT estimates for the ACK streams are not meaningful).


Scenario 2: Scenario 1 plus automatic booting of hosts

Topology

As in scenario 1 we have two hosts: newtcp20 connected to the 172.16.10.0/24 network and newtcp27 connected to the 172.16.11.0/24 network. The machine newtcprt3 connects the two experiment subnets. All three machines also have a second network interface that is used to control the experiment via TEACUP. In this scenario we assume that the hosts have been installed according to the tech report CAIA Testbed for TEACUP Experiments Version 2, including PXE booting and a multi-OS installation on each host.

Test Traffic

As in scenario 1, two TCP bulk transfer flows are created using iperf.

Variable Parameters

As in scenario 1 we emulate three different delays and two different bandwidth settings, giving six experiments in total. We also define a variable parameter for the AQM, but specify only one AQM (the default pfifo). This causes the AQM used to be logged as part of the experiment ID.

TEACUP Config File

You can download the configuration file from here. To use the configuration rename it to config.py. In the following we explain the configuration file.

Most of the configuration is identical to the configuration used in scenario 1. The only difference is that now TEACUP will reboot the machines automatically into the specified OS, which is very useful if machines have a multi-boot setup, i.e. can run different OSes. The automatic rebooting of multi-OS machines requires that the machines are configured with PXE+GRUB as described in the tech report. If machines have only a single OS, the reboot function is still useful to put machines into a clean state before an experiment, and in that case PXE booting is not needed.

Below is the additional configuration needed compared to scenario 1. TPCONF_tftpboot_dir specifies the directory where TEACUP will put the files read by GRUB (after it is loaded via PXE) that specify which hard disk partition to boot from. TPCONF_host_os specifies the operating system for each host and the router. For Linux, TEACUP allows the booted kernel to be specified: TPCONF_linux_kern_router specifies the kernel booted on the router and TPCONF_linux_kern_hosts specifies the kernel booted on the other hosts. If TPCONF_force_reboot is set to '0', TEACUP will only reboot a host if the currently running OS differs from the OS specified in TPCONF_host_os (Linux hosts will also be rebooted if the currently running kernel differs from the kernel specified in TPCONF_linux_kern_router or TPCONF_linux_kern_hosts). TPCONF_boot_timeout specifies the time in seconds TEACUP will wait for a host to reboot and become accessible via SSH again. If a host has not rebooted into the desired OS within this time, TEACUP will abort with an error.


# Path to tftp server handling the pxe boot
# Setting this to an empty string '' means no PXE booting, and TPCONF_host_os
# and TPCONF_force_reboot are simply ignored
TPCONF_tftpboot_dir = '/tftpboot'

# Operating system config, machines that are not explicitly listed are
# left as they are (OS can be 'Linux', 'FreeBSD', 'CYGWIN', 'Darwin')
TPCONF_host_os = {
    'newtcprt3': 'Linux',
    'newtcp20':  'Linux',
    'newtcp27':  'Linux',
}

# Specify the Linux kernel to use, only used for machines running Linux
# (basically the full name without the vmlinuz-)
TPCONF_linux_kern_router = '3.17.4-vanilla-10000hz'
TPCONF_linux_kern_hosts = '3.17.4-vanilla-web10g'

# Force reboot
# If set to '1' will force a reboot of all hosts
# If set to '0' only hosts where OS is not the desired OS will be rebooted
TPCONF_force_reboot = '0'

# Time to wait for reboot in seconds (integer)
# Minimum timeout is 60 seconds
TPCONF_boot_timeout = 120

# Map OS to partition on hard disk (note the partition must be specified
# in the GRUB4DOS format, _not_ GRUB2 format) 
TPCONF_os_partition = {
        'CYGWIN':  '(hd0,0)',
        'Linux':   '(hd0,1)',
        'FreeBSD': '(hd0,2)',
}

By default TEACUP expects Windows on partition 1, Linux on partition 2 and FreeBSD on partition 3 of the first hard disk. However, the variable TPCONF_os_partition can be used to specify different partitions, in GRUB4DOS format. PXE booting of Mac OS X is currently not supported.

Running Experiment

See scenario 1.

Analysing Results

See scenario 1.

Scenario 3: Scenario 1 plus power control

Topology

As in scenario 1 we have two hosts: newtcp20 connected to the 172.16.10.0/24 network and newtcp27 connected to the 172.16.11.0/24 network. The machine newtcprt3 connects the two experiment subnets. All three machines also have a second network interface that is used to control the experiment via TEACUP. We assume that the hosts have been installed according to the tech report CAIA Testbed for TEACUP Experiments Version 2. Furthermore, this scenario requires TEACUP-compatible power controllers installed and set up to control the power of the three machines.

Test Traffic

As in scenario 1, two TCP bulk transfer flows are created using iperf.

Variable Parameters

As in scenario 1 we emulate three different delays and two different bandwidth settings, giving six experiments in total. We also define a variable parameter for the AQM, but specify only one AQM (the default pfifo). This causes the AQM used to be logged as part of the experiment ID.

TEACUP Config File

You can download the configuration file from here. To use the configuration rename it to config.py. In the following we explain the configuration file.

Most of the configuration is identical to the configuration used in scenario 2. The only difference is that now TEACUP will automatically power cycle machines that do not come up within the reboot timeout. TEACUP will only power cycle machines once; if after a power cycle and reboot a machine is still unresponsive, TEACUP will give up. Power cycling can only be used if the machines are connected via a power controller supported by TEACUP. Currently, TEACUP supports two power controllers: IP Power 9258HP (9258HP) and Serverlink SLP-SPP1008-H (SLP-SPP1008).

Below is the additional configuration needed compared to scenario 2. TPCONF_do_power_cycle must be set to '1' to enable power cycling. If power cycling is used, TPCONF_host_power_ctrlport must define, for each machine, the IP address of the responsible power controller and the power controller port the machine is connected to. TPCONF_power_admin_name and TPCONF_power_admin_pw must specify the admin user name and password required to log in to the power controller's web interface. TPCONF_power_ctrl_type specifies the type of power controller.


# If host does not come up within timeout force power cycle
# If set to '1' force power cycle if host not up within timeout
# If set to '0' never force power cycle
TPCONF_do_power_cycle = '1'

# Maps host to power controller IP (or name) and power controller port number
TPCONF_host_power_ctrlport = {
    'newtcprt3': ('10.0.0.100', '1'),
    'newtcp20':  ('10.0.0.100', '2'),
    'newtcp27':  ('10.0.0.100', '3'),
}

# Power controller admin user name
TPCONF_power_admin_name = 'admin'
# Power controller admin user password
TPCONF_power_admin_pw = env.password

# Type of power controller. Currently supported are only:
# IP Power 9258HP (9258HP) and Serverlink SLP-SPP1008-H (SLP-SPP1008)
TPCONF_power_ctrl_type = 'SLP-SPP1008'

Running Experiment

See scenario 1.

Analysing Results

See scenario 1.

Scenario 4: Two TCP flows through same bottleneck queue but with different emulated delay

Topology

We now have two hosts in each subnet: newtcp20 and newtcp21 connected to the 172.16.10.0/24 network, and newtcp27 and newtcp28 connected to the 172.16.11.0/24 network. The machine newtcprt3 connects the two experiment subnets. All machines also have a second network interface that is used to control the experiment via TEACUP. Like scenario 1, this scenario requires that hosts have the traffic generator and logging tools installed as described in the tech report CAIA Testbed for TEACUP Experiments Version 2, but PXE booting or a multi-OS installation is not needed.

Test Traffic

Two TCP bulk transfer flows are created using iperf. One flow is between newtcp20 and newtcp27 and the second flow is between newtcp21 and newtcp28.

Variable Parameters

As in scenario 1 we emulate two different bandwidth settings. However, we now set up a different delay for each flow. Since each delay can take two different values, we have eight experiments in total (2 x 2 delay combinations x 2 bandwidth settings). We also define a variable parameter for the AQM, but specify only one AQM (the default pfifo). This causes the AQM used to be logged as part of the experiment ID.

TEACUP Config File

You can download the configuration file from here. To use the configuration rename it to config.py. In the following we explain the parts of the configuration file that have changed compared to scenario 1.

Our host configuration now looks as follows.


# Host lists
TPCONF_router = ['newtcprt3', ]
TPCONF_hosts = [ 'newtcp20', 'newtcp21', 'newtcp27', 'newtcp28', ]

# Map external IPs to internal IPs
TPCONF_host_internal_ip = {
    'newtcprt3': ['172.16.10.1', '172.16.11.1'],
    'newtcp20':  ['172.16.10.60'],
    'newtcp21':  ['172.16.10.61'],
    'newtcp27':  ['172.16.11.67'],
    'newtcp28':  ['172.16.11.68'],
}

To configure different delays for each flow we also need to change the router setup. The attach_to_queue parameter makes queues 3 and 4 share the bandwidth limits of queues 1 and 2 respectively, so both flows pass through the same bottleneck while experiencing different emulated delays.


TPCONF_router_queues = [
    ('1', " source='172.16.10.60', dest='172.16.11.67', delay=V_delay, "
     " loss=V_loss, rate=V_up_rate, queue_disc=V_aqm, queue_size=V_bsize "),
    ('2', " source='172.16.11.67', dest='172.16.10.60', delay=V_delay, "
     " loss=V_loss, rate=V_down_rate, queue_disc=V_aqm, queue_size=V_bsize "),
    ('3', " source='172.16.10.61', dest='172.16.11.68', delay=V_delay2, " 
     " loss=V_loss, rate=V_up_rate, queue_disc=V_aqm, queue_size=V_bsize, "
     " attach_to_queue='1' "),
    ('4', " source='172.16.11.68', dest='172.16.10.61', delay=V_delay2, "
     " loss=V_loss, rate=V_down_rate, queue_disc=V_aqm, queue_size=V_bsize, "
     " attach_to_queue='2' "),
]

We also need to define both of the V_ variables and make sure we iterate over both in the experiments.


TPCONF_delays =  [5, 50]
TPCONF_delays2 = [5, 50]

TPCONF_parameter_list = {
#   Vary name		V_ variable	  file name	values			extra vars
    'delays' 	    :  (['V_delay'], 	  ['del1'], 	TPCONF_delays, 		 {}),
    'delays2' 	    :  (['V_delay2'], 	  ['del2'], 	TPCONF_delays2, 	 {}),
    'loss'  	    :  (['V_loss'], 	  ['loss'], 	TPCONF_loss_rates, 	 {}),
    'tcpalgos' 	    :  (['V_tcp_cc_algo'],['tcp'], 	TPCONF_TCP_algos, 	 {}),
    'aqms'	    :  (['V_aqm'], 	  ['aqm'], 	TPCONF_aqms, 		 {}),
    'bsizes'	    :  (['V_bsize'], 	  ['bs'], 	TPCONF_buffer_sizes, 	 {}),
    'runs'	    :  (['V_runs'],       ['run'], 	range(TPCONF_runs), 	 {}),
    'bandwidths'    :  (['V_down_rate', 'V_up_rate'], ['down', 'up'], TPCONF_bandwidths, {}),
}

TPCONF_variable_defaults = {
#   V_ variable			value
    'V_duration'  	:	TPCONF_duration,
    'V_delay'  		:	TPCONF_delays[0],
    'V_delay2' 		:	TPCONF_delays2[0],
    'V_loss'   		:	TPCONF_loss_rates[0],
    'V_tcp_cc_algo' 	:	TPCONF_TCP_algos[0],
    'V_down_rate'   	:	TPCONF_bandwidths[0][0],
    'V_up_rate'	    	:	TPCONF_bandwidths[0][1],
    'V_aqm'	    	:	TPCONF_aqms[0],
    'V_bsize'	    	:	TPCONF_buffer_sizes[0],
}

TPCONF_vary_parameters = ['delays', 'delays2', 'bandwidths', 'aqms', 'runs',]

Finally, we need to define the traffic generators to create one TCP bulk transfer flow for each host pair.


traffic_iperf = [
    ('0.0', '1', " start_iperf, client='newtcp27', server='newtcp20', port=5000, "
     " duration=V_duration "),
    ('0.0', '2', " start_iperf, client='newtcp28', server='newtcp21', port=5001, "
     " duration=V_duration "),
]

TPCONF_traffic_gens = traffic_iperf

Running Experiment

See scenario 1.

Analysing Results

See scenario 1.

Scenario 5: Three partially overlapping TCP flows with different TCP congestion control

Topology

We now have three hosts in each subnet: newtcp20, newtcp21 and newtcp22 connected to the 172.16.10.0/24 network, and newtcp27, newtcp28 and newtcp29 connected to the 172.16.11.0/24 network. The machine newtcprt3 connects the two experiment subnets. All machines also have a second network interface that is used to control the experiment via TEACUP. Like scenario 1, this scenario requires that hosts have the traffic generator and logging tools installed as described in the tech report CAIA Testbed for TEACUP Experiments Version 2, but PXE booting or a multi-OS installation is not needed.

Test Traffic

Three TCP bulk transfer flows are created using iperf. One flow is between newtcp20 and newtcp27, the second flow is between newtcp21 and newtcp28, and the third flow is between newtcp22 and newtcp29. The flows no longer all start at the same time. Flow 1 starts at the start of the experiment, flow 2 starts 10 seconds after the first flow, and flow 3 starts 10 seconds after flow 2. All flows have a duration of 30 seconds as in scenario 1.

Variable Parameters

As in scenario 1 we emulate three different delays (the same delays for all flows) and two different bandwidth settings. However, we now also vary the TCP congestion control algorithm between Newreno and Cubic. This means we have 12 experiments in total. We also define a variable parameter for the AQM, but specify only one AQM (the default pfifo). This causes the AQM used to be logged as part of the experiment ID.

TEACUP Config File

You can download the configuration file from here. To use the configuration rename it to config.py. In the following we explain the parts of the configuration file that have changed compared to scenario 1.

Our host configuration now looks as follows.


TPCONF_router = ['newtcprt3', ]
TPCONF_hosts = [ 'newtcp20', 'newtcp21', 'newtcp22', 'newtcp27', 'newtcp28', 'newtcp29', ]

TPCONF_host_internal_ip = {
    'newtcprt3': ['172.16.10.1', '172.16.11.1'],
    'newtcp20':  ['172.16.10.60'],
    'newtcp21':  ['172.16.10.61'],
    'newtcp22':  ['172.16.10.62'],
    'newtcp27':  ['172.16.11.67'],
    'newtcp28':  ['172.16.11.68'],
    'newtcp29':  ['172.16.11.69'],
}

The traffic generator setup now creates three staggered flows.


traffic_iperf = [
    ('0.0', '1', " start_iperf, client='newtcp27', server='newtcp20', port=5000, "
     " duration=V_duration "),
    ('10.0', '2', " start_iperf, client='newtcp28', server='newtcp21', port=5001, "
     " duration=V_duration "),
    ('20.0', '3', " start_iperf, client='newtcp29', server='newtcp22', port=5002, "
     " duration=V_duration "),
]

TPCONF_traffic_gens = traffic_iperf

We also need to configure the different TCP congestion control algorithms and instruct TEACUP to vary this parameter.


TPCONF_TCP_algos = ['newreno', 'cubic', ]

TPCONF_parameter_list = {
#   Vary name		V_ variable	  file name	values			extra vars
    'delays' 	    :  (['V_delay'], 	  ['del'], 	TPCONF_delays, 		 {}),
    'loss'  	    :  (['V_loss'], 	  ['loss'], 	TPCONF_loss_rates, 	 {}),
    'tcpalgos' 	    :  (['V_tcp_cc_algo'],['tcp'], 	TPCONF_TCP_algos, 	 {}),
    'aqms'	    :  (['V_aqm'], 	  ['aqm'], 	TPCONF_aqms, 		 {}),
    'bsizes'	    :  (['V_bsize'], 	  ['bs'], 	TPCONF_buffer_sizes, 	 {}),
    'runs'	    :  (['V_runs'],       ['run'], 	range(TPCONF_runs), 	 {}),
    'bandwidths'    :  (['V_down_rate', 'V_up_rate'], ['down', 'up'], TPCONF_bandwidths, {}),
}

TPCONF_variable_defaults = {
#   V_ variable			value
    'V_duration'  	:	TPCONF_duration,
    'V_delay'  		:	TPCONF_delays[0],
    'V_loss'   		:	TPCONF_loss_rates[0],
    'V_tcp_cc_algo' 	:	TPCONF_TCP_algos[0],
    'V_down_rate'   	:	TPCONF_bandwidths[0][0],
    'V_up_rate'	    	:	TPCONF_bandwidths[0][1],
    'V_aqm'	    	:	TPCONF_aqms[0],
    'V_bsize'	    	:	TPCONF_buffer_sizes[0],
}

TPCONF_vary_parameters = ['tcpalgos', 'delays', 'bandwidths', 'aqms', 'runs',]

Running Experiment

See scenario 1.

Analysing Results

See scenario 1.

Scenario 6: Two HTTP-based video streaming clients

Topology

We now have hosts newtcp20 and newtcp21 connected to the 172.16.10.0/24 network and host newtcp27 connected to the 172.16.11.0/24 network. The machine newtcprt3 connects the two experiment subnets. All machines also have a second network interface that is used to control the experiment via TEACUP. Like scenario 1, this scenario requires that hosts have the traffic generator and logging tools installed as described in the tech report CAIA Testbed for TEACUP Experiments Version 2, but PXE booting or a multi-OS installation is not needed.

Test Traffic

In this scenario we emulate DASH-like HTTP video streaming. Host newtcp27 runs a web server. Hosts newtcp20 and newtcp21 are clients and use httperf to emulate DASH-like streaming; there is one DASH-like flow between each client and the server. In this scenario we have fixed the video rates and on/off cycle times, but the rate and cycle time differ between the two streams.

Variable Parameters

As in scenario 1 we emulate three different delays (the same delays for all flows) and two different bandwidth settings. We now also vary the AQM mechanism used on the router between FIFO, CoDel and FQ CoDel. This means we have 18 experiments in total.

TEACUP Config File

You can download the configuration file from here. To use the configuration rename it to config.py. In the following we explain the parts of the configuration file that have changed compared to scenario 1.

Our host configuration now looks as follows.


TPCONF_router = ['newtcprt3', ]
TPCONF_hosts = [ 'newtcp20', 'newtcp21', 'newtcp27', ]

TPCONF_host_internal_ip = {
    'newtcprt3': ['172.16.10.1', '172.16.11.1'],
    'newtcp20':  ['172.16.10.60'],
    'newtcp21':  ['172.16.10.61'],
    'newtcp27':  ['172.16.11.67'],
}

The traffic generator setup now starts a web server and generates fake streaming content on newtcp27 before starting the DASH-like flows.


traffic_dash = [
    # Start server and create content (server must be started first)
    ('0.0', '1', " start_http_server, server='newtcp27', port=80 "),
    ('0.0', '2', " create_http_dash_content, server='newtcp27', duration=2*V_duration, "
     " rates='500, 1000', cycles='5, 10' "),

    # Create DASH-like flows
    ('0.5', '3', " start_httperf_dash, client='newtcp20', server='newtcp27', port=80, "
     " duration=V_duration, rate=500, cycle=5, prefetch=2.0, "
     " prefetch_timeout=2.0 "),
    ('0.5', '4', " start_httperf_dash, client='newtcp21', server='newtcp27', port=80, "
     " duration=V_duration, rate=1000, cycle=10, prefetch=2.0, "
     " prefetch_timeout=2.0 "),

]

TPCONF_traffic_gens = traffic_dash

Finally, we need to configure the different AQM mechanisms used. TPCONF_vary_parameters already included the 'aqms' parameter in scenario 1, so TPCONF_parameter_list and TPCONF_variable_defaults look the same as in scenario 1. We only have to change TPCONF_aqms.


TPCONF_aqms = ['pfifo', 'codel', 'fq_codel', ]

TPCONF_vary_parameters = ['delays', 'bandwidths', 'aqms', 'runs',]

Running Experiment

See scenario 1.

Analysing Results

See scenario 1.

Scenario 7: Incast problem scenario

Topology

We now have the host newtcp20 connected to the 172.16.10.0/24 network and the hosts newtcp21--30 connected to the 172.16.11.0/24 network. The machine newtcprt3 connects the two experiment subnets. All machines also have a second network interface that is used to control the experiment via TEACUP. Like scenario 1, this scenario requires that hosts have the traffic generator and logging tools installed as described in the tech report CAIA Testbed for TEACUP Experiments Version 2, but PXE booting or a multi-OS installation is not needed.

Test Traffic

In this scenario we emulate traffic to investigate the incast problem. The host newtcp20 is the querier and at regular intervals sends simultaneous queries to hosts newtcp21, newtcp22, newtcp23, newtcp24, newtcp25, newtcp26, newtcp27, newtcp28, newtcp29 and newtcp30, which then respond simultaneously. The querier uses httperf to send the queries to web servers running on all of the responders.

Variable Parameters

As in scenario 6 we emulate three different delays (the same delays for all flows), two different bandwidth settings, and three different AQM mechanisms (FIFO, CoDel and FQ CoDel). We now also vary the size of the responses across six different values. This means we have 108 experiments in total.

TEACUP Config File

You can download the configuration file from here. To use the configuration rename it to config.py. In the following we explain the parts of the configuration file that have changed compared to scenario 6.

Our host configuration now looks as follows.


TPCONF_router = ['newtcprt3', ]
TPCONF_hosts = [ 'newtcp20', 'newtcp21', 'newtcp22', 'newtcp23', 'newtcp24',
                 'newtcp25', 'newtcp26', 'newtcp27', 'newtcp28', 'newtcp29',
                 'newtcp30', ]

TPCONF_host_internal_ip = {
    'newtcprt3': ['172.16.10.1', '172.16.11.1'],
    'newtcp20':  ['172.16.10.60'], # querier
    'newtcp21':  ['172.16.11.61'], # responders...
    'newtcp22':  ['172.16.11.62'],
    'newtcp23':  ['172.16.11.63'],
    'newtcp24':  ['172.16.11.64'],
    'newtcp25':  ['172.16.11.65'],
    'newtcp26':  ['172.16.11.66'],
    'newtcp27':  ['172.16.11.67'],
    'newtcp28':  ['172.16.11.68'],
    'newtcp29':  ['172.16.11.69'],
    'newtcp30':  ['172.16.11.70'],
}

The traffic generator setup now starts a web server and generates fake response content on each responder. Then, after a delay of one second, it starts the querier.


traffic_incast = [
    # Start servers and create contents (server must be started first)
    ('0.0', '1', " start_http_server, server='newtcp21', port=80 "),
    ('0.0', '2', " start_http_server, server='newtcp22', port=80 "),
    ('0.0', '3', " start_http_server, server='newtcp23', port=80 "),
    ('0.0', '4', " start_http_server, server='newtcp24', port=80 "),
    ('0.0', '5', " start_http_server, server='newtcp25', port=80 "),
    ('0.0', '6', " start_http_server, server='newtcp26', port=80 "),
    ('0.0', '7', " start_http_server, server='newtcp27', port=80 "),
    ('0.0', '8', " start_http_server, server='newtcp28', port=80 "),
    ('0.0', '9', " start_http_server, server='newtcp29', port=80 "),
    ('0.0', '10', " start_http_server, server='newtcp30', port=80 "),

    ('0.0', '11', " create_http_incast_content, server='newtcp21', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),
    ('0.0', '12', " create_http_incast_content, server='newtcp22', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),
    ('0.0', '13', " create_http_incast_content, server='newtcp23', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),
    ('0.0', '14', " create_http_incast_content, server='newtcp24', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),
    ('0.0', '15', " create_http_incast_content, server='newtcp25', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),
    ('0.0', '16', " create_http_incast_content, server='newtcp26', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),
    ('0.0', '17', " create_http_incast_content, server='newtcp27', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),
    ('0.0', '18', " create_http_incast_content, server='newtcp28', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),
    ('0.0', '19', " create_http_incast_content, server='newtcp29', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),
    ('0.0', '20', " create_http_incast_content, server='newtcp30', duration=2*V_duration, "
     " sizes=V_inc_content_sizes_str "),

    # Start querier 
    ('1.0', '30', " start_httperf_incast, client='newtcp20', "
     " servers='newtcp21:80,newtcp22:80,newtcp23:80,newtcp24:80,newtcp25:80,newtcp26:80, "
     " newtcp27:80,newtcp28:80,newtcp29:80,newtcp30:80', "
     " duration=V_duration, period=V_inc_period, response_size=V_inc_size"),
]

TPCONF_traffic_gens = traffic_incast

Next, we need to configure the value ranges for the response sizes and for the time period between queries.


TPCONF_inc_content_sizes = [8, 16, 32, 64, 128, 256]
TPCONF_inc_content_sizes_str = ','.join(
    str(x) for x in TPCONF_inc_content_sizes)
TPCONF_inc_periods = [10]

Finally, we need to configure TEACUP to make sure the V_ variables used in the traffic generator setup are defined, and to vary the response sizes.


TPCONF_parameter_list = {
#   Vary name		V_ variable	  file name	values			extra vars
    'delays' 	    :  (['V_delay'], 	  ['del'], 	TPCONF_delays, 		 {}),
    'loss'  	    :  (['V_loss'], 	  ['loss'], 	TPCONF_loss_rates, 	 {}),
    'tcpalgos' 	    :  (['V_tcp_cc_algo'],['tcp'], 	TPCONF_TCP_algos, 	 {}),
    'aqms'	    :  (['V_aqm'], 	  ['aqm'], 	TPCONF_aqms, 		 {}),
    'bsizes'	    :  (['V_bsize'], 	  ['bs'], 	TPCONF_buffer_sizes, 	 {}),
    'runs'	    :  (['V_runs'],       ['run'], 	range(TPCONF_runs), 	 {}),
    'bandwidths'    :  (['V_down_rate', 'V_up_rate'], ['down', 'up'], TPCONF_bandwidths, {}),
    'incast_periods':  (['V_inc_period'], ['incper'],   TPCONF_inc_periods,      {}),
    'incast_sizes'  :  (['V_inc_size'],   ['incsz'],    TPCONF_inc_content_sizes,{}),
}

TPCONF_variable_defaults = {
#   V_ variable			value
    'V_duration'  	:	TPCONF_duration,
    'V_delay'  		:	TPCONF_delays[0],
    'V_loss'   		:	TPCONF_loss_rates[0],
    'V_tcp_cc_algo' 	:	TPCONF_TCP_algos[0],
    'V_down_rate'   	:	TPCONF_bandwidths[0][0],
    'V_up_rate'	    	:	TPCONF_bandwidths[0][1],
    'V_aqm'	    	:	TPCONF_aqms[0],
    'V_bsize'	    	:	TPCONF_buffer_sizes[0],
    'V_inc_period'      :       TPCONF_inc_periods[0],
    'V_inc_size'        :       TPCONF_inc_content_sizes[0],
    'V_inc_content_sizes_str':  TPCONF_inc_content_sizes_str,
}

TPCONF_vary_parameters = ['incast_sizes', 'delays', 'bandwidths', 'aqms', 'runs',]

Running Experiment

See scenario 1.

Analysing Results

See scenario 1.


