Collaborations

CAIA members engage in collaborative research with colleagues from a number of other institutions, both in academia and industry. The table below summarises current and past collaborations, some of which have dedicated project web sites. (See also our funding sources page for information about projects related to specific grants.)

Date begun - Date ended | CAIA researchers | External collaborators | Title and Description
Oct 2009 (active) | Lawrence Stewart, Grenville Armitage | Greg Chesson (Google, USA) | MCC: Microburst Congestion Control

Microburst-triggered congestion affects environments where unsynchronised data sources send data via a common path; the traffic converges on the wire as a highly synchronised, bursty spike. Clustered data centre environments with low-latency, high-bandwidth connectivity are the obvious places where microbursts and incast can be observed. See the MCC project page for more details.
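The convergence effect is easy to see in a toy queue model (illustrative numbers only, not MCC code): the same 64 replies that a shared switch queue absorbs easily when spread over 100 ms produce a large backlog when they arrive within a 100 microsecond window.

```python
# Toy incast/microburst model: N servers answer one request at nearly
# the same instant, and their packets converge on a shared FIFO queue.

def queue_peak(arrival_times, service_rate):
    """Peak number of packets resident in a FIFO queue that drains
    `service_rate` packets per second, given packet arrival times (s)."""
    events = sorted(arrival_times)
    backlog, peak, last_t = 0.0, 0.0, events[0]
    for t in events:
        # Drain during the gap since the previous arrival, then enqueue.
        backlog = max(0.0, backlog - service_rate * (t - last_t)) + 1
        peak = max(peak, backlog)
        last_t = t
    return peak

# 64 servers reply within a 100 microsecond window (a microburst)...
burst = [i * 100e-6 / 64 for i in range(64)]
# ...versus the same 64 replies spread over 100 milliseconds.
spread = [i * 100e-3 / 64 for i in range(64)]

rate = 10_000  # queue drains 10,000 packets per second
print(f"peak backlog: burst={queue_peak(burst, rate):.0f}, "
      f"spread={queue_peak(spread, rate):.0f}")
```

With the burst, nearly all 64 packets are resident at once; spread out, the queue never holds more than one.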
Sep 2009 (active) | Mattia Rossi, Grenville Armitage | Geoff Huston (APNIC) | Exploring the Utilisation of IPv4 Address Space and Size of the NATed IPv4 Internet

With IPv4 address pool exhaustion imminent, it is of major interest to determine what proportion of the allocated address space is actually utilised, to inform strategies for distributing the remaining IPv4 addresses. It is also of major interest to estimate the number of devices using IP addresses to connect to the Internet, in order to plan proper IPv6 address distribution. With NAT (strictly, NAPT) widely used to share a single globally routable IPv4 address, and hosts behind NATs being practically invisible, detailed host counts are difficult to obtain. See the STING project page for more details.
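One classic heuristic for the host-counting problem is Bellovin-style IP ID analysis: many operating systems increment the IP identification field once per packet, so traffic from a single public address that forms several disjoint incrementing ID sequences suggests several hosts behind the NAT. A minimal sketch of the idea (illustrative only, not STING code):

```python
# Greedy partition of observed IP ID values into incrementing runs;
# the number of runs is a rough lower bound on hosts behind a NAT.

def count_id_sequences(ip_ids, max_gap=16):
    """Partition IP ID values (in arrival order) into monotonically
    increasing runs whose steps are at most `max_gap`."""
    tails = []  # last ID seen in each inferred per-host sequence
    for ident in ip_ids:
        for i, t in enumerate(tails):
            if t < ident <= t + max_gap:
                tails[i] = ident  # continues an existing sequence
                break
        else:
            tails.append(ident)   # start a new inferred sequence
    return len(tails)

# Two interleaved ID counters, as if two hosts shared one public address.
observed = [100, 5000, 101, 5001, 102, 5002, 103, 5003]
print(count_id_sequences(observed))  # 2 inferred hosts
```

Real traffic complicates this (randomised IDs, wrap-around, packet loss), which is part of why host counting behind NATs is hard.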
Apr 2009 (active) | Warren Harrop, Grenville Armitage | Fred Baker (Cisco) | IPv4 and IPv6 Greynets

Based on various work (including earlier work at CAIA on Greynets), we are developing an eventual IETF RFC documenting the potential role of IPv4 and IPv6 Greynets.
2009 (active) | Lachlan Andrew | Steven Low and Jayakrishnan Nair (Caltech) | Avoiding heavy tails due to protocol interaction

Many communication networks experience delays with "heavy tailed" delay distributions, which means that very long delays are more likely than traditional theory predicts. It was recently shown that retransmission protocols can cause heavy tails where none existed. This work showed that this effect is very fragile, and that most realistic protocols will not induce heavy tails.
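The "retransmission causes heavy tails" effect is easy to reproduce in a toy Monte Carlo model (our own illustration, not the authors' analysis): a transfer that restarts from scratch whenever an exponentially distributed channel up-time expires has heavy-tailed completion times, even though the file sizes themselves are exponential (light-tailed).

```python
# Restart-from-scratch retransmission over an unreliable channel.
import random

def transfer_time(size, fail_rate=1.0, rng=random):
    """Time to send `size` units when the channel stays up for an
    Exp(fail_rate) interval and the transfer restarts from zero."""
    total = 0.0
    while True:
        up = rng.expovariate(fail_rate)
        if up >= size:
            return total + size  # finished within this up-period
        total += up              # wasted work; start over

rng = random.Random(1)
times = sorted(transfer_time(rng.expovariate(1.0), rng=rng)
               for _ in range(20000))
# Heavy-tail signature: the slowest 1% of transfers dominates.
top1 = sum(times[-200:]) / sum(times)
print(f"share of total transfer time in the slowest 1%: {top1:.2f}")
```

For a light-tailed completion-time distribution that share would be close to 1%; here it is far larger, because a few transfers restart enormously many times.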
Feb 2008 (active) | Lawrence Stewart, Grenville Armitage | Dr Michael Welzl (University of Innsbruck, Austria; University of Oslo, Norway) | Evaluating next generation TCP congestion control

Empirical and simulation evaluation of how TCP congestion control algorithms like H-TCP and CUBIC interact with NewReno over consumer broadband links.
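As a rough illustration of what is being compared (constants from RFC 8312; a shape sketch, not a protocol implementation): after a loss, NewReno recovers linearly at one segment per RTT, while CUBIC follows a cubic curve back toward the pre-loss window.

```python
# Post-loss window growth: CUBIC's cubic function vs NewReno's
# linear increase. The 100 ms RTT and 100-segment w_max are made up.

C, BETA = 0.4, 0.7  # CUBIC constants (RFC 8312)

def cubic_window(t, w_max):
    # K is the time CUBIC takes to regain the pre-loss window w_max.
    k = (w_max * (1 - BETA) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max

def newreno_window(t, w_max, rtt=0.1):
    # Halve on loss, then grow one segment per RTT.
    return w_max / 2 + t / rtt

w_max = 100.0  # congestion window (segments) just before the loss
for t in (0.0, 2.0, 4.0, 8.0):
    print(f"t={t:3.0f}s  cubic={cubic_window(t, w_max):6.1f}  "
          f"newreno={newreno_window(t, w_max):6.1f}")
```

Note that CUBIC starts at BETA * w_max = 70 segments (a gentler backoff than NewReno's 50) and is concave then convex around K, which is the interaction the project measures against NewReno on consumer links.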
June 2007 (active) | Lachlan Andrew, Grenville Armitage, Thuy Nguyen, Andrew Sucevic, Dragi Klimovski, Adam Black | Adam Wierman (Caltech), Kevin Tang and Nithin Michael (Cornell), Mung Chiang and Yannis Kamitsos (Princeton) | GREEN - Global Research into Energy Efficient Networking

The energy consumption of the network infrastructure underpinning the Internet is becoming increasingly important. It is estimated to account for about 2% of the energy consumption of industrialised nations, roughly the same as the aviation industry. The GREEN program is investigating ways to reduce this energy consumption. See the GREEN project page for more details.
2007 (active) | Lachlan Andrew | Taib Znati and Ihsan Qazi (University of Pittsburgh) | Congestion control using minimal explicit network feedback

Users want to send data as fast as the network will allow. When the network becomes congested, it must signal users to slow down. This is currently done implicitly, by discarding data. Many alternatives have been proposed, but most require major changes to network infrastructure. This project investigated a simple method by which the network can tell users when to slow down, and when they can speed up.
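The general idea can be sketched in a few lines (a generic one-bit binary-feedback scheme in the spirit of ECN, not the specific mechanism proposed in this project): the bottleneck sets a single bit when its queue exceeds a threshold, and the sender halves its rate on seeing the bit, otherwise increasing additively.

```python
# One-bit explicit congestion feedback with an AIMD sender.
# All capacities and step sizes are illustrative.

def run(link_capacity=100.0, rounds=60, threshold=5.0):
    rate, queue, trace = 10.0, 0.0, []
    for _ in range(rounds):
        # Queue absorbs whatever the link cannot carry this round.
        queue = max(0.0, queue + rate - link_capacity)
        congested = queue > threshold        # the single feedback bit
        rate = rate / 2 if congested else rate + 5.0
        trace.append(rate)
    return trace

trace = run()
print(f"sending rate oscillates around capacity; last 5 rounds: {trace[-5:]}")
```

The sender settles into the familiar sawtooth around link capacity without any packet ever being discarded, which is the appeal of explicit feedback.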
2006 (active) | Lachlan Andrew | Fernando Paganini (ORT), Ao Tang (Cornell University) and Andres Ferragut (ORT) | Flow-level stability of TCP with general file size distributions

How much traffic is too much for the Internet? If people try to send too much data over a path through the Internet, congestion control will slow down all of the data, making each transfer take longer. If requests to transfer data come too quickly, the total number of transfers in progress will grow indefinitely. It has long been known that if file sizes are exponentially distributed, then typical ("alpha-fair") congestion control will be stable provided each individual link is. This work extends that result to realistic (non-exponential) file size distributions.
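"Alpha-fair" congestion control assigns rates by maximising the sum of utilities x^(1-alpha)/(1-alpha); alpha = 1 corresponds to proportional fairness and large alpha approaches max-min fairness. A small numeric sketch (our own illustration, not from the paper) on the classic two-link linear network, where flow 0 crosses both unit-capacity links and flows 1 and 2 use one link each, shows how alpha trades the long flow's rate against total throughput:

```python
# Alpha-fair allocation on the two-link linear network, by grid search.
import math

def utility(x, alpha):
    return math.log(x) if alpha == 1 else x ** (1 - alpha) / (1 - alpha)

def long_flow_rate(alpha, grid=20000):
    """Rate x0 of the two-link flow maximising u(x0) + 2*u(1 - x0);
    each one-link flow then gets 1 - x0 on its own link."""
    best_x0, best_u = 0.0, float("-inf")
    for i in range(1, grid):
        x0 = i / grid
        u = utility(x0, alpha) + 2 * utility(1 - x0, alpha)
        if u > best_u:
            best_x0, best_u = x0, u
    return best_x0

for alpha in (0.5, 1, 2, 8):
    print(f"alpha={alpha}: two-link flow gets {long_flow_rate(alpha):.3f}")
```

This matches the closed form x0 = 1/(1 + 2^(1/alpha)): proportional fairness (alpha = 1) gives the long flow 1/3, and larger alpha pushes the split toward the max-min value 1/2.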
Nov 2004 - Dec 2005 | Irena Atov, Grenville Armitage and David Kennedy | Bartek Wydrowski (NETLAB, California Institute of Technology, USA), Lachlan Andrew and A/Prof. Stephen Hanly (CUBIN, University of Melbourne, Australia) | Evaluation of FAST TCP using Swinburne University's Broadband Access Research Testbed (BART)

There is strong evidence that the efficiency of the Internet is limited by its existing TCP congestion control system. A replacement, called FAST TCP, is being designed at Caltech to improve performance, and it is emerging as a strong candidate for a new IETF TCP standard. For its standardisation and deployment it must be tested in a wide variety of environments, and it is necessary that these tests be repeated by independent groups. To date, FAST has been tested by Caltech and independent groups such as SLAC (Stanford Linear Accelerator Center) and CERN (the European Particle Physics Laboratory) in a wide range of high speed environments.

However, there is a pressing need for testing in low speed environments, which are more typical of the existing Internet. The current and medium term future of access networks is in the 1-10 Mbps range, using technologies such as ADSL and cable modems. FAST needs to work in these environments as well as being able to scale to the high-speed regime.

This project aims to experimentally evaluate the performance of FAST under typical ‘edge of network’ scenarios involving ADSL modems, cable modems and 802.11 wireless LANs. In particular, it will perform experiments using Swinburne University's Broadband Access Research Testbed (BART). It will seek to identify all possible failure modes of FAST in the test environments. The understanding gained will also allow optimal parameter settings to be determined for a range of conditions, such as link bandwidths, error rates and propagation delays. More importantly, if weaknesses are discovered, it will provide an opportunity to contribute to the evolving FAST protocol.
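Unlike loss-based TCP, FAST's window update is delay-based: each update moves the window toward (baseRTT/RTT) * w + alpha, so a single flow settles with roughly alpha packets standing in the bottleneck queue. The single-flow fluid sketch below (our own simplified model with made-up capacity and RTT figures, not the Caltech implementation or BART test code) illustrates that equilibrium:

```python
# Simplified FAST-style delay-based window update for one flow
# through one bottleneck link.

def fast_update(w, base_rtt, rtt, alpha=30.0, gamma=0.5):
    # Published FAST form: move gamma of the way toward the target
    # window (base_rtt/rtt)*w + alpha, never more than doubling.
    return min(2 * w, (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))

def simulate(capacity=1000.0, base_rtt=0.01, rounds=300):
    """capacity is the bandwidth-delay product in packets; anything
    beyond it sits queued at the bottleneck and inflates the RTT."""
    w = 10.0
    for _ in range(rounds):
        queued = max(0.0, w - capacity)
        rtt = base_rtt * (1 + queued / capacity)
        w = fast_update(w, base_rtt, rtt)
    return w, max(0.0, w - capacity)

w, queued = simulate()
print(f"window = {w:.1f} pkts, standing queue = {queued:.1f} pkts (~alpha)")
```

The flow converges to a window of capacity + alpha packets, keeping about alpha packets queued rather than filling the buffer until loss, which is why FAST's low-speed, shallow-buffer behaviour (the focus of this project) is interesting to test.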
June 2004 - Dec 2005 | Sebastian Zander | Tanja Zseby (Fraunhofer Fokus, Germany) | Sampling Techniques for Non-Intrusive Statistical SLA Validation

Service Level Agreements (SLAs) specify the network Quality of Service (QoS) that providers are expected to deliver. Providers have to verify if the actual quality of the network traffic complies with the SLA without introducing significant additional network load and operational costs. We propose a novel approach for non-intrusive SLA validation that uses statistical SLAs and direct samples of the customer traffic for the quality assessment. Based on pre-defined thresholds for QoS metrics, we model the validation problem as proportion estimation of non-conformant traffic. We compare the sampling errors of different sampling techniques and present a novel solution for estimating the error prior to the sampling. We also derive a solution for computing the minimum sample fraction depending on the SLA parameters. Finally we evaluate the proposed approach using real traffic from multiplayer online games and prove that only a small fraction of the traffic needs to be sampled to provide a customer with statistical SLA guarantees.
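The core sizing question (how many packets must be sampled) can be illustrated with the standard normal-approximation bound for proportion estimation; this is a generic statistics result, not the paper's exact derivation, and the traffic volume below is made up.

```python
# Minimum sample size for estimating a proportion p of
# non-conformant packets to within +/- abs_error.
import math

def min_sample_size(p, abs_error, z=1.96):
    """Samples needed so the estimate of proportion p is within
    +/- abs_error with ~95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p * (1 - p) / abs_error ** 2)

# e.g. an SLA threshold near p = 2% violations, +/- 0.5% accuracy:
n = min_sample_size(0.02, 0.005)
total = 10_000_000  # packets in the validation interval (illustrative)
print(n, f"-> sample fraction = {n / total:.4%}")
```

Because the required n depends only on p and the error bound, not on the traffic volume, the sample fraction shrinks as traffic grows, which is the intuition behind the paper's finding that only a small fraction of traffic needs to be sampled.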
March 2004 - March 2006 | Jason But and Grenville Armitage | Urs Keller (Ecole Polytechnique Federale de Lausanne, Switzerland) | NetSniff - a Multi-Network-Layered Real-Time Traffic Capture and Analysis Tool

The recent widespread uptake of broadband access technologies has led to a shift in how the Internet is being used. The availability of an always-connected, high-speed Internet connection means that home users are increasingly likely to use the Internet as an information repository and content delivery resource. Higher content access speeds coupled with zero connection time mean that Internet usage can become more spontaneous rather than planned for. The ICE3 project considers whether the traditional Internet access model (where bandwidth at the edge of the network is orders of magnitude lower than within the core of the network) could support an explosion in the usage of new and evolving Internet applications, particularly if the network capacity hierarchy were inverted (more bandwidth at the edge than within the core of the network). In order to do this we need to statistically analyse the performance of networked applications in either environment. This is achieved using NetSniff, a multi-layered network capture and analysis tool. This tool is under ongoing development to increase the number of supported applications and to build a growing dataset of real-world traffic statistics. For more information please visit the ICE3 website.
June 2004 - Dec 2005 | Irena Atov | Richard J. Harris (Massey University, New Zealand), Sanjay K. Bose (Nanyang Technological University, Singapore) | Determining Class-Based Bandwidth Allocations on Links in Multi-Service IP Networks

The growth of the Internet has brought with it problems of service quality that were not anticipated when the “best-effort” design of the network was originally envisaged. The Internet is now expected to carry a variety of services with different requirements, ranging from traditional “best-effort” services to real-time traffic like voice or video, which must be carried with acceptable delay, delay jitter and data loss. Controlling a network with this kind of traffic requires careful resource provisioning, as the standard Weighted Fair Queueing (WFQ) service disciplines employed in IP QoS networks can only provide tight end-to-end delay guarantees for the classes if an adequate level of resources (in terms of bandwidth and buffer space) is allocated along their respective data paths through the network. In this project we focus on the development and analysis of recursive methods that can be used to invert some of the well-known traffic decomposition models (e.g., QNA) and can provide a basis for network dimensioning with multiple service classes. The goal is to develop computationally efficient algorithms for determining class-based bandwidth allocations on links, subject to satisfying varying end-to-end QoS constraints for the classes.
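To make the inversion direction concrete: a Parekh-Gallager-style WFQ delay bound for a leaky-bucket constrained class (burst sigma, guaranteed rate g, maximum packet size L_max, link capacities C_k over K hops) has the rough form D <= sigma/g + K*L_max/g + sum(L_max/C_k), which can be solved for the minimum g meeting a target delay. The sketch below uses this simplified single-class bound with made-up numbers; it is not the project's QNA-based algorithm.

```python
# Invert a simplified WFQ end-to-end delay bound to get the minimum
# guaranteed rate (bandwidth allocation) for a class.

def min_rate(sigma, l_max, target_delay, link_caps):
    """Smallest guaranteed rate g (bytes/s) such that
    sigma/g + K*l_max/g + sum(l_max/C_k) <= target_delay."""
    k = len(link_caps)
    store_and_forward = sum(l_max / c for c in link_caps)
    residual = target_delay - store_and_forward
    if residual <= 0:
        raise ValueError("target delay infeasible on this path")
    return (sigma + k * l_max) / residual

# Illustrative class: 15 kB burst, 1500-byte packets, 50 ms delay
# target, across two 1 Gb/s links (1.25e8 bytes/s each).
g = min_rate(sigma=15000.0, l_max=1500.0, target_delay=0.05,
             link_caps=[1.25e8, 1.25e8])
print(f"allocate at least {g * 8 / 1e6:.2f} Mbit/s to this class")
```

In practice g must also be at least the class's sustained rate rho, and the project's methods additionally account for cross-class interaction via traffic decomposition, which this single-class bound ignores.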
2003 - 2004 | Philip Branch | IBAP Pty Ltd. | Lawful Interception of Emerging Technologies

This project investigates some of the issues dealing with lawful interception of emerging network technologies.
Sep 2002 - Jan 2005 | Grenville Armitage | Les Cottrell (IEPM group, Stanford Linear Accelerator Centre, USA) | An Australian node of PingER

PingER (Ping End-to-end Reporting) is the name given to the Internet End-to-end Performance Measurement (IEPM) project to monitor end-to-end performance of Internet links. PingER involves hundreds of sites in many countries all over the world. CAIA collaborated with IEPM to provide an Australian node to the PingER project, which involved running regular 'ping' tests against a list of international sites every 30 minutes and reporting our results back to the IEPM team at Stanford's Linear Accelerator Centre. Our site began operation in September 2002. As of January 2005 our PingER node is temporarily offline.
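Operationally, a node's measurement step is simple: ping each site in a beacon list and record the per-packet RTTs (the real PingER toolkit is considerably more elaborate; the sketch below assumes a Unix-style `ping` whose output contains `time=... ms` lines).

```python
# Minimal sketch of a PingER-style measurement: run ping, parse RTTs.
import re
import subprocess

def parse_rtts(ping_output):
    """Extract per-packet RTTs in ms from ping output lines like
    '64 bytes from ...: icmp_seq=1 ttl=52 time=203.1 ms'."""
    return [float(m.group(1))
            for m in re.finditer(r"time=([\d.]+) ms", ping_output)]

def measure(host, count=10):
    """Ping `host` and return the list of RTTs (ms); a real node would
    run this against every beacon site each 30-minute cycle."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    return parse_rtts(out)

sample = ("64 bytes from 1.2.3.4: icmp_seq=1 ttl=52 time=203.1 ms\n"
          "64 bytes from 1.2.3.4: icmp_seq=2 ttl=52 time=198.7 ms\n")
print(parse_rtts(sample))  # [203.1, 198.7]
```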


Last Updated: Tuesday 10-Aug-2010 11:14:02 EST | Maintained by: Thuy Nguyen (tnguyen@swin.edu.au) | Authorised by: Grenville Armitage (garmitage@swin.edu.au)