REPETITA: Repeatable Experiments for Performance Evaluation of Traffic-Engineering Algorithms

by Steven Gay, et al.

In this paper, we propose a pragmatic approach to improve reproducibility of experimental analyses of traffic engineering (TE) algorithms, whose implementation, evaluation and comparison are currently hard to replicate. Our envisioned goal is to enable universally-checkable experiments of existing and future TE algorithms. We describe the design and implementation of REPETITA, a software framework that implements common TE functions, automates experimental setup, and eases comparisons (in terms of solution quality, execution time, etc.) of TE algorithms. In its current version, REPETITA includes (i) a dataset for repeatable experiments, consisting of more than 250 real network topologies with complete bandwidth and delay information as well as associated traffic matrices; and (ii) the implementation of state-of-the-art algorithms for intra-domain TE with IGP weight tweaking and Segment Routing optimization. We showcase how our framework can successfully reproduce results described in the literature, and ease new analyses of qualitatively-diverse TE algorithms. We publicly release our REPETITA implementation, hoping that the community will consider it as a demonstration of feasibility, an incentive and an initial code basis for improving experiment reproducibility: Its plugin-oriented architecture indeed makes REPETITA easy to extend with new datasets, algorithms, TE primitives and analyses. We therefore invite the research community to use and contribute to our released code and dataset.
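The abstract highlights REPETITA's plugin-oriented architecture as the mechanism for extending the framework with new TE algorithms. The sketch below is purely illustrative of that style of design, not REPETITA's actual API: all names (`SolverRegistry`, `register`, `run`) and the reduction of topologies and traffic matrices to plain capacity/demand arrays are assumptions made for brevity.

```java
// Illustrative sketch of a plugin-oriented TE solver registry.
// NOTE: all names and types here are hypothetical; REPETITA's real
// interfaces (written in Scala/Java) differ.
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

public class SolverRegistry {
    // A "solver" here maps (link capacities, traffic demands) to a single
    // quality metric, so that heterogeneous algorithms can be compared.
    private final Map<String, BiFunction<double[][], double[][], Double>> solvers =
            new HashMap<>();

    public void register(String name,
                         BiFunction<double[][], double[][], Double> solver) {
        solvers.put(name, solver);
    }

    public double run(String name, double[][] capacities, double[][] demands) {
        return solvers.get(name).apply(capacities, demands);
    }

    public static void main(String[] args) {
        SolverRegistry registry = new SolverRegistry();

        // Plug in a trivial "solver" that reports the worst link utilization
        // under direct single-link routing (for illustration only).
        registry.register("maxUtilization", (cap, dem) -> {
            double worst = 0.0;
            for (int i = 0; i < cap.length; i++)
                for (int j = 0; j < cap[i].length; j++)
                    if (cap[i][j] > 0)
                        worst = Math.max(worst, dem[i][j] / cap[i][j]);
            return worst;
        });

        double[][] capacities = {{0, 10}, {10, 0}};
        double[][] demands    = {{0,  5}, { 8, 0}};
        System.out.println(registry.run("maxUtilization", capacities, demands));
        // prints 0.8
    }
}
```

New algorithms are added by registering another function under a new name, which mirrors (in spirit) how a plugin architecture lets a framework compare solution quality and execution time across solvers without changing its core.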


