Assembling a kernel test grid with autotest

This is a companion article to the presentation given at LinuxCon Europe 2012, held in Barcelona (the slides were linked from the original post).

The case for automated testing

It’s hard to keep track of problems on a fast moving target, especially a *very* fast moving one such as the Linux kernel. Thousands of commits enter the kernel git repos every week, and although the subsystem maintainers are very careful and competent, they can’t predict or catch every problem that might get into the tree. So the more testing we give patchsets proposed for Linux, the better.

But it’s difficult for a lone kernel developer to test all of his or her patches systematically. In this article, we’ll discuss using the autotest project to assemble a test grid that kernel developers can use to test their own git repos, helping to catch bugs earlier and improving the overall quality of the kernel.

Why not rely on user testing alone

As we can see from the diagram above, Linux’s development model forms an hourglass: it starts out highly distributed, with contributions being concentrated in maintainer trees before merging into the development releases and then into mainline itself. It is vital to catch problems here, in the neck of the hourglass, before they spread out to the distros. Even once a contribution hits mainline, it has not yet reached the general user population, most of whom are running distro kernels that often lag mainline by many months.

In the Linux development model, each change is usually small, and attribution for each change is known, making it easy to track down the author once a problem is identified. Clearly, the earlier in the process we can identify a problem, the less impact the change will have, and the more targeted we can be in reporting and fixing it.

By making it easier to test code, we can encourage developers to run the tests before ever submitting a patch; currently such early testing is often not extensive or rigorous, where it is performed at all. Much developer effort is wasted on bugs that are found later in the cycle, when it is significantly less efficient to fix them.


Autotest is an open source project designed to test the Linux kernel, although it can be used to test userspace programs just fine. It’s released under the GPLv2, and comprises a number of tools and libraries that people can use to automate their testing workflows. You can find a complete overview of autotest and quick links to its resources (code, documentation, issue tracker) on the project site.

At a high level, the autotest structure can be represented by the following diagram:

Autotest is composed of the following main modules:

Autotest client: The engine that executes the tests (directory client). Each autotest test is a directory inside client/tests, represented by a python class that implements a minimum number of methods. The client is all you need if you are a single developer trying out autotest and executing some tests.

The autotest client executes ”client side control files”, which are regular python programs that leverage the client API.
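To make this concrete, a client side control file can be as small as a single call. The sketch below stubs out the `job` object (which the real autotest client injects into the control file’s namespace) purely so the snippet can run standalone; the stub class is not part of autotest:

```python
# Stand-in for the `job` object that autotest injects into every
# control file's namespace. Only here so the example runs outside
# of the autotest client.
class StubJob(object):
    def run_test(self, name, **kwargs):
        print("running test:", name, kwargs)

job = StubJob()

# The body of a minimal client side control file is just this line;
# sleeptest is one of the bundled sanity tests.
job.run_test('sleeptest', seconds=1)
```

Inside autotest, that last line is the entire control file; the framework resolves `sleeptest` to the class under client/tests/sleeptest and runs it.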

Autotest server: A program that copies the client to remote machines and controls its execution. The autotest server executes ”server side control files”, which are also regular python programs, but leverage a higher level API, since the autotest server can control test execution on multiple machines.

If you want to perform slightly more complex tests involving more than one machine, you’ll want the autotest server.

Autotest database: For test grids, we need a way to store test results, and that is the purpose of the database component. The DB is used by the autotest scheduler and the frontends to store and visualize test results.

Autotest scheduler: For test grids, we need a utility that can schedule and trigger job execution on test machines; the autotest scheduler is that utility.

Autotest web frontend: For test grids, a web app, with a backend written in django and a UI written in GWT, lets users trigger jobs and visualize test results.

Autotest command line interface: Alternatively, users can use the autotest CLI, written in python.

Installing autotest

In order to install autotest, you’ll need:

  • 2 machines: one that will serve as the autotest server, another that will serve as an autotest client. You can use virtual machines if you want to experiment before deploying on your actual bare metal machines. In fact, this presentation’s demo was made using 2 Fedora 17 virtual machines.
  • Download the server install script that comes with autotest.

As the script help will tell you, the distros on which this script is supposed
to work are:

  • Fedora 16
  • Fedora 17
  • RHEL 6.2
  • Ubuntu 12.04

Once downloaded, make the script executable:

chmod +x

Then execute it, passing the autotest user password (-u) and the database
admin password (-d):

./ -u linuxcon -d linuxcon

The script will install any missing packages. For more information
on the install procedure (what it does, troubleshooting), see the autotest
documentation.

Adding initial configuration

Once you have the autotest server installed, go to the web interface’s
admin area (top right corner, Admin) and add:

  • In the Labels tab: a platform label, say, x86_64
  • In the Hosts tab: a host, using the IP of the client machine you set up in the previous step

With this, we’re close to our goal.

Setting passwordless SSH between client and server

For the tests to work, you need an ssh connection from the server
machine to the client:

server machine @autotest -> client machine @root

So, log on to the server as the autotest user (you chose the password when running
the script) and create an ssh key:

ssh-keygen -t rsa

You may leave the passphrase empty. Then copy the key to the client:

ssh-copy-id root@your-client-ip

Type your client’s root password and you’re done. Please verify that
ssh root@your-client-ip

logs you in directly as root on the client.

Set up cobbler if you have it

Just go to global_config.ini and add the appropriate cobbler install server
data (the relevant section is copied here as a reference):

# Install server type
type: cobbler
# URL for xmlrpc_server, such as
# XMLRPC user, in case the server requires authentication
# XMLRPC password, in case the server requires authentication
# Profile to install by default on your client machine
# Number of installation/reset attempts before failing it altogether
num_attempts: 2
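Since the values above were stripped out, here is a hypothetical filled-in version; the section name, key names and host below are assumptions for illustration, not taken from a real deployment:

```ini
; Hypothetical example values -- adjust for your own cobbler server
[INSTALL_SERVER]
; Install server type
type: cobbler
; URL for xmlrpc_server (assumed endpoint path)
; XMLRPC user/password, in case the server requires authentication
xmlrpc_user: cobbler
xmlrpc_password: cobbler
; Profile to install by default on your client machine
default_profile: Fedora17-x86_64
; Number of installation/reset attempts before failing it altogether
num_attempts: 2
```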

Send your first job

Go to the “Create Job” tab on the web interface and create your first job with a
simple client side sleeptest, without a kernel install. You have to:

  • Specify job name
  • Check the ‘Sleeptest’ checkbox
  • In ‘Browse Hosts’, select your client
  • And that’s it. Hit the ‘Submit Job’ button

After less than a minute, you should have your results. Hopefully there won’t
be any major problems.

More sophisticated job

As we don’t want to run only ‘hello world’ style tests on a machine, you’ll soon
find yourself moving to more sophisticated jobs. Let’s talk about what a job is.

A job, in autotest terminology, is a description of the operations that will be
performed on a test machine. It is represented by a file, called a control file.
A control file in autotest can be of 2 types:

  • client side: Scope restricted to only the machine executing the tests.
  • server side: Broader scope, you can control multiple machines.

Example: Here’s a client side control file used to build a kernel from a
git repo. It has been simplified a bit; you can have even more fine grained control
of the entire process:

def step_init():
    repo = 'git://'
    repo_base = '/nfs_mount/linux-2.6/'
    branch = 'master'
    commit = None
    dest_dir = os.path.join("/tmp", 'kernel_src')
    patch_list = []        # optional patches to apply on top of the tree
    kernel_config = None   # optional custom kernel .config file

    repo = git.GitRepoHelper(uri=repo, branch=branch,
                             commit=commit, base_uri=repo_base)
    kernel = job.kernel(repo.destination_dir)

    if patch_list:
        for patch in patch_list:
            kernel.patch(patch)
    if kernel_config:
        kernel.config(kernel_config)

    kernel.build()
    kernel.boot()

def step_test():
    job.run_test('kernbench')

All the operations there are supposed to be executed on a single machine.
Note that step_init and step_test are part of what we call the ‘step engine’,
which helps handle testing where reboots are necessary. In this case, we build
a new kernel from a given git repo, reboot into this new kernel, and then
run the ‘kernbench’ autotest module.
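To illustrate how such a mechanism can survive reboots, here is a toy sketch, assuming nothing about autotest’s actual implementation: pending step names are persisted to disk before each step runs, so a fresh process (e.g. after a reboot) can reload the file and carry on where the previous run left off:

```python
# Toy sketch of a reboot-surviving step engine. NOT autotest's real
# code: it only illustrates the persist-and-resume idea.
import os
import pickle
import tempfile

STATE = os.path.join(tempfile.gettempdir(), 'toy_steps.pkl')

def step_build():
    print('building kernel')

def step_test():
    print('running kernbench')

STEPS = {'step_build': step_build, 'step_test': step_test}

def run_pending():
    if os.path.exists(STATE):        # resuming after a "reboot"
        with open(STATE, 'rb') as f:
            pending = pickle.load(f)
    else:                            # first boot: run every step
        pending = list(STEPS)
    while pending:
        name = pending.pop(0)
        with open(STATE, 'wb') as f:  # persist before executing, so a
            pickle.dump(pending, f)   # reboot mid-step skips done steps
        STEPS[name]()
    os.remove(STATE)                 # all steps done, clean up

run_pending()
```

If the machine rebooted right after `step_build`, the state file would still list `step_test`, so the next invocation of `run_pending()` would resume there instead of rebuilding.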

Now, here’s an example of a server side control file that accomplishes the
same thing, except that we can also install the machine with a given cobbler profile:

control = '''
def step_init():
    repo = 'git://'
    repo_base = '/nfs_mount/linux-2.6/'
    branch = 'master'
    commit = None
    dest_dir = os.path.join("/tmp", 'kernel_src')
    patch_list = []        # optional patches to apply on top of the tree
    kernel_config = None   # optional custom kernel .config file

    repo = git.GitRepoHelper(uri=repo, branch=branch,
                             commit=commit, base_uri=repo_base)
    kernel = job.kernel(repo.destination_dir)

    if patch_list:
        for patch in patch_list:
            kernel.patch(patch)
    if kernel_config:
        kernel.config(kernel_config)

    kernel.build()
    kernel.boot()

def step_test():
    job.run_test('kernbench')
'''

def run(machine):
    host = hosts.create_host(machine)
    profile = 'Fedora17-x86_64'
    timeout = 3600
    host.machine_install(profile=profile)  # provision via the install server
    at = autotest_remote.Autotest(host)
    at.run(control, host=host, timeout=timeout)

job.parallel_simple(run, machines)

We pretty much just run the client side control file inside the server side
scope, through the run() method of the autotest_remote.Autotest() object. The
idea here is to install the machine using cobbler, reboot it, and then
execute the client side control file. This article is not meant to
delve into the details of the control file API; for more information,
check our documentation on control files.

Controlling your machine: Conmux and Cobbler

A big part of running a test grid is being able to remotely control and see
what’s going on with the nodes of your grid. In autotest, we use 2 tools to
control the following aspects of the machines:

  • Provisioning
  • Power cycle
  • Serial console

And those tools are:

  • Conmux: Lives inside the autotest tree; it’s a console multiplexer that provides console access through the ‘console’ command. You can install and configure your autotest server as the conmux server; see the conmux documentation for more information. The idea is to:
      • Install conmux system wide
      • Place config files for your machines (see the examples dir inside the conmux dir)
      • Test your connection to the console/hardreset
  • Cobbler: A separate set of tools to provision machines; you can find more info on the cobbler site.

Cobbler can handle machine power on/off/reset, so if you don’t need
console control (say, your lab already provides console access), you can just
use cobbler. Configuration in autotest is simple:

  • Fill global_config.ini with the base xmlrpc URL and auth data
  • Fill in your systems in cobbler, with IPs/hostnames matching the ones in the autotest DB
  • Import the distros you want to install
  • That should be it. Everything else can be handled by autotest; you only have to use the control file APIs or the web interface to choose profiles to install.

Our limitations: Embedded

As a result of the scenario we had when autotest started, 7 years ago,
the design was strongly based on the intel and ppc architectures, so facilities such as:

  • choosing the kernel to boot
  • provisioning the machine with a distro
  • controlling power and remote consoles

work for intel, powerpc and S390, among others, but not for embedded systems
booting from an SD card or the other arrangements common in the embedded world.
Autotest can still be used if the facilities above are not needed, and this is
an area where we need help from contributors to implement the missing parts.

Recent work in autotest

Over the last year, we’ve done a lot of work to separate core framework
development from test module development, and we reorganized
the development model. Now we have 3 git repos:

  • autotest (core)
  • autotest-client-tests
  • autotest-server-tests

Each one has a master branch, with the current stable code; a next branch,
the integration branch, regularly QA’ed to try to catch problems as soon as
possible; and release branches, for longer term maintenance. Other
things worth mentioning:

  • Major namespace cleanup: We bit the bullet and cleaned a lot of cruft from the early autotest days
  • Packaging work: Now autotest programs can also run on a system wide install, allowing distros to package and ship it
  • Stand alone RPC client, that will be the base for other applications that can use the autotest RPC server to submit jobs, view and query results.


We’re working to fill the gaps and make autotest better. Things we plan to do
in the coming months:

  • Advanced machine scheduling (per machine hardware)
  • Evolve RPC applications
  • Better embedded support
  • Model automated bisecting

You’re welcome to join us!

Getting started and contact info

Autotest code is just one git clone away:

git clone git://

Check the testsuite documentation and let us know your findings. Please refer
to the project page if you want to subscribe to our mailing list or join our
IRC channel.


Published by lmr

I'm yet another Software Engineer working to improve the Open Source Software (OSS) Testing Stack.
