Remove rpm packages with failed scriptlets execution

Have you ever been in that awkward spot where an rpm package has a scriptlet error and you can’t remove it from your system at all? I have to deal with this from time to time, but I always end up forgetting how it’s done. Let’s change that by keeping it on this blog (and incidentally helping other people):

rpm -e --noscripts --justdb [packagename-version]

That was easy 🙂


Autotest 0.15.0 released!

Autotest 0.15.0 is a new major release of autotest! The goal here is to provide the latest advances in autotest while offering stable ground for groups and organizations adopting it, such as distro packagers and newcomers.

For the impatient

Check out our main web page at:

Direct download link:

Now that github has removed arbitrary uploads, we will release directly from git tags. The tags will now be signed with my GPG key. So grab your fresh copy of autotest (remember, the tarball no longer contains the tests; those must be fetched from the new test repos).


Tests modules split

Since test modules are fairly independent of the core framework, they have been split into their own new git repos and added to autotest as git submodules. Don’t worry, if you want the autotest repo with the full tests, you only need to execute

git clone --recursive git://

The new repositories for the client and server tests:

API cleanup

Following the example of autotest 0.14.0, 0.15.0 brings more API cleanups.

1) global_config -> settings

The library global_config was renamed to settings. As an example of how
the change works, here’s how settings were accessed before 0.15.0:

from autotest.client.shared import global_config

config = global_config.global_config
config.get_config_value("mysection", "mykey")

This is how the same process looks on autotest 0.15.0:

from autotest.client.shared.settings import settings

settings.get_value("mysection", "mykey")

Since tests usually have no business touching the autotest settings, this
change requires pretty much no updates from test authors.
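Under the hood, settings is essentially a thin lookup layer over the global_config.ini file. As a rough illustration of what get_value does, here is a standalone sketch using Python’s configparser; the class shape and fallback behavior are illustrative, not autotest’s actual implementation:

```python
import configparser
import os
import tempfile


class Settings:
    """Minimal stand-in for autotest.client.shared.settings.settings."""

    def __init__(self, config_path):
        self.parser = configparser.ConfigParser()
        self.parser.read(config_path)

    def get_value(self, section, key, default=None):
        # Fall back to a default instead of raising, the way a friendly
        # config wrapper would.
        try:
            return self.parser.get(section, key)
        except (configparser.NoSectionError, configparser.NoOptionError):
            return default


# Demo with a throwaway ini file standing in for global_config.ini:
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write("[mysection]\nmykey: myvalue\n")
    path = f.name

settings = Settings(path)
print(settings.get_value("mysection", "mykey"))                    # myvalue
print(settings.get_value("mysection", "missing", default="n/a"))   # n/a
os.unlink(path)
```

The point is just that get_value(section, key) is a plain ini lookup, which is why test code rarely needs to care about it.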

Machine installs from the autotest web interface

In autotest 0.14, we introduced preliminary integration between autotest
and the Cobbler install server. However, not much of that integration was visible if you were using the web interface. Now it is possible to select which cobbler profiles you want to use on your test machines from the web interface.

DB migration moved to Django South

When the autotest RPC server application was developed, there was database access code in place already. In order to move away from having 2 ways to access the database, this version of autotest introduces a new migration system, based on Django South.

For people looking to upgrade the database to the latest version, we’ve put up a procedure here

Other changes

* Support for creation of debian packages

* Simplified module import logic
* Simplified unittests execution
* Improved install scripts. It is now possible to install autotest from an arbitrary git repo and branch, which will make it easier to test and validate changes that involve the RPC client/server, the scheduler, among others.

What’s next?

We believe the foundation for autotest 1.0 is already in place. So, the next release will be autotest 1.0, bringing polish and bug fixes to the framework developed over the previous six years.

You can see the issues open for the Milestone 1.0.0 here:

and here:

For autotest 1.0 we plan on upgrading the stack to the latest versions
of the foundation technologies involved in server operation and
package generation:

* Update the web interface (RPC client) to the latest GWT

* Update the RPC server to the latest Django

* Functional server packages for Debian
* Automated generation of both rpm and debian packages
* Functional regression test suite for server installs, working out of the box

We also want to improve the usefulness and usability of our existing tools:

* Introduce a ‘machine reservation’ mode, where you can provision a bare metal machine with the OS of your choice and conveniently use the installed machine for a period of time.

* Allow scheduling jobs based on machine hardware capabilities, for example:
– Number of CPUs
– Available Memory
– Virtualization support
among others

* Allow autotest to function well as an alternative test harness for beaker

* Support for parameterized jobs (be able to set parameters in tests, and let people simply pass parameters to those tests in the CLI/web interface).
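The hardware-capability scheduling idea above can be pictured as a simple host filter. This is a toy sketch; the attribute names (cpus, memory_mb, virt) and matching rules are invented for illustration, since autotest’s real scheduler works from host labels and its own database schema:

```python
def eligible_hosts(hosts, requirements):
    """Return the names of hosts satisfying every requirement.

    Numeric requirements are treated as minimums; booleans and
    everything else must match exactly.
    """
    def satisfies(host):
        for key, wanted in requirements.items():
            have = host.get(key)
            if isinstance(wanted, bool):
                if have is not wanted:
                    return False
            elif isinstance(wanted, (int, float)):
                if have is None or have < wanted:
                    return False
            elif have != wanted:
                return False
        return True

    return [h["name"] for h in hosts if satisfies(h)]


hosts = [
    {"name": "node1", "cpus": 4,  "memory_mb": 8192,  "virt": True},
    {"name": "node2", "cpus": 16, "memory_mb": 65536, "virt": True},
    {"name": "node3", "cpus": 2,  "memory_mb": 2048,  "virt": False},
]

# A job that needs at least 8 CPUs, 32 GB of RAM and virtualization support:
print(eligible_hosts(hosts, {"cpus": 8, "memory_mb": 32768, "virt": True}))
```

A scheduler along these lines would run such a filter before assigning each queued job to an idle machine.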

FUDCon 2013

One more great FUDCon is over, and my goals at the conference were accomplished:

  • Discussed distribution-wide and package-specific test automation with the Fedora QA folks. I also presented a refresher on the Autotest architecture and recent improvements.
  • Configured daily Fedora virtualization Autotest jobs, as an auxiliary tool for Cole to detect problems in virt-related Fedora packages. We also went through the basics of the virtualization test suite, which can be a useful tool for both Fedora virt developers and users.
  • Helped David to get started with writing an Autotest wrapper for the pygobject 3 test suite.

All of these activities deserve a more thorough post, for any of those interested in test automation in Fedora.

Of course, I also had a great time with all my talented (and sometimes goofy) friends that work hard to bring Fedora to you. Here are a couple of photos (I guess there’ll be more to come):


Happy 2013

I wish I had a long, well-thought-out post to write about 2012, but I don’t 🙂 Some things that come to mind looking at the year in retrospect:

* I’ve completed 1 year living in Piracicaba.
* Quality of life was improved all round.
* Work is great. Autotest and virt-tests had many improvements this year.
* We (myself and Silvia) visited Europe for the first time, spending time in the beautiful Barcelona.

It was yet another excellent year, and I keep great optimism that we’ll keep improving, and as someone already said, making a dent in our little corner of the universe.

Thank you to all my friends for being, well, awesome, and I hope to share yet another great year with you all.

KVM autotest: It is not just a QA tool anymore

This is a companion article to the presentation given during KVM Forum 2012, held in Barcelona. Here’s the link to the slides.

The juggler problem

The biggest problem we have with regards to developing test automation, and achieving reasonable test code coverage of KVM code, is similar to problems found in many projects and organizations. I believe it could be nicknamed ‘the juggler problem’: software developers have to juggle a large number of daily tasks, ranging from developing new features to reading and debugging existing code.

Then, there’s testing.

It’s hard to write test cases when you have to implement that brand new algorithm you figured out last week, work on 10 new bugs found by customers, and review 3 large patchsets sent to the upstream mailing list. If the test tools are difficult to use, they’ll end up being dropped on the floor.

Therefore, to avoid the juggler problem, testing tools have to be easy:

  1. To understand;
  2. To modify;
  3. To access new code functionality (do useful stuff with the tests).

If the testing infrastructure does not have those properties, it is likely that developers will start rolling their own testing programs. Since some testing is better than no testing, this is OK.

The problem is when you want to do more complex things.

You’ll start writing functions to do things like migration, hotplug, and virtio console, then encapsulate other functionality on top. You’ll slowly re-implement the test frameworks. You’ll stay alienated from what your QA team has been developing, and miss out on the code that automates the complicated parts of your test workflow.

The solution here is to make the test tools have the desired properties. In KVM, we do have a codebase under active development that covers a lot of testing scenarios, from functional to regression testing, and that is used to run tests against several branches of the KVM tools:

  • Upstream code
  • RHEL 5 product
  • RHEL 6 product
  • RHEV product

That codebase was known as KVM Autotest. These days the name KVM autotest is not entirely appropriate, since it aims to cover not just KVM but also other virtualization backends, such as libvirt. It can define and handle multiple VMs, disks, and network cards; run VMs; migrate them; hotplug NICs; and put VMs in S3 mode, among other things. It’s now known as ‘virt tests’.

Even with all these features, the fact that this codebase lived in a separate repo of a larger test framework kept general core developer adherence low, and with that came the perception that the test framework is a QA-only affair. This is certainly a suboptimal arrangement, and for a long time we’ve been looking for a solution to it.

Resolution features

How to keep the QA functionality of the virt tests? By QA functionality we mean:

  • Comprehensive tests that involve guest OS installs
  • Test jobs that involve kernel builds, qemu builds, and installs of windows and linux guests

And still provide a fast, unittest like way of executing the tests?

The answer we found was:

  • Leverage the autotest modularity, and make the virt tests self-contained in a single module, separate from the rest of the framework
  • Implement a minimal test harness that allows executing virt tests outside autotest
  • Create a test runner that uses this minimal test harness, and outputs only the very minimum amount of information, similar to a unittest execution
  • Provide a minimum guest image ready to be used, in order to save bandwidth and time of folks wanting to try the tests.


About 18 months ago, when people started asking for KVM autotest to support other virtualization backends, the solution we devised was:

  • Create a shared autotest library that any test could use
  • Re-factor the kvm test to use this library
  • Implement other tests that can use this library. That’s how the libvirt test was born

While this approach is reasonable and worked for the purposes mentioned, it
implied that changes to the virt tests meant changes to autotest core, a process
that goes against giving autonomy to developers wanting to implement tests
outside the autotest tree.

So, in order to separate the tests from core autotest, the virt tests were
restructured to be a single autotest test module. It was then possible to
separate them from core autotest cleanly.

Minimal test harness

When executed by autotest, a test job is expressed as a control file, a list of operations to execute on an autotest client. Each test runs a lot of code before and after execution: autotest housekeeping, log collection, among other things. For people for whom autotest is unnecessary, we had to make the code run outside that harness.

So we implemented only the bare minimum methods required to run the code outside autotest, and created a small test harness. As a proof of concept, we extracted all the autotest code needed to run the virt tests standalone. Since that meant extracting way too much code (about 2k lines), we decided instead to keep a light dependency on autotest, which can be fulfilled by installing an autotest rpm, available in the Fedora repos (about 2 MB of rpm data).

With this harness, a test runner was implemented, displaying minimal info about what tests are running, their status and running time:

# ./run -t kvm
SETUP: PASS (0.23 s)
DEBUG LOG: /home/lmr/Code/virt-test.git/logs/run-2012-10-15-14.58.24/debug.log
kvm.virtio_blk.smp2.virtio_net.JeOS.17.64.boot: PASS (22.87 s)
kvm.virtio_blk.smp2.virtio_net.JeOS.17.64.shutdown: PASS (8.61 s)
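The runner’s minimal output style boils down to a simple loop: run each test, time it, and print one status line per test. Here is a standalone sketch of that idea; the test names and the fake test bodies are made up for illustration, and the real runner of course drives virt tests rather than plain Python functions:

```python
import time
import traceback


def run_tests(tests):
    """tests: list of (name, callable). Returns a dict of name -> status."""
    results = {}
    for name, func in tests:
        start = time.time()
        try:
            func()
            status = "PASS"
        except Exception:
            # Keep the traceback for the debug log, but print only one line.
            traceback.print_exc()
            status = "FAIL"
        elapsed = time.time() - start
        print("%s: %s (%.2f s)" % (name, status, elapsed))
        results[name] = status
    return results


def boot():
    time.sleep(0.01)  # stand-in for booting a guest


def shutdown():
    raise RuntimeError("guest did not shut down")  # simulated failure


results = run_tests([("kvm.boot", boot), ("kvm.shutdown", shutdown)])
```

Anything beyond this one-line-per-test view (full logs, timestamps, process output) goes to the debug log file instead of the console.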

With this runner, it is possible to list the available tests

# ./run -t kvm --list-tests

Specify which tests to run

# ./run -t kvm --tests "boot shutdown"

And provide a specific qemu path to test against

# ./run -t kvm --qemu /path/to/my/qemu

Minimum guest image

In order to have a small guest to run tests on developers’ machines, we looked
into Buildroot and managed to build a very small guest for the virt tests.
However, concerns about properly managing the availability of the source code
used to build this guest, as well as ease of building and use, made us
reconsider and take another approach: using a minimal base Fedora system as
the minimal guest.

By tweaking a kickstart file used to install Fedora, we managed to get a guest
that is still small but more complete, easy to recreate, and free of
redistribution problems. We preferred that approach over Buildroot.


Although all those changes greatly improve the usability situation, there are
more areas we want to tackle:

  • Being able to run tests written in any language, provided they return 0 upon PASS and != 0 upon FAIL
  • Wrapping test functionality in scripts/language bindings, so people can use complex automation functionality in their own scripts in different languages

Getting started and contact info

If you are interested in running the virt tests, please clone the git repo:

git clone git://

Check the testsuite documentation

And let us know your findings. You can always get in contact with me and Cleber,
the maintainers of the suite:

lmr AT redhat DOT com
cleber AT redhat DOT com

Also, the virt tests have a development mailing list

Virt-test-devel AT redhat DOT com

To subscribe see:


Assembling a kernel test grid with autotest

This is a companion article to the presentation given during LinuxCon Europe 2012, held in Barcelona. Here’s a link to the slides.

The case for automated testing

It’s hard to keep track of problems in a fast moving target, especially a *very* fast moving target such as the Linux kernel. Thousands of commits get into the kernel git repos every week, and although the subsystem maintainers are very careful and competent, they can’t predict or catch every problem that might get into the tree. So the more testing we give patchsets proposed for Linux, the better.

But it’s difficult for the lone kernel developer to test all his/her patches systematically. In this article, we’ll discuss using the autotest project to assemble a test grid that kernel developers can use to test their own git repos, helping to catch bugs earlier and improving the overall quality of the kernel.

Why to avoid relying on user testing alone

As we can see from the diagram above, Linux’s development model forms an hourglass: it starts highly distributed, with contributions being concentrated in maintainer trees before merging into the development releases and then into mainline itself. It is vital to catch problems here in the neck of the hourglass, before they spread out to the distros. Even once a contribution hits mainline, it has not yet reached the general user population, most of whom are running distro kernels, which often lag mainline by many months.

In the Linux development model, each change is usually small, and attribution for each change is known, making it easy to track down the author once a problem is identified. It is clear that the earlier in the process we can identify a problem, the less impact the change will have, and the more targeted we can be in reporting and fixing it.

By making it easier to test code, we can encourage developers to run the tests before ever submitting the patch; currently such early testing is often not extensive or rigorous, where it is performed at all. Much developer effort is being wasted on bugs that are found later in the cycle when it is significantly less efficient to fix them.


Autotest is an open source project that was designed to test the Linux kernel, although it can be used to test userspace programs just fine. It’s released under the GPLv2, and comprises a number of tools and libraries that people can use to automate their testing workflows. You can find a complete overview of autotest and quick links to its resources (code, documentation, issue tracker):

On a high level description, the autotest structure can be represented by the following diagram:

Autotest is composed of the following main modules:

Autotest client: The engine that executes the tests (dir client). Each autotest test is a directory inside client/tests, represented by a python class that implements a minimal number of methods. The client is all you need if you are a single developer trying out autotest and executing some tests.
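To give an idea of the shape of such a test class, here is a standalone sketch modeled after the classic sleeptest. The stub base class here merely stands in for autotest.client.test.test so that the snippet runs on its own; a real test would subclass the autotest base class instead:

```python
import time


class test:
    """Stand-in for autotest.client.test.test, so this sketch runs standalone."""

    def __init__(self):
        self.executed = False

    def execute(self, *args, **kwargs):
        # The real framework does far more here (results dirs, profilers,
        # log collection); this just shows the calling order.
        self.setup()
        self.run_once(*args, **kwargs)
        self.executed = True

    def setup(self):
        # Optional: build/prepare step, runs before the test body.
        pass

    def run_once(self, *args, **kwargs):
        raise NotImplementedError


class sleeptest(test):
    """Roughly what client/tests/sleeptest does: sleep for N seconds."""
    version = 1

    def run_once(self, seconds=0.01):
        time.sleep(seconds)


t = sleeptest()
t.execute(seconds=0.01)
```

The framework discovers the class by the directory/module name and drives it through methods like these, which is why a new test can be so short.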

The autotest client executes “client side control files”, which are regular python programs that leverage the client API.

Autotest server: A program that copies the client to remote machines and controls their execution. The autotest server executes “server side control files”, which are also regular python programs, but leverage a higher level API, since the autotest server can control test execution on multiple machines.

If you want to perform slightly more complex tests involving more than one machine, you might want the autotest server.

Autotest database: For test grids, we need a way to store test results, and that is the purpose of the database component. This DB is used by the autotest scheduler and the frontends to store and visualize test results.

Autotest scheduler: For test grids, we need a utility that can schedule and trigger job execution on test machines; the autotest scheduler is that utility.

Autotest web frontend: For test grids, a web app, with its backend written in Django and its UI written in GWT, lets users trigger jobs and visualize test results.

Autotest command line interface: Alternatively, users can also use the autotest CLI, written in python.

Installing autotest

In order to install autotest, you’ll need:

  • 2 machines, one that will serve as an autotest server, another that will serve as an autotest client. You can use virtual machines, if you want to experiment first before deploying on your actual bare metal machines. In fact, this presentation’s demo will be made using 2 Fedora 17 virtual machines.
  • Download the server install script that comes with autotest:

As the script help will tell you, the distros on which this script is supposed
to work are:

  • Fedora 16
  • Fedora 17
  • RHEL 6.2
  • Ubuntu 12.04

Once downloaded, you can give the script permissions:

chmod +x

Then execute it, passing the autotest user password (-u) and the database
admin password (-d):

./ -u linuxcon -d linuxcon

The script is supposed to install any missing packages. For more information
on the install procedure (what it does, troubleshooting), see:

Adding initial configuration

Once you have the autotest server installed, you can go to the web interface’s admin area (top right corner, Admin) and add:

  • In the tab Labels: A platform label, say, x86_64
  • In the tab Hosts: A host, using the IP of the client machine you set up in the previous step

With this, we’re close to our goal.

Setting passwordless SSH between client and server

It’s important for the tests to work that you have an ssh connection from the
server machine to the client:

server machine @autotest -> client machine @root

So, log on to the server as autotest (you chose the password when running the
script). Make sure you create an ssh key:

ssh-keygen -t rsa

You may leave the passphrase empty. Then, copy the key to the client

ssh-copy-id root@your-client-ip

Type your client password and you’re done. Please verify that

ssh root@your-client-ip

logs you in directly as root on the client.

Set up cobbler if you have it

Just go to global_config.ini and add the appropriate cobbler install server
data (copied here as a reference):

# Install server type
type: cobbler
# URL for xmlrpc_server, such as
# XMLRPC user, in case the server requires authentication
# XMLRPC password, in case the server requires authentication
# Profile to install by default on your client machine
# Number of installation/reset attempts before failing it altogether
num_attempts: 2

Send your first job

Go to the “Create Job” tab on the web interface and create your first job with
a simple client side sleeptest, without a kernel install. You have to:

  • Specify job name
  • Check the ‘Sleeptest’ checkbox
  • In ‘Browse Hosts’, select your client
  • And that’s it. Hit the ‘Submit Job’ button

After less than a minute, you should have your results. Hopefully there won’t
be any major problems.

More sophisticated job

As we don’t want to run only ‘hello world’ type tests on a machine, soon you’ll
find yourself moving to more sophisticated jobs. Let’s talk about what a job is.

A job, in autotest terminology, is a description of the operations that will be
performed on a test machine. It is represented by a file, called a control file.
A control file in autotest may be of 2 types:

  • client side: Scope restricted to only the machine executing the tests.
  • server side: Broader scope, you can control multiple machines.

Example: here’s a client side control file used to build a kernel from a
git repo. It is somewhat simplified; you can have even more fine grained
control of the entire process:

patch_list = []        # optional list of patches to apply
kernel_config = None   # optional path to a kernel .config

def step_init():
    # register the step to run after the reboot triggered below
    job.next_step([step_test])

    repo_uri = 'git://'
    repo_base = '/nfs_mount/linux-2.6/'
    branch = 'master'
    commit = None

    repo = git.GitRepoHelper(uri=repo_uri, branch=branch,
                             commit=commit, base_uri=repo_base)
    kernel = job.kernel(repo.destination_dir)

    if patch_list:
        kernel.patch(*patch_list)
    if kernel_config:
        kernel.config(kernel_config)
    kernel.build()
    kernel.boot()    # reboots into the newly built kernel

def step_test():
    job.run_test('kernbench')

All the operations there are supposed to be executed on a single machine.
Note that step_init and step_test are part of what we call the ‘step engine’,
which helps handle testing where reboots are necessary. In this case, we build
a new kernel from a given git repo, reboot into this new kernel, and then
run the ‘kernbench’ autotest module.
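To illustrate how a step engine can survive reboots, here is a standalone sketch: the position of the next step is persisted before a reboot is triggered, and execution resumes from that record on the next invocation. The file name and function names are invented for illustration; this is not the autotest client’s actual code:

```python
import json
import os

STATE_FILE = "steps_state.json"


def load_next_step():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)["next_step"]
    return 0


def save_next_step(index):
    with open(STATE_FILE, "w") as f:
        json.dump({"next_step": index}, f)


def run_steps(steps):
    """Run steps from the recorded position. A step returning 'reboot'
    simulates rebooting the machine: we persist the position and stop
    this invocation; the next invocation resumes after it."""
    i = load_next_step()
    while i < len(steps):
        outcome = steps[i]()
        save_next_step(i + 1)
        if outcome == "reboot":
            return "rebooting"
        i += 1
    return "done"


log = []
steps = [
    lambda: log.append("build") or "reboot",  # build a kernel, then reboot
    lambda: log.append("kernbench"),          # runs after the 'reboot'
]

first = run_steps(steps)    # stops at the simulated reboot
second = run_steps(steps)   # 'after reboot', resumes at the next step
os.remove(STATE_FILE)
```

This is why the control file is split into step functions: each one is a resumption point that the engine can jump back to after the machine comes up again.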

Now, here’s an example of a server side control file that accomplishes the
same, except that it can also install the machine with a given cobbler profile:

control = '''
patch_list = []        # optional list of patches to apply
kernel_config = None   # optional path to a kernel .config

def step_init():
    # register the step to run after the reboot triggered below
    job.next_step([step_test])

    repo_uri = 'git://'
    repo_base = '/nfs_mount/linux-2.6/'
    branch = 'master'
    commit = None

    repo = git.GitRepoHelper(uri=repo_uri, branch=branch,
                             commit=commit, base_uri=repo_base)
    kernel = job.kernel(repo.destination_dir)

    if patch_list:
        kernel.patch(*patch_list)
    if kernel_config:
        kernel.config(kernel_config)
    kernel.build()
    kernel.boot()

def step_test():
    job.run_test('kernbench')
'''

def run(machine):
    host = hosts.create_host(machine)
    profile = 'Fedora17-x86_64'
    timeout = 3600
    at = autotest_remote.Autotest(host)
    # install the machine with the given cobbler profile, then run the
    # client side control file on it
    host.machine_install(profile=profile, timeout=timeout)
    at.run(control, host=host)

job.parallel_simple(run, machines)

We pretty much just run the client side control file inside the server side
scope, through the run() method of the autotest_remote.Autotest() object. The
idea here is to install the machine using cobbler, reboot it, and then execute
the client side control file. This article is not supposed to delve into the
details of the control file API; for more information, check our documentation
on control files:

Controlling your machine: Conmux and Cobbler

A big part of running a test grid is being able to remotely control and see
what’s going on with the nodes of your grid. In autotest, we use 2 tools to
control the following aspects of the machines:

  • Provisioning
  • Power cycle
  • Serial console

And those tools are:

  • Conmux: Lives inside the autotest tree; it’s a console multiplexer that provides console access through the ‘console’ command. You can install and configure your autotest server as the conmux server; see:

for more information. The idea is to:

  • Install conmux system wide
  • Place config files for your machines (see the examples dir on conmux dir)
  • Test your connection to the console/hardreset
  • Cobbler: A separate set of tools to provision machines; you can find more info at

Cobbler can power machines on/off and reset them, so if you don’t need
console control (your lab already provides console access), you can just
use cobbler. Configuration in autotest is simple:

  • Fill global_config.ini with the base xmlrpc URL and auth data
  • Fill in your systems in cobbler with IPs/hostnames matching the autotest DB
  • Import the distros you want to install
  • That should be it. Everything else can be handled by autotest, and you only have to use the control file APIs or the web interface to choose profiles to install.

Our limitations: Embedded

As a result of the scenario we had 7 years ago when autotest started, the
design was strongly based on the intel and ppc architectures, so facilities
like:

  • choosing the kernel to boot
  • provisioning the machine with a distro
  • controlling power and remote consoles

work for intel, powerpc, s390, among others, but not for embedded systems
using an SD card or other arrangements common in the embedded world. Autotest
can still be used if the facilities above are not needed, and this is an area
where we need help from contributors to implement the missing parts.

Recent work in autotest

Over the last year, we’ve done a lot of work to separate core framework
development from test module development, and we reorganized the development
model. Now we have 3 git repos:

  • autotest (core)
  • autotest-client-tests
  • autotest-server-tests

Each one has a master branch with the current stable code; a next branch,
which is the integration branch, regularly QA’ed to catch problems as soon as
possible; and release branches for longer term maintenance. Other things
worth mentioning:

  • Major namespace cleanup: We bit the bullet and cleaned out a lot of cruft from the early autotest days
  • Packaging work: Autotest programs can now run from a system wide install, allowing distros to package and ship it
  • A stand alone RPC client, which will be the base for other applications that use the autotest RPC server to submit jobs and view and query results


We’re working to fill the gaps and make autotest better. Things we plan to do
in the coming months:

  • Advanced machine scheduling (per machine hardware)
  • Evolve RPC applications
  • Better embedded support
  • Model automated bisecting

You’re welcome to join us!

Getting started and contact info

Autotest code is just one git clone away:

git clone git://

Check the testsuite documentation

And let us know your findings. Please refer to:

If you want to subscribe to our mailing list/join our IRC channel.


Autotest 0.14.3 released!

Hi everyone,

A new bugfix release of autotest is out! This release contains a
number of fixes for problems reported by packagers. In particular:

* Fixed problems with scheduler running on an rpm install
* Fixed database problems on an rpm install
* Fixed autotest-remote issues running on an rpm install

If you want to see a complete changelog, make sure you check out:…0.14.3

Check our website:

As always, report bugs in our issue tracking system:

Happy hacking and testing!