Lucas 34.0 released!

Hello boys and girls: on 04/04, Lucas 34.0 was released! Without further ado, here are the new features of the project:

The offspring project called Victoria had its 11th release last December and is starting to show the first bugs of a project entering its teenage cycle.

The dating subsystem is about to complete its 3rd year, being one of the highest quality subsystems to date!

Many of the long-standing bugs related to work-life balance were fixed, adding to the extended maintenance initiative.

Once again, thanks to all the wonderful people in project management, development and QA! If it weren’t for you, this project wouldn’t ever leave its first prototypes. Cheers!

Remove rpm packages with failed scriptlets execution

Have you ever been in that awkward spot where an rpm package had a scriptlet error, and you can’t remove it from your system, ever? I run into this from time to time, but always end up forgetting how it’s done. Let’s change that by keeping it on this blog (and incidentally helping other people):

rpm -e --noscripts --justdb [packagename-version]

That was easy :)
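
If you want to double-check before touching the rpm database, the standard --test flag previews the transaction without changing anything. In this sketch, somepackage-1.0 is a placeholder for the actual name-version of the broken package:

```shell
# --test makes rpm only preview the removal; nothing is changed.
# "somepackage-1.0" is a placeholder for the real package name-version.
if command -v rpm >/dev/null 2>&1; then
  rpm -e --test --noscripts --justdb somepackage-1.0 || true
fi
result="dry run finished"
echo "$result"
```

Once the preview looks right, drop --test and run the command above for real.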

Lucas version 33.0 released!

This is an auspicious post, given it is my 100th post on this blog. I thought I wouldn’t make it this far, but anyway…

It’s that time of the year again! Lucas 33.0 was just released. This mature yet ever-evolving project reaches its 33rd milestone. Among the new features and news we have:

  • The offspring project called Victoria had its 10th release last December and is expected to reach teenage status soon :) She is growing to be a lovely, caring and compassionate offspring project, which makes her parent project very happy!
  • The parenting and housekeeping subsystems were once again improved, in the beautiful Piracicaba, which is proving to be an excellent hosting city for this project. The dating subsystem is about to have a release 2.0 (and I wish many more releases to come).
  • 33.0 continues to be very solid and reliable, due to the extended maintenance initiative. Let’s keep the extended maintenance, by hitting the gym, healthy eating and plenty of sleep!

There has been some speculation that the project team is moving to a rolling release, but that’s all it is: speculation. The yearly releases are expected to continue for the foreseeable future, since they are a meaningful measure of human progress ;) It might be that in the future we’ll all get bored with the release notes, but so far, it’s been a lot of fun!

Autotest 0.15.0 released!

Autotest 0.15.0 is a new major release of autotest! The goal is to deliver the latest advances in autotest while providing stable ground for groups and organizations adopting it, such as distro packagers and newcomers.

For the impatient

Check out our main web page at:

http://autotest.github.com/

Direct download link:

https://github.com/autotest/autotest/archive/autotest-0.15.0.tar.gz

Now that GitHub has removed arbitrary file uploads, we will release directly from git tags. The tags will be signed with my GPG key. So get your fresh copy of autotest (remember, the tarball does not contain the tests anymore; those must be picked up from the new test repos).
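
Since the release tags are signed, you can verify one from inside a clone with the standard git tag -v command (it needs the maintainer’s public GPG key in your keyring). This sketch skips the check gracefully when run outside an autotest clone:

```shell
# Verify the signed release tag when it is available; otherwise just skip.
if git rev-parse -q --verify autotest-0.15.0 >/dev/null 2>&1; then
  git tag -v autotest-0.15.0 || true   # shows the GPG signature status
  msg="tag checked"
else
  msg="tag not found here; run this inside an autotest clone"
fi
echo "$msg"
```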

Changes

Tests modules split

Since test modules are fairly independent from the core framework, they have been split into their own new git repos, and were added to autotest as git submodules. Don’t worry: if you want the autotest repo with full tests, you only need to execute:

git clone --recursive git://github.com/autotest/autotest.git

The new repositories for the client and server tests:

https://github.com/autotest/autotest-client-tests

https://github.com/autotest/autotest-server-tests

API cleanup

Following the example of autotest 0.14.0, 0.15.0 brings more API cleanups.

1) global_config -> settings

The library global_config was renamed to settings. Here’s how you would access settings on autotest 0.14.X:

from autotest.client.shared import global_config

c = global_config.global_config
c.get_config_value("mysection", "mykey")

This is how the same lookup looks on autotest 0.15.0:

from autotest.client.shared.settings import settings

settings.get_value("mysection", "mykey")

As tests usually have no business with the autotest settings, this means pretty much no updates are required from test authors.
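
The shape of that change is easy to mimic with nothing but the standard library. The sketch below is an illustration only (plain Python configparser, not autotest code): a module-level settings object with a get_value method, like the one 0.15.0 exposes:

```python
# Illustration of the settings-object pattern using only the stdlib;
# this mimics the shape of the autotest API, not its real implementation.
from configparser import ConfigParser

class Settings:
    """A flat settings object exposing get_value(section, key)."""
    def __init__(self):
        self._parser = ConfigParser()
        # In autotest the values come from a config file; inlined here.
        self._parser.read_string("[mysection]\nmykey = myvalue\n")

    def get_value(self, section, key):
        return self._parser.get(section, key)

settings = Settings()  # module-level singleton, imported by callers
print(settings.get_value("mysection", "mykey"))  # -> myvalue
```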

Machine installs from the autotest web interface

In autotest 0.14, we introduced preliminary integration between autotest and the Cobbler install server (http://cobbler.github.com/). However, little of that integration was visible if you were using the web interface. Now it is possible to select which Cobbler profiles you want to use on your test machines from the web interface.

DB migration moved to Django South

When the autotest RPC server application was developed, there was already database code in place. In order to move away from having two ways to access the database, this version of autotest introduces a new migration system, based on Django South (http://south.aeracode.org/).

For people looking to upgrade the database to the latest version, we’ve put up a procedure here:

https://github.com/autotest/autotest/wiki/MigrateDatabaseToAutotest0.15

Other changes

* Support for creation of Debian packages
* Simplified module import logic
* Simplified unittest execution
* Improved install scripts. Now it is possible to install autotest from an arbitrary git repo and branch, which will make it easier to test and validate changes that involve the rpc client/server, scheduler, among others.

What’s next?

We believe the foundation for autotest 1.0 is already in place, so the next release will be autotest 1.0, bringing polish and bug fixing to the framework developed over the previous six years.

You can see the issues open for the Milestone 1.0.0 here:

https://github.com/autotest/autotest/issues/milestones

and here:

https://github.com/autotest/autotest/issues?milestone=2&state=open

For autotest 1.0 we plan on upgrading the stack to the latest versions
of the foundation technologies involved in the server operation, and
package generation:

* Update the web interface (RPC client) to the latest GWT

http://google-web-toolkit.googlecode.com/files/gwt-2.5.1.zip

* Update the RPC server to the latest Django

https://www.djangoproject.com/download/1.5/tarball/

* Functional server packages for Debian
* Automated generation of both rpm and debian packages
* Functional regression test suite for server installs, working out of the box

We also want to improve the usefulness and usability of our existing tools:

* Introduce a ‘machine reserve’ mode, where you may provision a bare metal machine with the OS of your choice and use the installed machine for a period of time, in a convenient way.

* Allow scheduling jobs based on machine hardware capabilities, for example:
  - Number of CPUs
  - Available memory
  - Virtualization support
  among others

* Allow autotest to function well as an alternative test harness for Beaker (http://beaker-project.org/).

* Support for parameterized jobs (be able to set parameters in tests, and enable people to simply pass parameters to those tests in the CLI/web interface).

FUDCon 2013

One more great FUDCon is over, and my goals for the conference were accomplished:

  • Discussed distribution-wide and package-specific test automation with the Fedora QA folks. I also presented a refresher on the Autotest architecture and recent improvements.
  • Configured daily Fedora virtualization Autotest jobs, as an auxiliary tool for Cole to detect problems in virt-related Fedora packages. We also went through the basics of the virtualization test suite, which can be a useful tool for both Fedora virt developers and users.
  • Helped David to get started with writing an Autotest wrapper for the pygobject 3 test suite.

All of these activities deserve a more thorough post, for any of those interested in test automation in Fedora.

Of course, I also had a great time with all my talented (and sometimes goofy) friends who work hard to bring Fedora to you. I took a couple of photos as well, and I guess there’ll be more to come.

Happy 2013

I wish I had a long, well thought post to write about 2012, but I don’t :) Some things that come to my mind looking at the year retrospectively:

* I’ve completed 1 year living in Piracicaba.
* Quality of life was improved all round.
* Work is great. Autotest and virt-tests had many improvements this year.
* We (myself and Silvia) visited Europe for the first time, spending time in the beautiful Barcelona.

It was yet another excellent year, and I remain optimistic that we’ll keep improving and, as someone already said, making a dent in our little corner of the universe.

Thank you to all my friends for being, well, awesome, and I hope to share yet another great year with you all.

KVM autotest: It is not just a QA tool anymore

This is a companion article for the presentation given during KVM Forum 2012, held in Barcelona. Here’s the link with the slides.

The juggler problem

The biggest problem we have with regard to developing test automation, and achieving reasonable test code coverage of KVM code, is similar to the problems found in many projects and organizations, and I believe it could be nicknamed ‘the juggler problem’: software developers have to juggle a large number of daily tasks, ranging from developing new features to reading and debugging existing code.

Then, there’s testing.

It’s hard to write test cases when you have to implement that brand new algorithm you figured out last week, work on 10 new bugs found by customers, and review 3 large patchsets sent to the upstream mailing list. If the test tools are difficult to use, they’ll end up being dropped on the floor.

Therefore, to avoid the juggler problem, testing tools have to be easy:

  1. To understand;
  2. To modify;
  3. To access new code functionality (do useful stuff with the tests).

If the testing infrastructure does not have these properties, it is likely that developers will start rolling their own testing programs. Since some testing is better than no testing, this is OK.

The problem is when you want to do more complex things.

You’ll start writing functions to do things like migration, hotplug and virtio console handling, then encapsulate more functionality on top of them. You’ll slowly re-implement the test frameworks. You’ll keep yourself alienated from what your QA team has been developing, and won’t benefit from the code that already automates complicated parts of your test workflow.

The solution here is to make the test tools have the desired properties. In KVM, we do have a codebase, under active development, that covers a lot of testing scenarios, ranging from functional to regression testing, and that is used to run tests against several branches of the KVM tools:

  • Upstream code
  • RHEL 5 product
  • RHEL 6 product
  • RHEV product

That codebase was known as KVM Autotest. These days the name is not entirely appropriate, since it covers not only KVM but also other virtualization backends, such as libvirt. It can define and handle multiple VMs, disks and network cards, run VMs, migrate them, hotplug NICs, and put VMs in S3 mode, among other things. It’s now known as ‘virt tests’.

Even with all these features, this codebase living inside the repository of a larger test framework kept general core developer adherence low, and with that came the perception that the test framework is a QA-only affair. This is certainly a suboptimal arrangement, and for a long time we’ve been looking for a solution to it.

Resolution features

How do we keep the QA functionality of the virt tests? By QA functionality we mean:

  • Comprehensive tests, that involve guest OS install
  • Test jobs that involve kernel builds, qemu builds, and installs of Windows and Linux guests

And still provide a fast, unittest-like way of executing the tests?

The answer we found was:

  • Leverage autotest’s modularity, and make the virt tests self-contained in a single module, separate from the rest of the framework
  • Implement a minimal test harness that allows executing virt tests outside autotest
  • Create a test runner that uses this minimal test harness and outputs only the bare minimum of information, similar to a unittest execution
  • Provide a minimal guest image ready to be used, in order to save bandwidth and time for folks wanting to try the tests.

Modularity

About 18 months ago, when people started asking for KVM autotest to support other virtualization backends, the solution we devised was to:

  • Create a shared autotest library that any test could use
  • Re-factor the kvm test to use this library
  • Implement other tests that can use this library. That’s how the libvirt test was born

While the approach is reasonable and worked for the purposes mentioned, it implied that changes to the virt tests meant changes to autotest core, a process that goes against giving autonomy to developers wanting to implement tests outside the autotest tree.

So, in order to separate tests from core autotest, the virt tests were all
restructured to be a single test module of autotest. Then it was possible to
separate them from core autotest cleanly.

Minimal test harness

When executed by autotest, a test job is expressed as a control file, a list of operations to execute on an autotest client. Each test runs a lot of code before and after execution: autotest housekeeping, log collection, among other things. For people for whom autotest is unnecessary, we had to make the code execute outside that harness.

So we implemented only the bare minimum methods required to have the code run outside autotest, and created a small test harness. As a proof of concept, we did extract all the autotest code needed to run the virt tests outside autotest. Since we had to extract way too much code to make things work (about 2k lines), we decided to keep a light dependency on autotest, which can be fulfilled by installing an autotest rpm, available in the Fedora repo (about 2 MB of rpm data).
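
The harness idea itself fits in a few lines. The sketch below uses hypothetical names (none of this is the real virt-test code): each test is just a callable, and the runner prints one status line per test with its running time:

```python
# Minimal test-harness sketch (hypothetical names, not real virt-test code):
# run each test callable, time it, and print one PASS/FAIL line per test.
import time
import traceback

def run_tests(tests):
    """tests: list of (name, callable); returns {name: "PASS" | "FAIL"}."""
    results = {}
    for name, func in tests:
        start = time.time()
        try:
            func()
            status = "PASS"
        except Exception:
            traceback.print_exc()
            status = "FAIL"
        print("%s: %s (%.2f s)" % (name, status, time.time() - start))
        results[name] = status
    return results

if __name__ == "__main__":
    run_tests([("kvm.boot", lambda: None),
               ("kvm.shutdown", lambda: None)])
```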

With this harness, a test runner was implemented, displaying minimal info about what tests are running, their status and running time:

# ./run -t kvm
SETUP: PASS (0.23 s)
DEBUG LOG: /home/lmr/Code/virt-test.git/logs/run-2012-10-15-14.58.24/debug.log
TESTS: 2
kvm.virtio_blk.smp2.virtio_net.JeOS.17.64.boot: PASS (22.87 s)
kvm.virtio_blk.smp2.virtio_net.JeOS.17.64.shutdown: PASS (8.61 s)

With this runner, it is possible to list the available tests:

# ./run -t kvm --list-tests

Specify which tests to run:

# ./run -t kvm --tests "boot shutdown"

And provide a specific qemu path to test against:

# ./run -t kvm --qemu /path/to/my/qemu

The options can also be combined:

# ./run -t kvm --qemu /path/to/my/qemu --tests "boot shutdown"

Minimum guest image

In order to have a small guest for running tests on developers’ machines, we looked into Buildroot (http://buildroot.uclibc.org/) and managed to build a very small guest for the virt tests. However, concerns about how to properly manage the availability of the source code used to build this guest, as well as ease of build and use, made us reconsider and take another approach: using a minimal base Fedora system as the minimal guest.

By tweaking a kickstart file used to install Fedora, we managed to get a guest that is still small, but more complete, easy to recreate, and with no problems regarding redistribution. We preferred that approach over Buildroot.

Roadmap

Although all those changes greatly improve the usability situation, there are
more areas we want to tackle:

  • Being able to run tests written in any language, provided they return 0 upon PASS and != 0 upon FAIL.
  • Wrap test functionality in scripts/language bindings, so people can use complex automation functionality in their own scripts, using different languages.
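
That exit-code convention is simple enough to sketch. The wrapper below is hypothetical (not an autotest API): it runs any executable and turns its return code into a verdict:

```python
# Hypothetical wrapper for the "any language" roadmap item (not an autotest
# API): run an arbitrary executable; exit code 0 means PASS, anything else FAIL.
import subprocess
import sys

def run_external_test(cmd):
    """cmd: argv list for a test written in any language; returns PASS/FAIL."""
    proc = subprocess.run(cmd)
    return "PASS" if proc.returncode == 0 else "FAIL"

if __name__ == "__main__":
    # `true` and `false` stand in for tests written in any language.
    print(run_external_test(["true"]))   # PASS
    print(run_external_test(["false"]))  # FAIL
```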

Getting started and contact info

If you are interested in running the virt tests, please clone our virt tests
git repo:

git clone git://github.com/autotest/virt-test.git

Check the test suite documentation:

https://github.com/autotest/virt-test/wiki

And let us know your findings. You can always get in contact with me and Cleber,
the maintainers of the suite:

lmr AT redhat DOT com
cleber AT redhat DOT com

Also, the virt tests have a development mailing list:

Virt-test-devel AT redhat DOT com

To subscribe see:

http://www.redhat.com/mailman/listinfo/virt-test-devel

Thanks!