When running a test in Beaker, especially virt testing, we need to install
the OS/host/guest every time, which is time-consuming. If some OSes are frequently used, would it be possible to have them pre-provisioned, and just power on the machine (without a re-installation) when we need to test?
This bug tracks the discussion of that problem. The following is the previous discussion from the mailing list.
------8<------ copied from maillist ------8<------
----- "David Kovalsky" <email@example.com> wrote:
> On Tue, 3 Aug 2010 05:21:14 -0400 (EDT)
> Caspar Zhang <firstname.lastname@example.org> wrote:
> > Hi, I collected some ideas on new Beaker system from Kernel-QE,
> > are the details:
> > - Cai, Qian: Is it possible to decrease re-installation time? Since
> > installing a system in Beaker is always time-consuming, we could
> > have some frequently-used systems pre-provisioned; when testing is
> > needed, just power on and start running testcases. To keep the
> > system clean before each test run, we can design a recovery
> > mechanism, similar to caching. The "cache" algorithm for the Beaker
> > scheduler is based on previous usage data for the OS/host/guest,
> > and pre-provisions the OS/host/guest that is most likely to be used
> > in the future, then powers off to wait in the "cache". If there is
> > a request that specifically requires one of those systems but with
> > a different OS, or we are running low on the system pool, those
> > "cached" systems can be reclaimed to fulfill the immediate request.
> > This is especially important for virt. Right now you have to
> > re-install a host and then a guest before each test run inside the
> > guest; if a "cache" algorithm is supported, installation time will
> > be decreased.
> Note that soon the default will be 'default install', so the
> installation time will possibly increase. Would specifying minimum
> install in kickstart help you?
> Also, since we're going to have images for Windows, it shouldn't be
> hard to install a system, boot from PXE, save the image into some
> lookaside cache and use it later. Or LVM snapshots.
> To be clear - what is the typical reinstallation time you are
Since before every test we always need to re-install the system, a virt test requires installing at least two systems (host and guest). If a machine already has a pre-provisioned system on it, we can use it directly. When another test is based on the
same system, the machine doesn't need to be re-installed; we just recover the system to its initial state using a 'cache' algorithm, so the time spent on system installation can be decreased.
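The "recover to initial state" step could be modeled as a simple snapshot/restore, sketched below. This is illustrative only: in practice the snapshot would be an LVM snapshot or a disk image, and the class and method names here are invented, not Beaker APIs.

```python
import copy

class TestMachine:
    """Toy model of a machine whose clean state is snapshotted once
    after installation, then restored between test runs instead of
    re-installing the OS. Names are hypothetical, not Beaker APIs."""

    def __init__(self, distro):
        self.distro = distro
        self.state = {"packages": ["base"], "files": []}
        self._snapshot = None

    def snapshot_clean_state(self):
        # Taken once, right after installation finishes.
        self._snapshot = copy.deepcopy(self.state)

    def run_test(self):
        # A test run dirties the system.
        self.state["files"].append("testcase-output.log")

    def restore(self):
        # Instead of re-installing, roll back to the clean snapshot.
        self.state = copy.deepcopy(self._snapshot)


m = TestMachine("RHEL6.0-Snapshot-7")
m.snapshot_clean_state()
m.run_test()
m.restore()
print(m.state["files"])  # []
```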
I've CCed this to Cai Qian, maybe he can provide some further opinions.
One good idea in this space was to use RHEV-M to manage an image library for those use cases where provisioning was not required.
We need just such a process for windows and stable systems. But would this go as far as nightlies? We'd need a process to make images from nightly builds and make them available.
(In reply to comment #1)
> One good idea in this space was to use RHEV-M to manage an image library for
> those use cases where provisioning was not required.
> We need just such a process for windows and stable systems. But would this go
> as far as nightlies? We'd need a process to make images from nightly builds and
> make them available.
Thanks Kevin, RHEV-M may be useful, but for the hosts it would still cost too much time on installation.
Some additional information on the original idea -- the `cache' algorithm:
As we know, a CPU cache uses a hit/miss scheme for read operations: on a cache hit, the requested word is delivered to the CPU directly from the cache; on a cache miss, it has to be fetched from main memory first.
So we can regard the lab-controller/scheduler as a `cache'. Imagine this situation: a tester needs RHEL6.0-Snapshot-7 for his testing, so he provisions the system, executes the tests, and then returns the machine to the lab-controller. Afterwards, the lab-controller can assume that RHEL6.0-Snapshot-7 is the distro the next tester will want, so it provisions RHEL6.0-Snapshot-7 automatically, powers the machine off, and waits for the next tester.
If the next tester does want to test on RHEL6.0-Snapshot-7, a `cache hit' occurs: after he clicks the `provision' button, the lab-controller just needs to power on the machine and send an email telling him it is ready. If the next tester needs a different distro, a `cache miss' occurs, and the lab-controller provisions the required distro as normal.
This is a basic idea and I think it's realizable. Any comments?
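The hit/miss behavior described above can be sketched roughly as follows. This is a minimal illustration of the proposed algorithm; all the names (CachedScheduler, release, request) are hypothetical, not part of Beaker's actual scheduler.

```python
# Minimal sketch of the proposed "cache" behavior for the scheduler.
# All names here are hypothetical, not real Beaker APIs.

class CachedScheduler:
    def __init__(self):
        # machine -> distro currently pre-provisioned on it (powered off)
        self.cache = {}

    def release(self, machine, distro):
        # After a test run, keep the machine pre-provisioned with the
        # same distro, assuming the next tester will want it too.
        self.cache[machine] = distro

    def request(self, machine, distro):
        # Returns "hit" (just power on) or "miss" (full re-install).
        if self.cache.get(machine) == distro:
            del self.cache[machine]
            return "hit"    # power on, mail the tester: machine ready
        self.cache.pop(machine, None)
        return "miss"       # reclaim and provision the distro as normal


scheduler = CachedScheduler()
scheduler.release("host1", "RHEL6.0-Snapshot-7")
print(scheduler.request("host1", "RHEL6.0-Snapshot-7"))  # hit
scheduler.release("host1", "RHEL6.0-Snapshot-7")
print(scheduler.request("host1", "RHEL5.5"))             # miss
```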
Bulk reassignment of issues as Bill has moved to another team.