Bug 1043789

Summary: Create an automated test for a full Beaker provisioning cycle & multihost tests
Product: [Retired] Beaker
Reporter: Nick Coghlan <ncoghlan>
Component: tests
Assignee: beaker-dev-list
Status: CLOSED WONTFIX
QA Contact: tools-bugs <tools-bugs>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: develop
CC: dcallagh, ebaak, qwan, rjoost, tools-bugs
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-07-26 05:04:02 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Nick Coghlan 2013-12-17 08:06:08 UTC
The virtual Fedora quickstart guide shows that it is possible to create a pure virtual Beaker instance inside a private libvirt network:

https://beaker-project.org/dev/guide/virtual-fedora/

The current Beaker dogfood task *doesn't* perform end-to-end testing of a full Beaker provisioning cycle - it runs the integration test suite on a different version of Beaker, but can't, for example, test a new harness release.

This proposal is to create a Beaker task that uses libvirt directly (not Beaker's normal guest provisioning capabilities) to:

1. Create a combined Beaker server/lab controller and a couple of test systems (as described in the virtual Fedora quickstart)
2. Add the standard tasks and a supported distro tree
3. Manually reserve and return a system (ensuring ssh access to the provisioned system works)
4. Automatically schedule a multihost job and ensure it completes successfully.
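For illustration, the cycle above could be driven from inside the proposed task roughly as in the following shell sketch. All hostnames, mirror URLs, paths, and the job ID are illustrative assumptions; the `virt-install` invocations follow the spirit of the virtual Fedora quickstart, and the `bkr`/`beaker-import` commands are the standard Beaker client and lab controller tools. This depends on external infrastructure, so it is a sketch rather than a runnable script.

```shell
#!/bin/sh
# Hypothetical sketch of the proposed task; names and URLs are assumptions.
set -e

# 1. Stand up the all-in-one server/lab controller and two test VMs
#    on a private libvirt network (per the virtual Fedora quickstart).
virt-install --name beaker-devel --memory 4096 --disk size=20 \
    --network network=beaker \
    --location http://mirror.example.com/fedora/os/
virt-install --name test-vm-1 --memory 1024 --disk size=10 \
    --network network=beaker --pxe
virt-install --name test-vm-2 --memory 1024 --disk size=10 \
    --network network=beaker --pxe

# 2. Populate the new instance with the standard tasks and a distro tree.
bkr task-add /path/to/distribution-install.rpm
beaker-import http://mirror.example.com/fedora/os/   # run on the lab controller

# 3. Manually reserve a system, confirm ssh access works, then return it.
bkr system-reserve test-vm-1.beaker.example.com
ssh root@test-vm-1.beaker.example.com true
bkr system-release test-vm-1.beaker.example.com

# 4. Submit a multihost job and check its results.
bkr job-submit multihost-job.xml    # prints the new job ID, e.g. J:1
bkr job-results J:1                 # inspect the result XML for Pass/Fail
```

The key design point in the proposal is that step 1 uses libvirt directly rather than Beaker's own guest provisioning, so the instance under test is built from scratch and the whole provisioning path (including a new harness) is exercised end to end.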

Comment 1 Dan Callaghan 2017-07-26 05:04:02 UTC
I am inclined to close this as WONTFIX, because I think that the approach suggested here (create an entire virtualised Beaker lab including test machines and assert that they can be provisioned) would be costly to implement, and does not necessarily give us any better coverage than our current two-sided approach, which is:

* the Python-level unit test + integration test suite in Beaker's source tree, which is run inside dogfood, and exercises all parts of Beaker itself with varying levels of mockery but *no* interaction with real hardware

* the workflow-selftest jobs, including the Jenkins job to invoke them (see bug 954265, bug 1299722), which run the self-tests to exercise all of Beaker's functionality on *real* hardware with as many different distro/arch combinations as possible