Description of problem:
When running guest testing, the host's console.log contains data only while the host recipe is running. So if the host panics after the last task on the host has finished, you can't find out why the host panicked. This is presumably because the lab controller stops updating console.log on the scheduler once the host recipe finishes. The lab controller can still tell there was a panic, and it will set the recipe result to "panic".

Version-Release number of selected component (if applicable):
0.8.2

How reproducible:
100%

Steps to Reproduce:
1. Run a recipe set with host + guest.
2. Install the host; when the last task on the host finishes, crash the host.

Actual results:
The panic message is not in console.log.

Expected results:
console.log for the host should contain all output while the recipe set is running.

Additional info:
To make this as seamless as possible I could modify virt/start_stop to do this. Then no additional task would be required.
(In reply to comment #7)
> To make this as seamless as possible I could modify virt/start_stop to do
> this. Then no additional task would be required.

I am guessing that the workflow of the virt/start_stop test will be something like:
-- find the guest's recipeid from the LC
-- start the guest
-- keep querying the LC about the guest's state and wait until it's in a finished state
-- shut down the guest.

That's doable I guess, however everyone that uses /virt/start will have to change to /virt/start_stop, and it won't be possible to have multiple guests running simultaneously.
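The "keep querying the LC" step above could be sketched as a simple polling loop. This is only an illustration, not Beaker code: `poll_status` stands in for whatever lab-controller query is actually used (that API is not shown in this thread), and the status names mirror Beaker's finished recipe states.

```python
import time

def wait_for_recipe_finished(poll_status, recipe_id, interval=30, timeout=None):
    """Poll until the guest recipe reaches a finished state.

    poll_status is a caller-supplied callable (hypothetical stand-in
    for the real LC query) returning the recipe's status string.
    """
    waited = 0
    while True:
        status = poll_status(recipe_id)
        if status in ("Completed", "Aborted", "Cancelled"):
            return status
        if timeout is not None and waited >= timeout:
            raise TimeoutError("recipe %s still %s after %ss"
                               % (recipe_id, status, waited))
        time.sleep(interval)
        waited += interval

# Demo with a stub standing in for the LC query:
_states = iter(["Running", "Running", "Completed"])
print(wait_for_recipe_finished(lambda rid: next(_states), 1234, interval=0))
# -> prints Completed
```

In the real test, the loop would run between `virsh start` and `virsh shutdown` of the guest.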
We need to have multiple guests running simultaneously.
(In reply to comment #9)
> (In reply to comment #7)
> > To make this as seamless as possible I could modify virt/start_stop to do
> > this. Then no additional task would be required.
>
> I am guessing that the workflow of the virt/start_stop test will be
> something like:
> -- find the guest's recipeid from the LC
> -- start the guest
> -- keep querying the LC about the guest's state and wait until it's in a
> finished state
> -- shut down the guest.
>
> That's doable I guess, however everyone that uses /virt/start will have to
> change to /virt/start_stop, and it won't be possible to have multiple guests
> running simultaneously.

Why would it be limited to one guest running at a time?
(In reply to comment #11)
> (In reply to comment #9)
> > (In reply to comment #7)
> > > To make this as seamless as possible I could modify virt/start_stop to
> > > do this. Then no additional task would be required.
> >
> > I am guessing that the workflow of the virt/start_stop test will be
> > something like:
> > -- find the guest's recipeid from the LC
> > -- start the guest
> > -- keep querying the LC about the guest's state and wait until it's in a
> > finished state
> > -- shut down the guest.
> >
> > That's doable I guess, however everyone that uses /virt/start will have
> > to change to /virt/start_stop, and it won't be possible to have multiple
> > guests running simultaneously.
>
> Why would it be limited to one guest running at a time?

In its current setup, it would be. Right now, it just loops through all the guests and does each one of them one by one, synchronously. It can definitely be rewritten to do this asynchronously though, since virsh start is an asynchronous operation anyway. I guess we could do:
-- virsh start $guest
-- register $guest with some sort of monitoring process to query the LC about the $guest's state
-- move on to the next guest
-- when the monitoring process finds out that all the guests are in a finished state, finish the job.

However, then EVERY test would be running the guests simultaneously. This might not be desirable; the host might not have enough resources to run all the guests at once. Pick your favorite poison ;)
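The asynchronous variant described above (start every guest up front, then monitor them all) could look roughly like this. Again a hypothetical sketch, not code from the virt tasks: `start_guest` would wrap `virsh start <name>` and `poll_status` the LC query, but both are injected as callables here so the control flow can be shown without a live libvirt or lab controller.

```python
import time

def start_and_monitor_guests(guests, start_guest, poll_status, interval=30):
    """Start all guests, then poll until every guest recipe finishes.

    guests maps guest name -> recipe id.  start_guest and poll_status
    are hypothetical stand-ins for `virsh start` and the LC query.
    Returns a map of guest name -> final recipe status.
    """
    for name in guests:
        start_guest(name)      # virsh start is asynchronous, so this returns quickly
    pending = dict(guests)
    finished = {}
    while pending:
        for name, recipe_id in list(pending.items()):
            status = poll_status(recipe_id)
            if status in ("Completed", "Aborted", "Cancelled"):
                finished[name] = status
                del pending[name]
        if pending:
            time.sleep(interval)
    return finished
```

Note this sketch makes the trade-off from the comment explicit: every guest is started before any monitoring begins, so all guests run simultaneously and the host must have the resources for that.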
I found the bug that prevents console logs from being uploaded while the recipe set is active. Uploading the console logs for guests is a separate issue and should be filed as another bz.
http://gerrit.beaker-project.org/#/c/1272/
Beaker 0.9.2 has been released.