For some use cases, it's useful to use Beaker as a generic hardware pool, without taking advantage of the test execution and result reporting infrastructure. For such cases, it would be convenient if a recipe could skip installing a harness and tasks entirely, and move directly to the Reserved state. Marking as low priority, as this can currently be approximated by using a dummy command as the sole required task (as long as it's acceptable to install the test harness as part of provisioning the system).
(In reply to Nick Coghlan from comment #0)
> (as long as it's acceptable to install the test harness as part of
> provisioning the system).

Please note that this is not a requirement: the sole dummy task can currently fully replace /distribution/install, without ever requiring any harness.
(In reply to Jiri Jaburek from comment #1)
> Please note that this is not a requirement: the sole dummy task can
> currently fully replace /distribution/install, without ever requiring any
> harness.

Ah, you're right - I was forgetting that with a custom kickstart, you can also already skip the harness installation. Perhaps a better near-term answer would be to add some upstream documentation for the workaround? Or else ask-and-answer the question on Stack Overflow (with the beaker-testing tag: https://stackoverflow.com/questions/tagged/beaker-testing)
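For reference, the workaround discussed above can be sketched as a Beaker job XML along these lines. This is a hedged sketch, not an official recipe: the task name /examples/dummy and the distro family value are illustrative assumptions — substitute whatever no-op task actually exists in your task library.

```xml
<job>
  <whiteboard>Use Beaker as a plain hardware pool (workaround)</whiteboard>
  <recipeSet>
    <recipe>
      <distroRequires>
        <!-- illustrative distro selection; pick whatever you need -->
        <distro_family op="=" value="RedHatEnterpriseLinux7"/>
      </distroRequires>
      <hostRequires/>
      <!-- hypothetical no-op task used in place of /distribution/install;
           combined with a custom kickstart, harness installation can be
           skipped as well, as noted in the comments above -->
      <task name="/examples/dummy" role="STANDALONE"/>
    </recipe>
  </recipeSet>
</job>
```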
*** Bug 1295251 has been marked as a duplicate of this bug. ***
Has there been any progress on this (even design-wise)?

The issue with a dummy job is that it creates too wide a gap between manual system usage and automated testing.

Suppose you have a group of 5 people sharing 20 machines, passing them between each other as the nature of the work on the machines changes (i.e. debugging a userspace component leads to a bug in a kernel, requiring a different person to finish the job). However, you also want to use these machines for automated test execution (which may later transform into manual debugging), quite possibly using machines from other pools as well (because, i.e., s390x isn't amongst your 20 machines).

Having the 20 machines as Automated works for this purpose, but it's somewhat cumbersome, as you need to Loan them to yourself to be able to Reserve them (and un-Loan them afterwards). Dealing with job management on top doesn't help, as even the machine owner cannot cancel other people's recipes (and even when using job groups, it's a lot of extra work). Oh, and they need to be non-Machine so the scheduler won't use them for unrelated work.

Having them as Manual + using a custom scheduler for the automated execution works fine, except when you need systems outside of that 20-machine Manual pool (which we often do).

Therefore I believe a good middle ground for implementation would be:

- Give Loans the ability to be time-limited and automatically expire
- Teach the scheduler to provision time-limited loans instead of jobs
- Allow time-limited loans to be extended (like watchdog-extend)

Some of these features could easily be used for other use cases as well (i.e. sysadmins of a big shared pool could loan machines for specific business needs without extra ticketing systems tracking the loan expirations).
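To make the proposal above concrete, here is a minimal Python sketch of what a time-limited, extendable loan could look like. Everything here is hypothetical (the class name, fields, and default duration are not part of any Beaker API); it only illustrates the expire/extend semantics being requested.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TimedLoan:
    """Hypothetical time-limited loan, sketching the proposed behaviour."""
    recipient: str
    granted_at: datetime
    duration: timedelta = timedelta(hours=24)  # assumed default, for illustration

    @property
    def expires_at(self) -> datetime:
        return self.granted_at + self.duration

    def is_expired(self, now: datetime) -> bool:
        # The scheduler would return the system to the pool once this is True,
        # instead of relying on a human to remember to un-Loan it.
        return now >= self.expires_at

    def extend(self, extra: timedelta) -> None:
        # Analogous to extending a recipe watchdog (watchdog-extend).
        self.duration += extra
```

Usage would look like granting a loan, extending it while work is ongoing, and letting the scheduler reclaim the machine once `is_expired()` becomes true.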
Hello Jiri,

is this RFE still relevant? Because from the discussion, I understand that the dummy install is good enough, but that you are requesting 3 separate RFEs (1 main, 2 sub) that are not relevant to this one:

* Give Loans the ability to be time-limited and automatically expire
** Teach the scheduler to provision time-limited loans instead of jobs
** Allow time-limited loans to be extended (like watchdog-extend)
Hello, yes, the RFE is still relevant. The workaround works, but it is not a clean solution to the problem (of loans being a manual human-to-human process).