Description of problem:
After updating a VM twice with different payloads through the REST API, the VM fails to start.

Version-Release number of selected component (if applicable):
is12

Steps to Reproduce:
1. Create a VM and install RHEL 6.4.
2. Stop the VM and create a template.
3. Create a pool of 5 VMs from the template.
4. With the REST API, update one VM from the pool with a payload:

   PUT .../api/vms/da37a39c-807b-4150-b369-a8698a696acb/
   <vm>
     <payloads>
       <payload type="cdrom">
         <file name="file1.update.cdrom">
           <content>'some content cdrom 1!'</content>
         </file>
       </payload>
     </payloads>
   </vm>

5. Start the VM.
6. SSH into the VM and run: mkdir /tmp/cdrom_payload1; mount /dev/cdrom1 /tmp/cdrom_payload1;
7. file1.update.cdrom should be visible, with its content, in the folder /tmp/cdrom_payload1.
8. umount /dev/cdrom1
9. Check that the file doesn't exist anymore.
10. Stop the VM.
11. With the REST API, update the same VM with a new payload:

   PUT .../api/vms/da37a39c-807b-4150-b369-a8698a696acb/
   <vm>
     <payloads>
       <payload type="cdrom">
         <file name="file2.update.cdrom">
           <content>'some content cdrom 2!'</content>
         </file>
       </payload>
     </payloads>
   </vm>

12. Start the VM in order to repeat the same procedure.

Actual results:
The VM fails to start, both from the GUI and the REST API.

Expected results:
The VM should start, and the new payload should be available for mounting like the prior payload.

Additional info:
Logs attached.
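For reference, the payload-update calls in steps 4 and 11 can be sketched as below. This is a minimal illustration, not the exact client used in the report: the engine URL and the Authorization header value are placeholders, and the request is only constructed, not sent.

```python
import urllib.request
import xml.etree.ElementTree as ET

def build_payload_xml(filename, content):
    """Build the <vm><payloads>... body from steps 4 and 11."""
    vm = ET.Element("vm")
    payloads = ET.SubElement(vm, "payloads")
    payload = ET.SubElement(payloads, "payload", type="cdrom")
    f = ET.SubElement(payload, "file", name=filename)
    ET.SubElement(f, "content").text = content
    return ET.tostring(vm, encoding="unicode")

def build_put_request(engine_url, vm_id, body, auth_header):
    """Prepare (but do not send) the PUT against the REST API."""
    req = urllib.request.Request(
        url="%s/api/vms/%s" % (engine_url, vm_id),
        data=body.encode("utf-8"),
        method="PUT",
    )
    req.add_header("Content-Type", "application/xml")
    req.add_header("Authorization", auth_header)
    return req  # in a real run: urllib.request.urlopen(req)

# Body for the second update (step 11):
body = build_payload_xml("file2.update.cdrom", "'some content cdrom 2!'")
print(body)
```

Sending this once with file1.update.cdrom and again with file2.update.cdrom (with a VM stop/start in between) reproduces the two updates described above.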
Created attachment 792767 [details] logs - engine and vdsm
Hi,

I cannot reproduce the error. Looking at the log, it looks like you have a recurring error:
---------------------------
Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid 'da37a39c-807b-4150-b369-a8698a696acb
---------------------------
Could it be related to storage?

Please try to reproduce the problem with fewer steps (I don't think you need to create 5 VMs, only update the payload). If you do get the error again, please upload the logs and post the exact time the error occurs in the VDSM and engine logs (to make it easier to pinpoint the problem).
Hi,

You are right: after creating a new VM unrelated to the pool I created earlier, it worked fine.
(In reply to sefi litmanovich from comment #3)
> Hi,
>
> You are right: after creating a new VM unrelated to the pool I created
> earlier, it worked fine.

So can we close the bug?