Bug 1065916
| Summary: | HotUnplug VM disk after performing dd to it causes the VM to enter 'Paused' status | | |
|---|---|---|---|
| Product: | [Retired] oVirt | Reporter: | Raz Tamir <ratamir> |
| Component: | ovirt-engine-core | Assignee: | Allon Mureinik <amureini> |
| Status: | CLOSED NOTABUG | QA Contact: | bugs <bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.4 | CC: | abaron, acanan, acathrow, amureini, gklein, iheim, ratamir, yeylon |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | 3.5.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | storage | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-02-27 14:47:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | vdsm and engine logs | | |
--- Comment 1 ---

Sounds like the system tried to flush I/O to the now non-existent disk. I'm assuming you did not perform the dd operation with conv=direct? Although, if that is the case, I wouldn't know why the OS is OK with unplugging the disk without cleaning any pending I/O out of the page cache.

--- Comment 2 ---

Hi Ayal,

The IMPORTANT part that I realized when I tried to perform the dd operation with conv=noerror,sync is this re-description of the problem.

Setup:
- first VM with 1 disk and a snapshot
- second VM with 3 disks:
  * (diskA) - bootable disk
  * (diskB) - the snapshot disk of the first VM, attached to this VM
  * (diskC) - the disk that will be the dd destination

When performing dd from (diskB) to (diskC) and, after the operation is done, deactivating the destination disk (diskC), the VM enters 'Paused' status due to a storage I/O problem.

As to your question whether it is still relevant: it makes no difference whether I use conv=noerror,sync or not.

--- Comment 3 ---

(In reply to ratamir from comment #2)
> The IMPORTANT part that I realized when I tried to perform the dd operation
> with conv=noerror,sync is this re-description of the problem.
> [...]

This description bears almost no relation to the initial description. First, I have no idea how you arrived at 'conv=noerror,sync' or what its relevance here is. You need to either run dd if=... of=... bs=1M conv=direct or run 'sync' when the dd is finished (before you disconnect the disk).
The 'sync' command is very different from what conv=sync does, so that might be what threw you off. After running 'sync', detach the disk and check whether the VM pauses.

Second, you never mentioned anything about 2 VMs, only 1. It is not clear to me why you have 2 here. Does this reproduce if you simply have 1 VM with 2 *raw* disks, no snapshots?

--- Comment 4 ---

Hi Ayal,

For the first part of the comment, I used this:
# dd if=/dev/vdc of=/dev/vdb bs=1M conv=direct
and got this as output:
dd: invalid conversion: 'direct'

For the second part: as I wrote in comment 3, I realized that the scenario is different from the original one. The reason I have 2 VMs is that I need to reproduce a 'Backup API' scenario (i.e. a snapshot disk of the first VM is attached to a second VM), in which the attached disk is the source device for dd. The second VM has another disk as the dd destination device.

This doesn't reproduce with only 1 VM; the source device has to be an attached snapshot disk.

--- Comment 5 ---

(In reply to ratamir from comment #4)
> # dd if=/dev/vdc of=/dev/vdb bs=1M conv=direct
> dd: invalid conversion: 'direct'

Right, it's oflag=direct.

> [...]

OK, so:
1. Please run sync after the dd and retest.
2. Are you detaching the *source* disk (the one you're reading from) or the destination disk?

--- Comment 6 ---

OK, I will retest it. And I'm not detaching, I'm deactivating the destination disk.

Using 'oflag=direct', and syncing after the dd command, solved the problem.
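The thread turns on two easily confused things: conv=sync (a dd conversion that pads short input blocks with NULs, doing nothing to flush caches) and the sync command (which flushes dirty page-cache data to stable storage), plus the fact that direct I/O is requested with oflag=direct, not conv=direct. A minimal sketch of all three, using regular temp files in place of the guest's block devices (paths here are illustrative; GNU coreutils dd is assumed):

```shell
#!/bin/sh
src=$(mktemp); dst=$(mktemp); padded=$(mktemp)

printf 'abc' > "$src"                      # a 3-byte input file

# conv=sync pads each input block up to the input block size with NULs,
# so one 3-byte read becomes a full 512-byte output block:
dd if="$src" of="$padded" bs=512 conv=sync 2>/dev/null
wc -c < "$padded"                          # prints 512 on GNU coreutils

# 'conv=direct' is rejected -- direct I/O is a flag, not a conversion:
dd if="$src" of="$dst" bs=512 conv=direct 2>/dev/null || echo "conv=direct rejected"

# The sequence that resolved the bug: either write with oflag=direct
# (bypassing the page cache), or write normally and flush with sync(1)
# before deactivating the disk. The cached-write variant:
dd if="$src" of="$dst" bs=512 2>/dev/null
sync                                       # flush pending writes to storage
```

With the guest devices from the report, the equivalent would be `dd if=/dev/vdc of=/dev/vdb bs=1M oflag=direct`, or a plain dd followed by `sync`, before deactivating the disk.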
--- Description ---

Created attachment 864008 [details]: vdsm and engine logs

Description of problem:
Setup: a VM with 2 disks (one of them bootable). When performing dd from the first VM disk to the second VM disk and, after the operation is done, deactivating the destination disk, the VM enters 'Paused' status due to a storage I/O problem.

Version-Release number of selected component (if applicable):
vdsm-4.14.1-2.el6.x86_64
ovirt-engine-3.4.0-0.5.beta1.el6.noarch

How reproducible:
100%

Steps to Reproduce:
1. # dd if=/dev/vdb of=/dev/vdc (vdb - source, vdc - destination)
2. Deactivate the destination disk

Actual results:
The VM enters 'Paused' status

Expected results:
The VM's status shouldn't change

Additional info:
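The steps above can be re-run outside a VM with regular files standing in for /dev/vdb and /dev/vdc (purely illustrative; the temp-file names and sizes are not from the report):

```shell
#!/bin/sh
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1M count=2 2>/dev/null   # stand-in for /dev/vdb

# Step 1: plain dd, as in the report -- the writes land in the page cache.
dd if="$src" of="$dst" bs=1M 2>/dev/null

# Step 2 in the report deactivates the destination disk at this point,
# which raced with the kernel flushing those cached writes. Per the
# thread's conclusion, running 'sync' first (or using oflag=direct on
# the dd itself) avoids the pause.
sync

cmp -s "$src" "$dst" && echo "copy intact"
```
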