Bug 1470007
| Summary: | [RFE] [libvirt part] Add S3 PR support to qemu (similar to mpathpersist) | |||
|---|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Martin Tessun <mtessun> | |
| Component: | libvirt | Assignee: | Michal Privoznik <mprivozn> | |
| Status: | CLOSED ERRATA | QA Contact: | yisun | |
| Severity: | high | Docs Contact: | ||
| Priority: | urgent | |||
| Version: | 7.4 | CC: | aliang, boruvka.michal, chayang, coli, cww, dyuan, jbelka, jdenemar, jsuchane, juzhang, knoel, lmen, michen, mprivozn, pbonzini, virt-maint, xuwei, xuzhang | |
| Target Milestone: | rc | Keywords: | FutureFeature, Upstream | |
| Target Release: | 7.5 | |||
| Hardware: | Unspecified | |||
| OS: | Linux | |||
| Whiteboard: | ||||
| Fixed In Version: | libvirt-4.5.0-3.el7 | Doc Type: | Enhancement | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | 1464908 | |||
| : | 1519021 1533158 (view as bug list) | Environment: | ||
| Last Closed: | 2018-10-30 09:49:58 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | 1464908, 1484075, 1533158 | |||
| Bug Blocks: | 1111783, 1111784, 1457437, 1477664, 1519021, 1558125 | |||
|
Comment 1
Paolo Bonzini
2017-08-22 16:29:55 UTC
Patches for the basic operations were posted here: https://www.redhat.com/archives/libvir-list/2018-January/msg00584.html
What's missing is the counterpart for qemu emitting an event when the pr-helper disappears. That will be implemented once qemu implements it.

I've pushed patches upstream:
b0cd8045f0 qemu: Detect pr-manager-helper capability
eba6467fed qemu_hotplug: Hotunplug of reservations
3f968fda7b qemu_hotplug: Hotplug of reservations
053d9e30e7 qemu: Start PR daemon on domain startup
8be74af168 qemu: Introduce pr_helper to qemu.conf
d13179fe8d qemu_cgroup: Allow /dev/mapper/control for PR
5bf89434ff qemu_ns: Allow /dev/mapper/control for PR
13fe558fb4 qemu: Generate pr cmd line at startup
3c28602759 qemu: Introduce pr-manager-helper capability
c7c9dea0a0 qemuDomainDiskChangeSupported: Deny changing reservations
687730540e virstoragefile: Introduce virStoragePRDef

Also, a couple of fixes by Peter that are needed:
9b3cbd33a7 qemu: hotplug: Replace qemuDomainDiskNeedRemovePR
8bebb2b735 util: storage: Store PR manager alias in the definition
26c72a76dc conf: domain: Add helper to check whether a domain def requires use of PR
b4f113ee44 qemu: command: Move check whether PR manager object props need to be built
8f7c25ae39 qemu: process: Change semantics of functions starting PR daemon
b571e7bad0 qemu: Assign managed PR path when preparing storage source
e31f490458 util: storage: Allow passing <source> also for managed PR case
900fc66121 util: storage: Drop virStoragePRDefIsEnabled
e72b3f0bbe util: storage: Drop pointless 'enabled' form PR definition
1efda36765 qemu: Move validation of PR manager support
64e3ae0d51 qemu: command: Fix comment for qemuBuildPRManagerInfoProps
b5aec60cc4 qemu: alias: Allow passing alias of parent when generating PR manager alias
90309bcdc5 qemu: hotplug: Fix spacing around addition operator

v4.3.0-217-g9b3cbd33a7

Hi Michal,
I saw a new commit for this feature recently, as follows:
======
commit 105bcdde76bc8c64f2d9aca9db684186a5e96e63
Author: Peter Krempa <pkrempa>
Date: Thu May 31 15:18:20 2018 +0200
qemu: hotplug: Fix detach of disk with managed persistent reservations
In commit 8bebb2b735d I've refactored how the detach of disk with a
managed persistent reservations object is handled. After the commit if
any disk with a managed PR object would be removed libvirt would also
attempt to remove the shared 'pr-manager-helper' object potentially used
by other disks.
Thankfully this should not have practical impact as qemu should reject
deletion of the object if it was still used and the rest of the code is
correct
=====
Do we need to let this bz depend on that commit, too?
(In reply to yisun from comment #14)
> Do we need to let this bz depend on that commit, too?

Yes, we should backport that commit. I'll do that in a while.

There are some new patches upstream fixing this feature and introducing qemu event support. I strongly believe we need to backport them. Moving this back to ASSIGNED so that I can do the backport.

Ooops. Updated wrong bug. Rolling back.

Test with:
qemu-kvm-rhev-2.12.0-10.el7.x86_64
libvirt-4.5.0-6.virtcov.el7.x86_64
Positive tests were carried out in this comment and all passed. Extensive and negative tests will follow in another comment.
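For orientation, the two <reservations> configurations exercised in the scenarios below (taken from the disk XMLs used in this comment) are:

```xml
<!-- managed=yes: libvirt starts and manages a per-domain qemu-pr-helper -->
<source dev='/dev/sdf'>
  <reservations managed='yes'/>
</source>

<!-- managed=no: qemu connects to an admin-provided helper socket -->
<source dev='/dev/sdf'>
  <reservations managed='no'>
    <source type='unix' path='/var/run/qemu-pr-helper.sock' mode='client'/>
  </reservations>
</source>
```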
Env:
An iSCSI device is present on the host:
[root@ibm-x3250m5-04 ~]# lsscsi
...
[55:0:0:0] disk LIO-ORG device.logical- 4.0 /dev/sdf
The guest contains the following test shell script:
[root@localhost ~]# cat test.sh
#!/bin/sh
# Exercise a full persistent-reservation cycle on the given device:
# register a key, reserve, release, and unregister, reading the PR
# state back after each "out" command.
sg_persist --no-inquiry -v --out --register-ignore --param-sark 123aaa "$@"       # register key 0x123aaa
sg_persist --no-inquiry --in -k "$@"                                              # read registered keys
sg_persist --no-inquiry -v --out --reserve --param-rk 123aaa --prout-type 5 "$@"  # reserve (Write Exclusive, registrants only)
sg_persist --no-inquiry --in -r "$@"                                              # read current reservation
sg_persist --no-inquiry -v --out --release --param-rk 123aaa --prout-type 5 "$@"  # release the reservation
sg_persist --no-inquiry --in -r "$@"                                              # confirm no reservation held
sg_persist --no-inquiry -v --out --register --param-rk 123aaa --prout-type 5 "$@" # unregister (new key 0)
sg_persist --no-inquiry --in -k "$@"                                              # confirm no keys registered
Positive test:
Scenario 1: managed=yes, cold-plugged disk.
1. edit the VM XML and add the following disk:
[root@ibm-x3250m5-04 ~]# virsh edit vm2
<disk type='block' device='lun'>
<driver name='qemu' type='raw' error_policy='stop'/>
<source dev='/dev/sdf'>
<reservations managed='yes'/>
</source>
<target dev='vdb' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
2. start the vm
[root@ibm-x3250m5-04 ~]# virsh start vm2
Domain vm2 started
3. check qemu-pr-helper process started
[root@ibm-x3250m5-04 ~]# ps -ef | grep qemu-pr-helper
root 19263 1 0 06:29 ? 00:00:00 /usr/bin/qemu-pr-helper -k /var/lib/libvirt/qemu/domain-35-vm2/pr-helper0.sock -f /var/lib/libvirt/qemu/domain-35-vm2/pr-helper0.pid
4. in guest, check the pr function can be used
[root@localhost ~]# sh test.sh /dev/sdb
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
PR out: command (Register and ignore existing key) successful
PR generation=0x2, 1 registered reservation key follows:
0x123aaa
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
PR out: command (Reserve) successful
PR generation=0x2, Reservation follows:
Key=0x123aaa
scope: LU_SCOPE, type: Write Exclusive, registrants only
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
PR out: command (Release) successful
PR generation=0x2, there is NO reservation held
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
PR out: command (Register) successful
PR generation=0x2, there are NO registered reservation keys
Scenario 2: managed=yes, hot-plugged disk.
1. prepare the disk XML as follows:
[root@ibm-x3250m5-04 ~]# cat disk.xml
<disk type='block' device='lun'>
<driver name='qemu' type='raw' error_policy='stop'/>
<source dev='/dev/sdf'>
<reservations managed='yes'/>
</source>
<target dev='vdb' bus='scsi'/>
</disk>
2. [root@ibm-x3250m5-04 ~]# virsh attach-device vm2 disk.xml
Device attached successfully
3. check the qemu-pr-helper process started
[root@ibm-x3250m5-04 ~]# ps -ef | grep qemu-pr-helper | grep -v grep
root 21104 1 0 06:39 ? 00:00:00 /usr/bin/qemu-pr-helper -k /var/lib/libvirt/qemu/domain-37-vm2/pr-helper0.sock -f /var/lib/libvirt/qemu/domain-37-vm2/pr-helper0.pid
4. in guest, check the pr function can be used
[root@localhost ~]# sh test.sh /dev/sdb
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
PR out: command (Register and ignore existing key) successful
PR generation=0x3, 1 registered reservation key follows:
0x123aaa
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
PR out: command (Reserve) successful
PR generation=0x3, Reservation follows:
Key=0x123aaa
scope: LU_SCOPE, type: Write Exclusive, registrants only
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
PR out: command (Release) successful
PR generation=0x3, there is NO reservation held
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
PR out: command (Register) successful
PR generation=0x3, there are NO registered reservation keys
5. detach the device
[root@ibm-x3250m5-04 ~]# virsh detach-device vm2 disk.xml
Device detached successfully
[root@ibm-x3250m5-04 ~]# ps -ef | grep qemu-pr-helper | grep -v grep; echo $?
1
Scenario 3: managed=no, cold-plugged disk.
1. prepare the qemu-pr-helper socket manually
[root@ibm-x3250m5-04 ~]# systemctl start qemu-pr-helper
[root@ibm-x3250m5-04 ~]# ll /var/run/qemu-pr-helper.sock
srwxr-xr-x. 1 root root 0 Aug 21 07:34 /var/run/qemu-pr-helper.sock
2. add the following disk XML to the VM
<disk type='block' device='lun'>
<driver name='qemu' type='raw' error_policy='stop'/>
<source dev='/dev/sdf'>
<reservations managed='no'>
<source type='unix' path='/var/run/qemu-pr-helper.sock' mode='client'/>
</reservations>
</source>
<target dev='vdb' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
3. start the vm
[root@ibm-x3250m5-04 ~]# virsh start vm2
error: Failed to start domain vm2
2018-08-21T11:35:55.968622Z qemu-kvm: -object pr-manager-helper,id=pr-helper-scsi0-0-0-1,path=/var/run/qemu-pr-helper.sock: Failed to connect socket /var/run/qemu-pr-helper.sock: Permission denied
4. chown the socket
[root@ibm-x3250m5-04 ~]# cat /etc/libvirt/qemu.conf | grep dynamic_ownership
#dynamic_ownership = 1
[root@ibm-x3250m5-04 ~]# chown qemu:qemu /var/run/qemu-pr-helper.sock
5. start the vm
[root@ibm-x3250m5-04 ~]# virsh start vm2
Domain vm2 started
6. check the pr cmds in guest
[root@ibm-x3250m5-04 ~]# ssh 192.168.122.192
Warning: Permanently added '192.168.122.192' (ECDSA) to the list of known hosts.
root@192.168.122.192's password:
Last login: Tue Aug 21 19:36:53 2018
[root@localhost ~]# sh test.sh /dev/sdb
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
PR out: command (Register and ignore existing key) successful
PR generation=0x5, 1 registered reservation key follows:
0x123aaa
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
PR out: command (Reserve) successful
PR generation=0x5, Reservation follows:
Key=0x123aaa
scope: LU_SCOPE, type: Write Exclusive, registrants only
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
PR out: command (Release) successful
PR generation=0x5, there is NO reservation held
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
PR out: command (Register) successful
PR generation=0x5, there are NO registered reservation keys
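The manual chown in step 4 is lost if the socket is recreated. As a sketch of a more persistent setup (assuming qemu-pr-helper is socket-activated through a qemu-pr-helper.socket systemd unit; the drop-in path and values here are assumptions, not part of this test), the socket ownership could be set with a systemd drop-in:

```ini
# /etc/systemd/system/qemu-pr-helper.socket.d/perms.conf (hypothetical)
# Let the qemu user connect without a manual chown after each restart.
[Socket]
SocketUser=root
SocketGroup=qemu
SocketMode=0660
```

After `systemctl daemon-reload` and a restart of the socket unit, the socket would be created root:qemu with mode 0660.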
Scenario 4: managed=no, hot-plugged disk.
1. prepare the disk XML
[root@ibm-x3250m5-04 ~]# cat disk.xml
<disk type='block' device='lun'>
<driver name='qemu' type='raw' error_policy='stop'/>
<source dev='/dev/sdf'>
<reservations managed='no'>
<source type='unix' path='/var/run/qemu-pr-helper.sock' mode='client'/>
</reservations>
</source>
<target dev='vdb' bus='scsi'/>
</disk>
2.
[root@ibm-x3250m5-04 ~]# ps -ef | grep -v grep|grep qemu-pr-helper
root 24747 1 0 07:34 ? 00:00:00 /usr/bin/qemu-pr-helper
[root@ibm-x3250m5-04 ~]# ll /var/run/qemu-pr-helper.sock
srwxr-xr-x. 1 qemu qemu 0 Aug 21 07:34 /var/run/qemu-pr-helper.sock
3. [root@ibm-x3250m5-04 ~]# virsh attach-device vm2 disk.xml
Device attached successfully
4. in guest, check the pr cmds
[root@localhost ~]# sh test.sh /dev/sdb
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
PR out: command (Register and ignore existing key) successful
PR generation=0x6, 1 registered reservation key follows:
0x123aaa
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
PR out: command (Reserve) successful
PR generation=0x6, Reservation follows:
Key=0x123aaa
scope: LU_SCOPE, type: Write Exclusive, registrants only
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
PR out: command (Release) successful
PR generation=0x6, there is NO reservation held
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
PR out: command (Register) successful
PR generation=0x6, there are NO registered reservation keys
Scenario 5: Check that the qemu-pr-helper process is restarted when the VM issues PR commands
1. Have a VM with reservations enabled
[root@ibm-x3250m5-04 ~]# virsh start vm2
Domain vm2 started
[root@ibm-x3250m5-04 ~]# virsh dumpxml vm2
...
<disk type='block' device='lun'>
<driver name='qemu' type='raw' error_policy='stop'/>
<source dev='/dev/sdf'>
<reservations managed='yes'/>
</source>
<target dev='vdb' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
...
2. kill the qemu-pr-helper process
[root@ibm-x3250m5-04 ~]# ps -ef | grep -v grep|grep qemu-pr-helper
root 4867 1 0 08:12 ? 00:00:00 /usr/bin/qemu-pr-helper -k /var/lib/libvirt/qemu/domain-2-vm2/pr-helper0.sock -f /var/lib/libvirt/qemu/domain-2-vm2/pr-helper0.pid
[root@ibm-x3250m5-04 ~]# kill -9 4867
3. in the guest, issue pr cmds
[root@localhost ~]# sh test.sh /dev/sdb
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
PR out: command (Register and ignore existing key) successful
PR generation=0x1, 1 registered reservation key follows:
0x123aaa
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
PR out: command (Reserve) successful
PR generation=0x1, Reservation follows:
Key=0x123aaa
scope: LU_SCOPE, type: Write Exclusive, registrants only
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
PR out: command (Release) successful
PR generation=0x1, there is NO reservation held
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
PR out: command (Register) successful
PR generation=0x1, there are NO registered reservation keys
4. check that the qemu-pr-helper process was restarted
[root@ibm-x3250m5-04 ~]# ps -ef | grep -v grep|grep qemu-pr-helper
root 6985 1 0 08:29 ? 00:00:00 /usr/bin/qemu-pr-helper -k /var/lib/libvirt/qemu/domain-2-vm2/pr-helper0.sock -f /var/lib/libvirt/qemu/domain-2-vm2/pr-helper0.pid
Hi Michal,
In test Scenario 3 above, when I manually start a qemu-pr-helper daemon as root, the socket's ownership is root:root. The VM cannot use it unless I change the owner to qemu:qemu. Is this expected? Shouldn't libvirtd itself be able to temporarily change it to qemu:qemu? The relevant test steps are excerpted below:
managed=no, cold-plugged disk.
1. prepare qemu-pr-helper socket manually
[root@ibm-x3250m5-04 ~]# systemctl start qemu-pr-helper
[root@ibm-x3250m5-04 ~]# ll /var/run/qemu-pr-helper.sock
srwxr-xr-x. 1 root root 0 Aug 21 07:34 /var/run/qemu-pr-helper.sock
2. add following disk xml to vm
<disk type='block' device='lun'>
<driver name='qemu' type='raw' error_policy='stop'/>
<source dev='/dev/sdf'>
<reservations managed='no'>
<source type='unix' path='/var/run/qemu-pr-helper.sock' mode='client'/>
</reservations>
</source>
<target dev='vdb' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
3. start the vm
[root@ibm-x3250m5-04 ~]# virsh start vm2
error: Failed to start domain vm2
2018-08-21T11:35:55.968622Z qemu-kvm: -object pr-manager-helper,id=pr-helper-scsi0-0-0-1,path=/var/run/qemu-pr-helper.sock: Failed to connect socket /var/run/qemu-pr-helper.sock: Permission denied
4. chown the socket
[root@ibm-x3250m5-04 ~]# cat /etc/libvirt/qemu.conf | grep dynamic_ownership
#dynamic_ownership = 1
[root@ibm-x3250m5-04 ~]# chown qemu:qemu /var/run/qemu-pr-helper.sock
5. start the vm
[root@ibm-x3250m5-04 ~]# virsh start vm2
Domain vm2 started
(In reply to yisun from comment #21)
> Hi Michal,
> In test Scenario 3 above, when I manually start a qemu-pr-helper daemon as
> root, the socket's ownership is root:root. The VM cannot use it unless I
> change the owner to qemu:qemu. Is this expected?

I don't think that libvirt should touch the pr-helper socket if it did not start the process. It is the sysadmin's responsibility to set the socket permissions so that qemu can connect to it when using managed=no reservations. This is similar to the openvswitch socket; libvirt doesn't chown() that socket either.

The reasoning is that we can't be sure when qemu will connect to the socket. We could chown() it, let qemu connect, wait for the CONNECTED event and then chown() it back, but this wouldn't play nicely with another domain trying to do the same: those chown() calls would fight with each other. Another reason is that a sysadmin using unmanaged reservations is supposed to know what they are doing. This is probably what Paolo had in mind too when writing the unit files for pr-helper.

Extensive and negative test scenarios
Scenario 1: Try to issue PR commands without the disk's <reservations> element set
1. [root@ibm-x3250m5-04 ~]# virsh dumpxml vm2 | grep vdb -a5
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='block' device='lun'>
<driver name='qemu' type='raw' error_policy='stop'/>
<source dev='/dev/sdf'/>
<target dev='vdb' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
2. [root@ibm-x3250m5-04 ~]# virsh start vm2
Domain vm2 started
[root@ibm-x3250m5-04 ~]# ps -ef | grep qemu-pr-helper | grep -v grep; echo $?
1
3. in the guest, check the PR commands; they cannot be issued.
[root@localhost ~]# sh test.sh /dev/sdb
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
persistent reserve out: Fixed format, current; Sense key: Aborted Command
Additional sense: I/O process terminated
PR out: aborted command
PR in (Read keys): aborted command
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
persistent reserve out: Fixed format, current; Sense key: Aborted Command
Additional sense: I/O process terminated
PR out: aborted command
PR in (Read reservation): aborted command
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
persistent reserve out: Fixed format, current; Sense key: Aborted Command
Additional sense: I/O process terminated
PR out: aborted command
PR in (Read reservation): aborted command
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
persistent reserve out: Fixed format, current; Sense key: Aborted Command
Additional sense: I/O process terminated
PR out: aborted command
PR in (Read keys): aborted command
Scenario 2: Invalid reservation mode/type when managed=no
1. test with <source type='unix' path='/var/run/qemu-pr-helper.sock' mode='server'/>
[root@ibm-x3250m5-04 ~]# virsh edit vm2
error: XML error: unsupported connection mode for <reservations/>: server
Failed. Try again? [y,n,i,f,?]:
2. test with <source type='unix' path='/var/run/qemu-pr-helper.sock' mode='haha'/>
[root@ibm-x3250m5-04 ~]# virsh edit vm2
error: XML document failed to validate against schema: Unable to validate doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content
Failed. Try again? [y,n,i,f,?]:
3. hot-plug the above disk XML
[root@ibm-x3250m5-04 ~]# virsh attach-device vm2 disk.xml
error: Failed to attach device from disk.xml
error: XML error: unsupported connection mode for <reservations/>: haha
[root@ibm-x3250m5-04 ~]# vim disk.xml
[root@ibm-x3250m5-04 ~]# virsh attach-device vm2 disk.xml
error: Failed to attach device from disk.xml
error: XML error: unsupported connection mode for <reservations/>: server
4. An invalid type='non_unix' also triggers an XML validation failure.
<source type='non_unix' path='/var/run/qemu-pr-helper.sock' mode='client'/>
Scenario 3: Use a local block device as the source
1. Have a USB disk on the host
root@localhost ~ ## lsscsi
...
[9:0:0:0] disk SanDisk Cruzer Blade 1.26 /dev/sde
2. add it to the VM and start the VM
#virsh dumpxml vm2
...
<disk type='block' device='lun'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sde'>
<reservations managed='yes'/>
</source>
<backingStore/>
<target dev='vdb' bus='scsi'/>
<alias name='scsi0-0-0-1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
...
3. issue PR commands in the guest; they should not work
[root@localhost ~]# sh test.sh /dev/sda
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
persistent reserve out: Fixed format, current; Sense key: Illegal Request
Additional sense: Invalid command operation code
Info fld=0x0 [0]
PR out:, command not supported
PR in (Read keys): command not supported
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
persistent reserve out: Fixed format, current; Sense key: Illegal Request
Additional sense: Invalid command operation code
Info fld=0x0 [0]
PR out:, command not supported
PR in (Read reservation): command not supported
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
persistent reserve out: Fixed format, current; Sense key: Illegal Request
Additional sense: Invalid command operation code
Info fld=0x0 [0]
PR out:, command not supported
PR in (Read reservation): command not supported
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
persistent reserve out: Fixed format, current; Sense key: Illegal Request
Additional sense: Invalid command operation code
Info fld=0x0 [0]
PR out:, command not supported
PR in (Read keys): command not supported
Scenario 4: chown the socket while the VM is running
[root@ibm-x3250m5-04 ~]# ll /var/lib/libvirt/qemu/domain-15-vm2/pr-helper0.sock
srwxr-xr-x. 1 qemu qemu 0 Aug 22 07:12 /var/lib/libvirt/qemu/domain-15-vm2/pr-helper0.sock
[root@ibm-x3250m5-04 ~]# chown root:root /var/lib/libvirt/qemu/domain-15-vm2/pr-helper0.sock
[root@ibm-x3250m5-04 ~]# ll /var/lib/libvirt/qemu/domain-15-vm2/pr-helper0.sock
srwxr-xr-x. 1 root root 0 Aug 22 07:12 /var/lib/libvirt/qemu/domain-15-vm2/pr-helper0.sock
[root@ibm-x3250m5-04 ~]# ssh 192.168.122.192
Warning: Permanently added '192.168.122.192' (ECDSA) to the list of known hosts.
root@192.168.122.192's password:
Last login: Wed Aug 22 19:13:03 2018
[root@localhost ~]# sh test.sh /dev/sdb
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
PR out: command (Register and ignore existing key) successful
PR generation=0x2, 1 registered reservation key follows:
0x123aaa
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
PR out: command (Reserve) successful
PR generation=0x2, Reservation follows:
Key=0x123aaa
scope: LU_SCOPE, type: Write Exclusive, registrants only
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
PR out: command (Release) successful
PR generation=0x2, there is NO reservation held
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
PR out: command (Register) successful
PR generation=0x2, there are NO registered reservation keys
Scenario 5: Migration
1. the source host has a running VM
# virsh dumpxml vm2
<disk type='block' device='lun'>
<driver name='qemu' type='raw' error_policy='stop'/>
<source dev='/dev/sdf'>
<reservations managed='yes'/>
</source>
<target dev='vdb' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
2. migrate to target host
[root@ibm-x3250m5-04 ~]# virsh migrate --live vm2 qemu+ssh://10.66.7.98/system --verbose --unsafe
Migration: [100 %]
3. check the vm on target host
root@localhost ## virsh dumpxml vm2
...
<disk type='block' device='lun'>
<driver name='qemu' type='raw' error_policy='stop'/>
<source dev='/dev/sdf'>
<reservations managed='yes'>
<source type='unix' path='/var/lib/libvirt/qemu/domain-10-vm2/pr-helper0.sock' mode='client'/>
</reservations>
</source>
<backingStore/>
<target dev='vdb' bus='scsi'/>
<alias name='scsi0-0-0-1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
root@localhost ~ ## ps -ef | grep qemu-pr-helper
root 18202 1 0 19:22 ? 00:00:00 /usr/bin/qemu-pr-helper -k /var/lib/libvirt/qemu/domain-10-vm2/pr-helper0.sock -f /var/lib/libvirt/qemu/domain-10-vm2/pr-helper0.pid
4. check pr cmds can be issued in guest on target host
[root@localhost ~]# sh test.sh /dev/sdb
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
PR out: command (Register and ignore existing key) successful
PR generation=0x1, 1 registered reservation key follows:
0x123aaa
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
PR out: command (Reserve) successful
PR generation=0x1, Reservation follows:
Key=0x123aaa
scope: LU_SCOPE, type: Write Exclusive, registrants only
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
PR out: command (Release) successful
PR generation=0x1, there is NO reservation held
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
PR out: command (Register) successful
PR generation=0x1, there are NO registered reservation keys
Scenario 6: snapshot - not supported, as expected
[root@ibm-x3250m5-04 ~]# virsh snapshot-create-as vm2 s1 --disk-only --diskspec vdb,file=/var/lib/libvirt/images/sdg.s1
error: unsupported configuration: external active snapshots are not supported on scsi passthrough devices
Scenario 7: combined test with sgio
1. the VM's XML is as follows (disk sgio='filtered'):
<disk type='block' device='lun' sgio='filtered'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sdg'>
<reservations managed='yes'/>
</source>
<target dev='vdb' bus='scsi'/>
<shareable/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
2. start the vm
[root@ibm-x3250m5-04 ~]# virsh start vm2
Domain vm2 started
[root@ibm-x3250m5-04 ~]# ps -ef | grep -v grep|grep qemu-pr-helper
root 7667 1 0 08:36 ? 00:00:00 /usr/bin/qemu-pr-helper -k /var/lib/libvirt/qemu/domain-25-vm2/pr-helper0.sock -f /var/lib/libvirt/qemu/domain-25-vm2/pr-helper0.pid
3. issue pr cmds in guest
[root@localhost ~]# sh test.sh /dev/sdb
Persistent Reservation Out cmd: 5f 06 00 00 00 00 00 00 18 00
PR out: command (Register and ignore existing key) successful
PR generation=0xa, 1 registered reservation key follows:
0x123aaa
Persistent Reservation Out cmd: 5f 01 05 00 00 00 00 00 18 00
PR out: command (Reserve) successful
PR generation=0xa, Reservation follows:
Key=0x123aaa
scope: LU_SCOPE, type: Write Exclusive, registrants only
Persistent Reservation Out cmd: 5f 02 05 00 00 00 00 00 18 00
PR out: command (Release) successful
PR generation=0xa, there is NO reservation held
Persistent Reservation Out cmd: 5f 00 05 00 00 00 00 00 18 00
PR out: command (Register) successful
PR generation=0xa, there are NO registered reservation keys
4. Set sgio='unfiltered' and repeat; the commands can also be issued.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:3113