Bug 1207543
Summary: | Should not list the single path device in 'multipath -ll' command
---|---
Product: | Red Hat Enterprise Virtualization Manager
Reporter: | wanghui <huiwa>
Component: | ovirt-node
Assignee: | Fabian Deutsch <fdeutsch>
Status: | CLOSED DUPLICATE
QA Contact: | Virtualization Bugs <virt-bugs>
Severity: | high
Docs Contact: |
Priority: | medium
Version: | 3.5.1
CC: | cshao, ecohen, gklein, hadong, huiwa, leiwang, lsurette, yaniwang, ycui, ylavi
Target Milestone: | ---
Target Release: | 3.6.0
Hardware: | Unspecified
OS: | Unspecified
Whiteboard: | node
Fixed In Version: |
Doc Type: | Bug Fix
Doc Text: |
Story Points: | ---
Clone Of: |
Environment: |
Last Closed: | 2015-05-05 13:39:32 UTC
Type: | Bug
Regression: | ---
Mount Type: | ---
Documentation: | ---
CRM: |
Verified Versions: |
Category: | ---
oVirt Team: | Node
RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | ---
Target Upstream Version: |
Embargoed: |
Bug Depends On: | 1173290
Bug Blocks: |
Attachments: | sosreport from Single path machine (1008833), multipath_sosreport (1008894)
Description
wanghui
2015-03-31 07:10:20 UTC
Please provide /etc/multipath.conf as well as an sosreport.

Created attachment 1008833 [details]
sosreport from Single path machine

Created attachment 1008894 [details]
multipath_sosreport

Because the originally reported multipath machine is not available at this time, I found another multipath machine to provide more information.
# multipath -ll
Mar 31 08:51:13 | multipath.conf +5, invalid keyword: getuid_callout
Mar 31 08:51:13 | multipath.conf +18, invalid keyword: getuid_callout
Mar 31 08:51:13 | multipath.conf +37, invalid keyword: getuid_callout
35000c5001d5b2973 dm-5 SEAGATE ,ST3146356SS
size=137G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
`- 6:0:0:0 sda 8:0 active ready running
360a9800050334c33424b334166784f55 dm-0 NETAPP ,LUN
size=19G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
|- 0:0:0:1 sdc 8:32 active ready running
`- 0:0:1:1 sdh 8:112 active ready running
360a9800050334c33424b334163434546 dm-4 NETAPP ,LUN
size=25G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
|- 0:0:0:0 sdb 8:16 active ready running
`- 0:0:1:0 sdg 8:96 active ready running
360a9800050334c33424b334167714852 dm-1 NETAPP ,LUN
size=1021M features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
|- 0:0:0:2 sdd 8:48 active ready running
`- 0:0:1:2 sdi 8:128 active ready running
360a9800050334c33424b334167742f70 dm-2 NETAPP ,LUN
size=2.0G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
|- 0:0:0:3 sde 8:64 active ready running
`- 0:0:1:3 sdj 8:144 active ready running
360a9800050334c33424b334167756648 dm-3 NETAPP ,LUN
size=3.0G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
|- 0:0:0:4 sdf 8:80 active ready running
`- 0:0:1:4 sdk 8:160 active ready running
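A side note on the "invalid keyword: getuid_callout" warnings in the output above: the multipath-tools version shipped with RHEL 7 removed getuid_callout, and device identification is configured through uid_attribute instead, so those warnings point at stale directives in the shipped multipath.conf rather than at the single-path issue itself. A minimal, illustrative way to locate the stale lines (the path is the standard /etc/multipath.conf; nothing below is taken from the attached sosreports):

# Show the lines that trigger the "invalid keyword" warnings above
grep -n 'getuid_callout' /etc/multipath.conf

# On RHEL 7 the equivalent per-device setting would be, for example:
#   uid_attribute "ID_SERIAL"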
Wang, did you run multipath -ll on the host before or after registration?

I did some research and found the following:

1. At ISO boot time the multipath.conf is correct and contains find_multipaths=yes.
2. After installation (before registration) the multipath.conf was updated by vdsm (likely by a vdsm-tool configure --force call).

Because of 1 I do not see a threat to RHEV-H: during installation the wwid of the mpath device is determined and added to the kernel cmdline, and this ensures that the multipath device is always assembled correctly on boot.

After all I do not see a misbehavior and would close this bug as NOTABUG.

Ying, do you agree with closing this bug? And can you please update the associated testcase to check the multipath -ll output when booting the installation iso? I.e.:

1. Boot the installation iso
2. Wait for the installer screen to come up
3. Drop to shell using F2
4. Run multipath -ll

Expected: single path devices do not appear; multipath devices are listed (a rough shell sketch of this check follows below).
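As a rough illustration of the check described in the test procedure above, the following sketch flags multipath maps that expose only a single path. The awk heuristics (map header lines contain a dm-N token, path lines contain an H:C:T:L tuple) are assumptions derived from the multipath -ll output shown earlier in this bug, not part of any existing RHEV-H test tooling:

# Run from the shell reached via F2 on the installation ISO.
# Flags every multipath map with fewer than two paths, e.g. the
# SEAGATE disk 35000c5001d5b2973 from the output above.
multipath -ll | awk '
    / dm-[0-9]+ /                      { report(); name = $1; paths = 0; next }
    /[0-9]+:[0-9]+:[0-9]+:[0-9]+ +sd/  { paths++ }
    END                                { report() }
    function report() { if (name != "" && paths < 2) print name " has only " paths " path(s)" }
'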
(In reply to Fabian Deutsch from comment #6)
> After all I do not see a misbehavior and would close this bug as NOTABUG.
>
> Ying, do you agree with closing this bug?
> And: can you please update the associated testcase to check the multipath
> -ll output when booting the installation iso?

Firstly, the behavior is different from RHEV-H 7.0 for 3.5.0 GA. In my opinion, we have been pursuing unified behavior before and after hypervisor installation, and even before and after registering to RHEV-M; behavioral consistency is important to show that the software is reliable and provides a good user experience.

Secondly, while checking this bug and the on_boot hooks in ovirt-node-plugin-vdsm, we found another inconsistent behavior; after confirmation we may need to report a separate bug. The question is whether vdsmd should be running by default after installation: if RHEV-H is installed on a local disk, vdsmd is not running by default, but if RHEV-H is installed on a SAN LUN (single path or multipath), vdsmd is running by default.

For the first reason, it is hard to agree to close this bug as NOTABUG.

(In reply to Fabian Deutsch from comment #5)
> Wang, did you run multipath -ll on the host before or after registration?

Wang Hui ran multipath -ll on the host after RHEV-H installation, not after registration.

Cont. comment 7: Thirdly, if this behavior changed in RHEV 3.5.1, it probably also impacts RHEV-H 6.6, so the multipath behavior of RHEV-H 6.6 for 3.5.1 would differ from that of RHEV-H 6.6 for 3.5.0. That is not good.

(In reply to Ying Cui from comment #7)
> Firstly, the behavior is different from RHEV-H 7.0 for 3.5.0 GA. In my
> opinion, we have been pursuing unified behavior before and after hypervisor
> installation, and even before and after registering to RHEV-M; behavioral
> consistency is important to show that the software is reliable and provides
> a good user experience.

Overall I agree that we should provide consistent behavior, but in this case it is about behavior that is not visible to the user. IIUIC, the behavior before installation, and after installation + after approval, has not changed since 3.5.0; only the after installation + before approval situation has changed.

> Secondly, while checking this bug and the on_boot hooks in
> ovirt-node-plugin-vdsm, we found another inconsistent behavior; after
> confirmation we may need to report a separate bug. The question is whether
> vdsmd should be running by default after installation: if RHEV-H is
> installed on a local disk, vdsmd is not running by default, but if RHEV-H
> is installed on a SAN LUN (single path or multipath), vdsmd is running by
> default.

That is interesting, and we should find out why vdsmd is sometimes running and sometimes not. After all, the behavior should be consistent, independent of the storage being used.

For this bug, however, the only relevant point is that the multipath information is correct at installation time, and that is still the case. Everything after installation and after approval is in the responsibility area of vdsm, and there we need bug 1173290 fixed to get consistent behavior.

I'm lowering the priority of this bug, because it depends on bug 1173290.

Ying/Yaniv, I do not see any functional problem with this bug, so I'd close it as a duplicate of bug 1173290, because once that bug is fixed we'll see consistent behaviour at all stages. Any objections against closing this as a duplicate?

(In reply to Fabian Deutsch from comment #12)
> Ying/Yaniv, I do not see any functional problem with this bug, so I'd close
> it as a duplicate of bug 1173290, because once that bug is fixed we'll see
> consistent behaviour at all stages. Any objections against closing this as
> a duplicate?

No issue from my side.

Closing this as a duplicate of bug 1173290 according to comment 14. Once bug 1173290 is fixed, this side effect will also be gone.

*** This bug has been marked as a duplicate of bug 1173290 ***

I do not have a clear view on this bug, and I do not know whether fixing bug 1173290 will make this bug disappear as well. So let's see.
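For anyone retracing this later, here is a quick, informal way to tell whether vdsm has already rewritten /etc/multipath.conf (the point raised in comment 6 and tracked via bug 1173290). The marker strings below are assumptions about the revision header vdsm places at the top of the files it manages and may need adjusting per vdsm release:

# vdsm stamps a revision header near the top of the multipath.conf it manages;
# the wording differs between releases, so check for both variants.
head -n 3 /etc/multipath.conf | grep -qE 'RHEV REVISION|VDSM REVISION' \
    && echo "multipath.conf is managed by vdsm" \
    || echo "multipath.conf has not (yet) been rewritten by vdsm"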