Bug 1207543 - Should not list the single path device in 'multipath -ll' command
Summary: Should not list the single path device in 'multipath -ll' command
Keywords:
Status: CLOSED DUPLICATE of bug 1173290
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node
Version: 3.5.1
Hardware: Unspecified
OS: Unspecified
medium
high
Target Milestone: ---
Target Release: 3.6.0
Assignee: Fabian Deutsch
QA Contact: Virtualization Bugs
URL:
Whiteboard: node
Depends On: 1173290
Blocks:
 
Reported: 2015-03-31 07:10 UTC by wanghui
Modified: 2016-02-10 20:05 UTC (History)
10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-05 13:39:32 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:


Attachments
sosreport from Single path machine (5.04 MB, application/x-gzip) - 2015-03-31 07:51 UTC, wanghui
multipath_sosreport (5.46 MB, application/x-gzip) - 2015-03-31 08:54 UTC, wanghui

Description wanghui 2015-03-31 07:10:20 UTC
Description of problem:
Single path devices should not be listed in the output of the 'multipath -ll' command.

Version-Release number of selected component (if applicable):
rhev-hypervisor7-7.1-20150327.0.el7ev
ovirt-node-3.2.2-1.el7.noarch
device-mapper-multipath-libs-0.4.9-77.el7.x86_64
device-mapper-multipath-0.4.9-77.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
Scenario 1:
1. Install rhev-hypervisor7-7.1-20150327.0.el7ev to single path iscsi machine.
2. Check the 'multipath -ll' command.

Scenario 2:
1. Install rhev-hypervisor7-7.1-20150327.0.el7ev to multipath iscsi machine.
2. Check the 'multipath -ll' command.
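
For a quick check in both scenarios, the on-disk and the running multipath configuration can also be compared directly; a minimal sketch, assuming the stock file locations on RHEV-H:
# grep -n find_multipaths /etc/multipath.conf
# multipathd -k"show config" | grep find_multipaths
# multipath -ll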

Actual results:
Scenario 1:
1. After step 2, the output is as follows. The installation disk '/dev/sda' is not listed, but the two single path iSCSI disks are listed.
# multipath -ll
Mar 31 02:42:28 | multipath.conf +5, invalid keyword: getuid_callout
Mar 31 02:42:28 | multipath.conf +18, invalid keyword: getuid_callout
Mar 31 02:42:28 | multipath.conf +37, invalid keyword: getuid_callout
36090a038d0f721901d033566b2493f23 dm-6 EQLOGIC ,100E-00         
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 1:0:0:0 sdb 8:16 active ready running
36090a038d0f731381e035566b2497f85 dm-7 EQLOGIC ,100E-00         
size=30G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 1:0:1:0 sdc 8:32 active ready running

Scenario 2:
1. After step 2, the output is as follows. The installation disk '/dev/sda' is listed.
# multipath -ll
Mar 31 02:44:47 | multipath.conf +5, invalid keyword: getuid_callout
Mar 31 02:44:47 | multipath.conf +18, invalid keyword: getuid_callout
Mar 31 02:44:47 | multipath.conf +37, invalid keyword: getuid_callout
SAMSUNG_HD502IJ_S1W3J9BS604547 dm-16 ATA     ,SAMSUNG HD502IJ 
size=466G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:0:0:0 sda 8:0  active ready running
360a9800050334c33424b32542d43497a dm-0 NETAPP  ,LUN             
size=20G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 7:0:0:0 sdd 8:48 active ready running
  `- 7:0:1:0 sdb 8:16 active ready running
360a9800050334c33424b32542d45446e dm-1 NETAPP  ,LUN             
size=30G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 7:0:0:1 sde 8:64 active ready running
  `- 7:0:1:1 sdc 8:32 active ready running

Expected results:
1. Single path devices should not be listed in the 'multipath -ll' output.

Additional info:
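The repeated "invalid keyword: getuid_callout" warnings come from a directive that the RHEL 6 multipath tools accepted but the RHEL 7 ones no longer do; an illustrative device section only (the actual sections in the shipped multipath.conf may differ):
device {
    # RHEL 6 style, rejected by RHEL 7 multipath:
    #   getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    # RHEL 7 replacement:
    uid_attribute "ID_SERIAL"
}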

Comment 1 Fabian Deutsch 2015-03-31 07:16:04 UTC
Please provide /etc/multipath.conf as well as an sosreport

Comment 2 wanghui 2015-03-31 07:51:04 UTC
Created attachment 1008833 [details]
sosreport from Single path machine

Comment 4 wanghui 2015-03-31 08:54:26 UTC
Created attachment 1008894 [details]
multipath_sosreport

Because the originally reported multipath machine is not available at this time, I found another multipath machine to provide more information.
# multipath -ll
Mar 31 08:51:13 | multipath.conf +5, invalid keyword: getuid_callout
Mar 31 08:51:13 | multipath.conf +18, invalid keyword: getuid_callout
Mar 31 08:51:13 | multipath.conf +37, invalid keyword: getuid_callout
35000c5001d5b2973 dm-5 SEAGATE ,ST3146356SS     
size=137G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 6:0:0:0 sda 8:0   active ready running
360a9800050334c33424b334166784f55 dm-0 NETAPP  ,LUN             
size=19G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:1 sdc 8:32  active ready running
  `- 0:0:1:1 sdh 8:112 active ready running
360a9800050334c33424b334163434546 dm-4 NETAPP  ,LUN             
size=25G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:0 sdb 8:16  active ready running
  `- 0:0:1:0 sdg 8:96  active ready running
360a9800050334c33424b334167714852 dm-1 NETAPP  ,LUN             
size=1021M features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:2 sdd 8:48  active ready running
  `- 0:0:1:2 sdi 8:128 active ready running
360a9800050334c33424b334167742f70 dm-2 NETAPP  ,LUN             
size=2.0G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:3 sde 8:64  active ready running
  `- 0:0:1:3 sdj 8:144 active ready running
360a9800050334c33424b334167756648 dm-3 NETAPP  ,LUN             
size=3.0G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:4 sdf 8:80  active ready running
  `- 0:0:1:4 sdk 8:160 active ready running
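
With find_multipaths enabled, multipath only creates a map for a single-path device whose wwid has already been recorded, so one way to narrow down why the single-path SEAGATE disk (dm-5) still gets a map is to check whether find_multipaths is still set and whether its wwid was whitelisted; a minimal check, assuming the default file locations:
# grep find_multipaths /etc/multipath.conf
# grep 35000c5001d5b2973 /etc/multipath/wwids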

Comment 5 Fabian Deutsch 2015-03-31 09:07:58 UTC
Wang, did you run multipath -ll on the host before or after registration?

Comment 6 Fabian Deutsch 2015-03-31 09:46:45 UTC
I did some research and found the following:

1. At ISO boot time the multipath.conf is correct and contains find_multipaths=yes
2. After installation (before registration) the multipath.conf was updated by vdsm (likely by a vdsm-tool configure --force call)

Because of point 1, I do not see a threat to RHEV-H: during installation, the wwid of the mpath device is determined and added to the kernel cmdline, which ensures that this multipath device is always assembled correctly on boot.

After all, I do not see a misbehavior and would close this bug as NOTABUG.

Ying, do you agree with closing this bug?
And: can you please update the associated testcase to check the multipath -ll output when booting the installation iso?

I.e.:
1. Boot installation iso
2. Wait for the installer screen to come up
3. Drop to shell using F2
4. Run multipath -ll

Expected:
Single path devices do not appear
Multipath devices are listed
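
For reference, the relevant part of the multipath.conf shipped on the ISO would look roughly like this (a sketch; the stock defaults section contains additional settings):
defaults {
    # suppress maps for devices that only ever present a single path
    find_multipaths yes
}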

Comment 7 Ying Cui 2015-04-01 10:46:18 UTC
(In reply to Fabian Deutsch from comment #6)
> After all, I do not see a misbehavior and would close this bug as NOTABUG.
> 
> Ying, do you agree with closing this bug?
> And: can you please update the associated testcase to check the multipath
> -ll output when booting the installation iso?

Firstly, the behavior is different from RHEV-H 7.0 for 3.5.0 GA. In my opinion, we have been pursuing unified behavior before and after hypervisor installation, and even before and after registration to RHEV-M; behavioral consistency is important for showing that the software is reliable and provides a good user experience.

Secondly, while checking this bug and the on_boot hooks in ovirt-node-plugin-vdsm, I found another inconsistent behavior that, after confirmation, may need a separate bug: is vdsmd supposed to be running by default after installation? Currently, if RHEV-H is installed on a local disk, vdsmd is not running by default, but if RHEV-H is installed on a SAN LUN (whether single path or multipath), vdsmd is running by default.

Given the first reason, it is hard to agree to closing this bug as NOTABUG.

Comment 8 Ying Cui 2015-04-01 10:50:43 UTC
(In reply to Fabian Deutsch from comment #5)
> Wang, did you run multipath -ll on the host before or after registration?
Wang Hui ran multipath -ll on the host after the RHEV-H installation, not after registration.

Comment 9 Ying Cui 2015-04-01 11:11:25 UTC
Cont. comment 7:
Thirdly, if this behavior changes in RHEV 3.5.1, it probably also impacts RHEV-H 6.6, so the multipath behavior of RHEV-H 6.6 for 3.5.1 would differ from that of RHEV-H 6.6 for 3.5.0. That is not good.

Comment 10 Fabian Deutsch 2015-04-01 12:18:53 UTC
(In reply to Ying Cui from comment #7)
> (In reply to Fabian Deutsch from comment #6)
> > After all, I do not see a misbehavior and would close this bug as NOTABUG.
> > 
> > Ying, do you agree with closing this bug?
> > And: can you please update the associated testcase to check the multipath
> > -ll output when booting the installation iso?
> 
> Firstly, the behavior is different from RHEV-H 7.0 for 3.5.0 GA. In my
> opinion, we have been pursuing unified behavior before and after hypervisor
> installation, and even before and after registration to RHEV-M; behavioral
> consistency is important for showing that the software is reliable and
> provides a good user experience.

Overall I agree that we should provide consistent behavior, but in this case it is about behavior that is not visible to the user.
IIUIC, the behavior before installation and after installation + approval has not changed since 3.5.0; only the after-installation, before-approval situation has changed.

> Secondly, while checking this bug and the on_boot hooks in
> ovirt-node-plugin-vdsm, I found another inconsistent behavior that, after
> confirmation, may need a separate bug: is vdsmd supposed to be running by
> default after installation? Currently, if RHEV-H is installed on a local
> disk, vdsmd is not running by default, but if RHEV-H is installed on a SAN
> LUN (whether single path or multipath), vdsmd is running by default.

That is interesting, and we should find out why vdsmd is sometimes running and sometimes not.
After all, the behavior should be consistent, independent of the storage being used.


For this bug, however, the only relevant point is that the multipath information is correct at installation time, and this is still the case.
Everything after installation and after approval is the responsibility of vdsm.
And there we need bug 1173290 fixed to get consistent behavior.
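
A quick way to tell whether vdsm has already taken ownership of /etc/multipath.conf is to look at the file header; vdsm tags the files it generates with a revision comment (the exact marker text depends on the vdsm version), and IIRC a "# RHEV PRIVATE" line is the supported way to keep vdsm from overwriting local changes:
# head -n 3 /etc/multipath.conf
# grep -i "RHEV" /etc/multipath.conf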

Comment 11 Fabian Deutsch 2015-04-22 09:47:13 UTC
I'm lowering the priority of this bug, because it depends on bug 1173290

Comment 12 Fabian Deutsch 2015-04-27 14:18:50 UTC
Ying/Yaniv, I do not see any functional problem with this bug, thus I'd close it as a dupe of bug 1173290, because once that bug is fixed we'll see consistent behaviour at all stages.

Objections against closing this as a dupe?

Comment 14 Yaniv Lavi 2015-05-05 08:24:45 UTC
(In reply to Fabian Deutsch from comment #12)
> Ying/Yaniv, I do not see any functional problem with this bug, thus I'd
> close it as a dupe of bug 1173290, because once that bug is fixed we'll see
> consistent behaviour at all stages.
> 
> Objections against closing this as a dupe?

No issue from my side.

Comment 15 Fabian Deutsch 2015-05-05 13:39:32 UTC
Closing this as a duplicate of bug 1173290 according to comment 14.
Once bug 1173290 is fixed, this side effect will also be gone.

*** This bug has been marked as a duplicate of bug 1173290 ***

Comment 16 Ying Cui 2015-05-11 11:08:40 UTC
I do not have clear insight into this bug, and I do not know whether fixing bug 1173290 will make this bug disappear as well. Let's see.

