Bug 1238775 - RHEV-H 7.1 does not load configuration settings from persisted multipath.conf after reboot
Summary: RHEV-H 7.1 does not load configuration settings from persisted multipath.conf after reboot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node
Version: 3.5.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-3.6.0-rc
Target Release: 3.6.0
Assignee: Fabian Deutsch
QA Contact: cshao
URL:
Whiteboard:
Depends On: 1225182
Blocks: 1241115
 
Reported: 2015-07-02 15:18 UTC by Sachin Raje
Modified: 2019-08-15 04:49 UTC
CC List: 14 users

Fixed In Version: ovirt-node-3.3.0-0.4.20150906git14a6024.el7ev
Doc Type: Bug Fix
Doc Text:
With this release, modifications to a persisted multipath.conf file are applied at boot time when previously they were not.
Clone Of:
Cloned to: 1241115
Environment:
Last Closed: 2016-03-09 14:32:30 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:
mgoldboi: Triaged+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 1544763 0 None None None Never
Red Hat Product Errata RHBA-2016:0378 0 normal SHIPPED_LIVE ovirt-node bug fix and enhancement update for RHEV 3.6 2016-03-09 19:06:36 UTC

Description Sachin Raje 2015-07-02 15:18:48 UTC
Description of problem:
Unable to load persisted multipath.conf with modified options during RHEV-H 7.1 reboot.



Version-Release number of selected component (if applicable):

# cat etc/redhat-release 
Red Hat Enterprise Virtualization Hypervisor 7.1 (20150512.1.el7ev)

vdsm-4.16.13.1-1.el7ev.x86_64                               Tue May 12 18:37:42 2015    1431455862      Red Hat, Inc.   x86-018.build.eng.bos.redhat.com

ovirt-node-3.2.2-3.el7.noarch                               Tue May 12 18:37:39 2015    1431455859      Red Hat, Inc.   x86-034.build.eng.bos.redhat.com 

device-mapper-persistent-data-0.4.1-2.el7.x86_64            Tue May 12 18:36:42 2015    1431455802      Red Hat, Inc.   x86-020.build.eng.bos.redhat.com 

device-mapper-multipath-0.4.9-77.el7.x86_64                 Tue May 12 18:37:16 2015    1431455836      Red Hat, Inc.   x86-024.build.eng.bos.redhat.com

device-mapper-1.02.93-3.el7.x86_64                          Tue May 12 18:37:02 2015    1431455822      Red Hat, Inc.   x86-030.build.eng.bos.redhat.com



How reproducible:
always

Steps to Reproduce:
1. Install RHEV-H 7.1
2. register it to rhevm
3. Modify /etc/multipath.conf and persist it.
4. Reboot the system. 
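The reproduction steps above can be sketched as a shell session. The config values below are illustrative, not taken from the customer's file; the sed edit is runnable anywhere, while the `persist` and `reboot` lines only apply on a real RHEV-H host and are shown as comments:

```shell
# Step 3 sketch: flip user_friendly_names in a copy of multipath.conf.
# A temp file stands in for /etc/multipath.conf so the sketch runs anywhere.
conf=$(mktemp)
cat > "$conf" <<'EOF'
defaults {
    user_friendly_names no
}
EOF
sed -i 's/user_friendly_names[[:space:]]*no/user_friendly_names yes/' "$conf"
grep user_friendly_names "$conf"
# On the real host, then mark the file persistent and reboot (step 4):
#   persist /etc/multipath.conf
#   reboot
```

On RHEV-H, `persist` is the tool that carries a file on the otherwise read-only root filesystem across reboots; the bug is that the persisted file survives but multipathd does not pick it up at boot.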

Actual results:
The multipath topology does not reflect the changes made in multipath.conf.

Expected results:
The modified settings in the persisted multipath.conf should take effect after the RHEV-H 7.1 reboot.

Additional info:

### The following info was collected from the sosreport "sosreport-freshsetupfreshlybooted", generated after the RHEV-H 7.1 reboot

1. no path list

===== paths list =====
uuid hcil    dev dev_t pri dm_st chk_st vend/prod/rev             dev_st 
     0:1:0:0 sda 8:0   -1  undef ready  Dell    ,Virtual Disk     running
Jun 11 15:57:30 | directio checker refcount 1
Jun 11 15:57:30 | unloading const prioritizer
Jun 11 15:57:30 | unloading directio checker

2. empty multipath wwids

# cat etc/multipath/wwids

3. multipathd service running

# grep multipath ps
root       889  0.0  0.0 304268  5472 ?        SLl  15:53   0:00 /sbin/multipathd

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

### The following info was collected from the sosreport "sosreport-MultipathReloaded", generated after running "multipath -r"

1. Showing the path lists

===== paths list =====
uuid hcil     dev dev_t pri dm_st chk_st vend/prod/rev             dev_st 
     0:1:0:0  sda 8:0   -1  undef ready  Dell    ,Virtual Disk     running
     10:0:0:0 sde 8:64  -1  undef ready  EQLOGIC ,100E-00          running
     11:0:0:0 sdf 8:80  -1  undef ready  EQLOGIC ,100E-00          running
     12:0:0:0 sdg 8:96  -1  undef ready  EQLOGIC ,100E-00          running
     13:0:0:0 sdh 8:112 -1  undef ready  EQLOGIC ,100E-00          running
     14:0:0:0 sdi 8:128 -1  undef ready  EQLOGIC ,100E-00          running
     15:0:0:0 sdj 8:144 -1  undef ready  EQLOGIC ,100E-00          running
     16:0:0:0 sdk 8:160 -1  undef ready  EQLOGIC ,100E-00          running
     17:0:0:0 sdl 8:176 -1  undef ready  EQLOGIC ,100E-00          running
     18:0:0:0 sdm 8:192 -1  undef ready  EQLOGIC ,100E-00          running
     19:0:0:0 sdn 8:208 -1  undef ready  EQLOGIC ,100E-00          running
     20:0:0:0 sdo 8:224 -1  undef ready  EQLOGIC ,100E-00          running
     21:0:0:0 sdp 8:240 -1  undef ready  EQLOGIC ,100E-00          running
     22:0:0:0 sdq 65:0  -1  undef ready  EQLOGIC ,100E-00          running
     23:0:0:0 sdr 65:16 -1  undef ready  EQLOGIC ,100E-00          running
     24:0:0:0 sds 65:32 -1  undef ready  EQLOGIC ,100E-00          running
     7:0:0:0  sdb 8:16  -1  undef ready  EQLOGIC ,100E-00          running
     8:0:0:0  sdc 8:32  -1  undef ready  EQLOGIC ,100E-00          running
     9:0:0:0  sdd 8:48  -1  undef ready  EQLOGIC ,100E-00          running


36090a038a0fc252bd425d54e4601b0a8 dm-9 EQLOGIC ,100E-00         
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 10:0:0:0 sde 8:64  active ready running
  `- 19:0:0:0 sdn 8:208 active ready running

360fff16aee553f6304424524e7044061 dm-6 EQLOGIC ,100E-00         
size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 16:0:0:0 sdk 8:160 active ready running
  `- 7:0:0:0  sdb 8:16  active ready running

36090a038a0bca661022bb50f5f01d058 dm-8 EQLOGIC ,100E-00         
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 18:0:0:0 sdm 8:192 active ready running
  `- 9:0:0:0  sdd 8:48  active ready running

36090a038a0bc46c5a028453d5c019031 dm-7 EQLOGIC ,100E-00         
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 17:0:0:0 sdl 8:176 active ready running
  `- 8:0:0:0  sdc 8:32  active ready running


2. Showing multipath wwids

# cat etc/multipath/wwids 
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/360fff16aee553f6304424524e7044061/
/36090a038a0bc46c5a028453d5c019031/
/36090a038a0bca661022bb50f5f01d058/
/36090a038a0fc252bd425d54e4601b0a8/
/36090a038a0fc255da42065bb3b0140ed/
/360fff16aee359704e03f357c5dada13e/
/36090a038a0bc6646f62e951c63015049/
/36090a038a0fc354a7a2695c14801a0ab/
/36090a038a0fcf5a36725e57f440130d8/

3. multipathd service running

# grep multipath ps
root       889  0.0  0.0 959988  6156 ?        SLl  15:53   0:00 /sbin/multipathd

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Comment 1 Sachin Raje 2015-07-02 15:20:24 UTC
Workaround:
Manually running "multipath -r" after reboot gets the modified configuration from "/etc/multipath.conf"
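A minimal sketch of the workaround, assuming a live RHEV-H host for the actual reload. The `multipath -r` call needs multipath hardware, so it appears as a comment; the runnable part only checks that the persisted file carries the setting the daemon should pick up (values are illustrative):

```shell
# Runnable stand-in: verify the config carries the change before asking
# the daemon to re-read it. A temp file is used so this runs anywhere;
# on the host you would check /etc/multipath.conf itself.
conf=$(mktemp)
printf 'defaults {\n    user_friendly_names yes\n}\n' > "$conf"
grep -c user_friendly_names "$conf"
# Then, on the host, either of:
#   multipath -r               # reload the multipath maps
#   service multipathd reload  # make multipathd re-read its config
```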

Comment 4 Allon Mureinik 2015-07-05 05:45:14 UTC
Offhand this seems like a RHEV-H persistence issue, so flagging with whiteboard=node.
Fabian - if you need our hand here, just say so.

Comment 5 Sachin Raje 2015-07-05 10:50:22 UTC
Just to clarify: changes in /etc/multipath.conf are persisted across reboot, but multipathd does not load them until we manually run 'multipath -r' on RHEV-H.

Comment 6 Yaniv Lavi 2015-07-07 09:42:17 UTC
Should be handled in BZ #1225182.

Comment 7 cshao 2015-07-07 11:16:42 UTC
Test version:
Red Hat Enterprise Virtualization Hypervisor 7.1 (20150512.1.el7ev)
ovirt-node-3.2.2-3.el7.noarch
vdsm-4.16.13.1-1.el7ev.x86_64

Test machine:
hp-z800-02(multipath iSCSI)
iSCSI: QLogic Corp. ISP4032-based iSCSI TOE IPv6 HBA /qla4xxx

Test steps:
1. Install RHEV-H 7.1
2. register it to rhevm
3. Modify /etc/multipath.conf(pasted from http://pastebin.test.redhat.com/295198) and persist it.
4. Reboot the system. 

Test result:
/etc/multipath.conf can be persisted after reboot, but I don't know how the multipath paths are affected after modifying multipath.conf.

On our test machine, all multipath LUNs worked fine before modifying /etc/multipath.conf, and all LUNs still work fine after modifying it. I can't find any difference.

Hi fabiand,

Could you guide us on how to test this bug?

Thanks!

Comment 8 Fabian Deutsch 2015-07-07 13:44:34 UTC
This bug can be tested by:

1. Ensure that /etc/multipath.conf is persisted
2. Look for a unique configuration item in /etc/multipath.conf, and look for this configuration item in the output of "multipathd -k" followed by a "show config"
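Step 2 can be sketched as a grep over the daemon's effective configuration. On a live host the pipeline would be `multipathd show config | grep <item>`; since that needs a running multipathd, this runnable sketch greps a captured sample of such output instead (the sample values are illustrative, not from the customer's host):

```shell
# A captured sample of `multipathd show config` output (illustrative).
shown=$(cat <<'EOF'
defaults {
        user_friendly_names yes
        find_multipaths no
}
EOF
)
# The check from comment 8: look for the unique item you persisted.
# If the persisted setting appears here, the daemon loaded the file.
echo "$shown" | grep user_friendly_names
```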

Comment 9 Fabian Deutsch 2015-07-07 13:48:41 UTC
Sachin, the assumption is that persisting the multipath.conf and reloading it during boot will bring up the right paths.

The problem is that we cannot reproduce the issue in our environment, which means we can only verify this bug by making sure that we persist the file and that its contents are reloaded.

Comment 10 Ying Cui 2015-07-07 13:50:48 UTC
Fabian, the patch on BZ #1225182 reloads multipathd during boot.
Currently we need to reload multipathd manually for changes to take effect.

Then we can verify this bug as follows.
To reproduce it:
1. Install RHEV-H.
2. Register RHEV-H to RHEV-M.
3. Modify /etc/multipath.conf, for example:
    user_friendly_names     no  # default
    user_friendly_names     yes # new value
4. Reboot RHEV-H.
5. Check the new value after the reboot; it is still not updated:
# multipathd show config | grep user_friendly_names
	user_friendly_names "no"
		user_friendly_names no
6. service multipathd reload
7. The modified configuration now takes effect.

After the patches on BZ #1225182 land in the ovirt-node 3.5 branch, the modified configuration in multipath.conf will take effect after RHEV-H reboots, with no need to reload multipathd manually.

Please give an ack for above to verify this bug. Thanks.

Comment 11 Ying Cui 2015-07-07 13:52:10 UTC
> Please give an ack for above to verify this bug. Thanks.

Comment 8 already answered my question. Thanks, Fabian.

Comment 12 Ying Cui 2015-07-07 13:56:36 UTC
Sachin, is our customer willing to verify this bug? Thanks.
Because we cannot reproduce the reported issue in our current environment, we can only verify this bug by checking that the file is persisted and that the modified configuration is reloaded after the RHEV-H reboot.

Comment 13 cshao 2015-07-07 14:35:43 UTC
(In reply to Ying Cui from comment #10)
> Fabian, from patch on BZ #1225182, reload multipathd during boot.
> Currently we need reload multipathd manually to make effect.
> 
> Then to verify this bug we can do this
> To reproduce it:
> 1. Installed RHEV-H 
> 2. Registered RHEVH to RHEVM.
> 3. Modified /etc/multipath.conf, such as
>     user_friendly_names     no  # default
>     user_friendly_names     yes # new values
> 4. Reboot rhevh
> 5. check the new value after reboot rhevh, there still not update.
> # multipathd show config | grep user_friendly_names
> 	user_friendly_names "no"
> 		user_friendly_names no
> 6. service multipathd reload
> 7. then modified configuration works
> 
> After the patches on BZ #1225182 in ovirt-node 3.5 branch, then the modified
> configuration in multipath.conf will take effect after rhevh reboot, no need
> reload multipathd manually.
> 
> Please give an ack for above to verify this bug. Thanks.

Hi ycui and fabiand,

Thank you for your help, I understand now. If needed, I will verify the persistence part.

Comment 14 Sachin Raje 2015-07-07 15:27:05 UTC
Hi Ying, the customer has provided 1. the sosreport of the rebooted host BEFORE running "multipath -r" and 2. the sosreport of the rebooted host AFTER running "multipath -r".

The sosreports have all the details required to verify this bug.

Let me know what other details are needed and I will collect them.

Comment 15 Ying Cui 2015-07-07 15:36:18 UTC
(In reply to Sachin Raje from comment #14)
> Hi Ying,  customer has provided the 1.  sosreport of rebooted Host BEFORE
> running "multipath -r" and 2.  sosreport of rebooted Host AFTER running
> "multipath -r"
> 
> The sosrpeort has all the details required to verify this bug.
> 
> Let me know what other details needed so I will collect it further.

Sachin, there was probably a misunderstanding of comment 12. I meant to ask whether the customer can help to fully verify this bug once it is fixed by development and moved to ON_QA. Because on the QE side we cannot reproduce the original issue the customer encountered, we can only do a partial verification by checking that the file is persisted and that multipath reloads it after the RHEV-H restart.

Thanks
Ying

Comment 16 Sachin Raje 2015-07-07 15:48:23 UTC
Hello Ying,

   Thanks for the clarification. I'll check and confirm with the customer about testing and verification of this bug in his RHEV environment.

I'll let you know soon.

Regards,
Sachin

Comment 17 Ying Cui 2015-07-07 16:02:26 UTC
(In reply to Sachin Raje from comment #16)
> Hello Ying,
> 
>    Thanks for clarification. I'll check and confirm with customer about
> testing and verification of this bug in his rhev environment.
> 
> I'll let you know soon.

Thanks, and please note that the RHEV-H build is not yet available on Brew for testing. We will paste the Brew link on this bug in ~2 days, and then you can provide the ISO to the customer to verify.

Comment 18 Ying Cui 2015-07-07 16:04:09 UTC
According to comment 8, comment 9, comment 10 and comment 16, I will give qa_ack+ here.

Comment 21 Gunther Schlegel 2015-07-09 08:05:42 UTC
Hi there,

Basically, you are able to reproduce it by changing one "non-path-routing-related" setting in /etc/multipath.conf, e.g. failback or user_friendly_names. If you set something nonstandard there and that configuration is in place after a reboot, then the issue is fixed.

But I will test it in my environment if you provide me with a RHEV-H RPM to test with.

regards, Gunther

Comment 24 cshao 2015-11-26 06:31:57 UTC
Test version:
rhev-hypervisor7-7.2-20151112.1
ovirt-node-3.6.0-0.20.20151103git3d3779a.el7ev.noarch

Test steps:
1. Installed RHEV-H 
2. Registered RHEVH to RHEVM.
3. Modified /etc/multipath.conf, such as
    user_friendly_names     no  # default
    user_friendly_names     yes # new values
4. Reboot rhevh
5. check the new value after reboot rhevh

# multipathd show config | grep user_friendly_names
	user_friendly_names "yes"
		user_friendly_names no


Hi Fabian,

 The persistence part works well. Can you provide a download link for Gunther Schlegel to fully verify this bug?

Thanks!

Comment 25 Fabian Deutsch 2015-11-26 09:07:51 UTC
Chen, thanks for the reminder.

Gunther, can you kindly try to reproduce this bug with the RHEV-H 3.6 beta 1 build?

Please note that upgrades of RHEV-H 3.6 beta 1 are not working correctly (will be fixed in beta2), so in case you try, please use a clean installation.

Comment 26 Gunther Schlegel 2015-11-30 13:21:27 UTC
Fabian,

I am actually not even sure if I have access to the software you have mentioned, nevertheless, I will definitely not find time to test any of this in 2015.

regards, Gunther

Comment 27 Fabian Deutsch 2015-11-30 13:37:58 UTC
Thanks for your reply Gunther.

Comment 28 Fabian Deutsch 2015-11-30 13:38:35 UTC
Chen, I'd say we then go forward and verify this bug according to comment 24.

Comment 29 cshao 2016-02-03 06:21:59 UTC
(In reply to Fabian Deutsch from comment #28)
> Chen, I'd say when then go forward and verify this bug according to comment
> 24.

Verifying this bug according to comment 24 and comment 28.

Comment 31 errata-xmlrpc 2016-03-09 14:32:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0378.html

