Bug 1287316 - volumes not mounted fast enough resulting in system boot to emergency mode.
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assigned To: Peter Rajnoha
QA Contact: cluster-qe@redhat.com
Docs Contact:
Depends On:
Blocks: 1385242
Reported: 2015-12-01 18:27 EST by Joe Wright
Modified: 2017-01-30 06:08 EST
CC List: 24 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-01-30 06:07:50 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Comment 2 Lukáš Nykrýn 2016-01-11 08:14:04 EST
Can you boot the machine with debug on the kernel cmdline, reproduce the issue, and send us the output of journalctl?
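For reference, one way to do that (just a sketch, assuming GRUB2 on a stock RHEL 7 system; adjust to your setup):

  # add "debug systemd.log_level=debug" to GRUB_CMDLINE_LINUX in /etc/default/grub
  # (or append it once at the GRUB menu by pressing 'e'), then regenerate the config:
  grub2-mkconfig -o /boot/grub2/grub.cfg

  # after reproducing the failed boot, from the emergency shell:
  journalctl -b --no-pager > /root/journal-debug.txt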
Comment 8 ajit mote 2016-02-04 02:26:12 EST
Logs with debug are attached, so removing needinfo. The customer is active, so we can collect more data or try things out.
Comment 9 Lukáš Nykrýn 2016-02-04 04:31:25 EST
To me this looks like a dm/lvm bug. The system was not able to assemble the RAID.

But anyway, even if this is a systemd bug, we will need some help from the storage guys, so reassigning to lvm.
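(For anyone hitting this: from the emergency shell, a generic way to see what did and did not come up is, for example:

  # list units that failed, including timed-out mounts
  systemctl --failed
  # show which device-mapper/LVM devices were actually assembled
  dmsetup ls --tree
  lvs -a -o +devices
)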
Comment 10 Peter Rajnoha 2016-02-04 05:58:55 EST
Are those failing mount points on LVM and then on multipath stack?
Comment 11 ajit mote 2016-02-04 06:46:57 EST
(In reply to Peter Rajnoha from comment #10)
> Are those failing mount points on LVM and then on multipath stack?

Yes, in my case the mount points are on LVM, which is using multipath devices.

I have the latest sosreport from the customer; I can upload it here.
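(For completeness, the stacking can be confirmed with something like the following; the VG/LV name below is only a placeholder:

  # shows the LV on top of the mpath device, on top of the individual SCSI paths
  lsblk -s /dev/mapper/<vgname>-<lvname>
  multipath -ll
)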
Comment 13 Peter Rajnoha 2016-02-04 07:36:27 EST
(In reply to ajit mote from comment #11)
> (In reply to Peter Rajnoha from comment #10)
> > Are those failing mount points on LVM and then on multipath stack?
> 
> Yes, in my case the mount points are on LVM, which is using multipath devices.
> 
> I have the latest sosreport from the customer; I can upload it here.

If the customer uses lvmetad (use_lvmetad=1 set in lvm.conf), it would be great if he could try the suggestion from bug #1287106 comment #47, if possible.

If he doesn't use lvmetad (use_lvmetad=0), then see bug #1287106 comment #42 and bug #1287106 comment #43.

I think bug #1287106 and this bug may be about the same issue.
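(To check which of the two cases applies, assuming a stock RHEL 7 lvm2, something like:

  # is lvmetad enabled in the configuration?
  grep -n use_lvmetad /etc/lvm/lvm.conf
  # and is the daemon actually running?
  systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service
)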
Comment 16 Ben Marzinski 2016-02-11 19:19:48 EST
There are device-mapper-multipath test packages that avoid reloading a multipath device until multipathd has received the uevent from creating it. You can download them here:

http://people.redhat.com/~bmarzins/device-mapper-multipath/rpms/RHEL7/bz1304687/

These packages, along with the change to pvscan that Peter posted in bug #1287106 comment #48, should hopefully fix this (assuming that this is the same issue as bug #1287106).
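(Rough install steps, with the exact rpm filenames depending on what is in that directory:

  # download the test packages into an empty directory, then:
  yum localinstall ./device-mapper-multipath-*.rpm
  # confirm the installed versions
  rpm -qa | grep device-mapper-multipath
)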
Comment 46 LENHOF 2016-04-05 04:56:10 EDT
Is this bug report linked to this solution (I have access to this one)?
https://access.redhat.com/solutions/2147731

I've tried this solution in our environment and it seems to be a good workaround for this problem. But we don't want to put something into production (Oracle RAC on RHEL, so definitely a high-availability platform) that doesn't have a definitive fix. Is following this bug report the right thing to do?

I didn't have access to bug report #1287106...

Regards,

JYL
Comment 47 Alasdair Kergon 2016-04-05 05:55:31 EDT
(In reply to LENHOF from comment #46)
> I didn't have access to bug report #1287106...

There's customer data scattered through that bug, so unfortunately I cannot easily open it up.

We'll see if the information is somewhere else or if we need to copy it across here.
Comment 48 Peter Rajnoha 2016-04-05 07:39:41 EDT
I've opened a new BZ with a summary of the problem, which is public:
https://bugzilla.redhat.com/show_bug.cgi?id=1324028
Comment 61 Zdenek Kabelac 2017-01-30 06:07:50 EST
So the original case is closely related to a broken installation procedure provided by 'openstack undercloud install'.

It should not mess with 'auto_activation_volume_list'.
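(A quick way to see whether an installer has touched it, for example:

  # is the option set in lvm.conf?
  grep -n auto_activation_volume_list /etc/lvm/lvm.conf
  # and what LVM actually resolves it to
  lvmconfig activation/auto_activation_volume_list
)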

As for comment 58: from the provided report it looks unrelated to this BZ, as the logs show evidence of 'qla2xxx [0000:44:00.0]-8038:0: Cable is unplugged...', so it's most likely a hardware issue with the multipath configuration.
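(That can usually be confirmed on the affected host with something like:

  # FC link/cable messages from the qla2xxx driver
  journalctl -k | grep -i qla2xxx
  # current link state of the FC ports
  cat /sys/class/fc_host/host*/port_state
)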

If there is still any problem, please open a new case.

Closing the BZ as the original case is closed.
