
Bug 1402093

Summary: Docker vg built from ephemeral storage does not activate on boot
Product: Red Hat Enterprise Linux 7
Component: lvm2
lvm2 sub component: Default / Unclassified
Status: CLOSED INSUFFICIENT_DATA
Severity: unspecified
Priority: unspecified
Version: 7.3
Keywords: Extras
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-07-28 03:44:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Reporter: Jake Hunsaker <jhunsaker>
Assignee: LVM and device-mapper development team <lvm-team>
QA Contact: cluster-qe <cluster-qe>
Docs Contact:
CC: agk, amurdaca, heinzm, jbrassow, jhunsaker, lsm5, lvm-team, msnitzer, prajnoha, prockai, thornber, vgoyal, zkabelac
Bug Depends On:
Bug Blocks: 1420851

Description Jake Hunsaker 2016-12-06 18:21:29 UTC
Description of problem:

The customer has reported that when using ephemeral storage from RHOS as the backing VG for docker, the VG does not activate on boot. Looking at https://access.redhat.com/solutions/22545, neither suggested workaround works (_netdev in fstab, etc.), and the bug for that KCS (BZ#1371692) is closed with an erratum.

The customer can simply run 'vgchange -ay' from rc.local as a workaround, but ideally we should determine why the VG is not activating on boot in the first place.
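
A minimal sketch of that rc.local workaround, assuming the VG created by docker-storage-setup is named 'docker-vg' (the name is hypothetical, not taken from the case):

    # /etc/rc.d/rc.local -- sketch of the customer's workaround;
    # "docker-vg" stands in for whatever VG docker-storage-setup created
    vgchange -ay docker-vg
    systemctl start docker

(On RHEL 7, /etc/rc.d/rc.local must also be marked executable for it to run at boot.)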


Version-Release number of selected component (if applicable):
docker-1.10

How reproducible:
Easily for the customer; I do not have a RHOSP environment to test this with myself.

Steps to Reproduce:
1. Create a RHOSP VM with ephemeral storage, using that storage as the backing VG given to docker-storage-setup (see the configuration sketch below)
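
For reference, a docker-storage-setup configuration along these lines would build the docker VG from the ephemeral device; the device path and VG name below are assumptions, not taken from the case:

    # /etc/sysconfig/docker-storage-setup -- hypothetical example;
    # /dev/vdb is a guess at how the ephemeral disk appears in the guest
    DEVS=/dev/vdb
    VG=docker-vg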

Actual results:
VG is not activated on boot, thus docker will not start

Expected results:
VG should activate and docker should start

Additional info:

Comment 2 Daniel Walsh 2016-12-06 20:09:13 UTC
Is this really a docker issue or is this more of an LVM issue?

Comment 3 Vivek Goyal 2016-12-06 21:45:09 UTC
This sounds like an LVM issue if the VG does not activate automatically during boot. Or maybe it is a configuration issue, and some setting is needed to make sure the VG activates during boot automatically.

As of now, docker-storage-setup does not do anything to activate the volume group during reboot; it assumes that lvm will take care of activating the VG.

Comment 5 Vivek Goyal 2016-12-07 14:40:09 UTC
Is it just the docker volume group which does not activate automatically? Do other volume groups activate?

I suspect this is an issue with a system-wide (or per-volume-group) setting that decides whether a volume group should be activated automatically on boot.

Is there a per-volume-group setting to enable this?
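
(For reference, lvm.conf does have such a knob: if activation/auto_activation_volume_list is defined, autoactivation is restricted to the listed VGs/LVs. A sketch, with hypothetical VG names:

    # /etc/lvm/lvm.conf -- when this list is set, only the named VGs/LVs
    # are autoactivated; leaving it unset autoactivates everything
    activation {
        auto_activation_volume_list = [ "rhel", "docker-vg" ]
    }

A VG missing from a defined list would produce exactly the symptom described here.)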

Comment 6 Jake Hunsaker 2016-12-07 14:53:00 UTC
It is just the docker VG that does not activate.

Comment 7 Vivek Goyal 2016-12-07 15:05:16 UTC
What do you mean by ephemeral storage here?

Is the ephemeral storage still available after reboot, but the volume group does not activate?

Comment 8 Jake Hunsaker 2016-12-07 16:53:17 UTC
Ephemeral meaning provisioned from storage local to the node rather than from one of the OSP storage components.

Correct, the storage is available but the VG does not activate.

Comment 9 Zdenek Kabelac 2017-02-09 14:22:38 UTC
I'd propose we first make the terminology clear.

There is no such thing as an 'active' VG. It's ONLY an LV which is active. When there is an active LV in a VG, we 'may' just call such a VG active.

So when we talk about 'auto-activation' of the docker VG, what exactly does it mean?

An unused thin-pool alone is not an auto-activated LV.

LVM auto-activates only thin LVs (from the lvm2 POV there is no reason to activate a virtually unused thin-pool LV which has transaction_id == 0 in the lvm2 metadata).

Thin LVs are not used by Docker.

So now, what exactly is not activated?

Existing 'lvs -a'?
Wanted 'lvs -a'?
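
(For anyone checking this state, a quick way to see it; the VG name 'docker-vg' is an assumption:

    # List all LVs including hidden ones; lv_attr marks a thin pool
    # with "t", and transaction_id shows whether the pool was ever used
    lvs -a -o lv_name,vg_name,lv_attr,transaction_id docker-vg
)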

Comment 11 Peter Rajnoha 2017-02-14 15:11:42 UTC
If you're not using lvmetad, then there's no LVM autoactivation. Make sure you have lvmetad properly configured; check 'lvmconfig --type current global/use_lvmetad'. Based on comment #10, the configuration is wrong: the use_lvmetad setting is placed incorrectly in the lvm.conf file. However, in this case LVM should fall back to the default operation (which is use_lvmetad=1).
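
(As a sketch of what a correct configuration looks like here, with the caveat that exact defaults depend on the installed lvm2 build:

    # Query the effective setting
    lvmconfig --type current global/use_lvmetad

    # In /etc/lvm/lvm.conf the key must sit inside the global { } section;
    # a use_lvmetad line placed elsewhere is ignored and the default applies
    global {
        use_lvmetad = 1
    }
)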

Comment 12 Jonathan Earl Brassow 2017-03-08 23:33:37 UTC
Please could you tell us if your issue is resolved after correcting the lvm.conf file?

Comment 13 Jake Hunsaker 2017-04-03 13:54:42 UTC
Customer abandoned the support case after we mentioned the lvm.conf issues. At this point we could probably close as INSUFFICIENT_DATA.

Comment 14 Jonathan Earl Brassow 2017-07-28 03:44:46 UTC
(In reply to Jake Hunsaker from comment #13)
> Customer abandoned the support case after we mentioned the lvm.conf issues.
> At this point we could probably close as INSUFFICIENT_DATA.

ok, will do.