Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description of problem:
Customer has reported that when using ephemeral storage from RHOS as the backing VG for docker, the VG does not activate on boot. Looking at https://access.redhat.com/solutions/22545, neither suggested workaround works (_netdev in fstab, etc...), and the bug for that KCS (BZ#1371692) is closed with errata.
The customer can simply run 'vgchange -ay' from rc.local as a workaround, but ideally we should determine why the VG is not activating on boot in the first place.
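The workaround above can be sketched as an rc.local fragment. This is a sketch, not a fix; "docker-vg" is a placeholder name, so substitute whatever VG name was actually given to docker-storage-setup:

```
#!/bin/sh
# /etc/rc.d/rc.local -- workaround sketch, not a root-cause fix.
# Activate all LVs in the docker backing VG before docker starts.
# "docker-vg" is a placeholder; substitute the real VG name.
vgchange -ay docker-vg
exit 0
```

Note that on RHEL 7, /etc/rc.d/rc.local must be made executable (chmod +x /etc/rc.d/rc.local) for it to run at boot.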
Version-Release number of selected component (if applicable):
docker-1.10
How reproducible:
Easily for the customer; I do not have a RHOSP environment available to test this myself.
Steps to Reproduce:
1. Create RHOSP VM with ephemeral storage, using that storage for the backing VG given to docker-storage-setup
Actual results:
VG is not activated on boot, thus docker will not start
Expected results:
VG should activate and docker should start
Additional info:
This sounds like an LVM issue if the VG does not activate automatically during boot. Or maybe it is a configuration issue, i.e. some setting is needed to make sure the VG activates automatically during boot.
As of now, docker-storage-setup does not do anything to activate the volume group on reboot; it assumes that LVM will take care of activating the VG.
Is it just the docker volume group which does not activate automatically? Do other volume groups activate?
I suspect this is an issue with a system-wide (or per-volume-group) setting which decides whether a volume group should be activated automatically on boot or not.
Is there a per volume group setting to enable this?
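For reference, there is such a setting: activation/auto_activation_volume_list in lvm.conf controls which VGs/LVs get autoactivated. A minimal sketch of the fragment ("docker-vg" is a placeholder VG name, not taken from the customer's config):

```
# /etc/lvm/lvm.conf fragment (sketch).
# When auto_activation_volume_list is defined, ONLY the listed VGs/LVs
# are autoactivated; when it is left undefined, everything is eligible
# for autoactivation.
activation {
    auto_activation_volume_list = [ "docker-vg" ]
}
```

So an overly restrictive (or accidentally defined) auto_activation_volume_list is one way a single VG can end up not activating at boot while others do.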
ephemeral meaning provisioned from storage local to the node instead of one of the OSP storage components.
Correct, storage is available but VG does not activate.
I'd propose to first make the terminology clear.
There is no such thing as an 'active' VG. It is ONLY an LV which is active.
When there is an 'active' LV in a VG, we 'may' call such a VG active.
So when we talk about 'auto activation' of DockerVG, what exactly does it mean?
An unused thin-pool alone is not an 'auto-activated' LV.
LVM auto-activates only a 'ThinLV' (from the lvm2 POV there is no reason to activate a virtually unused thin-pool LV which has transaction_id == 0 in the lvm2 metadata).
A ThinLV is not used by Docker.
So now, what exactly is not activated?
Can you post the existing 'lvs -a' output?
And the wanted 'lvs -a' output?
If you're not using lvmetad, then there's no LVM autoactivation. Make sure you have lvmetad properly configured; check 'lvmconfig --type current global/use_lvmetad'. Based on comment #10, the configuration is wrong: the use_lvmetad setting is placed incorrectly in the lvm.conf file. However, in this case LVM should fall back to the default behaviour (which is use_lvmetad=1).
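The misplacement can be checked mechanically: use_lvmetad is only honored inside the global { } section of lvm.conf. A minimal sketch against a sample file (the path and contents here are illustrative, not the customer's actual config):

```shell
# Create a sample lvm.conf with use_lvmetad correctly placed in global {}.
cat > /tmp/lvm.conf.sample <<'EOF'
global {
    use_lvmetad = 1
}
EOF

# Print the setting only if it appears inside the global section;
# a use_lvmetad line outside global {} would not match here.
sed -n '/^global {/,/^}/p' /tmp/lvm.conf.sample | grep use_lvmetad
```

If this prints nothing while 'grep use_lvmetad /etc/lvm/lvm.conf' does match, the setting is outside the global section and is being ignored.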
Comment 12, Jonathan Earl Brassow, 2017-03-08 23:33:37 UTC
Could you please tell us whether your issue is resolved after correcting the lvm.conf file?
Customer abandoned the support case after we mentioned the lvm.conf issues. At this point we could probably close as INSUFFICIENT_DATA.
Comment 14, Jonathan Earl Brassow, 2017-07-28 03:44:46 UTC
(In reply to Jake Hunsaker from comment #13)
> Customer abandoned the support case after we mentioned the lvm.conf issues.
> At this point we could probably close as INSUFFICIENT_DATA.
ok, will do.