Description of problem:
During RHVH boot the following message appears: "systemd: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked."

Version-Release number of selected component (if applicable):
redhat-release-virtualization-host-4.1-2.1.el7.x86_64 (versions below 4.1-2 as well)

How reproducible:
Every time

Steps to Reproduce:
1. Install RHVH
2. Boot it
3. Check /var/log/messages

Actual results:
systemd: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked.

Expected results:
Since lvm2-lvmetad is disabled, I would expect it not to appear as a dependency of any service.

Additional info:
Nir, is there any functional implication here other than the warning?
This bug should move to LVM. I don't know why lvm is logging warnings about a disabled and masked service.
Nir, moving to you; can you please handle the move to LVM?
Please do not mask lvm2-lvmetad.socket - it's used by lvm2-lvmetad.service and also referenced in various other systemd units, and you would need to mask those too for this to be complete (but you don't need to do this at all). If you disable lvmetad in lvm.conf (use_lvmetad=0), then LVM tools will not use lvmetad and hence they won't initiate a connection through the lvmetad socket. So there's no gain in masking lvm2-lvmetad.socket - it's simply not used. Just keep lvm2-lvmetad.socket as it is; please do not mask it.
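Peter's suggestion boils down to a one-line lvm.conf change. A minimal sketch of making and verifying that change - note it operates on a scratch copy for illustration; on a real host you would edit /etc/lvm/lvm.conf directly and verify with `lvmconfig --type current global/use_lvmetad`:

```shell
# Sketch: flip use_lvmetad to 0 in an lvm.conf-style file.
# NOTE: uses a scratch copy; on a real host the file is /etc/lvm/lvm.conf.
conf=./lvm.conf.sample
printf 'global {\n\tuse_lvmetad = 1\n}\n' > "$conf"

# Disable lvmetad: LVM tools then never connect through the lvmetad socket,
# so the socket unit can stay unmasked and simply sits unused.
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' "$conf"

grep 'use_lvmetad' "$conf"   # prints the line now containing use_lvmetad = 0
```

With this in place, masking the socket buys nothing extra, which is Peter's point above.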
Thanks Peter, moving the bug to vdsm, we will change the configuration.
Tal, this should be a trivial change, do you want to schedule it to 4.1.4?
(In reply to Nir Soffer from comment #9)
> Tal, this should be a trivial change, do you want to schedule it to 4.1.4?

If it's really that trivial, yes please. We should strive to fix bugs with customer tickets as early as possible.
4.1.4 is planned as a minimal, fast z-stream version to fix any open issues we may have in supporting the upcoming EL 7.4. We are pushing out anything unrelated, although if there's a minimal/trivial, SAFE fix that's ready on time, we can consider introducing it in 4.1.4.
Is it safe to manually make the suggested changes (use_lvmetad=0 in lvm.conf and unmask lvm2-lvmetad.socket)? I would like to get rid of the systemd error messages.
(In reply to Klaas Demter from comment #16)
> is it safe to manually do the suggested changes?
> use_lvmetad=0 in lvm.conf and unmask lvm2-lvmetad.socket? I would like to
> get rid of the systemd error messages

Peter, can you answer that? Based on https://www.redhat.com/archives/lvm-devel/2017-October/msg00035.html, I think this should be solved by LVM. I feel safer when lvmetad *cannot* run or be auto-activated and use_lvmetad is disabled in lvm.conf.
This should be working already (if not, please let me know the lvm version + attach the -vvvv output from the LVM command which causes the lvmetad to get instantiated even if use_lvmetad=0 is set):

● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:lvmetad(8)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/lvm2-lvmetad.service

[0] raw/~ # lvmconfig --type current global/use_lvmetad
use_lvmetad=0

[0] raw/~ # pvscan --cache

[0] raw/~ # systemctl status lvm2-lvmetad
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:lvmetad(8)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/lvm2-lvmetad.service

[0] raw/~ # pvs
...

[0] raw/~ # vgs
...

[0] raw/~ # lvs
...

[0] raw/~ # systemctl status lvm2-lvmetad
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:lvmetad(8)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/lvm2-lvmetad.service
So that means it is safe to make the changes manually on my running RHEV hypervisors, and you can incorporate those changes into 4.2? :)
To answer my own question: I've tested it on a test hypervisor. vdsmd checks the service on startup, so it's not enough to simply change the config and unmask the lvm2-lvmetad.service:

Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: Error:
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: One of the modules is not configured to work with VDSM.
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: To configure the module use the following:
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: 'vdsm-tool configure [--module module-name]'.
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: If all modules are not configured try to use:
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: 'vdsm-tool configure --force'
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: (The force flag will stop the module's service and start it
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: afterwards automatically to load the new configuration.)
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: abrt is already configured for vdsm
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: Units need configuration: {'lvm2-lvmetad.service': {'LoadState': 'loaded', 'ActiveState': 'inactive'}}
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: lvm requires configuration
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: libvirt is already configured for vdsm
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: Current revision of multipath.conf detected, preserving
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: Modules lvm are not configured
Nov 03 10:40:57 rhev-hypervisor.example.com vdsmd_init_common.sh[3103]: vdsm: stopped during execute check_is_configured task (task returned with error code 1).
Nov 03 10:40:57 rhev-hypervisor.example.com systemd[1]: vdsmd.service: control process exited, code=exited status=1
(In reply to Klaas Demter from comment #20)
> to answer my own question: I've tested it on a test hypervisor, vdsmd checks
> the service on startup so its not enough to simply change the config and
> unmask the lvm2-lvmetad.service

vdsm requires that the lvm2-lvmetad service and socket are masked and disabled. You will have to disable this check by modifying /usr/lib/python2.7/site-packages/vdsm/tool/configurators/lvm.py.
Is there a change proposed for this issue or is it still being evaluated?
(In reply to Klaas Demter from comment #24)
> Is there a change proposed for this issue or is it still being evaluated?

Waiting for evaluation. How bad is the extra logging caused by this issue?
(In reply to Nir Soffer from comment #25)
> (In reply to Klaas Demter from comment #24)
> > Is there a change proposed for this issue or is it still being evaluated?
>
> Waiting for evaluation. How bad is the extra logging caused by this issue?

Hi Nir,
the log volume is not that bad; it's a single line, but it gets logged once a day per hypervisor. The problem for me is that I have to start ignoring messages in my monitoring that I would rather not just turn off :) I could add the specific warning to a whitelist, but I buy RHV so that our problems get fixed properly.

Greetings
Klaas
(In reply to Peter Rajnoha from comment #18)
> This should be working already (if not, please let me know the lvm version +
> attach the -vvvv output from the LVM command which causes the lvmetad to get
> instantiated even if use_lvmetad=0 is set):
>
> ● lvm2-lvmetad.service - LVM2 metadata daemon
>    Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static;
>    vendor preset: enabled)
>    Active: inactive (dead)
>      Docs: man:lvmetad(8)
>     Tasks: 0 (limit: 4915)
>    CGroup: /system.slice/lvm2-lvmetad.service
>
> [0] raw/~ # lvmconfig --type current global/use_lvmetad
> use_lvmetad=0
>
> [0] raw/~ # pvscan --cache
>
> [0] raw/~ # systemctl status lvm2-lvmetad
> ● lvm2-lvmetad.service - LVM2 metadata daemon
>    Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static;
>    vendor preset: enabled)
>    Active: inactive (dead)
>      Docs: man:lvmetad(8)
>     Tasks: 0 (limit: 4915)
>    CGroup: /system.slice/lvm2-lvmetad.service
>
> [0] raw/~ # pvs
> ...
>
> [0] raw/~ # vgs
> ...
>
> [0] raw/~ # lvs
> ...
>
> [0] raw/~ # systemctl status lvm2-lvmetad
> ● lvm2-lvmetad.service - LVM2 metadata daemon
>    Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static;
>    vendor preset: enabled)
>    Active: inactive (dead)
>      Docs: man:lvmetad(8)
>     Tasks: 0 (limit: 4915)
>    CGroup: /system.slice/lvm2-lvmetad.service

This is not just about lvm2-lvmetad.service. This issue is also about lvm2-lvmetad.socket.
If I unmask that service it will get started automatically:

root # systemctl status lvm2-lvmetad.service lvm2-lvmetad.socket
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:lvmetad(8)

● lvm2-lvmetad.socket - LVM2 metadata daemon socket
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; disabled; vendor preset: enabled)
   Active: active (listening) since Thu 2017-12-07 16:48:00 CET; 1min 24s ago
     Docs: man:lvmetad(8)
   Listen: /run/lvm/lvmetad.socket (Stream)

even with

root # lvmconfig --type current global/use_lvmetad
use_lvmetad=0

lvm2-2.02.171-8.el7.x86_64
(In reply to Klaas Demter from comment #27)
> ● lvm2-lvmetad.socket - LVM2 metadata daemon socket
>    Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; disabled;
>    vendor preset: enabled)
>    Active: active (listening) since Thu 2017-12-07 16:48:00 CET; 1min 24s ago
>      Docs: man:lvmetad(8)
>    Listen: /run/lvm/lvmetad.socket (Stream)
>
> even with
>
> root # lvmconfig --type current global/use_lvmetad
> use_lvmetad=0
>
> lvm2-2.02.171-8.el7.x86_64

But that is correct and expected - you have the socket prepared to instantiate the service in case the configuration is switched from use_lvmetad=0 to use_lvmetad=1. With use_lvmetad=0, there's nothing behind the socket really - it's just systemd keeping the socket file (/run/lvm/lvmetad.socket) prepared for any possible future service instantiation, but that doesn't allocate any more resources than the file itself. Systemd then monitors it for possible access, which will happen only if you switch to use_lvmetad=1 and call LVM commands with that. So it's by design - lvmetad is a socket-activated service and hence we keep the socket unit active (but not the service - that is instantiated on first socket access).
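For reference, the socket-activation pairing Peter describes is wired up by a socket unit roughly along these lines - a simplified sketch modeled on the `Listen: /run/lvm/lvmetad.socket (Stream)` line above; the actual unit file shipped with lvm2 may carry additional directives:

```ini
[Unit]
Description=LVM2 metadata daemon socket

[Socket]
# systemd owns this socket file; a client connecting to it is what
# instantiates lvm2-lvmetad.service on demand.
ListenStream=/run/lvm/lvmetad.socket

[Install]
WantedBy=sysinit.target
```

Because the service side is only instantiated on the first connection, an active-but-idle socket costs nothing while use_lvmetad=0, which is exactly Peter's point.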
Created attachment 1364722 [details]
Proposed Patch for this bz

Proposed fix; also includes a migration to unmask.
(In reply to Klaas Demter from comment #29)
> Created attachment 1364722 [details]
> Proposed Patch for this bz
>
> Proposed fix, also includes a migration to unmask.

Thanks Klaas! Would you post this patch to oVirt gerrit? The easiest way is:

1. yum install git-review
2. git clone git://gerrit.ovirt.org/vdsm
3. apply your patch, commit
4. git review  # will send your patch to oVirt gerrit

If you have trouble, I can post the patch for you.
Peter, current oVirt disables and masks both lvm2-lvmetad.service and lvm2-lvmetad.socket, and sets global/use_lvmetad to 0.

We want to make sure that lvmetad is not used by anything on the system, even if some command overrides global/use_lvmetad from the --config command line. lvmetad is not compatible with the way we use LVM on shared storage, and we have had serious issues with running lvmetad when using vgremove. For example, vgremove returned with a zero exit code *before* the operation was completed, and the operation ran in the background.

With your suggested setup, lvm2-lvmetad.socket will be active, and lvm2-lvmetad.service will happily run the first time someone overrides global/use_lvmetad. Do you think this is safe enough?

We would prefer that LVM change the dependencies on startup, so that lvm2-lvmetad.socket is not required by other services. We don't want to run the lvmetad service, and we don't want socket activation for a service which should never run.
Peter, see bug 1403836 for the issue with pvremove (not vgremove as I wrote in previous comment).
You could still mask the service; it's just the socket that's creating the dependency problems. I can fix that in the patch on Monday if that is really needed.
(In reply to Nir Soffer from comment #31)
> Peter, current oVirt disables and masks both lvm2-lvmetad.serive and
> lvm2-lvmetad.socket, and set global/use_lvmetad to 0.
>
> We want to make sure that lvmetad is not used by anything on the system,
> even if some command is overriding global/use_lvmetad from --config command
> line.

If you mask both lvm2-lvmetad.socket and .service, and some command overrides use_lvmetad to use_lvmetad=1, then that command will print a warning about the inability to communicate with lvmetad, because it's not going to be instantiated due to the masking.

So you either end up with this warning (if use_lvmetad=0 in the global lvm.conf and lvmetad is running because some other command temporarily switched to use_lvmetad=1 with --config):

  WARNING: Not using lvmetad because config setting use_lvmetad=0.
  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).

OR this warning (if use_lvmetad=1 either in the global lvm.conf or by --config, and lvmetad is not running because the socket/service unit is masked and it's not possible to instantiate lvmetad as a service):

  WARNING: Failed to connect to lvmetad. Falling back to device scanning.

So which warning do you prefer? :) I don't think either of the two is a win. What I want to say is that masking the socket/service unit is not quite a good solution if you want to support commands that may override the use_lvmetad setting. Maybe what would be better is to simply have a configuration line in lvm.conf to disable any further overrides with --config (if that's too coarse, then we'd need a way to disable overrides per config option).
(In reply to Nir Soffer from comment #32)
> Peter, see bug 1403836 for the issue with pvremove (not vgremove as I wrote
> in previous comment).

(--- just a note, not directly related to this bug report ---)
Hmm, if pvremove exits with success, the LVM2 PV signature MUST be removed. If it's not, it's a bug. Maybe you refer to udev still holding information about the device as having an LVM2 PV signature? That one is possible because we don't synchronize with the udev database when it comes to PV signature creation/deletion.
(--- just a note, not directly related to this bug report ---)
Here is an example session, showing what happens after unmasking and enabling lvm2-lvmetad.service and lvm2-lvmetad.socket.

# systemctl unmask lvm2-lvmetad.service lvm2-lvmetad.socket
Removed symlink /etc/systemd/system/lvm2-lvmetad.service.
Removed symlink /etc/systemd/system/lvm2-lvmetad.socket.

# systemctl enable lvm2-lvmetad.service lvm2-lvmetad.socket
# systemctl start lvm2-lvmetad.socket

# systemctl status lvm2-lvmetad.socket
● lvm2-lvmetad.socket - LVM2 metadata daemon socket
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; enabled; vendor preset: enabled)
   Active: active (listening) since Fri 2017-12-08 16:01:25 IST; 13s ago
     Docs: man:lvmetad(8)
   Listen: /run/lvm/lvmetad.socket (Stream)

Dec 08 16:01:25 voodoo6.tlv.redhat.com systemd[1]: Listening on LVM2 metadata daemon socket.
Dec 08 16:01:25 voodoo6.tlv.redhat.com systemd[1]: Starting LVM2 metadata daemon socket.

# systemctl status lvm2-lvmetad.service
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:lvmetad(8)

Dec 07 04:42:13 voodoo6.tlv.redhat.com systemd[1]: Cannot add dependency job for unit lvm2-lvmetad.service, ignoring: Unit is masked.
...

Now let's run an lvs command trying to use lvmetad:

# lvs --config 'global { use_lvmetad = 1 }'
  WARNING: Device for PV bSRiP0-GjWy-IbWV-BuAU-yoRi-XDPQ-HEr1CX not found or rejected by a filter.
  WARNING: Device for PV VblUHM-IuSm-KurY-4wm9-KIJd-FeLY-Ge5Jd0 not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV 507281e9-0205-444f-aed7-750d062a70e1/metadata while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV 507281e9-0205-444f-aed7-750d062a70e1/outbox while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV 507281e9-0205-444f-aed7-750d062a70e1/xleases while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV 507281e9-0205-444f-aed7-750d062a70e1/leases while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV 507281e9-0205-444f-aed7-750d062a70e1/ids while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV 507281e9-0205-444f-aed7-750d062a70e1/inbox while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV 507281e9-0205-444f-aed7-750d062a70e1/master while checking used and assumed devices.
  WARNING: Device for PV nEGdUC-xyqA-jG2V-eCK1-ap5H-ACAN-H85WTh not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV ovirt-local/pool0_tmeta while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV ovirt-local/pool0_tdata while checking used and assumed devices.
  LV                                   VG                                   Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  30d786c0-312f-4d0e-856c-67734fb8136b 507281e9-0205-444f-aed7-750d062a70e1 -wi-----p- 128.00m
  c0a6e665-0f10-42e7-a552-9108c9b26153 507281e9-0205-444f-aed7-750d062a70e1 -wi-----p- 128.00m
  ids                                  507281e9-0205-444f-aed7-750d062a70e1 -wi-ao--p- 128.00m
  inbox                                507281e9-0205-444f-aed7-750d062a70e1 -wi-a---p- 128.00m
  leases                               507281e9-0205-444f-aed7-750d062a70e1 -wi-a---p- 2.00g
  master                               507281e9-0205-444f-aed7-750d062a70e1 -wi-ao--p- 1.00g
  metadata                             507281e9-0205-444f-aed7-750d062a70e1 -wi-a---p- 512.00m
  outbox                               507281e9-0205-444f-aed7-750d062a70e1 -wi-a---p- 128.00m
  xleases                              507281e9-0205-444f-aed7-750d062a70e1 -wi-a---p- 1.00g
  pool0                                ovirt-local                          twi---tzp- 40.00g         0.00   0.49
  test                                 ovirt-local                          Vwi-a-tzp- 10.00g  pool0        0.00
  lv_home                              vg0                                  -wi-ao---- 924.00m
  lv_root                              vg0                                  -wi-ao---- 14.60g
  lv_swap                              vg0                                  -wi-ao---- 4.00g
  WARNING: Not using lvmetad because config setting use_lvmetad=0.
  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).
# systemctl status lvm2-lvmetad.service
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-12-08 16:02:22 IST; 6min ago
     Docs: man:lvmetad(8)
 Main PID: 25128 (lvmetad)
   CGroup: /system.slice/lvm2-lvmetad.service
           └─25128 /usr/sbin/lvmetad -f

Dec 08 16:02:22 voodoo6.tlv.redhat.com systemd[1]: Started LVM2 metadata daemon.
Dec 08 16:02:22 voodoo6.tlv.redhat.com systemd[1]: Starting LVM2 metadata daemon...

The service was started - why would we run a service which is harmful on this host?

Now we get warnings on every lvm command:

# lvs
  WARNING: Not using lvmetad because config setting use_lvmetad=0.
  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).
  LV      VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_home vg0  -wi-ao---- 924.00m
  lv_root vg0  -wi-ao---- 14.60g
  lv_swap vg0  -wi-ao---- 4.00g

Here is the lvs command as vdsm runs it:

# lvs --config 'global { use_lvmetad=0 }'
  WARNING: Not using lvmetad because config setting use_lvmetad=0.
  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).
  LV      VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_home vg0  -wi-ao---- 924.00m
  lv_root vg0  -wi-ao---- 14.60g
  lv_swap vg0  -wi-ao---- 4.00g
  WARNING: Not using lvmetad because config setting use_lvmetad=0.
  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).

These warnings will be logged to the vdsm log for every lvm command run by vdsm.
If we stop the service, we stop getting the warnings:

# systemctl stop lvm2-lvmetad.service
Warning: Stopping lvm2-lvmetad.service, but it can still be activated by:
  lvm2-lvmetad.socket

# lvs --config 'global { use_lvmetad=0 }'
  LV      VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_home vg0  -wi-ao---- 924.00m
  lv_root vg0  -wi-ao---- 14.60g
  lv_swap vg0  -wi-ao---- 4.00g

With the current oVirt configuration - masking lvm2-lvmetad.*:

# lvs --config 'global { use_lvmetad=1 }'
  /run/lvm/lvmetad.socket: connect failed: Connection refused
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  LV      VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_home vg0  -wi-ao---- 924.00m
  lv_root vg0  -wi-ao---- 14.60g
  lv_swap vg0  -wi-ao---- 4.00g

This looks like a much safer setup to me.
(In reply to Klaas Demter from comment #33)
> you could still mask the service, its just the socket thats creating the
> dependency problems. I can fix that monday in patch if that is really needed.

Sounds interesting - does it eliminate the warnings in the system logs?
I think it should stop the dependency messages - yes.

root # lvs --config 'global { use_lvmetad=1 }'
  Daemon lvmetad returned error 104
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  LV      VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg00 -wi-ao---- 50.00g
  lv_swap vg00 -wi-ao---- 3.88g

is what lvm will say with a masked service but an open socket. I'll change a few lines in the patch and submit it to gerrit. But in general I think if you explicitly overwrite it on the command line, it's your own fault if it messes something up :D
Quick follow-up, I tested it: it does solve the dependency message, but the socket then shows:

● lvm2-lvmetad.socket - LVM2 metadata daemon socket
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; enabled; vendor preset: enabled)
   Active: failed (Result: resources) since Fri 2017-12-08 15:40:31 CET; 5min ago
     Docs: man:lvmetad(8)
   Listen: /run/lvm/lvmetad.socket (Stream)

Dec 08 15:40:31 hostname systemd[1]: lvm2-lvmetad.socket failed to queue service startup job (Maybe the service file is missing or not a non-template unit?): Unit is masked.
Dec 08 15:40:31 hostname systemd[1]: Unit lvm2-lvmetad.socket entered failed state.

so just masking the service is not a viable option :)
(In reply to Nir Soffer from comment #36)
> Here is example session, showing what happens after unmasking and enabling
> lvm2-lvmetad.service and lvm2-lvmetad.socket.
>
> # systemctl unmask lvm2-lvmetad.service lvm2-lvmetad.socket
> Removed symlink /etc/systemd/system/lvm2-lvmetad.service.
[...]
>
> Looks much safer setup to me.

So do you still want me to send the patch to gerrit even though one could deliberately overwrite the config on the CLI? In that case I'll send the patch as a fix for this bug.

Or does oVirt/RHV need LVM to change its dependencies so it will run fine with the services masked as long as the config is set to "use_lvmetad = 0"? In that case this bug needs to be changed to "lvm2 should not be dependent on lvm2-lvmetad.socket/service if disabled in config" or something like that, and LVM needs to fix this.
(In reply to Klaas Demter from comment #40)
> So do you still want me to send the patch to gerrit even though one could
> deliberately overwrite the config on cli? In that case I'll send the patch
> as a fix for this bug
>
> Or does ovirt/rhv need lvm to change its dependencies so it'll run fine with
> services masked as long as the config is set to "use_lvmetad = 0". In this
> case this bug needs to be changed to "lvm2 should not be dependent on
> lvm2-lvmetad.socket/service if disabled in config" or something like that
> and lvm needs to fix this.

Klaas, I want to file an LVM RFE for improving this situation. I'm not confident about unmasking the lvm2-lvmetad service and socket, since then the only thing protecting us is the use_lvmetad = 0 configuration. If you think that unmasking is better than the current situation, please send your patch to gerrit for discussion.
This bug has not been marked as blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
Tal, I think this will be solved only in RHEL 8.0, which eliminates the lvmetad service, so we can defer this to 4.4.
Nir, since the LVM RFE is closed WONTFIX (bz#1546538), should we close this BZ as well? Since lvmetad is not going to be in RHEL 8, would this BZ still be relevant to us in RHV 4.4?
Removing stale needs infos.