Description of problem:
vdsm.storage.lvm pvcreate fails because the lvmetad service is running on the host while vdsm disables it in the lvm --config it passes. The lvm._createpv() function returns the following:

>>> from storage import lvm
>>> lvm._createpv(["/dev/md127"], metadataSize=0)
(0, ['  Physical volume "/dev/md127" successfully created'], ['  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', '  /dev/loop0p1: read failed after 0 of 512 at 49872896: Input/output error', '  /dev/loop0p1: read failed after 0 of 512 at 49971200: Input/output error', '  /dev/loop0p1: read failed after 0 of 512 at 0: Input/output error', '  WARNING: Error counts reached a limit of 3. Device /dev/loop0p1 was disabled'])

The PV is not available and not visible to pvs.

How reproducible:
Always

Steps to Reproduce:
1. Create a PV using vdsm.storage.lvm._createpv

Actual results:
The PV is not created properly.

Additional info:
The PV is created successfully without any issue if I set use_lvmetad=1. On RHEL 7.1 the lvm2-lvmetad service is running, but vdsm disables it by setting use_lvmetad=0 in the lvm config it passes.

[root@dhcp42-139 ~]# rpm -qa | grep lvm2
lvm2-2.02.115-3.el7.x86_64
lvm2-libs-2.02.115-3.el7.x86_64
[root@dhcp42-139 ~]#
[root@dhcp42-139 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.1 (Maipo)
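The conflict is between the host-wide default and the per-command override vdsm passes: on RHEL 7.1 the installed /etc/lvm/lvm.conf enables lvmetad (the thread below confirms the default is "1"), while every vdsm lvm invocation overrides it on the command line. The fragment below shows only the relevant setting; the surrounding options in a real lvm.conf differ by release:

```
# /etc/lvm/lvm.conf -- relevant fragment only (RHEL 7.1 default)
global {
    # lvm2-lvmetad.service is enabled and running; plain lvm commands
    # consult the daemon's cache instead of scanning devices.
    use_lvmetad = 1
}
```

vdsm's own commands append --config '... global { ... use_lvmetad=0 ... } ...', so they scan storage directly and never update the running daemon's cache.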
/var/log/vdsm/supervdsm.log:
MainProcess|Thread-18::DEBUG::2015-06-24 16:32:33,650::lvm::291::Storage.Misc.excCmd::(cmd) /usr/sbin/lvm pvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/vdf1|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --dataalignment 640K /dev/vdf1 (cwd None)

[root@dhcp42-139 share]# pvs
  PV         VG              Fmt  Attr PSize  PFree
  /dev/vda2  rhel_dhcp42-139 lvm2 a--  39.51g 44.00m

lvm command line:
[root@dhcp42-139 share]# pvcreate /dev/vdf1 --dataalignment 640k
  Physical volume "/dev/vdf1" successfully created
[root@dhcp42-139 share]# pvs
  PV         VG              Fmt  Attr PSize    PFree
  /dev/vda2  rhel_dhcp42-139 lvm2 a--    39.51g   44.00m
  /dev/vdf1                  lvm2 ---  1023.00m 1023.00m
1) Success if use_lvmetad=0 is set in /etc/lvm/lvm.conf:
It works if we set use_lvmetad=0 in /etc/lvm/lvm.conf and run vdsm.storage.lvm.pvcreate.

vdsm log as follows:
/var/log/vdsm/supervdsm.log:MainProcess|Thread-20::DEBUG::2015-06-24 16:48:08,317::lvm::291::Storage.Misc.excCmd::(cmd) /usr/sbin/lvm pvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/vdf1|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --dataalignment 640K /dev/vdf1 (cwd None)

[root@dhcp42-139 share]# pvs
  PV         VG              Fmt  Attr PSize  PFree
  /dev/vda2  rhel_dhcp42-139 lvm2 a--  39.51g 44.00m

2) Success if use_lvmetad=1 is set in vdsm/storage/lvm.py, with the default value ("1") in the /etc/lvm/lvm.conf file.

vdsm log as follows:
/var/log/vdsm/supervdsm.log:MainProcess|Thread-14::DEBUG::2015-06-24 16:53:40,808::lvm::291::Storage.Misc.excCmd::(cmd) /usr/sbin/lvm pvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/vdf1|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=1 } backup { retain_min = 50 retain_days = 0 } ' --dataalignment 640K /dev/vdf1 (cwd None)

[root@dhcp42-139 share]# pvs
  PV         VG              Fmt  Attr PSize    PFree
  /dev/vda2  rhel_dhcp42-139 lvm2 a--    39.51g   44.00m
  /dev/vdf1                  lvm2 ---  1023.00m 1023.00m

[root@dhcp42-139 share]# pvs -o pe_start
  1st PE
    1.00m
  640.00k
(In reply to Timothy Asir from comment #3)
> 1) Success if the value in /etc/lvm/lvm.conf for use_lvmetad=0
> it works if we set the value for use_lvmetad=0 in /etc/lvm/lvm.conf
> and run vdsm.storage.lvm.pvcreate
>
> vdsm log as follows:
> /var/log/vdsm/supervdsm.log:MainProcess|Thread-20::DEBUG::2015-06-24
> 16:48:08,317::lvm::291::Storage.Misc.excCmd::(cmd) /usr/sbin/lvm pvcreate
> --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [ '\''a|/dev/vdf1|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --dataalignment 640K /dev/vdf1 (cwd None)
>
> [root@dhcp42-139 share]# pvs
>   PV         VG              Fmt  Attr PSize  PFree
>   /dev/vda2  rhel_dhcp42-139 lvm2 a--  39.51g 44.00m

Correction: the pvs output for case 1 should read:

[root@dhcp42-139 share]# pvs
  PV         VG              Fmt  Attr PSize    PFree
  /dev/vda2  rhel_dhcp42-139 lvm2 a--    39.51g   44.00m
  /dev/vdf1                  lvm2 ---  1023.00m 1023.00m

> 2) Success if the value for use_lvmetad=1 is set in vdsm/storage/lvm.py
> and with the default value which is "1" in the /etc/lvm/lvm.conf file.
>
> vdsm log as follows:
> /var/log/vdsm/supervdsm.log:MainProcess|Thread-14::DEBUG::2015-06-24
> 16:53:40,808::lvm::291::Storage.Misc.excCmd::(cmd) /usr/sbin/lvm pvcreate
> --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [ '\''a|/dev/vdf1|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=1 } backup { retain_min = 50 retain_days =
> 0 } ' --dataalignment 640K /dev/vdf1 (cwd None)
>
> [root@dhcp42-139 share]# pvs
>   PV         VG              Fmt  Attr PSize    PFree
>   /dev/vda2  rhel_dhcp42-139 lvm2 a--    39.51g   44.00m
>   /dev/vdf1                  lvm2 ---  1023.00m 1023.00m
>
> [root@dhcp42-139 share]# pvs -o pe_start
>   1st PE
>     1.00m
>   640.00k
(In reply to Timothy Asir from comment #0)
> How reproducible:
> Always

It does not happen on RHEL 7.1 when working with a block storage domain in vdsm. Can you explain how to reproduce this with vdsm?

Also, vdsm does not use metadataSize=0 - this option was added for gluster. Maybe additional tweaks are needed in this case.
(In reply to Timothy Asir from comment #2)
> /var/log/vdsm/supervdsm.log:MainProcess|Thread-18::DEBUG::2015-06-24
> 16:32:33,650::lvm::291::Storage.Misc.excCmd::(cmd) /usr/sbin/lvm pvcreate
> --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [ '\''a|/dev/vdf1|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --dataalignment 640K /dev/vdf1 (cwd None)

This call does not come from the vdsm.storage.lvm module - we do not use supervdsm to run lvm commands, but sudo. We also do not use the "--dataalignment 640K" argument - can you point me to the code you are running?
(In reply to Timothy Asir from comment #0)
> The pv is not available and not visible to pvs

Stupid question - did you run pvscan --cache before checking with the pvs command whether the PV was created? Vdsm bypasses lvmetad, because we use our own metadata cache, so you must update lvmetad before using lvm commands on the machine.
Yes, if I run pvscan --cache after creating the PV with lvm._createpv() and before the pvs command, it shows up correctly. I have checked this on RHEL 7.1 and on Fedora 22 as well.

However, if we do not run pvscan --cache after creating the PV, it does not appear in pvs (and in the flow, the next vgcreate command simply creates a PV with default values, since the PV was not visible, and creates the VG).

Observation on RHEL 7.1:

[root@dhcp42-139 ~]# cd /usr/share/vdsm
[root@dhcp42-139 vdsm]# python
Python 2.7.5 (default, Feb 11 2014, 07:46:25)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-13)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from storage import lvm
>>> lvm._createpv(["/dev/vdf1"], metadataSize=0)
(0, ['  Physical volume "/dev/vdf1" successfully created'], ['  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!'])
>>>
[root@dhcp42-139 vdsm]# pvs
  PV         VG              Fmt  Attr PSize  PFree
  /dev/vda2  rhel_dhcp42-139 lvm2 a--  39.51g 44.00m
[root@dhcp42-139 vdsm]# pvscan --cache
[root@dhcp42-139 vdsm]# pvs
  PV         VG              Fmt  Attr PSize    PFree
  /dev/vda2  rhel_dhcp42-139 lvm2 a--    39.51g   44.00m
  /dev/vdf1                  lvm2 ---  1023.00m 1023.00m
[root@dhcp42-139 vdsm]#

Observation on Fedora 22:

[root@dhcp42-31 vdsm]# cd /usr/share/vdsm/
[root@dhcp42-31 vdsm]# python
Python 2.7.9 (default, Apr 15 2015, 12:08:00)
[GCC 5.0.0 20150319 (Red Hat 5.0.0-0.21)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from storage import lvm
>>> lvm._createpv(["/dev/vdf1"], metadataSize=0)
(0, ['  Physical volume "/dev/vdf1" successfully created'], ['  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!'])
>>>
[root@dhcp42-31 vdsm]# pvs
  WARNING: Device for PV 6SvzN4-XqKq-5ICO-3KQW-exGQ-7M6N-08vxfe not found or rejected by a filter.
  WARNING: Device for PV 6SvzN4-XqKq-5ICO-3KQW-exGQ-7M6N-08vxfe not found or rejected by a filter.
  PV             VG               Fmt  Attr PSize  PFree
  /dev/vda2      fedora_dhcp42-31 lvm2 a--  39.51g 4.00m
  /dev/vdb1      VolGroup         lvm2 a--  2.00g  1020.00m
  unknown device VolGroup         lvm2 a-m  39.51g 0
[root@dhcp42-31 vdsm]#
[root@dhcp42-31 vdsm]# pvscan --cache
[root@dhcp42-31 vdsm]# pvs
  WARNING: Device for PV 6SvzN4-XqKq-5ICO-3KQW-exGQ-7M6N-08vxfe not found or rejected by a filter.
  WARNING: Device for PV 6SvzN4-XqKq-5ICO-3KQW-exGQ-7M6N-08vxfe not found or rejected by a filter.
  PV             VG               Fmt  Attr PSize  PFree
  /dev/vda2      fedora_dhcp42-31 lvm2 a--  39.51g 4.00m
  /dev/vdb1      VolGroup         lvm2 a--  2.00g  1020.00m
  /dev/vdf1                       lvm2 ---  3.00g  3.00g
  unknown device VolGroup         lvm2 a-m  39.51g 0
(In reply to Timothy Asir from comment #8)
> yes if i run pvscan --cache ... it is showing correctly.

Expected...

> However if we did not run pvscan --cache ... it does not
> appear to pvs

Also expected...

> (and in the flow, the next vgcreate command simply creates a
> pv with default value (since the pv was not available) and creates vg.)

What do you mean by "the next vgcreate command"? vdsm knows about the new PV since it always accesses storage directly, bypassing lvmetad. lvm commands in the shell access lvmetad instead of going to storage, and may be confused if you do not update the cache before each command.

I don't see any bug.
If this is the case, we need to call "pvscan --cache" after storage.lvm.pvcreate, because many users want to build their LVM layout phase by phase: first create only the PVs and show the details of the created PVs (in the UI); then, some time later, create the VGs and LVs.

Hopefully this issue does not occur when we call storage.lvm.pvcreate followed by storage.lvm.vgcreate, because vdsm knows about the new PV. In our case, however, we are using storage.lvm.pvcreate together with the lvm vgcreate command. If someone wants to create only a PV and query its details, the current vdsm does not support that completely - one has to run "pvscan --cache" after creating the PV to get its details.

So my suggestion is that vdsm should run "pvscan --cache" in the pvcreate function, behind an option, so that users of this API do not need to care about it. What is your opinion?

What do you mean by "the next vgcreate command"?
Currently gluster.storage uses vdsm.storage to create the list of PVs and the lvm command line to create the VG. We observed a change in the PV before and after creating the VG: the PV as created by pvcreate is different after vgcreate. It looks like the vgcreate command line failed to find the PV that was created earlier, so it created a new PV with default values and used it for the VG.
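The suggestion above could be sketched roughly as below. Everything here is hypothetical - the function name `createpv`, the option name `refresh_lvmetad`, and the command runner are illustrative stand-ins, not vdsm's actual API; only the underlying commands (pvcreate, pvscan --cache) come from this thread:

```python
import subprocess

def createpv(devices, refresh_lvmetad=False):
    """Hypothetical pvcreate wrapper: optionally refresh lvmetad
    afterwards so plain lvm commands (pvs, vgcreate) see the new PV.
    Returns the list of commands to run, so callers can inspect them.
    """
    cmds = [["pvcreate"] + list(devices)]
    if refresh_lvmetad:
        # Push the on-disk metadata into the lvmetad cache, as done
        # manually in the reproduction above.
        cmds.append(["pvscan", "--cache"])
    return cmds

def run(cmds):
    # Execute each command; raises CalledProcessError on failure.
    for cmd in cmds:
        subprocess.check_call(cmd)
```

With the flag set, `createpv(["/dev/vdf1"], refresh_lvmetad=True)` queues ["pvcreate", "/dev/vdf1"] followed by ["pvscan", "--cache"]; without it, only the pvcreate runs, matching vdsm's current behaviour.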
(In reply to Timothy Asir from comment #10)
> If this is a case, we need to call "pvscan --cache" after
> storage.lvm.pvcreate.

Not sure who "we" is - if you mean vdsm: no, vdsm should not call pvscan --cache, since we do not use lvmetad and do not care about its state. We bypass lvmetad because its cache cannot be correct for clustered storage, where one host performs operations on the shared storage while other hosts consume that storage and know nothing about the changes made on the first host.

If you want to use lvm commands on a machine running vdsm, yes, you will need to call pvscan --cache before lvm commands to update lvmetad - or use --config like vdsm does to go directly to the storage, for example:

    vgcreate --config "global {use_lvmetad=0}" ...

This command bypasses lvmetad and goes directly to storage; this is what vdsm does, as you can see in *all* lvm commands vdsm runs.

We plan to integrate vdsm with lvmetad in a future version, hoping that we can replace vdsm's private cache with lvmetad, but we have not started working on this yet.

> What do you mean by the next vgcreate command?
> Currently gluster.storage uses vdsm.storage to create list of pvs and lvm
> command to create vg. We observed a change in pv before and after create the
> vg. That means the pv's at the time of pvcreate is different after vg
> create. Looks like the "lvm command line vg create" failed to identify/found
> the pv which was created early and creates a new pv with default values and
> used it for the vg.

Why are you mixing vdsm.lvm functions and calling lvm commands directly? You should use either only vdsm.lvm functions or only lvm commands. Mixing both does not make sense.
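The --config override visible in the supervdsm.log lines above can be assembled programmatically. The sketch below is only an illustration of the idea - `build_lvm_config` is a made-up helper that does not match vdsm's internal code; the option names and values are copied from the logged command lines (with double quotes in the filter instead of the logged escaped single quotes):

```python
def build_lvm_config(allowed_devices):
    """Build an lvm --config string that bypasses lvmetad and
    restricts device scanning to the given devices, mirroring the
    options seen in the supervdsm.log command lines."""
    # Accept only the listed devices, reject everything else.
    filt = ", ".join('"a|%s|"' % d for d in allowed_devices) + ', "r|.*|"'
    devices = (
        'devices { preferred_names = ["^/dev/mapper/"] '
        "ignore_suspended_devices=1 write_cache_state=0 "
        "disable_after_error_count=3 obtain_device_list_from_udev=0 "
        "filter = [ %s ] }" % filt
    )
    # use_lvmetad=0 makes the command scan storage directly instead of
    # consulting the lvmetad daemon's cache.
    glob = ("global { locking_type=1 prioritise_write_locks=1 "
            "wait_for_locks=1 use_lvmetad=0 }")
    backup = "backup { retain_min = 50 retain_days = 0 }"
    return " ".join([devices, glob, backup])

# Assembled the way the logged pvcreate invocation looks:
cmd = ["/usr/sbin/lvm", "pvcreate",
       "--config", build_lvm_config(["/dev/vdf1"]),
       "--dataalignment", "640K", "/dev/vdf1"]
```

Because the whole configuration travels with each command, no host-wide lvm.conf edits and no lvmetad cache refreshes are needed for vdsm's own operations.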
I don't see any bug here. This is explained in comment 11 and earlier. Please reopen if you think some action is needed on the vdsm side.