Description of problem:
vgimportclone fails due to duplicate PV

Version-Release number of selected component (if applicable):
2.6.18-274.el5 x86_64
lvm2-2.02.84-6.el5.x86_64

How reproducible:
Very

Steps to Reproduce:
1. Attempt vgimportclone where a duplicate PV exists and lvm.conf uses the default filter:
   filter = [ "a/.*/" ]

Actual results:

  Found duplicate PV bhZhO7RE9wCVOGCZnL5MHfLtYf5Pz6tj: using /dev/sdc not /dev/sdb
  PV /dev/sdc   VG mikectst_rdm_vg01   lvm2 [2.00 GB / 0 free]

# pvs --noheadings -o vg_name /dev/sdc
  Found duplicate PV bhZhO7RE9wCVOGCZnL5MHfLtYf5Pz6tj: using /dev/sdb not /dev/sdc
  get_pv_from_vg_by_id: vg_read_internal failed to read VG mikectst_rdm_vg01

# pvs --config 'devices { filter = [ "a|sdb|", "r|.*|" ] }' --noheadings -o vg_name /dev/sdb
  testrdmclone_vg01

# pvs --config 'devices { filter = [ "a|sdc|", "r|.*|" ] }' --noheadings -o vg_name /dev/sdc
  mikectst_rdm_vg01

# vgimportclone -n testrdmclone_vg02 /dev/sdc &> /tmp/vgimportclone.out
[..]
++ lvm pvs --noheadings -o vg_name /dev/sdc
+ PVS_OUT=' '
+ checkvalue 0 '/dev/sdc is not a PV.'          <<===
+ '[' 0 -ne 0 ']'
++ echo
++ grep -v '[[:space:]]+$'
+ PV_VGNAME=
+ '[' -z '' ']'
+ die 3 '/dev/sdc is not in a VG.'
+ code=3
+ shift
+ echo 'Fatal: /dev/sdc is not in a VG.'        <<==
Fatal: /dev/sdc is not in a VG.
+ exit 3

Work-around:
---
Edit the lvm.conf filter:

# By default we accept every block device:
filter = [ "a|/dev/sda$|", "a|/dev/sdc|", "r|.*|" ]

Then perform the following:

# pvscan
# vgimportclone -n testrdmclone_vg02 /dev/sdc &> /tmp/vgimportclone-filter.out

<snip>
+ lvm pvchange --uuid /tmp/snap.kDuO4794/vgimport0 --config 'global{activation=0}'
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  Physical volume "/tmp/snap.kDuO4794/vgimport0" changed
  1 physical volume changed / 0 physical volumes not changed
++ echo testrdmclone_vg02
+ NEWVGNAME=testrdmclone_vg02
+ lvm vgchange --uuid mikectst_rdm_vg01 --config 'global{activation=0}'
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  Volume group "mikectst_rdm_vg01" successfully changed
+ lvm vgrename mikectst_rdm_vg01 testrdmclone_vg02
  Volume group "mikectst_rdm_vg01" successfully renamed to "testrdmclone_vg02"
+ lvm vgscan --mknodes
  Reading all physical volumes. This may take a while...
  Found volume group "testrdmclone_vg02" using metadata type lvm2
+ exit 0
</snip>

Expected results:
vgimportclone succeeds without modifying the lvm.conf filter.

Additional info:
Noted in BZ697959, which is supposed to be resolved in lvm2-2.02.84-6.el5.x86_64.rpm.
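The trace above shows the script dying because pvs printed only whitespace for /dev/sdc. The failure path can be reconstructed as a runnable sketch; the helper names (checkvalue, die) appear in the trace, but their bodies here are assumptions, not the actual vgimportclone source:

```shell
# Reconstruction of the failure path seen in the trace (helper bodies
# are assumptions; `die` returns rather than exits so the sketch can
# be exercised without terminating the shell).
die() {
    code=$1
    shift
    echo "Fatal: $*"
    return "$code"
}

# pvs printed only whitespace because the duplicate-PV read failed:
PVS_OUT=' '
# Strip whitespace-only lines, as the trace's grep does:
PV_VGNAME=$(echo "$PVS_OUT" | grep -v '^[[:space:]]*$')

if [ -z "$PV_VGNAME" ]; then
    MSG=$(die 3 '/dev/sdc is not in a VG.')
fi
```

With a whitespace-only PVS_OUT, PV_VGNAME ends up empty and the script takes the "/dev/sdc is not in a VG." exit, even though the PV is in a VG and only the duplicate-PV read failed.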
The whole point of vgimportclone is to make the duplicate PV(s) and VG coexist with the originals, so it must cope with the fact that you have duplicate PV(s).

The vgimportclone shell script confines its operations to the PV(s) specified on the command line by using a temporary lvm.conf; it shouldn't see any PV(s) other than those listed on the command line. No changes to the system lvm.conf's filter config should have any bearing on how vgimportclone functions, since vgimportclone completely replaces the filter config with its own.

Also, while the impact active LV(s) have on vgimportclone has been questioned in this BZ, we verify that an active LV doesn't negatively impact vgimportclone as part of the lvm2 testsuite, which runs every time a change is committed for lvm2.

Please have the customer run vgimportclone with the --debug option. This will preserve LVM_SYSTEM_DIR and the config that vgimportclone uses during runtime. Please have them create a tarball of the /tmp/snap.XXXXXXX dir that is created, e.g.:

tar -cvzf /tmp/vgimportclone.tar.gz -C /tmp snap.qn1vLVIS

and have them provide /tmp/vgimportclone.tar.gz to us.
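The isolation mechanism described above can be sketched as follows. This is a minimal illustration of the technique, not the actual vgimportclone source: it builds a throwaway LVM_SYSTEM_DIR whose filter accepts only the PVs named on the command line and rejects everything else (the PV list and directory names here are hypothetical):

```shell
# Sketch: generate a private lvm.conf that accepts only the listed
# PVs, as vgimportclone does internally (names/paths are examples).
PVLIST="/dev/sdc"
TMP_LVM_SYSTEM_DIR=$(mktemp -d /tmp/snap.XXXXXXXX)

FILTER=""
for pv in $PVLIST; do
    FILTER="${FILTER}\"a|^${pv}\$|\", "
done

cat > "${TMP_LVM_SYSTEM_DIR}/lvm.conf" <<EOF
devices {
    filter = [ ${FILTER}"r|.*|" ]
}
EOF

# Every lvm command the script runs is then pointed at this config,
# so the system lvm.conf filter never comes into play:
#   LVM_SYSTEM_DIR=${TMP_LVM_SYSTEM_DIR} lvm pvs
```

Because the trailing "r|.*|" rejects every device not explicitly accepted, the lvm commands inside the script can only ever see the clone PV(s) being imported.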
Created attachment 804161 [details] vgimport clone default lvm filter
Created attachment 804163 [details] vgimport clone modified lvm filter
I've done some further testing. I added -vvvv to the command line to preserve the tmp config. The next step was to edit the script as follows to ensure the contents were not removed:

if [ $KEEP_TMP_LVM_SYSTEM_DIR -eq 1 ]; then
    echo "${SCRIPTNAME}: LVM_SYSTEM_DIR (${TMP_LVM_SYSTEM_DIR}) must be cleaned up manually."
else
    echo "${TMP_LVM_SYSTEM_DIR}"    <<= Changed this line to keep TMP contents
fi

# LVM_BINARY=/usr/sbin/lvm vgimportclone --debug -vvvv --basevgname testrdmclone_vg02 /dev/sdc &> /tmp/cmd-output

However, the directory is empty:

# tar -cvzf /tmp/vgimportclone.tar.gz -C /tmp/snap.jBJ25334
tar: Cowardly refusing to create an empty archive

# ls -l /tmp/snap.jBJ25334/
total 0

One thing I noted that is strange, which you can see in the attached cmd-output, is that -vvvv is passed as an argument, yet it did not add any additional verbosity.
(In reply to Jon from comment #10)
> Done some further testing. Added -vvvv to cmd line to preserve the tmp
> config. The next step I did was to edit the file as follows to ensure the
> contents were not removed:
>
> if [ $KEEP_TMP_LVM_SYSTEM_DIR -eq 1 ]; then
>     echo "${SCRIPTNAME}: LVM_SYSTEM_DIR (${TMP_LVM_SYSTEM_DIR}) must be
> cleaned up manually."
> else
>     echo "${TMP_LVM_SYSTEM_DIR}"    <<= Change this line to keep TMP contents
> fi
>
> # LVM_BINARY=/usr/sbin/lvm vgimportclone --debug -vvvv --basevgname
> testrdmclone_vg02 /dev/sdc &> /tmp/cmd-output

The script isn't getting far enough to create the LVM_SYSTEM_DIR's lvm.conf, because the pvs command doesn't think /dev/sdc is in a VG.

> However, the directory is empty:
> # tar -cvzf /tmp/vgimportclone.tar.gz -C /tmp/snap.jBJ25334
> tar: Cowardly refusing to create an empty archive
>
> # ls -l /tmp/snap.jBJ25334/
> total 0
>
> One thing I noted is strange, which you can see in the attached cmd-output
> is that -vvvv is an argument, however it did not add any additional
> verbosity.

The additional output would've gone to stderr.
Created attachment 814777 [details] pvs -vvvv output outside of vgimportclone
I fixed vgimportclone (upstream) so that it no longer redirects stderr to /dev/null for three of the lvm commands it runs; see:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=65456a4a29d8f25ab75af2145b8a6b2a9ff391e5
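To illustrate the class of bug that fix addresses: when a command's stderr is redirected to /dev/null, diagnostics such as "Found duplicate PV" vanish and the failure looks like empty output. This sketch uses a stand-in function in place of a real lvm call, so the names here are hypothetical:

```shell
# Stand-in for an lvm command that writes data to stdout and a
# diagnostic to stderr (not a real lvm invocation).
lvm_stub() {
    echo "vg_name_output"
    echo "Found duplicate PV xxx: using /dev/sdc not /dev/sdb" >&2
}

# Before the fix: stderr discarded, so the diagnostic is lost.
OUT_BEFORE=$(lvm_stub 2> /dev/null)

# After the fix: stderr is left alone and the warning reaches the
# caller (captured to a file here only to demonstrate it survives).
ERRFILE=$(mktemp)
OUT_AFTER=$(lvm_stub 2> "$ERRFILE")
```

In both cases the captured stdout is identical; the difference is that after the fix the duplicate-PV warning is still visible to whoever ran the script, which is exactly the information that was missing when debugging this report.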
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux release for currently deployed products. This request is not yet committed for inclusion in a release.