Bug 681541 - system-config-lvm crashes when duplicate PV found or PV missing
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: system-config-lvm
Version: 6.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Marek Grac
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 756082 782183 840699
 
Reported: 2011-03-02 14:23 UTC by Bruno Mairlot
Modified: 2018-11-26 17:24 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-08-02 15:26:21 UTC
Target Upstream Version:



Description Bruno Mairlot 2011-03-02 14:23:43 UTC
Description of problem:

I have a system with multiple iSCSI connections. On each of these (multipathed) devices I have created a physical volume, then a volume group, and through virt-manager I have created the logical volumes.
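
For reference, a sketch of the per-device host-side setup described above (the alias and VG names are illustrative; alias1 follows the multipath alias style used later in this report):

pvcreate /dev/mapper/alias1               # PV on the multipathed iSCSI device
vgcreate vg_iscsi /dev/mapper/alias1      # VG on top of it
# the LV used as guest storage was then created through virt-manager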

Now, after some handling of these volume groups, I have two problems:

- the hypervisor sees a volume group that is actually defined within one of my guests, but it doesn't see the underlying physical volume.

Therefore, each pv*, vg* or lv* command returns a warning: Couldn't find device with uuid SYYswn-VBES-9IX8-5h8V-QAyP-Coa0-qBAvvY.

- following on from that, a duplicate physical volume is found within the system. That isn't the real problem, but I wanted to check with the tool system-config-lvm.

At launch, the interface scans the LVM configuration and then crashes.

Version-Release number of selected component (if applicable):

The installed version is system-config-lvm-1.1.12-7.el6.noarch.

How reproducible:

I can reproduce it on my server, but I can't tell exactly how to reproduce it elsewhere.

Steps to Reproduce:
1. Have an LVM configuration that behaves strangely (e.g. the host sees a VG defined within a guest)
2. Launch system-config-lvm
  
Actual results:

system-config-lvm crashes with the following trace : 

[root@castafiore ~]# system-config-lvm 
Traceback (most recent call last):
  File "/usr/sbin/system-config-lvm", line 172, in <module>
    runFullGUI()
  File "/usr/sbin/system-config-lvm", line 157, in runFullGUI
    blvm = baselvm(glade_xml, app)
  File "/usr/sbin/system-config-lvm", line 105, in __init__
    self.volume_tab_view = Volume_Tab_View(glade_xml, self.lvmm, self.main_win)
  File "/usr/share/system-config-lvm/Volume_Tab_View.py", line 133, in __init__
    self.prepare_tree()
  File "/usr/share/system-config-lvm/Volume_Tab_View.py", line 214, in prepare_tree
    self.model_factory.reload()
  File "/usr/share/system-config-lvm/lvm_model.py", line 175, in reload
    self.__set_LVs_props() # has to come after link_mirrors
  File "/usr/share/system-config-lvm/lvm_model.py", line 813, in __set_LVs_props
    lv.set_properties(self.__get_data_for_LV(lv))
  File "/usr/share/system-config-lvm/lvm_model.py", line 869, in __get_data_for_LV
    text_list.append(self.__getFS(lv.get_path()))
  File "/usr/share/system-config-lvm/lvm_model.py", line 1021, in __getFS
    if path_list[0].getPartition()[1].id == ID_EMPTY:
TypeError: 'NoneType' object is unsubscriptable

Expected results:

system-config-lvm should run without crashing.

Additional info:

I wonder how the host can see the VG within the guest machine.

Comment 2 RHEL Program Management 2011-03-02 14:58:03 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.

Comment 8 Marek Grac 2011-09-19 08:30:56 UTC
This looks like a problem with something different. Both of my tests work as expected and system-config-lvm runs fine.

Unit test for missing PV:

# create two ~30 MB backing files
dd if=/dev/zero of=loopA count=30000 bs=1024
dd if=/dev/zero of=loopB count=30000 bs=1024

# attach both files to loop devices and initialize the PVs
losetup -f loopA; losetup -f loopB
pvcreate /dev/loop0 /dev/loop1
# build a VG and an LV on the first PV
vgcreate foo /dev/loop0; lvcreate foo -n goo -l 10%FREE
# detach the second loop device so its PV goes missing
losetup -d /dev/loop1
run system-config-lvm

Unit test for duplicate PV:

# create a backing file, attach it, and build a VG with one LV
dd if=/dev/zero of=loopA count=30000 bs=1024
losetup -f loopA
pvcreate /dev/loop0
vgcreate foo /dev/loop0
lvcreate -n hugo foo -l 10%FREE
# deactivate the VG, then clone the backing file and attach the copy,
# which makes the same PV appear twice
vgchange -an foo
cp loopA loopB
losetup -f loopB
pvscan
run system-config-lvm

Comment 9 Bruno Mairlot 2011-09-19 08:40:39 UTC
Indeed, the situation I experienced was different.

(I'll try to reproduce it exactly.) In my case the problem is that the host OS is seeing the virtual VGs that live within a VM.

Here are the broad steps to reproduce:

- Install RHEL with the virtualization tools (ideally, leave some free space in the VG)
- Create a VG and carve an LV from it.
- Set this LV as the storage of a virtual machine (I used KVM).
- Boot the VM and, during installation, set up a standard LVM layout.

When using pvscan (or vgscan), the host server sees the VG of the guest but apparently can't see the PV.
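
A quick way to see the inconsistency from the host is to compare the PV and VG reports (these are standard LVM report fields; this is a sketch, not output from the affected machine):

pvs -o pv_name,vg_name,pv_uuid    # the guest's PV device is absent here
vgs -o vg_name,pv_count           # yet the guest's VG is listed here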

Please note that the VGs I created for the VMs are located on iSCSI storage.

I believe I can reproduce the setup exactly; I'll post it here.

Bruno

Comment 10 Petr Rockai 2011-09-19 12:51:01 UTC
Another possibility is that the bug is due to PV nesting within LVs. There is only limited support for this scenario and it is strongly suggested to use lvm.conf device filters to avoid this whenever possible. (You should exclude any LVs you have from LVM PV scanning. The documentation and examples in your lvm.conf should give you enough guidance.) This is especially true when you are dealing with virtual machine images.
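
A minimal sketch of such a filter, assuming (hypothetically) that the guest-backing LVs live in a host VG called myvg. LVM uses the first matching pattern, so the reject rule must come before any accept rules:

filter = [ "r|/dev/mapper/myvg-.*|", "a|.*|" ]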

Comment 13 Bruno Mairlot 2011-10-05 13:07:42 UTC
Hi,

Here is my filter from lvm.conf:

filter = [ "a|/dev/sda2|","a|/dev/mapper/.*|", "r|.*|" ] 

The /dev/sda2 is the main storage for the local installation; all other disks are connected through iSCSI and multipathed.

I noticed something while analysing the missing PVs: the problem appears only when the virtual machine uses two PVs grouped together within one VG. That is the usual scenario when you want to increase the storage available to the VM: you add a second PV, group it into the VG, then resize the filesystem (roughly as sketched below).
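
A sketch of that guest-side procedure, with illustrative device and volume names (vg_guest, lv_root), assuming an ext4 filesystem that can be grown online:

pvcreate /dev/vdb                             # initialize the newly attached second disk
vgextend vg_guest /dev/vdb                    # add it to the existing VG
lvextend -l +100%FREE /dev/vg_guest/lv_root   # grow the LV into the new space
resize2fs /dev/vg_guest/lv_root               # grow the filesystem to match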

Bruno

Comment 21 Dave Wysochanski 2011-12-12 16:19:25 UTC
Hi Bruno,

From looking at your filter (I assume this is in the host, not the guest, right?) and your .cache file:
filter = [ "a|/dev/sda2|","a|/dev/mapper/.*|", "r|.*|" ] 

It seems clear the filter will allow the host to scan the logical volumes that were assigned to the guest (the 'a|/dev/mapper/.*|' part will pick up all the multipath paths as well as the logical volumes). Since this is allowed, those logical volumes will also be seen as "PVs": they've been exported to the guest, and inside the guest they were initialized as PVs. I don't think this is what you intend.

To avoid this you should set the filter more restrictively, so that the logical volumes exported to the guest are not scanned by any lvm commands in the host.

To see which devices lvm commands will scan, look at the contents of the /etc/lvm/cache/.cache file. If it contains any of the logical volumes exported to the guests, it probably needs to be adjusted. One common adjustment is as follows:
For multipath LUNs using user_friendly_names, add
"a|/dev/mapper/mpath.*|" instead of
"a|/dev/mapper/.*|"

You should probably look at https://access.redhat.com/kb/docs/DOC-2991 for how to configure the filter properly.

Even so, this should not cause system-config-lvm to crash.

Comment 22 Bruno Mairlot 2011-12-29 12:09:16 UTC
Hi Dave,

I agree the filter should be set more restrictively, but I don't use friendly names in the multipath configuration, so I can't use "a|/dev/mapper/mpath.*|". I could, however, be fully explicit by listing each multipath alias like this:

filter = [ "a|/dev/sda2|",
           "a|/dev/mapper/alias1|",
           "a|/dev/mapper/alias2|", "r|.*|" ] 

I guess it would not really impact scanning performance, since the scan is only done once at a time.

Bruno

Comment 26 RHEL Program Management 2012-08-02 15:26:21 UTC
Development Management has reviewed and declined this request.
You may appeal this decision by reopening this request.

