Bug 530881 - lvcreate fails on DRBD device
Summary: lvcreate fails on DRBD device
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 11
Hardware: i686
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Assignee: Milan Broz
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-10-25 19:59 UTC by Wolfgang Denk
Modified: 2013-03-01 04:07 UTC (History)
11 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2010-03-22 15:11:45 UTC
Type: ---
Embargoed:


Attachments (Terms of Use)
Running pvcreate, vgcreate and lvcreate with "-vvvv" + additional info (102.18 KB, text/plain)
2009-10-26 14:03 UTC, Wolfgang Denk
Output of "lvmdump -a -m" (83.03 KB, application/octet-stream)
2009-10-26 14:25 UTC, Wolfgang Denk


Links:
Debian BTS 533436

Description Wolfgang Denk 2009-10-25 19:59:22 UTC
Description of problem:

I'm trying to set up a nested LVM configuration with DRBD, see for example
http://www.drbd.org/users-guide-emb/s-nested-lvm.html . Creating logical volumes on top of a DRBD device fails.

Version-Release number of selected component (if applicable):

lvm2-2.02.48-2.fc11.i586
kernel-2.6.30.8-64.fc11.i586

How reproducible:

Always reproducible.

Steps to Reproduce:
1. set up a DRBD device
2. create a physical volume on it:
# pvcreate /dev/drbd0
  Physical volume "/dev/drbd0" successfully created
3. create a volume group on it:
# vgcreate replicated /dev/drbd0
  Volume group "replicated" successfully created
# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "replicated" using metadata type lvm2
  Found volume group "vg0" using metadata type lvm2
  Found volume group "vg1" using metadata type lvm2
# vgchange -a y replicated
  0 logical volume(s) in volume group "replicated" now active
4. try to create a logical volume:
# lvcreate --name foo --size 32G replicated
  device-mapper: reload ioctl failed: Invalid argument
  Aborting. Failed to activate new LV to wipe the start of it.

Actual results:

LV not created.

Error messages on stdout:
  device-mapper: reload ioctl failed: Invalid argument
  Aborting. Failed to activate new LV to wipe the start of it.
Syslog messages:
Oct 24 21:44:01 gemini kernel: device-mapper: table: 253:11: linear: dm-linear: Device lookup failed
Oct 24 21:44:01 gemini kernel: device-mapper: ioctl: error adding target to table

Expected results:

LV created, no error messages.

Additional info:

I am pretty sure this used to work in older versions (probably F9 or F10),
but unfortunately I no longer have any records of that setup.

There are similar bug reports from other distributions, so this seems to be a generic (kernel?) issue, see for example http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=533436

Comment 1 Alasdair Kergon 2009-10-25 20:23:22 UTC
As per the Debian bug, run the tool with -vvvv and look for a line similar to this one:
       Adding target: 0 8388608 linear 252:2 384

Take the '252:2' there as major:minor and investigate the state of that device. Report all its attributes (use 'blockdev', find it in /sys/block, etc.). Check whether anything has it open, check its size, and so on.

Also test with udev disabled in case that's interfering.
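
The extraction step above can be scripted; here is an illustrative sketch (the `extract_majmin` helper name is made up, not an LVM tool) that pulls the major:minor pair out of a dm linear target line from the -vvvv output:

```shell
# Illustrative helper (not part of LVM): pull the major:minor field out
# of an "Adding target" line produced by lvcreate -vvvv.
# A dm linear target line has the form:
#   <start> <length> linear <major:minor> <offset>
extract_majmin() {
  echo "$1" | awk '{ for (i = 1; i <= NF; i++) if ($i == "linear") print $(i + 1) }'
}

line="        Adding target: 0 8388608 linear 252:2 384"
mm=$(extract_majmin "$line")
echo "inspect /sys/dev/block/$mm and run blockdev on the matching node"
```

On reasonably recent kernels, /sys/dev/block/<major:minor> is a symlink to the device's sysfs directory, which makes the follow-up inspection straightforward.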

Comment 2 Wolfgang Denk 2009-10-26 14:03:42 UTC
Created attachment 366101 [details]
Running pvcreate, vgcreate and lvcreate with "-vvvv" + additional info

Comment 3 Wolfgang Denk 2009-10-26 14:04:37 UTC
OK, I have attached the requested additional information. The only thing I
cannot do right now is test without udev - I don't know how to
disable udev (and still have a reasonably "normal" Fedora 11 running).
Please advise if this is really needed, although I don't see where udev
would interfere.

[Note this is from another system, actually the peer; the problem is the
very same though.]

Please let me know if any additional information is required.

Wolfgang Denk, wd

Comment 4 Milan Broz 2009-10-26 14:14:52 UTC
Can you please also attach the output (tarball) from lvmdump?
(from a system where a /dev/drbd* device is present and lvm fails)

Comment 5 Wolfgang Denk 2009-10-26 14:25:01 UTC
Created attachment 366106 [details]
Output of "lvmdump -a -m"

Comment 6 Milan Broz 2009-10-26 14:51:12 UTC
Thanks.

There is something strange here:
- you ran pvcreate on /dev/drbd0 (/dev/drbd0 is major:minor 147:0)
- but the metadata records the PV as /dev/dm-2 (which is 253:2, and is in fact the vg0/drbd0 volume mapped onto 8:3, i.e. /dev/sda3)

So now we need to find where this confusion happens.

Comment 7 Milan Broz 2009-10-26 14:58:28 UTC
It seems one thing is still missing from lvmdump; can you please paste the output of these files?

cat /proc/devices /proc/partitions

Thanks.

Comment 8 Wolfgang Denk 2009-10-26 15:47:38 UTC
-> cat /proc/devices
Character devices:
  1 mem
  4 /dev/vc/0
  4 tty
  4 ttyS
  5 /dev/tty
  5 /dev/console
  5 /dev/ptmx
  7 vcs
  9 st
 10 misc
 13 input
 14 sound
 21 sg
 29 fb
 86 ch
 99 ppdev
116 alsa
128 ptm
136 pts
162 raw
180 usb
188 ttyUSB
189 usb_device
202 cpu/msr
203 cpu/cpuid
206 osst
226 drm
248 firewire
249 hidraw
250 usb_endpoint
251 usbmon
252 bsg
253 pcmcia
254 rtc

Block devices:
  1 ramdisk
  2 fd
259 blkext
  7 loop
  8 sd
  9 md
 11 sr
 65 sd
 66 sd
 67 sd
 68 sd
 69 sd
 70 sd
 71 sd
128 sd
129 sd
130 sd
131 sd
132 sd
133 sd
134 sd
135 sd
147 drbd
253 device-mapper
254 mdp
-> cat /proc/partitions
major minor  #blocks  name

   8        0  244198584 sda
   8        1     128488 sda1
   8        2    4192965 sda2
   8        3  239874547 sda3
   8       16  390711384 sdb
   8       17  390708801 sdb1
 253        0   33554432 dm-0
 253        1   83886080 dm-1
 253        3  390705152 dm-3
 253        2  100663296 dm-2
 147        0  100660188 drbd0
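
The /proc/partitions listing above already contains the mapping needed to untangle the confusion from comment 6. As a small illustrative helper (the `name_for` function is made up, and the data is inlined here rather than read live from /proc/partitions), one can resolve a major:minor pair back to a device name:

```shell
# Illustrative only: resolve a major:minor pair to a device name using
# the /proc/partitions data pasted above (inlined so the sketch is
# self-contained; on a live system you would read /proc/partitions).
partitions='major minor  #blocks  name

   8        0  244198584 sda
   8        3  239874547 sda3
 253        2  100663296 dm-2
 147        0  100660188 drbd0'

name_for() {  # usage: name_for MAJOR MINOR
  echo "$partitions" | awk -v ma="$1" -v mi="$2" '$1 == ma && $2 == mi { print $4 }'
}

name_for 253 2   # prints dm-2: the device LVM recorded as the PV
name_for 147 0   # prints drbd0: the device the PV was actually created on
```

Note that drbd0 (100660188 blocks) is slightly smaller than dm-2 (100663296 blocks), consistent with DRBD keeping its internal metadata at the end of the backing device.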

Comment 9 Milan Broz 2009-10-27 11:50:49 UTC
We should probably add a special case for DRBD, similar to MD, into the lvm code.

But for the time being, you can blacklist the underlying device for DRBD;
see the lvm.conf filter setting.

When you create a new PV on a /dev/drbdX device, lvm should see only the PV on /dev/drbdX and not the underlying device.

(run vgscan; pvs and you should see something like this:
# pvs
  PV         VG      Fmt  Attr PSize    PFree
  /dev/drbd0 vg_test lvm2 a-   1020.00M 620.00M

and no duplicate warnings like
  Found duplicate PV TQrcpi4AAEzMGDHYY1cbwsByUkpLuEjj: using /dev/sdf not /dev/drbd0)

Then the DRBD PV works as expected. (Your configuration is more complicated because you are using an LV as the backing device for DRBD, but it should still work with a proper filter setting.)

For testing, you could explicitly name your PVs in lvm.conf.

For your config it could be:
filter = [ "a/drbd/", "a/sda/", "a/sdb/", "r/.*/" ]

(or a similar rule). Then run vgscan to refresh the device cache and try again.

Comment 10 Wolfgang Denk 2009-10-27 12:12:14 UTC
I'm afraid I don't understand what you mean, or else it doesn't work. I don't see any "Found duplicate PV" warnings:

# diff /etc/lvm/lvm.conf.ORIG /etc/lvm/lvm.conf
53c53,55
<     filter = [ "a/.*/" ]
---
>     #filter = [ "a/.*/" ]
>     # For DRBD setup, accept only DRBD and "SCSI" disks
>     filter = [ "a/drbd/", "a/sda/", "a/sdb/", "r/.*/" ]
# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg1" using metadata type lvm2
  Found volume group "vg0" using metadata type lvm2
# pvs
  PV         VG   Fmt  Attr PSize   PFree 
  /dev/sda3  vg0  lvm2 a-   228.75G 20.75G
  /dev/sdb1  vg1  lvm2 a-   372.61G     0 
# pvcreate /dev/drbd0
  Physical volume "/dev/drbd0" successfully created
# vgcreate replicated /dev/drbd0
  Volume group "replicated" successfully created
# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg1" using metadata type lvm2
  Found volume group "vg0" using metadata type lvm2
  Found volume group "replicated" using metadata type lvm2
# pvs
  PV         VG         Fmt  Attr PSize   PFree 
  /dev/dm-2  replicated lvm2 a-    96.00G 96.00G
  /dev/sda3  vg0        lvm2 a-   228.75G 20.75G
  /dev/sdb1  vg1        lvm2 a-   372.61G     0 
# lvcreate --name Mail --size 32G replicated
  device-mapper: reload ioctl failed: Invalid argument
  Aborting. Failed to activate new LV to wipe the start of it.

Oct 27 13:10:13 nyx kernel: device-mapper: table: 253:4: linear: dm-linear: Device lookup failed
Oct 27 13:10:13 nyx kernel: device-mapper: ioctl: error adding target to table

As far as I can tell nothing has changed. Did I miss something?

Comment 11 Milan Broz 2009-10-27 12:37:28 UTC
(In reply to comment #10)
> # pvs
>   PV         VG         Fmt  Attr PSize   PFree 
>   /dev/dm-2  replicated lvm2 a-    96.00G 96.00G

This is still wrong; it uses the LV directly instead of the drbd device.

Can you try
filter = [ "a|^/dev/drbd|", "a/sda/", "a/sdb/", "r/.*/" ]

I forgot you have "drbd" in an LV name :)
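
lvm.conf filter entries are regular expressions matched against the whole device path, first match winning, so the earlier unanchored "a/drbd/" also accepted the path of the LV backing DRBD (e.g. /dev/vg0/drbd0, which contains "drbd"), while the anchored "a|^/dev/drbd|" form accepts only the DRBD nodes themselves. A quick sanity check, using grep -E as an illustrative stand-in for LVM's internal matcher:

```shell
# Unanchored pattern, as in "a/drbd/": matches any path containing drbd.
for dev in /dev/drbd0 /dev/vg0/drbd0; do
  echo "$dev" | grep -qE 'drbd' && echo "unanchored: $dev accepted"
done

# Anchored pattern, as in "a|^/dev/drbd|": matches only the DRBD nodes.
for dev in /dev/drbd0 /dev/vg0/drbd0; do
  if echo "$dev" | grep -qE '^/dev/drbd'; then
    echo "anchored: $dev accepted"
  else
    echo "anchored: $dev rejected (falls through to later patterns)"
  fi
done
```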

Comment 12 Wolfgang Denk 2009-10-27 13:04:14 UTC
Indeed, this gets it working:

# diff /etc/lvm/lvm.conf.ORIG /etc/lvm/lvm.conf
53c53,55
<     filter = [ "a/.*/" ]
---
>     #filter = [ "a/.*/" ]
>     # For DRBD setup, accept only DRBD and "SCSI" disks
>     filter = [ "a|^/dev/drbd|", "a/sda/", "a/sdb/", "r/.*/" ]
# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg1" using metadata type lvm2
  Found volume group "vg0" using metadata type lvm2
# pvs
  PV         VG   Fmt  Attr PSize   PFree 
  /dev/sda3  vg0  lvm2 a-   228.75G 20.75G
  /dev/sdb1  vg1  lvm2 a-   372.61G     0 
# pvcreate /dev/drbd0
  Physical volume "/dev/drbd0" successfully created
# vgcreate replicated /dev/drbd0
  Volume group "replicated" successfully created
# pvs
  PV         VG         Fmt  Attr PSize   PFree 
  /dev/drbd0 replicated lvm2 a-    96.00G 96.00G
  /dev/sda3  vg0        lvm2 a-   228.75G 20.75G
  /dev/sdb1  vg1        lvm2 a-   372.61G     0 
# lvcreate --name Mail --size 32G replicated
  Logical volume "Mail" created

Thanks a lot!

Comment 13 Milan Broz 2009-10-27 15:38:35 UTC
A patch to automatically prefer the DRBD top device, similar to MD mirror images, is here:
https://www.redhat.com/archives/lvm-devel/2009-October/msg00229.html

Comment 14 Milan Broz 2010-03-22 15:11:45 UTC
The code has been in rawhide for some time and a workaround (filters) exists for older systems, so I am closing this as fixed in rawhide/F13.

