I see two problems with pvcreate in the latest rawhide bits:

1) pvcreate doesn't see all the disks on my system. I have a system with quite a few disks attached to fibre channel cards (>128), and the lvm tools only see a few of them. If I manually add them to /etc/lvm/.cache, I'm able to use pvcreate and other tools on them, which leads to (2).

2) When I use pvcreate in a script to loop over all of the devices I want to run it on, I get lots of messages like this:

  Found duplicate PV AwQix3KWcd0M65hiNOXW7OPHV9B2CYAp: using /dev/sdca not /dev/sdg
  Found duplicate PV DIl1iYl8uu50LSHy7wmiA3hC0GTcpjav: using /dev/sdcq not /dev/

This happens even if I specify a uuid on the command line with --uuid (I scripted it to increment the uuid for every device so that they're unique).

Are these known problems? What other info do you need?

Thanks,
Jesse
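The original script wasn't posted; a minimal sketch of the kind of loop described (the /dev/sd* glob and whole-disk pvcreate are assumptions) might look like:

  #!/bin/sh
  # Hypothetical reconstruction: run pvcreate on each SCSI disk node.
  # Note the glob also matches partition nodes (e.g. /dev/sda1);
  # narrow it if only whole disks should be initialised.
  # pvcreate generates a unique uuid for each PV on its own, so
  # --uuid is not needed for uniqueness.
  for dev in /dev/sd*; do
      pvcreate "$dev"
  done

The "Found duplicate PV" messages appear when two device nodes turn out to carry the same PV label - typically two paths to the same disk - not because pvcreate reused a uuid.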
1) What sorts of disks are they? What name/major number in /proc/devices? May need to add this to lvm.conf if it isn't built-in.

2) --uuid is only meant to be used in recovery situations - I don't understand the reason for using it here. Seeing duplicate uuids suggests either a bug in the script (e.g. multiple runs using the same uuid), or you have multiple paths to the disk, or are using raid or similar. You should edit the filters in lvm.conf so that only one path to each PV is seen.
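For reference, a filter of roughly this shape in the devices section of lvm.conf would do it (illustrative only - the accept pattern has to be adapted so that exactly one path per disk matches on the system in question):

  devices {
      # Accept single-letter /dev/sdX whole-disk nodes, reject everything else.
      filter = [ "a|^/dev/sd[a-z]$|", "r|.*|" ]
  }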
> 1) What sorts of disks are they?

Fibre channel mostly; they appear as /dev/sd*.

> What name/major number in /proc/devices?

I've attached the output of /proc/partitions, which includes the device numbers.

> May need to add this to lvm.conf if it isn't built-in.

The filter settings in my lvm.conf don't look like they'd exclude /dev/sd* devices.

> 2) --uuid is only meant to be used in recovery situations - I don't
> understand the reason for using it here. Seeing duplicate uuids
> suggests either a bug in the script (e.g. multiple runs using the
> same uuid)

Running pvcreate on all the disks produced duplicate uuids by default, so I thought I'd try explicitly passing unique IDs.

> or you have multiple paths to the disk or are using raid or
> similar? You should edit filters in lvm.conf so that only one path
> to each PV is seen.

This could definitely be it. I'm an idiot. Number (1) is still a problem, though...

Thanks,
Jesse
Created attachment 104904 [details] output of /proc/partitions
I asked for /proc/devices not /proc/partitions :-)
Here's /proc/devices. Sorry - I figured /proc/partitions would be even better since it includes the major/minor numbers as well as all of the disks on the system.

Jesse
Created attachment 104944 [details] /proc/devices
OK. For problem 1, your devices are all of type 'sd', which lvm2 already recognises; there should be no need to update lvm.conf for that. Try adding the verbose flags to vgscan (vgscan -vvv) when you run it to update your .cache file, and grep the output for the missing devices to see if it tells you why they're being ignored.
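For example (assuming /dev/sdg is one of the missing devices; the verbose output goes to stderr, hence the redirect):

  vgscan -vvv 2>&1 | grep sdg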
Ahh, vgscan -vvv says that most of the devices are md components:

  Getting size of /dev/sdcr
  Opened /dev/sdcr
  /dev/sdcr: Skipping md component device
  Closed /dev/sdcr
  /dev/sdcr: Skipping (cached)
  Getting size of /dev/sddh
  Opened /dev/sddh
  /dev/sddh: Skipping md component device
  Closed /dev/sddh
  /dev/sddh: Skipping (cached)

If I disable md detection in /etc/lvm/lvm.conf, it seems to see everything. I'll double-check whether they actually have md superblocks (they shouldn't).

Thanks,
Jesse
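For reference, md detection is controlled by this lvm.conf setting, and a device can be checked for an md superblock with mdadm (the device name is just one example from the log above):

  # /etc/lvm/lvm.conf
  devices {
      md_component_detection = 0   # don't skip devices that look like md members
  }

  mdadm --examine /dev/sdcr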
See also #130713.
Thanks for your help, Alasdair - is it obvious that I've never used these tools before? :)

Jesse
Marking as a duplicate of 130713. Once the md superblocks and gpt labels are automatically zeroed by pvcreate, everything should be peachy.

*** This bug has been marked as a duplicate of 130713 ***
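Until pvcreate does that automatically, a stale md superblock can be cleared by hand before running pvcreate (destructive - only on devices that are genuinely not md members; the device name is illustrative):

  mdadm --zero-superblock /dev/sdcr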