Bug 1467975 - stray lvmlockd lock when attempting to create volume using an already in use minor number
Summary: stray lvmlockd lock when attempting to create volume using an already in use minor number
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1489986
Depends On:
Blocks:
 
Reported: 2017-07-05 17:36 UTC by Corey Marthaler
Modified: 2021-09-03 12:38 UTC (History)
CC List: 7 users

Fixed In Version: lvm2-2.02.175-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 15:20:44 UTC
Target Upstream Version:
Embargoed:


Attachments
lvmlockctl --dump from failing node host-115 (1.00 MB, text/plain)
2017-07-05 21:45 UTC, Corey Marthaler
no flags Details


Links
Red Hat Product Errata RHEA-2018:0853 (last updated 2018-04-10 15:21:32 UTC)

Description Corey Marthaler 2017-07-05 17:36:28 UTC
Description of problem:

[root@host-114 ~]# pcs status
Cluster name: STSRHTS23738
Stack: corosync
Current DC: host-114 (version 1.1.16-12.el7-94ff4df) - partition with quorum
Last updated: Wed Jul  5 12:07:23 2017
Last change: Mon Jul  3 14:36:26 2017 by root via cibadmin on host-113

3 nodes configured
3 resources configured

Online: [ host-113 host-114 host-115 ]

Full list of resources:

 fence-host-113 (stonith:fence_xvm):    Started host-113
 fence-host-114 (stonith:fence_xvm):    Started host-114
 fence-host-115 (stonith:fence_xvm):    Started host-115

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled


[root@host-114 ~]# systemctl status lvm2-lvmlockd
● lvm2-lvmlockd.service - LVM2 lock daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmlockd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2017-07-03 14:36:31 CDT; 1 day 21h ago
     Docs: man:lvmlockd(8)
 Main PID: 21857 (lvmlockd)
   CGroup: /system.slice/lvm2-lvmlockd.service
           └─21857 /usr/sbin/lvmlockd -f

Jul 03 14:36:31 host-114.virt.lab.msp.redhat.com systemd[1]: Started LVM2 lock daemon.
Jul 03 14:36:31 host-114.virt.lab.msp.redhat.com systemd[1]: Starting LVM2 lock daemon...
Jul 03 14:36:31 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: 1499110591 lvmlockd started
Jul 03 14:36:31 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: [D] creating /run/lvm/lvmlockd.socket
Jul 03 15:04:16 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: 1499112256 S lvm_raid_sanity R VGLK res_update cl 130 lock not found
Jul 03 15:16:16 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: 1499112976 S lvm_raid_sanity R VGLK res_update cl 253 lock not found
Jul 03 15:51:57 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: 1499115117 S lvm_raid_sanity R VGLK res_update cl 565 lock not found


# Final cleanup is failing after successful test scenarios
removing VG global on host-115
skipping global vg for later...
removing VG raid_sanity on host-115
host-114: vgchange --lock-stop  raid_sanity
  VG raid_sanity stop failed: LVs must first be deactivated
unable to stop lock space for raid_sanity on host-114


[root@host-113 ~]# lvs -a -o +devices
  LV        VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  [lvmlock] global        -wi-ao---- 256.00m                                                     /dev/sdf2(0)  
  [lvmlock] raid_sanity   -wi-ao---- 256.00m                                                     /dev/sdf1(0)  

[root@host-114 ~]# lvs -a -o +devices
  LV        VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  [lvmlock] global        -wi-ao---- 256.00m                                                     /dev/sdf2(0)  
  [lvmlock] raid_sanity   -wi-ao---- 256.00m                                                     /dev/sdf1(0)  

[root@host-115 ~]# lvs -a -o +devices
  LV        VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  [lvmlock] global        -wi-ao---- 256.00m                                                     /dev/sdf2(0)  
  [lvmlock] raid_sanity   -wi-ao---- 256.00m                                                     /dev/sdf1(0)  

# passes
[root@host-113 ~]# vgchange --lock-stop  raid_sanity

# fails
[root@host-114 ~]# vgchange --lock-stop  raid_sanity
  VG raid_sanity stop failed: LVs must first be deactivated

# passes
[root@host-115 ~]# vgchange --lock-stop  raid_sanity


# There are no LVs at all, much less active ones
[root@host-114 ~]# lvs
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel_host-114 -wi-ao----  <6.20g                                                    
  swap rhel_host-114 -wi-ao---- 820.00m                                                    

[root@host-114 ~]# dmsetup ls
global-lvmlock  (253:2)
raid_sanity-lvmlock     (253:3)



[root@host-113 ~]# sanlock gets -h 1
s lvm_global:1325:/dev/mapper/global-lvmlock:0 
h 105 gen 1 timestamp 417342 LIVE
h 1063 gen 1 timestamp 417328 LIVE
h 1325 gen 1 timestamp 417327 LIVE


[root@host-114 ~]# sanlock gets -h 1
s lvm_raid_sanity:105:/dev/mapper/raid_sanity-lvmlock:0 
h 105 gen 1 timestamp 417316 LIVE
s lvm_global:105:/dev/mapper/global-lvmlock:0 
h 105 gen 1 timestamp 417321 LIVE
h 1063 gen 1 timestamp 417328 LIVE
h 1325 gen 1 timestamp 417327 LIVE


[root@host-115 ~]# sanlock gets -h 1
s lvm_global:1063:/dev/mapper/global-lvmlock:0 
h 105 gen 1 timestamp 417342 LIVE
h 1063 gen 1 timestamp 417328 LIVE
h 1325 gen 1 timestamp 417347 LIVE



From verbose output: "Counted 0 active LVs in VG raid_sanity"


#metadata/vg.c:74         Allocated VG raid_sanity at 0x56249018e390.
#format_text/import_vsn1.c:597         Importing logical volume raid_sanity/lvmlock.
#format_text/import_vsn1.c:722         Logical volume raid_sanity/lvmlock is sanlock lv.
#toollib.c:1970       Process single VG raid_sanity
#activate/activate.c:1476         Counted 0 active LVs in VG raid_sanity
#locking/lvmlockd.c:1078         lockd stop VG raid_sanity lock_type sanlock
#libdm-config.c:956       Setting response to OK
#libdm-config.c:987       Setting op_result to -16
#libdm-config.c:956       Setting lock_type to none
#locking/lvmlockd.c:174         lockd_result -16 flags none lm none
#locking/lvmlockd.c:1097   VG raid_sanity stop failed: LVs must first be deactivated
#vgchange.c:1032         <backtrace>
#toollib.c:1975         <backtrace>
#mm/memlock.c:562         Unlock: Memlock counters: locked:0 critical:0 daemon:0 suspended:0
#activate/fs.c:489         Syncing device names
#cache/lvmcache.c:157         Metadata cache: VG raid_sanity wiped.
#misc/lvm-flock.c:70       Unlocking /run/lock/lvm/V_raid_sanity
#misc/lvm-flock.c:47         _undo_flock /run/lock/lvm/V_raid_sanity
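
For reference, a verbose trace like the one above can be captured by rerunning the failing command with maximum verbosity and redirecting its output; the exact invocation below is an assumption, since the report does not show it:

  vgchange --lock-stop raid_sanity -vvvv > /tmp/lock-stop.verbose 2>&1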



# Restarted on all nodes:
[root@host-113 ~]# vgchange --lock-start  raid_sanity
  VG raid_sanity starting sanlock lockspace
  Starting locking.  Waiting for sanlock may take 20 sec to 3 min...

[root@host-114 ~]# vgchange --lock-start  raid_sanity
  Starting locking.  Waiting for sanlock may take 20 sec to 3 min...

[root@host-115 ~]# vgchange --lock-start  raid_sanity
  VG raid_sanity starting sanlock lockspace
  Starting locking.  Waiting for sanlock may take 20 sec to 3 min...



[root@host-113 ~]#  sanlock gets -h 1
s lvm_raid_sanity:1325:/dev/mapper/raid_sanity-lvmlock:0 
h 105 gen 1 timestamp 417686 UNKNOWN
h 1063 gen 2 timestamp 417667 UNKNOWN
h 1325 gen 2 timestamp 417666 LIVE
s lvm_global:1325:/dev/mapper/global-lvmlock:0 
h 105 gen 1 timestamp 417691 LIVE
h 1063 gen 1 timestamp 417677 LIVE
h 1325 gen 1 timestamp 417675 LIVE

[root@host-114 ~]#  sanlock gets -h 1
s lvm_raid_sanity:105:/dev/mapper/raid_sanity-lvmlock:0 
h 105 gen 1 timestamp 417665 LIVE
h 1063 gen 2 timestamp 417667 LIVE
h 1325 gen 2 timestamp 417666 LIVE
s lvm_global:105:/dev/mapper/global-lvmlock:0 
h 105 gen 1 timestamp 417670 LIVE
h 1063 gen 1 timestamp 417677 LIVE
h 1325 gen 1 timestamp 417675 LIVE

[root@host-115 ~]#  sanlock gets -h 1
s lvm_raid_sanity:1063:/dev/mapper/raid_sanity-lvmlock:0 
h 105 gen 1 timestamp 417686 UNKNOWN
h 1063 gen 2 timestamp 417667 LIVE
h 1325 gen 2 timestamp 417687 UNKNOWN
s lvm_global:1063:/dev/mapper/global-lvmlock:0 
h 105 gen 1 timestamp 417691 LIVE
h 1063 gen 1 timestamp 417677 LIVE
h 1325 gen 1 timestamp 417696 LIVE


# After this, the lock stop continued to fail; however, a simple vgremove succeeded in removing the VG, which shouldn't be allowed without the lockspace being stopped, correct?

[root@host-114 ~]# vgremove -f raid_sanity
  Volume group "raid_sanity" successfully removed
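
Not shown in the report, but one way to confirm that the forced vgremove also tore down the sanlock lockspace and its internal lock LV would be to re-check with the commands used earlier (the expected results are an assumption):

  sanlock gets -h 1    # the lvm_raid_sanity lockspace should no longer be listed
  dmsetup ls           # the raid_sanity-lvmlock device should be gone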



Version-Release number of selected component (if applicable):
3.10.0-689.el7.x86_64

lvm2-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-libs-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-cluster-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-libs-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-libs-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017
cmirror-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
sanlock-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
sanlock-lib-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
lvm2-lockd-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017

Comment 2 David Teigland 2017-07-05 18:31:07 UTC
> #vgchange --lock-stop  raid_sanity
> VG raid_sanity stop failed: LVs must first be deactivated

This is printed when lvmlockd returns -EBUSY for the lock-stop request from vgchange.  lvmlockd returns -EBUSY if it finds LV locks still exist in the lockspace.  -EBUSY is then translated into the "LVs are active" message.

What's not clear is what LV locks still exist and why.  If no LVs exist, no LV locks should exist either.  So, I'm guessing that there may have been a command run during the test that left a stray LV lock in the lockspace (a bug).

'sanlock gets' reports info about lockspaces+hosts, but not about locks.  The 'sanlock status' command will show info about lockspaces+locks, which should show us if lvmlockd is still holding any stray LV locks.  To actually debug where a stray lock came from, we'll need to collect the full debug log from lvmlockd after the --lock-stop failure.  The command for that is 'lvmlockctl --dump' redirected to some file.
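
Sketched as commands on the node where the lock-stop fails (the dump file path is illustrative):

  sanlock status
  lvmlockctl --dump > /tmp/lvmlockctl.dump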

Comment 3 Corey Marthaler 2017-07-05 21:43:02 UTC
[root@host-114 ~]# vgchange --lock-stop  raid_sanity
[root@host-114 ~]# sanlock status
daemon 116c961f-2475-43fa-9bef-373f3f068f46.host-114.v
p -1 helper
p -1 listener
p 28036 lvmlockd
p -1 status
s lvm_global:105:/dev/mapper/global-lvmlock:0



[root@host-115 ~]# vgchange --lock-stop  raid_sanity
  VG raid_sanity stop failed: LVs must first be deactivated
[root@host-115 ~]# sanlock status
daemon 2e704e09-785f-400d-82c1-9675ca80bdf4.host-115.v
p -1 helper
p -1 listener
p 16581 lvmlockd
p 16581 lvmlockd
p -1 status
s lvm_raid_sanity:1063:/dev/mapper/raid_sanity-lvmlock:0
s lvm_global:1063:/dev/mapper/global-lvmlock:0
r lvm_raid_sanity:zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk:/dev/mapper/raid_sanity-lvmlock:71303168:1 p 16581

Comment 4 Corey Marthaler 2017-07-05 21:45:24 UTC
Created attachment 1294744 [details]
lvmlockctl --dump from failing node host-115

Comment 5 Corey Marthaler 2017-07-05 22:00:34 UTC
r lvm_raid_sanity:zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk:/dev/mapper/raid_sanity-lvmlock:71303168:1 p 16581

Searching for "zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk":

1499286064 send lvcreate[32619] cl 330 find_free_lock vg rv 0  
1499286064 recv lvcreate[32619] cl 330 init lv "raid_sanity" mode iv flags 0
1499286064 work init_lv raid_sanity/inuse_minorB uuid zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk
1499286064 S lvm_raid_sanity init_lv_san zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk found unused area at 71303168
1499286064 send lvcreate[32619] cl 330 init lv rv 0 vg_args 1.0.0:lvmlock lv_args 1.0.0:71303168
1499286064 recv lvcreate[32619] cl 330 update vg "raid_sanity" mode iv flags 0
1499286064 S lvm_raid_sanity R VGLK action update iv
1499286064 S lvm_raid_sanity R VGLK res_update cl 330 lk version to 281
1499286064 send lvcreate[32619] cl 330 update vg rv 0  
1499286064 recv lvcreate[32619] cl 330 update vg "raid_sanity" mode iv flags 0
1499286064 S lvm_raid_sanity R VGLK action update iv
1499286064 S lvm_raid_sanity R VGLK res_update cl 330 lk version to 282
1499286064 send lvcreate[32619] cl 330 update vg rv 0  
1499286065 recv lvcreate[32619] cl 330 lock lv "raid_sanity" mode ex flags 1
1499286065 S lvm_raid_sanity R zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk action lock ex
1499286065 S lvm_raid_sanity R zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk res_lock cl 330 mode ex (inuse_minorB)
1499286065 S lvm_raid_sanity R zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk lock_san ex at /dev/mapper/raid_sanity-lvmlock:71303168
1499286065 S lvm_raid_sanity R zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk res_lock rv 0
1499286065 send lvcreate[32619] cl 330 lock lv rv 0  
1499286065 recv lvcreate[32619] cl 330 update vg "raid_sanity" mode iv flags 0
1499286065 S lvm_raid_sanity R VGLK action update iv
1499286065 S lvm_raid_sanity R VGLK res_update cl 330 lk version to 283
1499286065 send lvcreate[32619] cl 330 update vg rv 0  
1499286065 recv lvcreate[32619] cl 330 lock vg "raid_sanity" mode un flags 0
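
For reference, the per-lock trace above can be extracted from the dump with an ordinary grep (the dump file name is illustrative):

  grep zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk /tmp/lvmlockctl.dump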



This corresponds to the following test case:

SCENARIO (raid1) - [create_inuse_minor_raid]
Create a raid and then attempt to reuse it's minor num on a new raid
Creating raid with rand minor num 165

lvcreate --activate ey --type raid1 -m 1 -n inuse_minorA -L 300M -My --major 253 --minor 165 raid_sanity
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.

dmsetup ls | grep inuse_minorA | grep 165

Attempt to create raid with in use minor num 165
lvcreate --activate ey --type raid1 -m 1 -n inuse_minorB -L 300M -My --major 253 --minor 165 raid_sanity
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
  The requested major:minor pair (253:165) is already used.
  Failed to activate new LV.

Deactivating raid inuse_minorA... and removing


  VG raid_sanity stop failed: LVs must first be deactivated
unable to stop lock space for raid_sanity on host-115

[root@host-115 ~]# lvs
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel_host-115 -wi-ao----  <6.20g                                                    
  swap rhel_host-115 -wi-ao---- 820.00m                                                    
[root@host-115 ~]# sanlock status
daemon 9986d712-c73e-4690-a2ce-51b911a84641.host-115.v
p -1 helper
p -1 listener
p 21184 lvmlockd
p 21184 lvmlockd
p -1 status
s lvm_raid_sanity:1063:/dev/mapper/raid_sanity-lvmlock:0
s lvm_global:1063:/dev/mapper/global-lvmlock:0
r lvm_raid_sanity:WT1nN5-EtwM-pJnJ-uexB-zoHi-cQij-lI4g2I:/dev/mapper/raid_sanity-lvmlock:71303168:1 p 21184
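
Condensed, the reproducer amounts to the following sequence on one node of the shared VG (the deactivate/remove commands are inferred from "Deactivating ... and removing" above; the minor number is arbitrary):

  lvcreate --activate ey --type raid1 -m 1 -n inuse_minorA -L 300M -My --major 253 --minor 165 raid_sanity
  lvcreate --activate ey --type raid1 -m 1 -n inuse_minorB -L 300M -My --major 253 --minor 165 raid_sanity   # fails: minor 165 already in use
  lvchange -an raid_sanity/inuse_minorA
  lvremove raid_sanity/inuse_minorA
  vgchange --lock-stop raid_sanity   # fails with "LVs must first be deactivated" due to the stray inuse_minorB lock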

Comment 6 David Teigland 2017-07-06 16:34:16 UTC
Thanks for the debugging. It looks like the error path is missing the unlock, which leaves the stray LV lock, as suspected.

Comment 8 Corey Marthaler 2017-09-11 14:57:26 UTC
*** Bug 1489986 has been marked as a duplicate of this bug. ***

Comment 11 Corey Marthaler 2017-10-16 22:53:53 UTC
Fix verified in the latest rpms.

lvm2-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
lvm2-libs-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
lvm2-cluster-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-1.02.144-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-libs-1.02.144-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-event-1.02.144-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-event-libs-1.02.144-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017
cmirror-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
sanlock-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
sanlock-lib-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
lvm2-lockd-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017


# mirrors
host-040: pvcreate /dev/sdf2 /dev/sdf1 /dev/sda2 /dev/sda1 /dev/sdh2 /dev/sdh1 /dev/sdd2 /dev/sdd1 /dev/sdg2 /dev/sdg1
host-040: vgcreate  --shared mirror_sanity /dev/sdf2 /dev/sdf1 /dev/sda2 /dev/sda1 /dev/sdh2 /dev/sdh1 /dev/sdd2 /dev/sdd1 /dev/sdg2 /dev/sdg1
host-040: vgchange --lock-start mirror_sanity
host-041: vgchange --lock-start mirror_sanity
host-042: vgchange --lock-start mirror_sanity

============================================================
Iteration 1 of 1 started at Mon Oct 16 17:43:28 CDT 2017
============================================================
SCENARIO - [create_inuse_minor_mirror]
Create a mirror and then attempt to reuse it's minor num on a new mirror
Creating mirror with rand minor num 145
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
Attempt to create mirror with in use minor num 145
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
  The requested major:minor pair (253:145) is already used.
  Failed to activate new LV.
Deactivating mirror inuse_minorA... and removing


# thin pools
host-040: pvcreate /dev/sdf1 /dev/sda1 /dev/sdh1 /dev/sdd1 /dev/sdg1
host-040: vgcreate  --shared snapper_thinp /dev/sdf1 /dev/sda1 /dev/sdh1 /dev/sdd1 /dev/sdg1
host-040: vgchange --lock-start snapper_thinp
host-041: vgchange --lock-start snapper_thinp
host-042: vgchange --lock-start snapper_thinp

============================================================
Iteration 1 of 1 started at Mon Oct 16 17:46:40 CDT 2017
============================================================
SCENARIO - [create_inuse_minor_thin_snap]
Create a snapshot and then attempt to reuse it's minor num on a new snapshot
Making pool volume
lvcreate --activate ey --thinpool POOL -L 1G --profile thin-performance --zero n --poolmetadatasize 4M snapper_thinp
There should be no "stripesize" messages in pool create output (possible regression of bug 1382860)
  Using default stripesize 64.00 KiB.
   Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
   Logical volume "POOL" created.

Making origin volume
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other1
  WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other2
  WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other3
  WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
Creating snapshot with rand minor num 102
lvcreate --activate ey -k n -s /dev/snapper_thinp/origin -n inuse_minorA -My --major 253 --minor 102
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
Attempt to create snapshot with in use minor num 102
Removing thin origin and other virtual thin volumes
Removing pool snapper_thinp/POOL


# raids
host-040: pvcreate /dev/sdf2 /dev/sdf1 /dev/sda2 /dev/sda1 /dev/sdh2 /dev/sdh1 /dev/sdd2 /dev/sdd1 /dev/sdg2 /dev/sdg1
host-040: vgcreate  --shared raid_sanity /dev/sdf2 /dev/sdf1 /dev/sda2 /dev/sda1 /dev/sdh2 /dev/sdh1 /dev/sdd2 /dev/sdd1 /dev/sdg2 /dev/sdg1
host-040: vgchange --lock-start raid_sanity
host-041: vgchange --lock-start raid_sanity
host-042: vgchange --lock-start raid_sanity

============================================================
Iteration 1 of 1 started at Mon Oct 16 17:50:09 CDT 2017
============================================================
SCENARIO (raid1) - [create_inuse_minor_raid]
Create a raid and then attempt to reuse it's minor num on a new raid
Creating raid with rand minor num 104
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
Attempt to create raid with in use minor num 104
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
  The requested major:minor pair (253:104) is already used.
  Failed to activate new LV.

perform raid scrubbing (lvchange --syncaction check) on raid raid_sanity/inuse_minorA
  raid_sanity/inuse_minorA state is currently "resync".  Unable to switch to "check".
Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec

Deactivating raid inuse_minorA... and removing

Comment 14 errata-xmlrpc 2018-04-10 15:20:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853

