Bug 1467975
| Summary: | stray lvmlockd lock when attempting to create volume using an already in use minor number |
|---|---|
| Product: | Red Hat Enterprise Linux 7 |
| Component: | lvm2 |
| Sub component: | LVM lock daemon / lvmlockd |
| Version: | 7.4 |
| Hardware: | x86_64 |
| OS: | Linux |
| Status: | CLOSED ERRATA |
| Severity: | medium |
| Priority: | unspecified |
| Reporter: | Corey Marthaler <cmarthal> |
| Assignee: | LVM and device-mapper development team <lvm-team> |
| QA Contact: | cluster-qe <cluster-qe> |
| CC: | agk, heinzm, jbrassow, mcsontos, prajnoha, teigland, zkabelac |
| Target Milestone: | rc |
| Target Release: | --- |
| Fixed In Version: | lvm2-2.02.175-1.el7 |
| Doc Type: | If docs needed, set a value |
| Type: | Bug |
| Last Closed: | 2018-04-10 15:20:44 UTC |
> #vgchange --lock-stop raid_sanity
> VG raid_sanity stop failed: LVs must first be deactivated
This message is printed when lvmlockd returns -EBUSY for the lock-stop request from vgchange. lvmlockd returns -EBUSY if it finds that LV locks still exist in the lockspace, and vgchange translates -EBUSY into the "LVs must first be deactivated" message (see the sketch after this comment).
What's not clear is what LV locks still exist and why. If no LVs exist, no LV locks should exist either. So, I'm guessing that there may have been a command run during the test that left a stray LV lock in the lockspace (a bug).
'sanlock gets' reports info about lockspaces and hosts, but not about locks. The 'sanlock status' command shows info about lockspaces and locks, which should tell us whether lvmlockd is still holding any stray LV locks. To actually debug where a stray lock came from, we'll need the full debug log from lvmlockd after the --lock-stop failure: run 'lvmlockctl --dump' and redirect it to a file.
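To make the -EBUSY translation concrete, here is a minimal, compilable sketch of the behavior described above. This is not the actual lvm2 source; `struct resource`, `lockspace_stop`, and `report_stop_result` are hypothetical stand-ins (the client side of the real code is locking/lvmlockd.c, which the verbose log at the end of this report shows setting op_result to -16, i.e. -EBUSY):

```c
/* Hypothetical sketch of the -EBUSY behavior described above.
 * NOT the actual lvm2 source; all names here are stand-ins. */
#include <errno.h>
#include <stdio.h>
#include <stddef.h>

struct resource {
	const char *name;       /* "VGLK" or an LV lock's uuid */
	int is_lv_lock;         /* nonzero for LV locks */
	struct resource *next;
};

/* Daemon side: refuse to stop the lockspace while any LV lock
 * remains, including a stray one leaked by an error path. */
static int lockspace_stop(const struct resource *resources)
{
	for (const struct resource *r = resources; r; r = r->next)
		if (r->is_lv_lock)
			return -EBUSY;  /* appears as "op_result to -16" in the verbose log */
	return 0;
}

/* Client side (vgchange): only the errno comes back, so -EBUSY is
 * reported as the generic message even when no LV is active. */
static void report_stop_result(const char *vg_name, int rv)
{
	if (rv == -EBUSY)
		fprintf(stderr, "VG %s stop failed: LVs must first be deactivated\n",
			vg_name);
}

int main(void)
{
	/* The stray LV lock from this bug: held, but its LV was never activated. */
	struct resource stray = {
		"zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk", 1, NULL
	};
	report_stop_result("raid_sanity", lockspace_stop(&stray));
	return 0;
}
```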
[root@host-114 ~]# vgchange --lock-stop raid_sanity
[root@host-114 ~]# sanlock status
daemon 116c961f-2475-43fa-9bef-373f3f068f46.host-114.v
p -1 helper
p -1 listener
p 28036 lvmlockd
p -1 status
s lvm_global:105:/dev/mapper/global-lvmlock:0

[root@host-115 ~]# vgchange --lock-stop raid_sanity
VG raid_sanity stop failed: LVs must first be deactivated
[root@host-115 ~]# sanlock status
daemon 2e704e09-785f-400d-82c1-9675ca80bdf4.host-115.v
p -1 helper
p -1 listener
p 16581 lvmlockd
p 16581 lvmlockd
p -1 status
s lvm_raid_sanity:1063:/dev/mapper/raid_sanity-lvmlock:0
s lvm_global:1063:/dev/mapper/global-lvmlock:0
r lvm_raid_sanity:zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk:/dev/mapper/raid_sanity-lvmlock:71303168:1 p 16581

Created attachment 1294744 [details]
lvmlockctl --dump from failing node host-115
r lvm_raid_sanity:zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk:/dev/mapper/raid_sanity-lvmlock:71303168:1 p 16581

Searching for "zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk":

1499286064 send lvcreate[32619] cl 330 find_free_lock vg rv 0
1499286064 recv lvcreate[32619] cl 330 init lv "raid_sanity" mode iv flags 0
1499286064 work init_lv raid_sanity/inuse_minorB uuid zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk
1499286064 S lvm_raid_sanity init_lv_san zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk found unused area at 71303168
1499286064 send lvcreate[32619] cl 330 init lv rv 0 vg_args 1.0.0:lvmlock lv_args 1.0.0:71303168
1499286064 recv lvcreate[32619] cl 330 update vg "raid_sanity" mode iv flags 0
1499286064 S lvm_raid_sanity R VGLK action update iv
1499286064 S lvm_raid_sanity R VGLK res_update cl 330 lk version to 281
1499286064 send lvcreate[32619] cl 330 update vg rv 0
1499286064 recv lvcreate[32619] cl 330 update vg "raid_sanity" mode iv flags 0
1499286064 S lvm_raid_sanity R VGLK action update iv
1499286064 S lvm_raid_sanity R VGLK res_update cl 330 lk version to 282
1499286064 send lvcreate[32619] cl 330 update vg rv 0
1499286065 recv lvcreate[32619] cl 330 lock lv "raid_sanity" mode ex flags 1
1499286065 S lvm_raid_sanity R zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk action lock ex
1499286065 S lvm_raid_sanity R zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk res_lock cl 330 mode ex (inuse_minorB)
1499286065 S lvm_raid_sanity R zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk lock_san ex at /dev/mapper/raid_sanity-lvmlock:71303168
1499286065 S lvm_raid_sanity R zd1NSD-pHdO-Kxy3-VmFD-urzN-EecT-7mzRbk res_lock rv 0
1499286065 send lvcreate[32619] cl 330 lock lv rv 0
1499286065 recv lvcreate[32619] cl 330 update vg "raid_sanity" mode iv flags 0
1499286065 S lvm_raid_sanity R VGLK action update iv
1499286065 S lvm_raid_sanity R VGLK res_update cl 330 lk version to 283
1499286065 send lvcreate[32619] cl 330 update vg rv 0
1499286065 recv lvcreate[32619] cl 330 lock vg "raid_sanity" mode un flags 0

Which is the following test case:

SCENARIO (raid1) - [create_inuse_minor_raid] Create a raid and then attempt to reuse its minor num on a new raid
Creating raid with rand minor num 165
lvcreate --activate ey --type raid1 -m 1 -n inuse_minorA -L 300M -My --major 253 --minor 165 raid_sanity
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
dmsetup ls | grep inuse_minorA | grep 165
Attempt to create raid with in use minor num 165
lvcreate --activate ey --type raid1 -m 1 -n inuse_minorB -L 300M -My --major 253 --minor 165 raid_sanity
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
  The requested major:minor pair (253:165) is already used.
  Failed to activate new LV.
Deactivating raid inuse_minorA... and removing

VG raid_sanity stop failed: LVs must first be deactivated
unable to stop lock space for raid_sanity on host-115

[root@host-115 ~]# lvs
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel_host-115 -wi-ao----  <6.20g
  swap rhel_host-115 -wi-ao---- 820.00m
[root@host-115 ~]# sanlock status
daemon 9986d712-c73e-4690-a2ce-51b911a84641.host-115.v
p -1 helper
p -1 listener
p 21184 lvmlockd
p 21184 lvmlockd
p -1 status
s lvm_raid_sanity:1063:/dev/mapper/raid_sanity-lvmlock:0
s lvm_global:1063:/dev/mapper/global-lvmlock:0
r lvm_raid_sanity:WT1nN5-EtwM-pJnJ-uexB-zoHi-cQij-lI4g2I:/dev/mapper/raid_sanity-lvmlock:71303168:1 p 21184

Thanks for the debugging; it looks like the error path is missing the unlock, which leaves the stray LV lock as suspected. Fixed here:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=3797f47ecf23c41d4476e2cce0f210b48b32923d
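The commit referenced above adds the missing unlock on lvcreate's activation error path. The following is a compilable schematic of that pattern, not the actual lvm2 diff; `struct lv`, `lockd_lv`, and `activate_lv` are simplified stand-ins for the real lvm2 types and signatures:

```c
/* Schematic of the bug pattern fixed above -- NOT the actual lvm2
 * diff. All types and helpers here are simplified stand-ins. */
#include <stdio.h>

struct lv { const char *name; int lock_held; int minor_in_use; };

/* Stand-in for taking ("ex") or releasing ("un") the LV lock in lvmlockd. */
static int lockd_lv(struct lv *lv, const char *mode)
{
	lv->lock_held = (mode[0] == 'e');
	return 1;
}

/* Stand-in for activation, which fails when the minor number is taken. */
static int activate_lv(struct lv *lv)
{
	return !lv->minor_in_use;
}

static int lv_create(struct lv *lv)
{
	/* lvcreate takes the ex LV lock before activation... */
	if (!lockd_lv(lv, "ex"))
		return 0;

	/* ...and activation fails because the requested minor is in use. */
	if (!activate_lv(lv)) {
		fprintf(stderr, "Failed to activate new LV.\n");
		/* BUG: before the fix, the error path returned here with the
		 * ex LV lock still held in lvmlockd, so a later
		 * vgchange --lock-stop found the stray lock and got -EBUSY. */
		lockd_lv(lv, "un");   /* FIX: release the lock on error */
		return 0;
	}
	return 1;
}

int main(void)
{
	struct lv lv = { "inuse_minorB", 0, 1 };
	if (!lv_create(&lv))
		printf("lock still held after failure: %s\n",
		       lv.lock_held ? "yes (stray)" : "no (fixed)");
	return 0;
}
```

Without the unlock on the error path, the ex lock for the never-activated LV stays registered in lvmlockd and sanlock, matching the stray "r lvm_raid_sanity:zd1NSD-..." resource shown by 'sanlock status' above.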
*** Bug 1489986 has been marked as a duplicate of this bug. ***

Fix verified in the latest rpms.

lvm2-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
lvm2-libs-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
lvm2-cluster-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-1.02.144-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-libs-1.02.144-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-event-1.02.144-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-event-libs-1.02.144-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017
cmirror-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017
sanlock-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
sanlock-lib-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
lvm2-lockd-2.02.175-2.el7    BUILT: Fri Oct 13 06:31:22 CDT 2017

# mirrors
host-040: pvcreate /dev/sdf2 /dev/sdf1 /dev/sda2 /dev/sda1 /dev/sdh2 /dev/sdh1 /dev/sdd2 /dev/sdd1 /dev/sdg2 /dev/sdg1
host-040: vgcreate --shared mirror_sanity /dev/sdf2 /dev/sdf1 /dev/sda2 /dev/sda1 /dev/sdh2 /dev/sdh1 /dev/sdd2 /dev/sdd1 /dev/sdg2 /dev/sdg1
host-040: vgchange --lock-start mirror_sanity
host-041: vgchange --lock-start mirror_sanity
host-042: vgchange --lock-start mirror_sanity

============================================================
Iteration 1 of 1 started at Mon Oct 16 17:43:28 CDT 2017
============================================================
SCENARIO - [create_inuse_minor_mirror] Create a mirror and then attempt to reuse its minor num on a new mirror
Creating mirror with rand minor num 145
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
Attempt to create mirror with in use minor num 145
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
  The requested major:minor pair (253:145) is already used.
  Failed to activate new LV.
Deactivating mirror inuse_minorA... and removing

# thin pools
host-040: pvcreate /dev/sdf1 /dev/sda1 /dev/sdh1 /dev/sdd1 /dev/sdg1
host-040: vgcreate --shared snapper_thinp /dev/sdf1 /dev/sda1 /dev/sdh1 /dev/sdd1 /dev/sdg1
host-040: vgchange --lock-start snapper_thinp
host-041: vgchange --lock-start snapper_thinp
host-042: vgchange --lock-start snapper_thinp

============================================================
Iteration 1 of 1 started at Mon Oct 16 17:46:40 CDT 2017
============================================================
SCENARIO - [create_inuse_minor_thin_snap] Create a snapshot and then attempt to reuse its minor num on a new snapshot
Making pool volume
lvcreate --activate ey --thinpool POOL -L 1G --profile thin-performance --zero n --poolmetadatasize 4M snapper_thinp
There should be no "stripesize" messages in pool create output (possible regression of bug 1382860)
  Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "POOL" created.
Making origin volume
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other1
  WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other2
  WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other3
  WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
Creating snapshot with rand minor num 102
lvcreate --activate ey -k n -s /dev/snapper_thinp/origin -n inuse_minorA -My --major 253 --minor 102
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
Attempt to create snapshot with in use minor num 102
Removing thin origin and other virtual thin volumes
Removing pool snapper_thinp/POOL

# raids
host-040: pvcreate /dev/sdf2 /dev/sdf1 /dev/sda2 /dev/sda1 /dev/sdh2 /dev/sdh1 /dev/sdd2 /dev/sdd1 /dev/sdg2 /dev/sdg1
host-040: vgcreate --shared raid_sanity /dev/sdf2 /dev/sdf1 /dev/sda2 /dev/sda1 /dev/sdh2 /dev/sdh1 /dev/sdd2 /dev/sdd1 /dev/sdg2 /dev/sdg1
host-040: vgchange --lock-start raid_sanity
host-041: vgchange --lock-start raid_sanity
host-042: vgchange --lock-start raid_sanity

============================================================
Iteration 1 of 1 started at Mon Oct 16 17:50:09 CDT 2017
============================================================
SCENARIO (raid1) - [create_inuse_minor_raid] Create a raid and then attempt to reuse its minor num on a new raid
Creating raid with rand minor num 104
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
Attempt to create raid with in use minor num 104
  WARNING: Ignoring supplied major number 253 - kernel assigns major numbers dynamically. Using major number 253 instead.
  The requested major:minor pair (253:104) is already used.
  Failed to activate new LV.
perform raid scrubbing (lvchange --syncaction check) on raid raid_sanity/inuse_minorA
  raid_sanity/inuse_minorA state is currently "resync". Unable to switch to "check".
Waiting until all mirror|raid volumes become fully syncd...
  1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
Deactivating raid inuse_minorA... and removing

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853
Description of problem:

[root@host-114 ~]# pcs status
Cluster name: STSRHTS23738
Stack: corosync
Current DC: host-114 (version 1.1.16-12.el7-94ff4df) - partition with quorum
Last updated: Wed Jul  5 12:07:23 2017
Last change: Mon Jul  3 14:36:26 2017 by root via cibadmin on host-113

3 nodes configured
3 resources configured

Online: [ host-113 host-114 host-115 ]

Full list of resources:
 fence-host-113 (stonith:fence_xvm): Started host-113
 fence-host-114 (stonith:fence_xvm): Started host-114
 fence-host-115 (stonith:fence_xvm): Started host-115

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

[root@host-114 ~]# systemctl status lvm2-lvmlockd
● lvm2-lvmlockd.service - LVM2 lock daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmlockd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2017-07-03 14:36:31 CDT; 1 day 21h ago
     Docs: man:lvmlockd(8)
 Main PID: 21857 (lvmlockd)
   CGroup: /system.slice/lvm2-lvmlockd.service
           └─21857 /usr/sbin/lvmlockd -f

Jul 03 14:36:31 host-114.virt.lab.msp.redhat.com systemd[1]: Started LVM2 lock daemon.
Jul 03 14:36:31 host-114.virt.lab.msp.redhat.com systemd[1]: Starting LVM2 lock daemon...
Jul 03 14:36:31 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: 1499110591 lvmlockd started
Jul 03 14:36:31 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: [D] creating /run/lvm/lvmlockd.socket
Jul 03 15:04:16 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: 1499112256 S lvm_raid_sanity R VGLK res_update cl 130 lock not found
Jul 03 15:16:16 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: 1499112976 S lvm_raid_sanity R VGLK res_update cl 253 lock not found
Jul 03 15:51:57 host-114.virt.lab.msp.redhat.com lvmlockd[21857]: 1499115117 S lvm_raid_sanity R VGLK res_update cl 565 lock not found

# Final cleanup is failing after successful test scenarios
removing VG global on host-115
skipping global vg for later...
removing VG raid_sanity on host-115
host-114: vgchange --lock-stop raid_sanity
VG raid_sanity stop failed: LVs must first be deactivated
unable to stop lock space for raid_sanity on host-114

[root@host-113 ~]# lvs -a -o +devices
  LV        VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  [lvmlock] global      -wi-ao---- 256.00m                                                     /dev/sdf2(0)
  [lvmlock] raid_sanity -wi-ao---- 256.00m                                                     /dev/sdf1(0)
[root@host-114 ~]# lvs -a -o +devices
  LV        VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  [lvmlock] global      -wi-ao---- 256.00m                                                     /dev/sdf2(0)
  [lvmlock] raid_sanity -wi-ao---- 256.00m                                                     /dev/sdf1(0)
[root@host-115 ~]# lvs -a -o +devices
  LV        VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  [lvmlock] global      -wi-ao---- 256.00m                                                     /dev/sdf2(0)
  [lvmlock] raid_sanity -wi-ao---- 256.00m                                                     /dev/sdf1(0)

# passes
[root@host-113 ~]# vgchange --lock-stop raid_sanity

# fails
[root@host-114 ~]# vgchange --lock-stop raid_sanity
VG raid_sanity stop failed: LVs must first be deactivated

# passes
[root@host-115 ~]# vgchange --lock-stop raid_sanity

# There are no LVs at all, much less active ones
[root@host-114 ~]# lvs
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel_host-114 -wi-ao----  <6.20g
  swap rhel_host-114 -wi-ao---- 820.00m
[root@host-114 ~]# dmsetup ls
global-lvmlock (253:2)
raid_sanity-lvmlock (253:3)

[root@host-113 ~]# sanlock gets -h 1
s lvm_global:1325:/dev/mapper/global-lvmlock:0
h 105 gen 1 timestamp 417342 LIVE
h 1063 gen 1 timestamp 417328 LIVE
h 1325 gen 1 timestamp 417327 LIVE
[root@host-114 ~]# sanlock gets -h 1
s lvm_raid_sanity:105:/dev/mapper/raid_sanity-lvmlock:0
h 105 gen 1 timestamp 417316 LIVE
s lvm_global:105:/dev/mapper/global-lvmlock:0
h 105 gen 1 timestamp 417321 LIVE
h 1063 gen 1 timestamp 417328 LIVE
h 1325 gen 1 timestamp 417327 LIVE
[root@host-115 ~]# sanlock gets -h 1
s lvm_global:1063:/dev/mapper/global-lvmlock:0
h 105 gen 1 timestamp 417342 LIVE
h 1063 gen 1 timestamp 417328 LIVE
h 1325 gen 1 timestamp 417347 LIVE

From verbose output: "Counted 0 active LVs in VG raid_sanity"

#metadata/vg.c:74   Allocated VG raid_sanity at 0x56249018e390.
#format_text/import_vsn1.c:597   Importing logical volume raid_sanity/lvmlock.
#format_text/import_vsn1.c:722   Logical volume raid_sanity/lvmlock is sanlock lv.
#toollib.c:1970   Process single VG raid_sanity
#activate/activate.c:1476   Counted 0 active LVs in VG raid_sanity
#locking/lvmlockd.c:1078   lockd stop VG raid_sanity lock_type sanlock
#libdm-config.c:956   Setting response to OK
#libdm-config.c:987   Setting op_result to -16
#libdm-config.c:956   Setting lock_type to none
#locking/lvmlockd.c:174   lockd_result -16 flags none lm none
#locking/lvmlockd.c:1097   VG raid_sanity stop failed: LVs must first be deactivated
#vgchange.c:1032   <backtrace>
#toollib.c:1975   <backtrace>
#mm/memlock.c:562   Unlock: Memlock counters: locked:0 critical:0 daemon:0 suspended:0
#activate/fs.c:489   Syncing device names
#cache/lvmcache.c:157   Metadata cache: VG raid_sanity wiped.
#misc/lvm-flock.c:70   Unlocking /run/lock/lvm/V_raid_sanity
#misc/lvm-flock.c:47   _undo_flock /run/lock/lvm/V_raid_sanity

# Restarted on all nodes:
[root@host-113 ~]# vgchange --lock-start raid_sanity
  VG raid_sanity starting sanlock lockspace
  Starting locking.  Waiting for sanlock may take 20 sec to 3 min...
[root@host-114 ~]# vgchange --lock-start raid_sanity
  Starting locking.  Waiting for sanlock may take 20 sec to 3 min...
[root@host-115 ~]# vgchange --lock-start raid_sanity
  VG raid_sanity starting sanlock lockspace
  Starting locking.
  Waiting for sanlock may take 20 sec to 3 min...

[root@host-113 ~]# sanlock gets -h 1
s lvm_raid_sanity:1325:/dev/mapper/raid_sanity-lvmlock:0
h 105 gen 1 timestamp 417686 UNKNOWN
h 1063 gen 2 timestamp 417667 UNKNOWN
h 1325 gen 2 timestamp 417666 LIVE
s lvm_global:1325:/dev/mapper/global-lvmlock:0
h 105 gen 1 timestamp 417691 LIVE
h 1063 gen 1 timestamp 417677 LIVE
h 1325 gen 1 timestamp 417675 LIVE
[root@host-114 ~]# sanlock gets -h 1
s lvm_raid_sanity:105:/dev/mapper/raid_sanity-lvmlock:0
h 105 gen 1 timestamp 417665 LIVE
h 1063 gen 2 timestamp 417667 LIVE
h 1325 gen 2 timestamp 417666 LIVE
s lvm_global:105:/dev/mapper/global-lvmlock:0
h 105 gen 1 timestamp 417670 LIVE
h 1063 gen 1 timestamp 417677 LIVE
h 1325 gen 1 timestamp 417675 LIVE
[root@host-115 ~]# sanlock gets -h 1
s lvm_raid_sanity:1063:/dev/mapper/raid_sanity-lvmlock:0
h 105 gen 1 timestamp 417686 UNKNOWN
h 1063 gen 2 timestamp 417667 LIVE
h 1325 gen 2 timestamp 417687 UNKNOWN
s lvm_global:1063:/dev/mapper/global-lvmlock:0
h 105 gen 1 timestamp 417691 LIVE
h 1063 gen 1 timestamp 417677 LIVE
h 1325 gen 1 timestamp 417696 LIVE

# After this, the lock stop continued to fail; however, a simple vgremove worked to remove the VG, which shouldn't be allowed without the lock being stopped, correct?
[root@host-114 ~]# vgremove -f raid_sanity
  Volume group "raid_sanity" successfully removed

Version-Release number of selected component (if applicable):
3.10.0-689.el7.x86_64

lvm2-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-libs-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-cluster-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-libs-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-libs-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017
cmirror-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
sanlock-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
sanlock-lib-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
lvm2-lockd-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017