Bug 1676143 - "Inconsistent sector sizes" when attempting to create sanlock shared VGs from PVs of varying sizes
Summary: "Inconsistent sector sizes" when attempting to create sanlock shared VGs from...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 8.0
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-11 22:14 UTC by Corey Marthaler
Modified: 2021-09-07 11:58 UTC (History)
10 users

Fixed In Version: lvm2-2.03.09-4.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-04 02:00:20 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
verbose vgcreate --shared attempt (142.38 KB, text/plain)
2019-02-11 22:31 UTC, Corey Marthaler
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:4546 0 None None None 2020-11-04 02:00:43 UTC

Description Corey Marthaler 2019-02-11 22:14:19 UTC
Description of problem:
This appears to only be a problem when in sanlock mode.

# healthy lvmlockd / sanlock
[root@hayes-03 ~]# systemctl status lvmlockd
● lvmlockd.service - LVM lock daemon
   Loaded: loaded (/usr/lib/systemd/system/lvmlockd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-11 15:59:31 CST; 1min 4s ago
     Docs: man:lvmlockd(8)
 Main PID: 2224 (lvmlockd)
    Tasks: 3 (limit: 32767)
   Memory: 3.1M
   CGroup: /system.slice/lvmlockd.service
           └─2224 /usr/sbin/lvmlockd --foreground

Feb 11 15:59:31 hayes-03.lab.msp.redhat.com systemd[1]: Starting LVM lock daemon...
Feb 11 15:59:31 hayes-03.lab.msp.redhat.com lvmlockd[2224]: [D] creating /run/lvm/lvmlockd.socket
Feb 11 15:59:31 hayes-03.lab.msp.redhat.com lvmlockd[2224]: 1549922371 lvmlockd started
Feb 11 15:59:31 hayes-03.lab.msp.redhat.com systemd[1]: Started LVM lock daemon.

[root@hayes-03 ~]# systemctl status sanlock
● sanlock.service - Shared Storage Lease Manager
   Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-11 15:59:25 CST; 1min 19s ago
  Process: 2206 ExecStart=/usr/sbin/sanlock daemon (code=exited, status=0/SUCCESS)
 Main PID: 2212 (sanlock)
    Tasks: 6 (limit: 32767)
   Memory: 15.5M
   CGroup: /system.slice/sanlock.service
           ├─2212 /usr/sbin/sanlock daemon
           └─2213 /usr/sbin/sanlock daemon

Feb 11 15:59:25 hayes-03.lab.msp.redhat.com systemd[1]: Starting Shared Storage Lease Manager...
Feb 11 15:59:25 hayes-03.lab.msp.redhat.com systemd[1]: Started Shared Storage Lease Manager.

# I have both 500G and 2T drives on this machine. When I mix them in a VG, I see this error, and the VG created is not marked "shared" as requested.
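The warning comes from the devices reporting different sector sizes. A minimal sketch of that kind of consistency check (hypothetical helper name and illustrative sizes; not lvm's actual code), producing a message in the same shape as the one in the vgcreate output below:

```python
def check_sector_sizes(devices):
    """Compare each device's sector size against the first device seen;
    report the first mismatch in the style of lvm's warning (sketch only)."""
    items = list(devices.items())
    first_dev, first_size = items[0]
    for dev, size in items[1:]:
        if size != first_size:
            return f"Inconsistent sector sizes for {dev} and {first_dev}."
    return None

# Illustrative values: a 4K-sector drive mixed with 512-byte-sector drives
sizes = {"/dev/sde1": 4096, "/dev/sdl1": 512, "/dev/sdf1": 512}
print(check_sector_sizes(sizes))
# → Inconsistent sector sizes for /dev/sdl1 and /dev/sde1.
```

On a live system the per-device values can be inspected with `blockdev --getss` (logical) and `blockdev --getpbsz` (physical).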


# 500G device
[root@hayes-03 ~]# vgcreate --shared global /dev/sdp1
  Enabling sanlock global lock
  Physical volume "/dev/sdp1" successfully created.
  Logical volume "lvmlock" created.
  Volume group "global" successfully created
  VG global starting sanlock lockspace
  Starting locking.  Waiting until locks are ready...

# Mix of 500G and 2T
[root@hayes-03 ~]# vgcreate  --shared centipede2 /dev/sde1 /dev/sdl1 /dev/sdf1 /dev/sdm1 /dev/sdo1 /dev/sdg1 /dev/sdj1
  Physical volume "/dev/sde1" successfully created.
  Physical volume "/dev/sdl1" successfully created.
  Physical volume "/dev/sdf1" successfully created.
  Physical volume "/dev/sdm1" successfully created.
  Physical volume "/dev/sdo1" successfully created.
  Physical volume "/dev/sdg1" successfully created.
  Physical volume "/dev/sdj1" successfully created.
  Inconsistent sector sizes for /dev/sdl1 and /dev/sde1.
  Volume group "centipede2" successfully created

[root@hayes-03 ~]# pvscan
  PV /dev/sdp1   VG global          lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sde1   VG centipede2      lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdl1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdm1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdo1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdj1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 8 [<13.17 TiB] / in use: 8 [<13.17 TiB] / in no VG: 0 [0   ]

# centipede2 is not shared
[root@hayes-03 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree  
  centipede2   7   0   0 wz--n- <11.35t <11.35t
  global       1   0   0 wz--ns  <1.82t  <1.82t



Version-Release number of selected component (if applicable):
kernel-4.18.0-64.el8    BUILT: Wed Jan 23 15:34:02 CST 2019
lvm2-2.03.02-3.el8    BUILT: Mon Jan 28 16:52:43 CST 2019
lvm2-libs-2.03.02-3.el8    BUILT: Mon Jan 28 16:52:43 CST 2019
lvm2-dbusd-2.03.02-3.el8    BUILT: Mon Jan 28 16:55:07 CST 2019
lvm2-lockd-2.03.02-3.el8    BUILT: Mon Jan 28 16:52:43 CST 2019
boom-boot-0.9-7.el8    BUILT: Mon Jan 14 14:00:54 CST 2019
cmirror-2.03.02-3.el8    BUILT: Mon Jan 28 16:52:43 CST 2019
device-mapper-1.02.155-3.el8    BUILT: Mon Jan 28 16:52:43 CST 2019
device-mapper-libs-1.02.155-3.el8    BUILT: Mon Jan 28 16:52:43 CST 2019
device-mapper-event-1.02.155-3.el8    BUILT: Mon Jan 28 16:52:43 CST 2019
device-mapper-event-libs-1.02.155-3.el8    BUILT: Mon Jan 28 16:52:43 CST 2019
device-mapper-persistent-data-0.7.6-1.el8    BUILT: Sun Aug 12 04:21:55 CDT 2018
sanlock-3.6.0-5.el8    BUILT: Thu Dec  6 13:31:26 CST 2018
sanlock-lib-3.6.0-5.el8    BUILT: Thu Dec  6 13:31:26 CST 2018


How reproducible:
Every time

Comment 1 Corey Marthaler 2019-02-11 22:31:54 UTC
Created attachment 1533832 [details]
verbose vgcreate --shared attempt

Comment 2 David Teigland 2020-06-03 16:28:58 UTC
The way this is handled has changed a couple of times since this bug was created, and checking for valid block size combinations is now handled by lvm more generally. Since the following commit, the error mentioned above no longer occurs:

commit 2d1fe38d84d499011d13ae1ea11535398528fc87
Author: David Teigland <teigland>
Date:   Mon May 11 13:08:39 2020 -0500

    lvmlockd: use 4K sector size when any dev is 4K
    
    When either logical block size or physical block size is 4K,
    then lvmlockd creates sanlock leases based on 4K sectors,
    but the lvm client side would create the internal lvmlock LV
    based on the first logical block size it saw in the VG,
    which could be 512.  This could cause the lvmlock LV to be
    too small to hold all the sanlock leases. Make the lvm client
    side use the same sizing logic as lvmlockd.
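The sizing rule the commit describes can be sketched as follows (a simplified illustration of the rule, not the actual lvm source):

```python
def lease_sector_size(devices):
    """Pick the sanlock lease sector size for a VG: use 4K when any device
    reports a 4K logical or physical block size, otherwise 512.
    Simplified sketch of the rule described in the commit message above."""
    for logical, physical in devices:
        if logical == 4096 or physical == 4096:
            return 4096
    return 512

# One 4K-physical device in the VG forces 4K leases, so the internal
# lvmlock LV must be sized for 4K sectors on both sides.
print(lease_sector_size([(512, 512), (512, 4096)]))  # → 4096
```

Before the fix, lvmlockd applied this rule but the lvm client side sized the lvmlock LV from the first logical block size it saw, which could be 512 and leave the LV too small for the leases.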

Comment 6 Corey Marthaler 2020-08-17 23:33:36 UTC
Fix verified in the latest rpms. Warnings are now present in both the command output and the system messages log.

kernel-4.18.0-232.el8    BUILT: Mon Aug 10 02:17:54 CDT 2020
lvm2-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-libs-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-dbusd-2.03.09-5.el8    BUILT: Wed Aug 12 15:49:44 CDT 2020
lvm2-lockd-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-libs-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-libs-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020


[root@hayes-03 ~]# systemctl status sanlock
● sanlock.service - Shared Storage Lease Manager
   Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-08-17 18:18:23 CDT; 3s ago
  Process: 193496 ExecStart=/usr/sbin/sanlock daemon (code=exited, status=0/SUCCESS)
 Main PID: 193500 (sanlock)
    Tasks: 6 (limit: 1647453)
   Memory: 15.4M
   CGroup: /system.slice/sanlock.service
           ├─193500 /usr/sbin/sanlock daemon
           └─193501 /usr/sbin/sanlock daemon

Aug 17 18:18:23 hayes-03.lab.msp.redhat.com systemd[1]: Starting Shared Storage Lease Manager...
Aug 17 18:18:23 hayes-03.lab.msp.redhat.com systemd[1]: Started Shared Storage Lease Manager.

[root@hayes-03 ~]# systemctl status lvmlockd
● lvmlockd.service - LVM lock daemon
   Loaded: loaded (/usr/lib/systemd/system/lvmlockd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-08-17 18:18:35 CDT; 11s ago
     Docs: man:lvmlockd(8)
 Main PID: 193516 (lvmlockd)
    Tasks: 3 (limit: 1647453)
   Memory: 3.0M
   CGroup: /system.slice/lvmlockd.service
           └─193516 /usr/sbin/lvmlockd --foreground

Aug 17 18:18:35 hayes-03.lab.msp.redhat.com systemd[1]: Starting LVM lock daemon...
Aug 17 18:18:35 hayes-03.lab.msp.redhat.com lvmlockd[193516]: [D] creating /run/lvm/lvmlockd.socket
Aug 17 18:18:35 hayes-03.lab.msp.redhat.com lvmlockd[193516]: 1597706315 lvmlockd started
Aug 17 18:18:35 hayes-03.lab.msp.redhat.com systemd[1]: Started LVM lock daemon.


[root@hayes-03 ~]# vgcreate  --shared centipede2 /dev/sde1 /dev/sdl1 /dev/sdf1 /dev/sdm1 /dev/sdo1 /dev/sdg1 /dev/sdj1
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  Physical volume "/dev/sde1" successfully created.
  Physical volume "/dev/sdl1" successfully created.
  Physical volume "/dev/sdf1" successfully created.
  Physical volume "/dev/sdm1" successfully created.
  Physical volume "/dev/sdo1" successfully created.
  Physical volume "/dev/sdg1" successfully created.
  Physical volume "/dev/sdj1" successfully created.
  Logical volume "lvmlock" created.
  Volume group "centipede2" successfully created
  VG centipede2 starting sanlock lockspace
  Starting locking.  Waiting until locks are ready...

Aug 17 18:27:14 hayes-03 lvmlockd[193516]: 1597706834 WARNING: mixed block sizes physical 4096 logical 512 (using 4096) for /devk

[root@hayes-03 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree  
  centipede2   7   0   0 wz--ns <11.35t <11.35t
  global       1   0   0 wz--ns  <1.82t  <1.82t

[root@hayes-03 ~]# pvscan
  PV /dev/sdp1   VG global          lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sde1   VG centipede2      lvm2 [446.62 GiB / 445.62 GiB free]
  PV /dev/sdl1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdm1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdo1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdj1   VG centipede2      lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 8 [<13.17 TiB] / in use: 8 [<13.17 TiB] / in no VG: 0 [0   ]

Comment 9 errata-xmlrpc 2020-11-04 02:00:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4546

