Bug 1356685 - mkfs.gfs2: Bad device topology can cause failure
Summary: mkfs.gfs2: Bad device topology can cause failure
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: gfs2-utils
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
: ---
Assignee: Andrew Price
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1413684
Blocks: 1385242 1437009
 
Reported: 2016-07-14 17:26 UTC by Corey Marthaler
Modified: 2020-05-14 15:14 UTC
CC List: 7 users

Fixed In Version: gfs2-utils-3.1.10-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1437009 (view as bug list)
Environment:
Last Closed: 2017-08-01 21:57:28 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2983391 0 None None None 2017-03-28 15:42:03 UTC
Red Hat Product Errata RHBA-2017:2226 0 normal SHIPPED_LIVE gfs2-utils bug fix and enhancement update 2017-08-01 18:43:08 UTC

Description Corey Marthaler 2016-07-14 17:26:30 UTC
Description of problem:
This may be related to bug 1342176. When I have multiple VGs with raid0 volumes, mkfs.gfs2 fails on some of them with "Failed to create resource group index entry: No space left on device". However, when I run mkfs.xfs and mkfs.ext4 on the same volumes, they all pass.


[root@host-082 ~]# for i in 1 2 3 4 5 6
> do
> lvcreate --type raid0 -L 500M -n raid0_$i -i 2 VG1
> lvcreate --type raid0 -L 500M -n raid0_$i -i 2 VG2
> done
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_1" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_1" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_2" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_2" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_3" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_3" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_4" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_4" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_5" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_5" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_6" created.
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid0_6" created.


[root@host-082 ~]# lvs -a -o +devices
  LV                 VG  Attr       LSize    Devices                                
  raid0_1            VG1 rwi-a-r--- 504.00m  raid0_1_rimage_0(0),raid0_1_rimage_1(0)
  [raid0_1_rimage_0] VG1 iwi-aor--- 252.00m  /dev/sda1(0)                           
  [raid0_1_rimage_1] VG1 iwi-aor--- 252.00m  /dev/sdb1(0)                           
  raid0_2            VG1 rwi-a-r--- 504.00m  raid0_2_rimage_0(0),raid0_2_rimage_1(0)
  [raid0_2_rimage_0] VG1 iwi-aor--- 252.00m  /dev/sda1(63)                          
  [raid0_2_rimage_1] VG1 iwi-aor--- 252.00m  /dev/sdb1(63)                          
  raid0_3            VG1 rwi-a-r--- 504.00m  raid0_3_rimage_0(0),raid0_3_rimage_1(0)
  [raid0_3_rimage_0] VG1 iwi-aor--- 252.00m  /dev/sda1(126)                         
  [raid0_3_rimage_1] VG1 iwi-aor--- 252.00m  /dev/sdb1(126)                         
  raid0_4            VG1 rwi-a-r--- 504.00m  raid0_4_rimage_0(0),raid0_4_rimage_1(0)
  [raid0_4_rimage_0] VG1 iwi-aor--- 252.00m  /dev/sda1(189)                         
  [raid0_4_rimage_1] VG1 iwi-aor--- 252.00m  /dev/sdb1(189)                         
  raid0_5            VG1 rwi-a-r--- 504.00m  raid0_5_rimage_0(0),raid0_5_rimage_1(0)
  [raid0_5_rimage_0] VG1 iwi-aor--- 252.00m  /dev/sda1(252)                         
  [raid0_5_rimage_1] VG1 iwi-aor--- 252.00m  /dev/sdb1(252)                         
  raid0_6            VG1 rwi-a-r--- 504.00m  raid0_6_rimage_0(0),raid0_6_rimage_1(0)
  [raid0_6_rimage_0] VG1 iwi-aor--- 252.00m  /dev/sda1(315)                         
  [raid0_6_rimage_1] VG1 iwi-aor--- 252.00m  /dev/sdb1(315)                         
  raid0_1            VG2 rwi-a-r--- 504.00m  raid0_1_rimage_0(0),raid0_1_rimage_1(0)
  [raid0_1_rimage_0] VG2 iwi-aor--- 252.00m  /dev/sda2(0)                           
  [raid0_1_rimage_1] VG2 iwi-aor--- 252.00m  /dev/sdb2(0)                           
  raid0_2            VG2 rwi-a-r--- 504.00m  raid0_2_rimage_0(0),raid0_2_rimage_1(0)
  [raid0_2_rimage_0] VG2 iwi-aor--- 252.00m  /dev/sda2(63)                          
  [raid0_2_rimage_1] VG2 iwi-aor--- 252.00m  /dev/sdb2(63)                          
  raid0_3            VG2 rwi-a-r--- 504.00m  raid0_3_rimage_0(0),raid0_3_rimage_1(0)
  [raid0_3_rimage_0] VG2 iwi-aor--- 252.00m  /dev/sda2(126)                         
  [raid0_3_rimage_1] VG2 iwi-aor--- 252.00m  /dev/sdb2(126)                         
  raid0_4            VG2 rwi-a-r--- 504.00m  raid0_4_rimage_0(0),raid0_4_rimage_1(0)
  [raid0_4_rimage_0] VG2 iwi-aor--- 252.00m  /dev/sda2(189)                         
  [raid0_4_rimage_1] VG2 iwi-aor--- 252.00m  /dev/sdb2(189)                         
  raid0_5            VG2 rwi-a-r--- 504.00m  raid0_5_rimage_0(0),raid0_5_rimage_1(0)
  [raid0_5_rimage_0] VG2 iwi-aor--- 252.00m  /dev/sda2(252)                         
  [raid0_5_rimage_1] VG2 iwi-aor--- 252.00m  /dev/sdb2(252)                         
  raid0_6            VG2 rwi-a-r--- 504.00m  raid0_6_rimage_0(0),raid0_6_rimage_1(0)
  [raid0_6_rimage_0] VG2 iwi-aor--- 252.00m  /dev/sda2(315)                         
  [raid0_6_rimage_1] VG2 iwi-aor--- 252.00m  /dev/sdb2(315)                         


[root@host-082 ~]# for i in 1 2 3 4 5 6; do mkfs.gfs2 -j 1 -p lock_nolock /dev/VG1/raid0_$i -O; mkfs.gfs2 -j 1 -p lock_nolock /dev/VG2/raid0_$i -O ; done
/dev/VG1/raid0_1 is a symbolic link to /dev/dm-4
This will destroy any data on /dev/dm-4
Device:                    /dev/VG1/raid0_1
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.44 GB (114692 blocks)
Journals:                  1
Resource groups:           2
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      8c06b922-1dcf-fc23-b60f-3d01601a7c38
/dev/VG2/raid0_1 is a symbolic link to /dev/dm-7
This will destroy any data on /dev/dm-7
Device:                    /dev/VG2/raid0_1
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.44 GB (114692 blocks)
Journals:                  1
Resource groups:           2
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      5a450e15-d456-6d1f-0a78-012c2b8ad503
/dev/VG1/raid0_2 is a symbolic link to /dev/dm-10
This will destroy any data on /dev/dm-10
Device:                    /dev/VG1/raid0_2
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.44 GB (114692 blocks)
Journals:                  1
Resource groups:           2
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      77b7bd8e-2943-d68d-fcac-2084e7c394c9
/dev/VG2/raid0_2 is a symbolic link to /dev/dm-13
This will destroy any data on /dev/dm-13
Device:                    /dev/VG2/raid0_2
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.44 GB (114692 blocks)
Journals:                  1
Resource groups:           2
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      118aac86-2fae-b46e-a953-6351278b72e5
/dev/VG1/raid0_3 is a symbolic link to /dev/dm-16
This will destroy any data on /dev/dm-16
Device:                    /dev/VG1/raid0_3
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.44 GB (114692 blocks)
Journals:                  1
Resource groups:           2
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      5d73158b-ee3e-e93d-448c-529354e47202
/dev/VG2/raid0_3 is a symbolic link to /dev/dm-19
This will destroy any data on /dev/dm-19
Device:                    /dev/VG2/raid0_3
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.49 GB (129023 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      8e394837-8ac1-e519-6fee-68b84f682d59
/dev/VG1/raid0_4 is a symbolic link to /dev/dm-22
This will destroy any data on /dev/dm-22
Failed to create resource group index entry: No space left on device
Failed to build resource groups
/dev/VG2/raid0_4 is a symbolic link to /dev/dm-25
This will destroy any data on /dev/dm-25
Error building 'per_node': No space left on device
/dev/VG1/raid0_5 is a symbolic link to /dev/dm-28
This will destroy any data on /dev/dm-28
Device:                    /dev/VG1/raid0_5
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.44 GB (114692 blocks)
Journals:                  1
Resource groups:           2
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      23f41bec-f17c-470b-b919-b47c69965bc5
/dev/VG2/raid0_5 is a symbolic link to /dev/dm-31
This will destroy any data on /dev/dm-31
Device:                    /dev/VG2/raid0_5
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.49 GB (129023 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      4faeee52-276f-ae77-6939-73936f1e0839
/dev/VG1/raid0_6 is a symbolic link to /dev/dm-34
This will destroy any data on /dev/dm-34
Failed to create resource group index entry: No space left on device
Failed to build resource groups
/dev/VG2/raid0_6 is a symbolic link to /dev/dm-37
This will destroy any data on /dev/dm-37
Device:                    /dev/VG2/raid0_6
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.49 GB (129023 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      d5479f49-56aa-0c3f-5d79-a1942b344645


Version-Release number of selected component (if applicable):
gfs2-utils-3.1.8-6.el7.x86_64


How reproducible:
Often

Comment 2 Andrew Price 2016-07-14 17:50:34 UTC
Hi Corey, does the problem go away if you reduce the journal size used by mkfs.gfs2 (-J 8, for example)? Could you also try adding -o align=0 to the mkfs.gfs2 command separately? And could you provide the output of blkid -i <device> for each of those devices too?

gfs2 uses more space for its bookkeeping structures (journals, etc.) than xfs or ext4, so space is going to be pretty tight anyway, but I suspect that stripe alignment is coming into play as well in this case.
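For a rough sense of the numbers involved (illustrative arithmetic only, not the actual mkfs.gfs2 allocator; the stripe width used here is just one of the OPT-IO values reported by lsblk later in this bug):

  # Device and journal sizes taken from the mkfs output above.
  bsize=4096
  dev_blocks=129024                               # 504 MiB / 4096
  journal_blocks=$((128 * 1024 * 1024 / bsize))   # default jsize = 128 MiB
  echo "blocks left after one journal: $((dev_blocks - journal_blocks))"
  opt_io=67043328                                 # one optimal_io_size reported by lsblk below
  echo "one stripe-aligned step: $((opt_io / bsize)) blocks"

With only ~96k blocks left after the journal and alignment steps of ~16k blocks each, rounding resource group boundaries up to the reported stripe width eats a large fraction of the device; with the far larger OPT-IO values seen below, a single aligned step would be bigger than the whole volume.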

Comment 3 Andrew Price 2016-07-14 18:16:54 UTC
(In reply to Andrew Price from comment #2)
> And could you provide the output of blkid -i <device> for each of those devices too?

Sorry, lsblk --topology is a faster way to get the same info :)
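(For reference, the same topology values lsblk reports can be read directly from sysfs; a minimal sketch, assuming the device of interest is sda:)

  # Topology values the kernel exposes for a block device (sda as an example)
  for f in logical_block_size physical_block_size minimum_io_size optimal_io_size; do
      printf '%-22s %s\n' "$f:" "$(cat /sys/block/sda/queue/$f)"
  done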

Comment 4 Corey Marthaler 2016-07-14 20:23:19 UTC
The journal size didn't matter; I saw failures with both -J 32 and -J 8.

[root@host-075 ~]# mkfs.gfs2 -j 1 -p lock_nolock -J 8 /dev/VG2/raid0_1
/dev/VG2/raid0_1 is a symbolic link to /dev/dm-7
This will destroy any data on /dev/dm-7
Are you sure you want to proceed? [y/n]y
Failed to create resource group index entry: No space left on device
Failed to build resource groups

[root@host-075 ~]# mkfs.ext4 /dev/VG2/raid0_1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=64 blocks, Stripe width=4177216 blocks
129024 inodes, 516096 blocks
25804 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=34078720
63 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks: 
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done 


[root@host-075 ~]# for i in 1 2 3 4 5; do mkfs.gfs2 -j 1 -p lock_nolock -J 8 /dev/VG1/raid0_$i -O; mkfs.gfs2 -j 1 -p lock_nolock -J 8 /dev/VG2/raid0_$i -O ; done
It appears to contain an existing filesystem (gfs2)
/dev/VG1/raid0_1 is a symbolic link to /dev/dm-4
This will destroy any data on /dev/dm-4
Failed to create resource group index entry: No space left on device
Failed to build resource groups
/dev/VG2/raid0_1 is a symbolic link to /dev/dm-7
This will destroy any data on /dev/dm-7
Failed to create resource group index entry: No space left on device
Failed to build resource groups
It appears to contain an existing filesystem (gfs2)
/dev/VG1/raid0_2 is a symbolic link to /dev/dm-10
This will destroy any data on /dev/dm-10
Device:                    /dev/VG1/raid0_2
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.45 GB (116665 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      457b4286-fb96-408f-ec78-36f24b44e578
It appears to contain an existing filesystem (gfs2)
/dev/VG2/raid0_2 is a symbolic link to /dev/dm-13
This will destroy any data on /dev/dm-13
Device:                    /dev/VG2/raid0_2
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.49 GB (129021 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      86a83e3d-0b9d-272a-c627-645f07646295
It appears to contain an existing filesystem (gfs2)
/dev/VG1/raid0_3 is a symbolic link to /dev/dm-16
This will destroy any data on /dev/dm-16
Device:                    /dev/VG1/raid0_3
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.49 GB (129021 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      dd95e52c-1bab-07f8-7d04-843764b1f352
It appears to contain an existing filesystem (gfs2)
/dev/VG2/raid0_3 is a symbolic link to /dev/dm-19
This will destroy any data on /dev/dm-19
Device:                    /dev/VG2/raid0_3
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.45 GB (116665 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      00e03e5c-55a7-3ceb-de68-cbcfa0a60ae4
It appears to contain an existing filesystem (gfs2)
/dev/VG1/raid0_4 is a symbolic link to /dev/dm-22
This will destroy any data on /dev/dm-22
Device:                    /dev/VG1/raid0_4
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.45 GB (116665 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      a9ceb0eb-723f-787b-6b6f-23198d46a67e
It appears to contain an existing filesystem (gfs2)
/dev/VG2/raid0_4 is a symbolic link to /dev/dm-25
This will destroy any data on /dev/dm-25
Device:                    /dev/VG2/raid0_4
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.49 GB (127801 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      14c4c284-50d2-7f17-27bd-7e8c8bd4b32c
It appears to contain an existing filesystem (gfs2)
/dev/VG1/raid0_5 is a symbolic link to /dev/dm-28
This will destroy any data on /dev/dm-28
Device:                    /dev/VG1/raid0_5
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.49 GB (129021 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      6b89e79a-707d-348b-05c5-4e3d75f718ce
It appears to contain an existing filesystem (gfs2)
/dev/VG2/raid0_5 is a symbolic link to /dev/dm-31
This will destroy any data on /dev/dm-31
Device:                    /dev/VG2/raid0_5
Block size:                4096
Device size:               0.49 GB (129024 blocks)
Filesystem size:           0.49 GB (129021 blocks)
Journals:                  1
Resource groups:           3
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      eef4782e-1105-4686-4aab-80c2912e3809




[root@host-075 ~]# lsblk --topology --ascii
NAME                     ALIGNMENT MIN-IO     OPT-IO PHY-SEC LOG-SEC ROTA SCHED    RQ-SIZE   RA WSAME
sda                              0    512          0     512     512    1 deadline     128 4096   32M
|-sda1                           0    512          0     512     512    1 deadline     128 4096   32M
| |-VG1-raid0_1_rimage_0         0    512          0     512     512    1              128 4096   32M
| | `-VG1-raid0_1                0  65536 1468006400     512     512    1              128  256    0B
| |-VG1-raid0_2_rimage_0         0    512          0     512     512    1              128 4096   32M
| | `-VG1-raid0_2                0  65536   67043328     512     512    1              128  256    0B
| |-VG1-raid0_3_rimage_0         0    512          0     512     512    1              128 4096   32M
| | `-VG1-raid0_3                0  65536          0     512     512    1              128  256    0B
| |-VG1-raid0_4_rimage_0         0    512          0     512     512    1              128 4096   32M
| | `-VG1-raid0_4                0  65536   67043328     512     512    1              128  256    0B
| `-VG1-raid0_5_rimage_0         0    512          0     512     512    1              128 4096   32M
|   `-VG1-raid0_5                0  65536          0     512     512    1              128  256    0B
`-sda2                           0    512          0     512     512    1 deadline     128 4096   32M
  |-VG2-raid0_1_rimage_0         0    512          0     512     512    1              128 4096   32M
  | `-VG2-raid0_1                0  65536 4277469184     512     512    1              128  256    0B
  |-VG2-raid0_2_rimage_0         0    512          0     512     512    1              128 4096   32M
  | `-VG2-raid0_2                0  65536          0     512     512    1              128  256    0B
  |-VG2-raid0_3_rimage_0         0    512          0     512     512    1              128 4096   32M
  | `-VG2-raid0_3                0  65536   67043328     512     512    1              128  256    0B
  |-VG2-raid0_4_rimage_0         0    512          0     512     512    1              128 4096   32M
  | `-VG2-raid0_4                0  65536    5308416     512     512    1              128  256    0B
  `-VG2-raid0_5_rimage_0         0    512          0     512     512    1              128 4096   32M
    `-VG2-raid0_5                0  65536          0     512     512    1              128  256    0B
sdb                              0    512          0     512     512    1 deadline     128 4096   32M
|-sdb1                           0    512          0     512     512    1 deadline     128 4096   32M
| |-VG1-raid0_1_rimage_1         0    512          0     512     512    1              128 4096   32M
| | `-VG1-raid0_1                0  65536 1468006400     512     512    1              128  256    0B
| |-VG1-raid0_2_rimage_1         0    512          0     512     512    1              128 4096   32M
| | `-VG1-raid0_2                0  65536   67043328     512     512    1              128  256    0B
| |-VG1-raid0_3_rimage_1         0    512          0     512     512    1              128 4096   32M
| | `-VG1-raid0_3                0  65536          0     512     512    1              128  256    0B
| |-VG1-raid0_4_rimage_1         0    512          0     512     512    1              128 4096   32M
| | `-VG1-raid0_4                0  65536   67043328     512     512    1              128  256    0B
| `-VG1-raid0_5_rimage_1         0    512          0     512     512    1              128 4096   32M
|   `-VG1-raid0_5                0  65536          0     512     512    1              128  256    0B
`-sdb2                           0    512          0     512     512    1 deadline     128 4096   32M
  |-VG2-raid0_1_rimage_1         0    512          0     512     512    1              128 4096   32M
  | `-VG2-raid0_1                0  65536 4277469184     512     512    1              128  256    0B
  |-VG2-raid0_2_rimage_1         0    512          0     512     512    1              128 4096   32M
  | `-VG2-raid0_2                0  65536          0     512     512    1              128  256    0B
  |-VG2-raid0_3_rimage_1         0    512          0     512     512    1              128 4096   32M
  | `-VG2-raid0_3                0  65536   67043328     512     512    1              128  256    0B
  |-VG2-raid0_4_rimage_1         0    512          0     512     512    1              128 4096   32M
  | `-VG2-raid0_4                0  65536    5308416     512     512    1              128  256    0B
  `-VG2-raid0_5_rimage_1         0    512          0     512     512    1              128 4096   32M
    `-VG2-raid0_5                0  65536          0     512     512    1              128  256    0B
sdc                              0    512          0     512     512    1 deadline     128 4096   32M
|-sdc1                           0    512          0     512     512    1 deadline     128 4096   32M
`-sdc2                           0    512          0     512     512    1 deadline     128 4096   32M
sdd                              0    512          0     512     512    1 deadline     128 4096   32M
|-sdd1                           0    512          0     512     512    1 deadline     128 4096   32M
`-sdd2                           0    512          0     512     512    1 deadline     128 4096   32M
sde                              0    512          0     512     512    1 deadline     128 4096   32M
|-sde1                           0    512          0     512     512    1 deadline     128 4096   32M
`-sde2                           0    512          0     512     512    1 deadline     128 4096   32M
sdf                              0    512          0     512     512    1 deadline     128 4096   32M
|-sdf1                           0    512          0     512     512    1 deadline     128 4096   32M
`-sdf2                           0    512          0     512     512    1 deadline     128 4096   32M
sdg                              0    512          0     512     512    1 deadline     128 4096   32M
|-sdg1                           0    512          0     512     512    1 deadline     128 4096   32M
`-sdg2                           0    512          0     512     512    1 deadline     128 4096   32M
sdh                              0    512          0     512     512    1 deadline     128 4096   32M
|-sdh1                           0    512          0     512     512    1 deadline     128 4096   32M
`-sdh2                           0    512          0     512     512    1 deadline     128 4096   32M

Comment 5 Andrew Price 2016-07-18 10:22:22 UTC
Those OPT-IO and MIN-IO values look broken to me. That seems to be the root of the problem.
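To put rough numbers on that (simple shell arithmetic over the OPT-IO values from the lsblk output above):

  # Compare the reported optimal_io_size values against the 504 MiB LV size
  lv_mib=504
  for opt in 1468006400 4277469184 67043328 5308416 0; do
      echo "OPT-IO $opt bytes = $((opt / 1024 / 1024)) MiB (LV is $lv_mib MiB)"
  done

Values of roughly 1.4 GiB and 4 GiB on 504 MiB volumes cannot be real stripe widths, and otherwise identical raid0 LVs report anything from 0 to several gigabytes, so the reported topology clearly cannot be trusted.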

Comment 6 Andrew Price 2016-07-19 10:59:32 UTC
https://bugzilla.redhat.com/show_bug.cgi?id=1356244#c13 confirms that the device topology is buggy. I'll keep this bz open as I don't think mkfs.gfs2 should cause OOM even with bogus topology values, but I think we can consider it as lower priority.

Comment 7 Andrew Price 2016-07-19 11:02:29 UTC
Sorry, got my bugs mixed up. "OOM" -> "failure".

Comment 8 Steve Whitehouse 2016-08-02 09:17:59 UTC
This is on the blocker list for 7.3, so we either need to fix it right away, or defer until 7.4 at this stage.

Comment 9 Andrew Price 2016-08-02 14:36:52 UTC
I'm going to nudge this out to 7.4 then, as the original blocker issue was verified fixed in https://bugzilla.redhat.com/show_bug.cgi?id=1356244#c21 and all that's left to do is guard against the device topology values from the kernel being untrustworthy (which really should never happen).
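A minimal sketch of what such a guard could look like (illustrative only; the actual gfs2-utils change may differ): treat the reported optimal_io_size as unusable if it is zero, not a multiple of minimum_io_size, or at least as large as the device, and fall back to unaligned resource groups with a warning. The fixed package's output in comment 19 shows that kind of behaviour ("rgrp align = (disabled)" plus an alignment warning).

  # Illustrative topology sanity check, not the actual gfs2-utils code.
  # /dev/VG1/raid0_1 resolves to /dev/dm-4 on this host (see the mkfs output above).
  min_io=$(cat /sys/block/dm-4/queue/minimum_io_size)
  opt_io=$(cat /sys/block/dm-4/queue/optimal_io_size)
  dev_bytes=$(blockdev --getsize64 /dev/VG1/raid0_1)
  if [ "$opt_io" -eq 0 ] || [ "$min_io" -eq 0 ] || [ $((opt_io % min_io)) -ne 0 ] || [ "$opt_io" -ge "$dev_bytes" ]; then
      echo "Warning: device topology looks unreliable; disabling resource group alignment" >&2
  fi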

Comment 10 Nate Straz 2016-09-01 14:27:55 UTC
I ran into the issue while trying to mkfs.gfs2 on some EMC VNX2 LUNs.

[root@dash-01 ~]# lsblk -o +OPT-IO,VENDOR,MODEL /dev/fsck/test -s
NAME       MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT   OPT-IO VENDOR   MODEL
fsck-test  253:15   0  16G  0 lvm              33553920          
└─mpatha1  253:3    0  50T  0 part             33553920          
  └─mpatha 253:2    0  50T  0 mpath            33553920          
    ├─sdb    8:16   0  50T  0 disk             33553920 DGC      VRAID          
    ├─sdc    8:32   0  50T  0 disk             33553920 DGC      VRAID          
    ├─sdm    8:192  0  50T  0 disk             33553920 DGC      VRAID          
    ├─sdr   65:16   0  50T  0 disk             33553920 DGC      VRAID          
    ├─sdw   65:96   0  50T  0 disk             33553920 DGC      VRAID          
    ├─sdaf  65:240  0  50T  0 disk             33553920 DGC      VRAID          
    ├─sdag  66:0    0  50T  0 disk             33553920 DGC      VRAID          
    └─sdaq  66:160  0  50T  0 disk             33553920 DGC      VRAID          
[root@dash-01 ~]# mkfs.gfs2 -O -p lock_nolock /dev/mapper/fsck-test  -D
It appears to contain an existing filesystem (gfs2)
alignment_offset: 7168
logical_sector_size: 512
minimum_io_size: 8192
optimal_io_size: 33553920
physical_sector_size: 512
File system options:
  bsize = 4096
  qcsize = 1
  jsize = 128
  journals = 1
  proto = lock_nolock
  table = 
  rgsize = 256
  fssize = 0
  sunit = 0
  swidth = 0
Could not initialise resource groups: Invalid argument

The workaround of "-o align=0" does help.
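(For reference, the working invocation with the workaround would be the same command as above with alignment turned off, e.g.:)

  mkfs.gfs2 -O -p lock_nolock -o align=0 /dev/mapper/fsck-test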

Comment 11 Andrew Price 2017-01-26 19:49:58 UTC
Patches posted upstream: https://www.redhat.com/archives/cluster-devel/2017-January/msg00091.html

Comment 12 Andrew Price 2017-01-27 10:56:06 UTC
The patches have been pushed upstream and will be pulled in when gfs2-utils is rebased.

Comment 19 Nate Straz 2017-05-11 18:05:27 UTC
Before:

[root@dash-01 ~]# rpm -q gfs2-utils
gfs2-utils-3.1.9-4.el7.x86_64

[root@dash-01 ~]# mkfs.gfs2 -O -p lock_nolock /dev/fsck/test -D
It appears to contain an existing filesystem (gfs2)
alignment_offset: 512
logical_sector_size: 512
minimum_io_size: 8192
optimal_io_size: 33553920
physical_sector_size: 512
File system options:
  bsize = 4096
  qcsize = 1
  jsize = 128
  journals = 1
  proto = lock_nolock
  table =
  rgsize = 256
  fssize = 0
  sunit = 0
  swidth = 0
Could not initialise resource groups: Invalid argument

[root@dash-01 ~]# rpm -q gfs2-utils
gfs2-utils-3.1.10-2.el7.x86_64

[root@dash-01 ~]# mkfs.gfs2 -O -p lock_nolock /dev/fsck/test -D
It appears to contain an existing filesystem (gfs2)
alignment_offset: 512
logical_sector_size: 512
minimum_io_size: 8192
optimal_io_size: 33553920
physical_sector_size: 512
Warning: device is not properly aligned. This may harm performance.
File system options:
  bsize = 4096
  qcsize = 1
  jsize = 128
  journals = 1
  proto = lock_nolock
  table =
  rgsize = 256
  fssize = 0
  sunit = 0
  swidth = 0
  rgrp align = (disabled)
/dev/fsck/test is a symbolic link to /dev/dm-16
This will destroy any data on /dev/dm-16
Discarding device contents (may take a while on large devices): Issuing discard request: range: 0 - 5368709120...Successful.
Done
Adding journals: Placing resource group for journal0
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 1  no_addr: 20  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1  di_size: 134217728  di_blocks: 32834  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 85  di_goal_data: 32853  di_flags: 0x00000200  di_payload_format: 0  di_height: 2  di_depth: 0  di_entries: 0  di_eattr: 0  ri_addr: 17  ri_length: 3  ri_data0: 20  ri_data: 32836  ri_bitbytes: 8209
Done
Building resource groups:   ri_addr: 32856  ri_length: 4  ri_data0: 32860  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 96750  ri_length: 4  ri_data0: 96754  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 160644  ri_length: 4  ri_data0: 160648  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 224538  ri_length: 4  ri_data0: 224542  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 288431  ri_length: 4  ri_data0: 288435  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 352324  ri_length: 4  ri_data0: 352328  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 416217  ri_length: 4  ri_data0: 416221  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 480110  ri_length: 4  ri_data0: 480114  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 544003  ri_length: 4  ri_data0: 544007  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 607896  ri_length: 4  ri_data0: 607900  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 671789  ri_length: 4  ri_data0: 671793  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 735682  ri_length: 4  ri_data0: 735686  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 799575  ri_length: 4  ri_data0: 799579  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 863468  ri_length: 4  ri_data0: 863472  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 927361  ri_length: 4  ri_data0: 927365  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 991254  ri_length: 4  ri_data0: 991258  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 1055147  ri_length: 4  ri_data0: 1055151  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 1119040  ri_length: 4  ri_data0: 1119044  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 1182933  ri_length: 4  ri_data0: 1182937  ri_data: 63888  ri_bitbytes: 15972
  ri_addr: 1246826  ri_length: 4  ri_data0: 1246830  ri_data: 63888  ri_bitbytes: 15972
Done

Master dir:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 2  no_addr: 32854  di_mode: 040755  di_uid: 0  di_gid: 0  di_nlink: 2  di_size: 3864  di_blocks: 1  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 32854  di_goal_data: 32854  di_flags: 0x00000201  di_payload_format: 1200  di_height: 0  di_depth: 0  di_entries: 2  di_eattr: 0
Jindex:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 3  no_addr: 32855  di_mode: 040700  di_uid: 0  di_gid: 0  di_nlink: 2  di_size: 3864  di_blocks: 1  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 32855  di_goal_data: 32855  di_flags: 0x00000201  di_payload_format: 1200  di_height: 0  di_depth: 0  di_entries: 3  di_eattr: 0
Inum Range 0:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 5  no_addr: 32861  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1  di_size: 16  di_blocks: 1  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 32861  di_goal_data: 32861  di_flags: 0x00000201  di_payload_format: 0  di_height: 0  di_depth: 0  di_entries: 0  di_eattr: 0
StatFS Change 0:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 6  no_addr: 32862  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1  di_size: 24  di_blocks: 1  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 32862  di_goal_data: 32862  di_flags: 0x00000201  di_payload_format: 0  di_height: 0  di_depth: 0  di_entries: 0  di_eattr: 0
Quota Change 0:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 7  no_addr: 32863  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1  di_size: 1048576  di_blocks: 257  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 33119  di_goal_data: 32863  di_flags: 0x00000200  di_payload_format: 0  di_height: 1  di_depth: 0  di_entries: 0  di_eattr: 0
per_node:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 4  no_addr: 32860  di_mode: 040700  di_uid: 0  di_gid: 0  di_nlink: 2  di_size: 3864  di_blocks: 1  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 32860  di_goal_data: 32860  di_flags: 0x00000201  di_payload_format: 1200  di_height: 0  di_depth: 0  di_entries: 5  di_eattr: 0
Inum Inode:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 8  no_addr: 33120  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1  di_size: 0  di_blocks: 1  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 33120  di_goal_data: 33120  di_flags: 0x00000201  di_payload_format: 0  di_height: 0  di_depth: 0  di_entries: 0  di_eattr: 0
StatFS Inode:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 9  no_addr: 33121  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1  di_size: 0  di_blocks: 1  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 33121  di_goal_data: 33121  di_flags: 0x00000201  di_payload_format: 0  di_height: 0  di_depth: 0  di_entries: 0  di_eattr: 0
Resource Index:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 10  no_addr: 33122  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1  di_size: 2016  di_blocks: 1  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 33122  di_goal_data: 33122  di_flags: 0x00000201  di_payload_format: 1100  di_height: 0  di_depth: 0  di_entries: 0  di_eattr: 0Creating quota file:
Root quota:
  qu_limit: 0  qu_warn: 0  qu_value: 1Done

Root directory:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400  no_formal_ino: 12  no_addr: 33124  di_mode: 040755  di_uid: 0  di_gid: 0  di_nlink: 2  di_size: 3864  di_blocks: 1  di_atime: 1494525596  di_mtime: 1494525596  di_ctime: 1494525596  di_major: 0  di_minor: 0  di_goal_meta: 33124  di_goal_data: 33124  di_flags: 0x00000001  di_payload_format: 1200  di_height: 0  di_depth: 0  di_entries: 2  di_eattr: 0
Next Inum: 13

Statfs:
  sc_total: 1310596  sc_free: 1277495  sc_dinodes: 12Writing superblock and syncing: Done
Device:                    /dev/fsck/test
Block size:                4096
Device size:               5.00 GB (1310720 blocks)
Filesystem size:           5.00 GB (1310718 blocks)
Journals:                  1
Resource groups:           21
Locking protocol:          "lock_nolock"
Lock table:                ""
UUID:                      5f3accba-81cc-4f12-88d1-a09bbf9b6f55

Comment 20 errata-xmlrpc 2017-08-01 21:57:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2226

