Bug 679931

Summary: mkfs.extN problems on devices with physical sector size > 4k
Product: Red Hat Enterprise Linux 6
Reporter: Eric Sandeen <esandeen>
Component: e2fsprogs
Assignee: Eric Sandeen <esandeen>
Status: CLOSED ERRATA
QA Contact: BaseOS QE - Apps <qe-baseos-apps>
Severity: unspecified
Priority: unspecified
Version: 6.1
CC: bnater, mishu, sct, yury
Target Milestone: rc
Hardware: All
OS: Linux
Fixed In Version: e2fsprogs-1.41.12-8.el6
Doc Type: Bug Fix
Last Closed: 2011-12-06 18:13:56 UTC

Description Eric Sandeen 2011-02-23 21:35:23 UTC
Description of problem:

On block devices reporting a physical sector size > 4k, mkfs.extN attempts to set the filesystem block size to the physical sector size.  In some cases that physical sector size is larger than the page size, producing a filesystem that cannot be mounted (the ext drivers do not support block sizes larger than the kernel page size); at the extreme, mkfs itself may fail.

Version-Release number of selected component (if applicable):

e2fsprogs-1.41.12-6.el6

How reproducible:

Always

Steps to Reproduce:
1. modprobe scsi_debug dev_size_mb=1024 physblk_exp=11 opt_blks=0
2. mkfs.ext4 /dev/sdc (or whatever device got created)
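For reference, the scsi_debug parameters above make the device report a physical sector size far larger than the page size. A small sketch of the arithmetic (assuming the module's default 512-byte logical sector, which is an assumption, not stated above; physblk_exp is a power-of-two shift):

```python
# Sketch of what the scsi_debug parameters imply, assuming the module's
# default 512-byte logical sector size.
logical = 512
physblk_exp = 11
physical = logical << physblk_exp   # 512 * 2**11 = 1048576 bytes (1 MiB)

page_size = 4096                    # typical x86 page size (assumption)
# physical > page_size, so a filesystem using this as its block size
# can never be mounted -- which is exactly the failure reported below.
```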
  
Actual results:

# mkfs.ext4 /dev/sdc
mke2fs 1.41.12 (17-May-2010)
/dev/sdc is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=1048576 (log=10)
Fragment size=1048576 (log=10)
Stride=1 blocks, Stripe width=0 blocks
4096 inodes, 1024 blocks
51 blocks (4.98%) reserved for the super user
First data block=0
1 block group
65528 blocks per group, 65528 fragments per group
4096 inodes per group

Writing inode tables: done                            
ext2fs_mkdir: Invalid argument while creating root dir


Expected results:

successful mkfs

Additional info:

Can override with mkfs.ext4 -F -b 4096

Comment 2 Eric Sandeen 2011-02-23 21:45:32 UTC
These commits from 1.41.13 (specifically, the second one) should fix it up: the second commit sets the default block size to the logical block size, not the physical one, when needed.

commit f89f54aff479af859ee483c907041bcc9c0698f8
Author: Theodore Ts'o <tytso>
Date:   Sun Nov 21 09:56:53 2010 -0500

    mke2fs: Do not require -F for block size < physical size
    
    There will be SSDs out soon that have 8k or 16k physical block sizes.
    So don't enforce a requirement that the block size be less than the
    physical block size unless the force option is given, and don't give a
    warning if the user can't do anything about it (i.e., if the physical
    block size is > than the page size).
    
    Signed-off-by: "Theodore Ts'o" <tytso>


commit 2b21a0d9b6c7e0efeb553e2b0f61aba1b27f9257
Author: Theodore Ts'o <tytso>
Date:   Mon Nov 22 11:14:35 2010 -0500

    mke2fs: Force the default blocksize to be at least the logical sector size
    
    Signed-off-by: "Theodore Ts'o" <tytso>
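Taken together, the two commits amount to roughly the following selection rule. This is a hypothetical Python restatement for illustration only, not the actual mke2fs C code; the function name and the 4096-byte page size are assumptions:

```python
PAGE_SIZE = 4096  # assumed x86 page size; the real code queries the system


def pick_default_block_size(logical, physical, page_size=PAGE_SIZE):
    """Hypothetical restatement of the fixed default-block-size logic.

    Prefer the physical sector size, but never choose a block size
    larger than the page size (such a filesystem cannot be mounted);
    when the physical size is unusable, fall back to the logical
    sector size, and never go below it.
    """
    if physical <= page_size:
        candidate = physical
    else:
        candidate = logical          # physical size unusable on this system
    return max(candidate, logical)   # at least the logical sector size
```

Under this rule the scsi_debug device from the reproducer (512-byte logical, 1 MiB physical) gets a mountable block size instead of 1 MiB.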

Comment 3 Eric Sandeen 2011-05-24 15:21:59 UTC
Fixed in e2fsprogs-1.41.12-8.el6

Comment 6 errata-xmlrpc 2011-12-06 18:13:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1735.html

Comment 7 Yury V. Zaytsev 2011-12-08 01:05:24 UTC
Hi Eric,

Thank you for your work on backporting this! Shameless plug: you could have also backported this one all along...

commit 45792c127645fdb4b665b74dff01748e5db789c5
Author: Yury V. Zaytsev <yury>  2011-09-16 05:08:52
Committer: Theodore Ts'o <tytso>  2011-09-16 05:46:27

    mke2fs: check that auto-detected blocksize <= sys_page_size

The motivation for this commit was a problem I hit while deploying RHEL 6.1 on Dell PowerEdge servers whose SSDs had buggy firmware: the installation failed with completely bogus error messages right after the installer tried to mount the newly created file systems.

Z.

Comment 8 Eric Sandeen 2011-12-08 02:39:32 UTC
Seems we have a twisty path of fixes upstream.  :(  I'll review that one; in retrospect it actually seems odd that the two patches I did backport fixed it.  :)  It was a while back, so I'll take another look - thanks for pointing out your commit.

-Eric