Bug 1393805

Summary: xfs_repair crashes with segfault.
Product: Red Hat Enterprise Linux 6
Component: xfsprogs
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Reporter: Seiji Nishikawa <snishika>
Assignee: Eric Sandeen <esandeen>
QA Contact: Filesystem QE <fs-qe>
CC: cww, snishika, zlang
Status: CLOSED WONTFIX
Type: Bug
Last Closed: 2017-06-13 17:39:05 UTC

Description Seiji Nishikawa 2016-11-10 11:26:57 UTC
Description of problem:

xfs_repair crashes with segfault

Version-Release number of selected component (if applicable):

xfsprogs-3.1.1-19.el6.x86_64

How reproducible:

Always

Steps to Reproduce:

1. Create an XFS metadump image from the corrupted XFS filesystem.

 # xfs_metadump /dev/sdg1 /tmp/xfs_metadump.20161028

2. Run xfs_repair on the image file (see the workflow sketch after these steps).

 # xfs_repair -f /tmp/xfs_file

3. xfs_repair crashes with segfault.
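
(Note: a metadump image normally has to be restored into a regular file with xfs_mdrestore before xfs_repair can be run on it; /tmp/xfs_file in step 2 is presumably that restored image, although the report does not state this. A minimal sketch of the full workflow, assuming that interpretation:

 # xfs_metadump /dev/sdg1 /tmp/xfs_metadump.20161028
 # xfs_mdrestore /tmp/xfs_metadump.20161028 /tmp/xfs_file
 # xfs_repair -f /tmp/xfs_file

The -f flag tells xfs_repair that the target is a regular file rather than a block device.)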


Actual results:

xfs_repair crashes with segfault.

Expected results:

xfs_repair doesn't crash with segfault.

Additional info:

Attaching the reproducer (XFS metadump image): xfs_metadump.20161028

Comment 3 Zorro Lang 2016-11-10 13:17:17 UTC
(In reply to Seiji Nishikawa from comment #0)
> Description of problem:
> 
> xfs_repair crashes with segfault
> 
> Version-Release number of selected component (if applicable):
> 
> xfsprogs-3.1.1-19.el6.x86_64
> 
> How reproducible:
> 
> Always
> 
> Steps to Reproduce:
> 
> 1. Create an XFS metadump image from the corrupted XFS filesystem.

How can we reproduce this XFS corruption?

> 
>  # xfs_metadump /dev/sdg1 /tmp/xfs_metadump.20161028

Would you please upload the metadump file of your corrupted XFS, the one that triggers this segfault?
If you can run xfs_metadump with the "-o" option, that would be better (see the example at the end of this comment).

Thanks so much,
Zorro

> 
> 2. run xfs_repair 
> 
>  # xfs_repair -f /tmp/xfs_file
> 
> 3. xfs_repair crashes with segfault.
> 
> 
> Actual results:
> 
> xfs_repair crashes with segfault.
> 
> Expected results:
> 
> xfs_repair doesn't crash with segfault.
> 
> Additional info:
> 
> Attaching the reproducer (XFS metadump image): xfs_metadump.20161028
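
(For reference, the unobfuscated dump Zorro asks for above could be produced like this; the device path is the one from comment 0 and the output file name is only illustrative. The -o option disables obfuscation of file names and extended attributes, and -g prints dump progress:

 # xfs_metadump -o -g /dev/sdg1 /tmp/xfs_metadump.unobfuscated
)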

Comment 4 Eric Sandeen 2016-11-10 15:14:13 UTC
(In reply to Seiji Nishikawa from comment #0)

> Additional info:
> 
> Attaching the reproducer (XFS metadump image): xfs_metadump.20161028

It's not attached.

There is not yet anything in this bug that would allow us to proceed with triage and diagnosis.

Please provide the metadump so that we can address this bug.

-Eric

Comment 7 Eric Sandeen 2017-03-01 14:30:02 UTC
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "Pz3~,]-	\8}T[v in directory inode 4413 points to free inode 2699082, would junk entry
corrupt block 70 in directory inode 4413: would junk block
Segmentation fault

Program received signal SIGSEGV, Segmentation fault.
xfs_da_brelse (tp=0x0, dabuf=0x22a289) at xfs_da_btree.c:2392
2392		if ((nbuf = dabuf->nbuf) == 1) {
(gdb) bt
#0  xfs_da_brelse (tp=0x0, dabuf=0x22a289) at xfs_da_btree.c:2392
#1  0x0000000000423618 in longform_dir2_entry_check (mp=<value optimized out>, ino=4413, ip=0x1c7b7b0, num_illegal=0x7fffffffde80, 
    need_dot=0x7fffffffde8c, irec=0x6c2150, ino_offset=61, hashtab=0x42f6010) at phase6.c:2559
#2  0x00000000004283a9 in process_dir_inode (mp=0x7fffffffdf30, agno=<value optimized out>, irec=0x6c2150, ino_offset=61)
    at phase6.c:3291
#3  0x0000000000428954 in traverse_function (mp=0x7fffffffdf30) at phase6.c:3607
#4  traverse_ags (mp=0x7fffffffdf30) at phase6.c:3649
#5  phase6 (mp=0x7fffffffdf30) at phase6.c:3741
#6  0x00000000004316b9 in main (argc=<value optimized out>, argv=<value optimized out>) at xfs_repair.c:735
(gdb)
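
(For anyone trying to capture a symbolized backtrace like the one above on RHEL 6, one possible approach is to install the xfsprogs debug symbols and run the repair under gdb; the image path is the one from comment 0, and debuginfo-install comes from yum-utils:

 # debuginfo-install xfsprogs
 # gdb --args xfs_repair -f /tmp/xfs_file
 (gdb) run
 (gdb) bt
)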

Comment 9 Chris Williams 2017-06-13 17:39:05 UTC
Red Hat Enterprise Linux 6 transitioned to the Production 3 Phase on May 10, 2017.  During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.
 
The official life cycle policy can be reviewed here:
 
http://redhat.com/rhel/lifecycle
 
This issue does not appear to meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:
 
https://access.redhat.com