Bug 1567100

Summary: "Directory selfheal failed: Unable to form layout" log messages seen on client
Product: Red Hat Storage (Red Hat Gluster Storage)
Reporter: Nag Pavan Chilakam <nchilaka>
Component: distribute
Assignee: Raghavendra G <rgowdapp>
Status: CLOSED ERRATA
QA Contact: Prasad Desala <tdesala>
Severity: high
Priority: unspecified
Version: rhgs-3.4
CC: kompastver, nbalacha, redhat, rhinduja, rhs-bugs, storage-qa-internal
Target Release: RHGS 3.4.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.12.2-9
Last Closed: 2018-09-04 06:46:01 UTC
Type: Bug
Bug Blocks: 1503137

Description Nag Pavan Chilakam 2018-04-13 12:21:51 UTC
Description of problem:
----------------------
I had a lot of files on my EC volume on the 3.12.2-6 build.
I did an offline server upgrade to 3.12.2-7 and upgraded the clients to the same build (with a fresh mount post-upgrade).
I then tried to delete the old files; on accessing them, the messages below were logged for every entry in the fuse mount log:

[2018-04-13 12:11:00.215891] I [MSGID: 109005] [dht-selfheal.c:2454:dht_selfheal_directory] 0-zen-dht: Directory selfheal failed: Unable to form layout for directory /rhs-client19.lab.eng.blr.redhat.com/linux-4.15.13/arch/arm/mach-orion5x
[2018-04-13 12:11:00.317994] I [MSGID: 109005] [dht-selfheal.c:2454:dht_selfheal_directory] 0-zen-dht: Directory selfheal failed: Unable to form layout for directory /rhs-client19.lab.eng.blr.redhat.com/linux-4.15.13/arch/arm/mach-prima2
[2018-04-13 12:11:00.360088] I [MSGID: 109005] [dht-selfheal.c:2454:dht_selfheal_directory] 0-zen-dht: Directory selfheal failed: Unable to form layout for directory /rhs-client19.lab.eng.blr.redhat.com/linux-4.15.13/arch/arm/mach-qcom
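The affected directory can be pulled out of each such message with a small pipeline. A sketch, with one of the lines above inlined as sample input; in practice the input would be the fuse mount log (the log path below is illustrative):

```shell
# Print the directory named in each "Unable to form layout" message.
# Real usage: extract_failed_dirs < /var/log/glusterfs/mnt-zen.log (path illustrative)
extract_failed_dirs() {
    sed -n 's/.*Directory selfheal failed: Unable to form layout for directory //p'
}

extract_failed_dirs <<'EOF'
[2018-04-13 12:11:00.360088] I [MSGID: 109005] [dht-selfheal.c:2454:dht_selfheal_directory] 0-zen-dht: Directory selfheal failed: Unable to form layout for directory /rhs-client19.lab.eng.blr.redhat.com/linux-4.15.13/arch/arm/mach-qcom
EOF
# -> /rhs-client19.lab.eng.blr.redhat.com/linux-4.15.13/arch/arm/mach-qcom
```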

 
Volume Name: zen
Type: Distributed-Disperse
Volume ID: a6470510-3f32-4f34-8004-521d9670bec9
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: dhcp35-205.lab.eng.blr.redhat.com:/gluster/brick1/zen
Brick2: dhcp35-169.lab.eng.blr.redhat.com:/gluster/brick1/zen
Brick3: dhcp35-145.lab.eng.blr.redhat.com:/gluster/brick1/zen
Brick4: dhcp35-177.lab.eng.blr.redhat.com:/gluster/brick1/zen
Brick5: dhcp35-29.lab.eng.blr.redhat.com:/gluster/brick1/zen
Brick6: dhcp35-14.lab.eng.blr.redhat.com:/gluster/brick1/zen
Brick7: dhcp35-205.lab.eng.blr.redhat.com:/gluster/brick2/zen
Brick8: dhcp35-169.lab.eng.blr.redhat.com:/gluster/brick2/zen
Brick9: dhcp35-145.lab.eng.blr.redhat.com:/gluster/brick2/zen
Brick10: dhcp35-177.lab.eng.blr.redhat.com:/gluster/brick2/zen
Brick11: dhcp35-29.lab.eng.blr.redhat.com:/gluster/brick2/zen
Brick12: dhcp35-14.lab.eng.blr.redhat.com:/gluster/brick2/zen
Options Reconfigured:
nfs.disable: on


Version-Release number of selected component (if applicable):
------------
3.12.2-7


Steps to Reproduce:
1. Have an EC volume, 2 x (4 + 2), on 3.12.2-6 with data on it.
2. Perform an offline upgrade of the servers to 3.12.2-7.
3. Upgrade the clients and remount them.
4. Run rm -rf, or even ls, and the messages above appear.
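The steps above can be sketched as a dry-run script. Hostnames, package versions, and mount points are assumptions taken loosely from this report; `run` only echoes each command rather than executing it, so this is an outline of the procedure, not a working upgrade script:

```shell
# Dry-run outline of the reproduction steps; names/paths are illustrative.
run() { echo "+ $*"; }

# 1. Start from an EC volume (2 x (4 + 2)) on 3.12.2-6 with data on it.
# 2. Offline upgrade: stop gluster services on the servers, then update.
run systemctl stop glusterd
run yum update glusterfs-server
run systemctl start glusterd

# 3. Upgrade the clients and do a fresh mount.
run umount /mnt/zen
run yum update glusterfs-fuse
run mount -t glusterfs dhcp35-205.lab.eng.blr.redhat.com:/zen /mnt/zen

# 4. Access the pre-upgrade files; each entry logs the selfheal message.
run ls -R /mnt/zen
run rm -rf /mnt/zen/linux-4.15.13
```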

Additional Info:
=================
After a successful rm -rf of the old files, I created new directories and deleted them, but did not see these messages.

Comment 2 Nag Pavan Chilakam 2018-04-13 12:38:51 UTC
sosreports http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/nchilaka/bug.1567100/

Comment 4 Raghavendra G 2018-04-14 05:14:55 UTC
https://review.gluster.org/19727

Comment 9 Prasad Desala 2018-05-16 12:51:09 UTC
Verified this BZ while updating the machines from glusterfs-3.12.2-9 to glusterfs-3.12.2-10. I followed the same steps as in the description and did not see the messages mentioned above.

Moving this BZ to Verified.

Comment 11 errata-xmlrpc 2018-09-04 06:46:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607