Bug 1627060

Summary: ./tests/features/trash.t test case failing on s390x
Product: [Community] GlusterFS
Reporter: abhays <abhaysingh1722>
Component: trash-xlator
Assignee: bugs <bugs>
Status: CLOSED UPSTREAM
QA Contact:
Severity: high
Docs Contact:
Priority: high
Version: 4.1
CC: abhaysingh1722, bugs, vbellur
Target Milestone: ---
Flags: abhaysingh1722: needinfo+
Target Release: ---
Hardware: s390x
OS: All
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-03-12 12:33:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments:
  patchy-rebalance.log for s390x
  patchy-rebalance.log for x86
  patchy3.log for s390x after running the ./tests/basic/namespace.t test case
  Code to calculate the hash for a filename
  Modified trash.t
  trash_logs

Description abhays 2018-09-10 11:08:57 UTC
Created attachment 1482096 [details]
patchy-rebalance.log for s390x

Description of problem:
I am working on GlusterFS v4.1.1 for all distributions on the s390x architecture. After a successful build, I ran the test cases and encountered a test case failure. Subtests 39 and 63 of ./tests/features/trash.t fail with the following errors:
=========================
TEST 37 (line 190): 0 rebalance_completed
ok 37, LINENUM:190
RESULT 37: 0
ls: cannot access '/d/backends/patchy3/rebal*': No such file or directory
basename: missing operand
Try 'basename --help' for more information.
=========================
TEST 38 (line 196): Y wildcard_exists /d/backends/patchy3/1 /d/backends/patchy3/a
ok 38, LINENUM:196
RESULT 38: 0
=========================
TEST 39 (line 197): Y wildcard_exists /d/backends/patchy1/.trashcan/internal_op/*
not ok 39 Got "N" instead of "Y", LINENUM:197
RESULT 39: 1
=========================

=========================
TEST 63 (line 247): Y wildcard_exists /d/backends/patchy1/abc/internal_op/rebal*
not ok 63 Got "N" instead of "Y", LINENUM:247
RESULT 63: 1
rm: cannot remove '/mnt/glusterfs/0/abc/internal_op': Operation not permitted
=========================

Version-Release number of selected component (if applicable):
4.1.1

How reproducible:
Build GlusterFS v4.1.1 and run the test case with ./run-tests.sh prove -vf ./tests/features/trash.t


Steps to Reproduce:
1. ./run-tests.sh prove -vf ./tests/features/trash.t


Actual results:
The test fails with the errors mentioned above.


Expected results:
The test passes.


Additional info:
As per my understanding, the test case attempts to migrate rebal2 from /d/backends/patchy3/ to /d/backends/patchy1/.trashcan/internal_op/. The test fails because rebal2 is not migrated on s390x.

After debugging the flow of the test, I found that the rebalance operation triggered by the test fails because the hashed subvolume for patchy is calculated incorrectly on s390x systems while migrating the file.

On big endian systems, the log says: [dht-rebalance.c:2823:gf_defrag_migrate_single_file] 0-patchy-dht: Attempting to migrate data (null) with gfid 00000000-0000-0000-0000-000000000001 from patchy-client-0 -> patchy-client-0
On little endian systems, the log says: [dht-rebalance.c:2823:gf_defrag_migrate_single_file] 0-patchy-dht: Attempting to migrate data (null) with gfid 00000000-0000-0000-0000-000000000001 from patchy-client-0 -> patchy-client-1


I tried the following resolutions:
1. Ran the test case on an XFS backend.
2. Increased thread_stack_size to check whether it is a stack overflow issue.

But the test still fails.

I am providing the patchy-rebalance.log for both the little-endian and big-endian systems.
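
For illustration only (this is not GlusterFS's actual hash routine): a hash that consumes its input in 16-bit host-order loads returns different values on big-endian and little-endian machines, which is the kind of behaviour that would make the same filename hash to different subvolumes depending on the architecture. A minimal sketch:

    /* Toy example: reads the name two bytes at a time as a host-order
     * uint16_t, so the result depends on the machine's byte order.
     * This is NOT the DHT hash function, only an illustration of how an
     * endian-sensitive hash can differ between s390x and x86. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t
    toy_hash(const char *name)
    {
        uint32_t h = 0;
        size_t len = strlen(name);

        while (len >= 2) {
            uint16_t chunk;
            memcpy(&chunk, name, 2);  /* host byte order: endian-dependent */
            h = h * 31 + chunk;
            name += 2;
            len -= 2;
        }
        if (len)
            h = h * 31 + (uint8_t)*name;
        return h;
    }

    int
    main(void)
    {
        printf("toy_hash(\"rebal2\") = %u\n", toy_hash("rebal2"));
        return 0;
    }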

Comment 1 abhays 2018-09-10 11:52:14 UTC
Created attachment 1482106 [details]
patchy-rebalance.log for x86

Comment 2 abhays 2018-09-17 07:05:27 UTC
Hi,

Any updates on the issue??
Please let us know if any additional info is needed from our side.
Can you please take it on priority as the test failure is blocking our work on the package.

Comment 3 Nithya Balachandran 2018-09-19 03:22:00 UTC
(In reply to abhays from comment #2)
> Hi,
> 
> Any updates on the issue??
> Please let us know if any additional info is needed from our side.
> Can you please take it on priority as the test failure is blocking our work
> on the package.

Please provide the following:

1. the value of the calculated hash on both big and little endian systems.
2. The 'getfattr -e hex -m . -d ' output for the directories on each brick.

Comment 4 abhays 2018-09-19 04:56:05 UTC
(In reply to Nithya Balachandran from comment #3)
> (In reply to abhays from comment #2)
> > Hi,
> > 
> > Any updates on the issue??
> > Please let us know if any additional info is needed from our side.
> > Can you please take it on priority as the test failure is blocking our work
> > on the package.
> 
> Please provide the following:
> 
> 1. the value of the calculated hash on both big and little endian systems.
> 2. The 'getfattr -e hex -m . -d ' output for the directories on each brick.

Nithya,

1. I am not sure exactly how the hash values are being calculated on big endian. It would be really helpful if you could help me with that.
However, while debugging the test case ./tests/basic/namespace.t, I did encounter hashes that were calculated differently on big-endian systems. These values are as follows (for big endian):
NAMESPACE_HASH=3253352021
NAMESPACE2_HASH=458775276
NAMESPACE3_HASH=1268089390

2. The output of 'getfattr -e hex -m . -d' for the directories on each brick is as follows:
glusterfs]# getfattr -e hex -m . -d /d/backends/patchy1
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a64656661756c745f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0xdc7d1d11000000007fffffffffffffff
trusted.glusterfs.dht.commithash=0x3336393931383639363100
trusted.glusterfs.mdata=0x010000000000000000000000005b8fa39a00000000362be3c2000000005b8fa39a00000000362be3c2000000005b8fa39200000000109eda42
trusted.glusterfs.volume-id=0x6dd0e96680ba496aa01a23c9461cac95

glusterfs]# getfattr -e hex -m . -d /d/backends/patchy11
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy11
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a64656661756c745f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.mdata=0x010000000000000000000000005b8fa3b1000000001399cac2000000005b8fa3ad000000002a4021c2000000005b8fa3ad0000000026140442
trusted.glusterfs.volume-id=0xb12576c371714c859ab39161407ef114

glusterfs]# getfattr -e hex -m . -d /d/backends/patchy12
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy12
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a64656661756c745f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.patchy1-client-0=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xb12576c371714c859ab39161407ef114

glusterfs]# getfattr -e hex -m . -d /d/backends/patchy2
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy2
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a64656661756c745f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.mdata=0x010000000000000000000000005b8fa39a00000000362be3c2000000005b8fa39a00000000362be3c2000000005b8fa39200000000109eda42
trusted.glusterfs.volume-id=0x6dd0e96680ba496aa01a23c9461cac95

glusterfs]# getfattr -e hex -m . -d /d/backends/patchy3
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy3
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a64656661756c745f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0xdc7d1d1100000000000000007ffffffe
trusted.glusterfs.dht.commithash=0x3336393931383639363100
trusted.glusterfs.mdata=0x010000000000000000000000005b8fa3b600000000244a40c2000000005b8fa3a7000000003af097c2000000005b8fa3a90000000007ae08c2
trusted.glusterfs.volume-id=0x6dd0e96680ba496aa01a23c9461cac95

Comment 5 abhays 2018-09-24 11:53:05 UTC
Hi,

Any updates on the issue??
Please let us know if any additional info is needed from our side.

Comment 6 Nithya Balachandran 2018-11-12 04:38:54 UTC
(In reply to abhays from comment #4)
> (In reply to Nithya Balachandran from comment #3)
> > (In reply to abhays from comment #2)
> > > Hi,
> > > 
> > > Any updates on the issue??
> > > Please let us know if any additional info is needed from our side.
> > > Can you please take it on priority as the test failure is blocking our work
> > > on the package.
> > 
> > Please provide the following:
> > 
> > 1. the value of the calculated hash on both big and little endian systems.
> > 2. The 'getfattr -e hex -m . -d ' output for the directories on each brick.
> 
> Nithya,
> 
> 1. I am not sure what exactly the hash values are getting calculated as on
> big endian. It'll be really helpful if you could help me with that.
> However, while debugging the test case ./tests/basic/namespace.t, I did
> encounter hashes which were getting differently calculated for big endian
> systems.These values are as follows(for big endian):-
> NAMESPACE_HASH=3253352021
> NAMESPACE2_HASH=458775276
> NAMESPACE3_HASH=1268089390

What are NAMESPACE(2,3)? I would need the name of the file and the hash calculated for the same on the big endian system.

In this case, the file is rebal2. Please provide the hash calculated on the big endian system.

On my x86 system, I get the following which would mean the rebalance would have migrated it to patchy3.

./hashcompute rebal2
Name = rebal2, hash = 244726431, (hex = 0x0e963a9f)

Comment 7 abhays 2018-11-12 05:25:19 UTC
Created attachment 1504554 [details]
patchy3.log for s390x after running the ./tests/basic/namespace.t test case

Comment 8 abhays 2018-11-12 05:30:54 UTC
(In reply to Nithya Balachandran from comment #6)
> (In reply to abhays from comment #4)
> > (In reply to Nithya Balachandran from comment #3)
> > > (In reply to abhays from comment #2)
> > > > Hi,
> > > > 
> > > > Any updates on the issue??
> > > > Please let us know if any additional info is needed from our side.
> > > > Can you please take it on priority as the test failure is blocking our work
> > > > on the package.
> > > 
> > > Please provide the following:
> > > 
> > > 1. the value of the calculated hash on both big and little endian systems.
> > > 2. The 'getfattr -e hex -m . -d ' output for the directories on each brick.
> > 
> > Nithya,
> > 
> > 1. I am not sure what exactly the hash values are getting calculated as on
> > big endian. It'll be really helpful if you could help me with that.
> > However, while debugging the test case ./tests/basic/namespace.t, I did
> > encounter hashes which were getting differently calculated for big endian
> > systems.These values are as follows(for big endian):-
> > NAMESPACE_HASH=3253352021
> > NAMESPACE2_HASH=458775276
> > NAMESPACE3_HASH=1268089390
> 
> What are NAMESPACE(2,3)? I would need the name of the file and the hash
> calculated for the same on the big endian system.
> 
> In this case, the file is rebal2. Please provide the hash calculated on the
> big endian system.
> 
> On my x86 system, I get the following which would mean the rebalance would
> have migrated it to patchy3.
> 
> ./hashcompute rebal2
> Name = rebal2, hash = 244726431, (hex = 0x0e963a9f)

Thanks for the reply Nithya.

I am not sure how the hash value is calculated for the folder.
The NAMESPACE_HASH, NAMESPACE2_HASH, and NAMESPACE3_HASH values were verified from the logs generated on the big-endian system at the path "/var/log/glusterfs/bricks/d-backends-$BRICK.log" after running the ./tests/basic/namespace.t test case.

PFA the patchy3 logs.

Also, I couldn't find the script you mentioned for computing the hash anywhere, i.e. ./hashcompute rebal2.
Could you please provide any other way in which I can do the same.
Thanks.

Comment 9 Nithya Balachandran 2018-11-12 05:38:52 UTC
The bug was filed against ./tests/features/trash.t. It looks like you are providing information from another test.

Comment 10 abhays 2018-11-12 05:45:32 UTC
Yes, I am providing information from another test because I was able to find the hash values from that test itself. 
However, for the /tests/features/trash.t test case, I am not able to do the same. Additionally, I have provided the logs (patchy-rebalance.log for s390x), which are generated after the /tests/features/trash.t test case is run.

Comment 11 Nithya Balachandran 2018-11-12 05:50:27 UTC
> Could you please provide any other way in which I can do the same.
This script is one I wrote for this purpose; it is not published.
You can add a message in dht_layout_search to print the name and the hash.
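
A minimal sketch of such a message, assuming the filename is available as `name` and the computed hash in a local variable `hash` inside dht_layout_search() (the variable names are assumptions; adjust to the actual code in your tree):

    /* Inside dht_layout_search() in xlators/cluster/dht/src/dht-layout.c,
     * after the hash has been computed for `name` (variable names assumed). */
    gf_log(this->name, GF_LOG_INFO,
           "dht_layout_search: name = %s, hash = %u (0x%08x)",
           name, hash, hash);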
 

(In reply to abhays from comment #10)
> Yes, I am providing information from another test because I was able to find
> the hash values from that test itself. 
> However, for the /tests/features/trash.t test case, I am not able to do the
> same. Additionally, I have provided the logs (patchy-rebalance.log for
> s390x), which are generated after the /tests/features/trash.t test case is
> run.

Please update the code as described above and provide the details for trash.t

Comment 12 Nithya Balachandran 2018-11-12 06:36:09 UTC
Created attachment 1504577 [details]
Code to calculate the hash for a filename

Comment 13 abhays 2018-11-12 07:07:40 UTC
Hi,
Thanks for the code you provided.
The following are the hash values I am getting:
./hashcompute /d/backends/patchy1/rebal2
Name = /d/backends/patchy1/rebal2, hash = 3197287436, (hex = 0xbe92bc0c)

./hashcompute /d/backends/patchy1
Name = /d/backends/patchy1, hash = 269829764, (hex = 0x10154684)

./hashcompute /d/backends/patchy3
Name = /d/backends/patchy3, hash = 1798194767, (hex = 0x6b2e464f)


Note: Since rebal2 isn't getting migrated from patchy1 to patchy3, I have provided the hash values accordingly.

Comment 14 Nithya Balachandran 2018-11-12 07:24:47 UTC
(In reply to abhays from comment #13)
> Hi,
> Thanks for the code you provided.
> The following are the hash values I am getting:-
> ./hashcompute /d/backends/patchy1/rebal2
> Name = /d/backends/patchy1/rebal2, hash = 3197287436, (hex = 0xbe92bc0c)
> 
> ./hashcompute /d/backends/patchy1
> Name = /d/backends/patchy1, hash = 269829764, (hex = 0x10154684)
> 
> ./hashcompute /d/backends/patchy3
> Name = /d/backends/patchy3, hash = 1798194767, (hex = 0x6b2e464f)
> 
> 
> Note:-Since rebal2 isn't getting migrated from patchy1 to patchy3, I have
> provided the hash values accordingly.

The way DHT works is as follows:
Each directory is created on each brick.
A layout spanning 0x00000000 to 0xffffffff is divided between the bricks.
The directory on each brick is assigned a layout, i.e. a hash range, which is stored in the xattr trusted.glusterfs.dht.

For example:
trusted.glusterfs.dht=0xdc7d1d1100000000[00000000][7ffffffe]

(I have used [] to demarcate the start and stop of the range assigned as per the xattr.)

To decide where to place a file on the volume, DHT calculates the hash of the filename and finds the brick on which the layout of the parent directory contains the range into which the hash falls.

I only need the hash value of rebal2 and the xattrs on the brick roots after the rebalance is run.

Can you run hashcompute on an x86 system and let me know the value for rebal2 as well?
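
To make the placement rule above concrete, here is a small self-contained sketch (not GlusterFS code; the struct and function names are invented for the example) of matching a filename hash against per-brick layout ranges:

    /* Illustration only: pick the brick whose assigned [start, stop] range
     * contains the filename hash. In GlusterFS this lookup is done by the
     * dht xlator; the types and names below are made up for the example. */
    #include <stdint.h>
    #include <stdio.h>

    struct layout_range {
        const char *brick;
        uint32_t start;
        uint32_t stop;
    };

    static const char *
    pick_brick(const struct layout_range *ranges, int count, uint32_t hash)
    {
        for (int i = 0; i < count; i++) {
            if (hash >= ranges[i].start && hash <= ranges[i].stop)
                return ranges[i].brick;
        }
        return NULL;  /* hole or overlap in the layout */
    }

    int
    main(void)
    {
        /* Ranges as reported in the trusted.glusterfs.dht xattrs earlier in this bug. */
        struct layout_range ranges[] = {
            { "patchy1", 0x7fffffff, 0xffffffff },
            { "patchy3", 0x00000000, 0x7ffffffe },
        };
        uint32_t hash = 0x0e963a9f;  /* rebal2 on x86, per comment 6 */

        printf("rebal2 -> %s\n", pick_brick(ranges, 2, hash));
        return 0;
    }

This prints "rebal2 -> patchy3", matching the expectation in comment 6 that the rebalance would migrate rebal2 to patchy3 on x86.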

Comment 15 abhays 2018-11-12 08:41:59 UTC
Hi,

Thanks for the explanation.
The following are the values you asked for (on x86):
 ./hashcompute /d/backends/patchy3/rebal2
Name = /d/backends/patchy3/rebal2, hash = 2043700665, (hex = 0x79d065b9)

./hascompute /d/backends/patchy1
Name = /d/backends/patchy1, hash = 3232696951, (hex = 0xc0af0a77)

# getfattr -e hex -m . -d /d/backends/patchy1
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0xdc7d6904000000007fffffffffffffff
trusted.glusterfs.dht.commithash=0x3336393932303634303400
trusted.glusterfs.mdata=0x010000000000000000000000005b8fad18000000000b06afe3000000005b8fad18000000000b06afe3000000005b8fad0f0000000018d9f083
trusted.glusterfs.volume-id=0xfbf341bbd3214da09d45edfb4e6de998

# getfattr -e hex -m . -d /d/backends/patchy3
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy3
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0xdc7d690400000000000000007ffffffe
trusted.glusterfs.dht.commithash=0x3336393932303634303400
trusted.glusterfs.mdata=0x010000000000000000000000005b8fad34000000001047f865000000005b8fad25000000002fbf51d8000000005b8fad250000000034c110c4
trusted.glusterfs.volume-id=0xfbf341bbd3214da09d45edfb4e6de998

Comment 16 Nithya Balachandran 2018-11-12 11:10:25 UTC
(In reply to abhays from comment #15)
> Hi,
> 
> Thanks for the explanation.
> The following are the values you asked for(on x86):-
>  ./hashcompute /d/backends/patchy3/rebal2
> Name = /d/backends/patchy3/rebal2, hash = 2043700665, (hex = 0x79d065b9)
> 
> ./hascompute /d/backends/patchy1
> Name = /d/backends/patchy1, hash = 3232696951, (hex = 0xc0af0a77)
> 
> # getfattr -e hex -m . -d /d/backends/patchy1
> getfattr: Removing leading '/' from absolute path names
> # file: d/backends/patchy1
> trusted.gfid=0x00000000000000000000000000000001
> trusted.glusterfs.dht=0xdc7d6904000000007fffffffffffffff
> trusted.glusterfs.dht.commithash=0x3336393932303634303400
> trusted.glusterfs.
> mdata=0x010000000000000000000000005b8fad18000000000b06afe3000000005b8fad18000
> 000000b06afe3000000005b8fad0f0000000018d9f083
> trusted.glusterfs.volume-id=0xfbf341bbd3214da09d45edfb4e6de998
> 
> # getfattr -e hex -m . -d /d/backends/patchy3
> getfattr: Removing leading '/' from absolute path names
> # file: d/backends/patchy3
> trusted.gfid=0x00000000000000000000000000000001
> trusted.glusterfs.dht=0xdc7d690400000000000000007ffffffe
> trusted.glusterfs.dht.commithash=0x3336393932303634303400
> trusted.glusterfs.
> mdata=0x010000000000000000000000005b8fad34000000001047f865000000005b8fad25000
> 000002fbf51d8000000005b8fad250000000034c110c4
> trusted.glusterfs.volume-id=0xfbf341bbd3214da09d45edfb4e6de998




Sorry, I meant to say run it on s390x. 

The hash only needs to be calculated on the filename (not the file path). 
So 
./hashcompute rebal1
./hashcompute rebal2

Can you provide the info requested (hash and xattrs) for both x86 and s390x?

Comment 17 abhays 2018-11-12 11:16:44 UTC
(In reply to Nithya Balachandran from comment #16)
> (In reply to abhays from comment #15)
> > Hi,
> > 
> > Thanks for the explanation.
> > The following are the values you asked for(on x86):-
> >  ./hashcompute /d/backends/patchy3/rebal2
> > Name = /d/backends/patchy3/rebal2, hash = 2043700665, (hex = 0x79d065b9)
> > 
> > ./hascompute /d/backends/patchy1
> > Name = /d/backends/patchy1, hash = 3232696951, (hex = 0xc0af0a77)
> > 
> > # getfattr -e hex -m . -d /d/backends/patchy1
> > getfattr: Removing leading '/' from absolute path names
> > # file: d/backends/patchy1
> > trusted.gfid=0x00000000000000000000000000000001
> > trusted.glusterfs.dht=0xdc7d6904000000007fffffffffffffff
> > trusted.glusterfs.dht.commithash=0x3336393932303634303400
> > trusted.glusterfs.
> > mdata=0x010000000000000000000000005b8fad18000000000b06afe3000000005b8fad18000
> > 000000b06afe3000000005b8fad0f0000000018d9f083
> > trusted.glusterfs.volume-id=0xfbf341bbd3214da09d45edfb4e6de998
> > 
> > # getfattr -e hex -m . -d /d/backends/patchy3
> > getfattr: Removing leading '/' from absolute path names
> > # file: d/backends/patchy3
> > trusted.gfid=0x00000000000000000000000000000001
> > trusted.glusterfs.dht=0xdc7d690400000000000000007ffffffe
> > trusted.glusterfs.dht.commithash=0x3336393932303634303400
> > trusted.glusterfs.
> > mdata=0x010000000000000000000000005b8fad34000000001047f865000000005b8fad25000
> > 000002fbf51d8000000005b8fad250000000034c110c4
> > trusted.glusterfs.volume-id=0xfbf341bbd3214da09d45edfb4e6de998
> 
> 
> 
> 
> Sorry, I meant to say run it on s390x. 
> 
> The hash only needs to be calculated on the filename (not the file path). 
> So 
> ./hashcompute rebal1
> ./hashcompute rebal2
> 
> Can you provide the info requested (hash and xattrs) for both x86 and s390x?

Hi,

Following is the info you need:
On s390x:
./hashcompute /d/backends/patchy1/rebal1
Name = /d/backends/patchy1/rebal1, hash = 2070308055, (hex = 0x7b6664d7)

./hashcompute /d/backends/patchy1/rebal2
Name = /d/backends/patchy1/rebal2, hash = 3197287436, (hex = 0xbe92bc0c)

# getfattr -e hex -m . -d /d/backends/patchy1
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a64656661756c745f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0xdf49203a000000007fffffffffffffff
trusted.glusterfs.dht.commithash=0x3337343631313135343600
trusted.glusterfs.volume-id=0x9657838b022a433f99f349a4a393b7c8

# getfattr -e hex -m . -d /d/backends/patchy3
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy3
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a64656661756c745f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0xdf49203a00000000000000007ffffffe
trusted.glusterfs.dht.commithash=0x3337343631313135343600
trusted.glusterfs.volume-id=0x9657838b022a433f99f349a4a393b7c8

On x86:
./hascompute /d/backends/patchy1/rebal1
Name = /d/backends/patchy1/rebal1, hash = 1162779124, (hex = 0x454e99f4)

./hashcompute /d/backends/patchy3/rebal2
Name = /d/backends/patchy3/rebal2, hash = 2043700665, (hex = 0x79d065b9)

# getfattr -e hex -m . -d /d/backends/patchy1
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0xdc7d6904000000007fffffffffffffff
trusted.glusterfs.dht.commithash=0x3336393932303634303400
trusted.glusterfs.mdata=0x010000000000000000000000005b8fad18000000000b06afe3000000005b8fad18000000000b06afe3000000005b8fad0f0000000018d9f083
trusted.glusterfs.volume-id=0xfbf341bbd3214da09d45edfb4e6de998

# getfattr -e hex -m . -d /d/backends/patchy3
getfattr: Removing leading '/' from absolute path names
# file: d/backends/patchy3
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0xdc7d690400000000000000007ffffffe
trusted.glusterfs.dht.commithash=0x3336393932303634303400
trusted.glusterfs.mdata=0x010000000000000000000000005b8fad34000000001047f865000000005b8fad25000000002fbf51d8000000005b8fad250000000034c110c4
trusted.glusterfs.volume-id=0xfbf341bbd3214da09d45edfb4e6de998

Comment 18 Nithya Balachandran 2018-11-12 11:20:39 UTC
> 
> On x86:-
> ./hascompute /d/backends/patchy1/rebal1
> Name = /d/backends/patchy1/rebal1, hash = 1162779124, (hex = 0x454e99f4)

This needs to be run only on the filename, without the path (see the previous comment).




Are the xattrs the ones on the bricks after trash.t was run?

Comment 19 abhays 2018-11-12 12:02:30 UTC
(In reply to Nithya Balachandran from comment #18)
> > 
> > On x86:-
> > ./hascompute /d/backends/patchy1/rebal1
> > Name = /d/backends/patchy1/rebal1, hash = 1162779124, (hex = 0x454e99f4)
> 
> This is required to be run only on the filename without the path (see
> previous comment)
> 
> 
> 
> 
> Are the xattrs the ones on the bricks after trash.t was run?


Yes, the xattrs are the ones generated after trash.t was run.
And the hash values are as follows:
On s390x:
./hashcompute rebal1
Name = rebal1, hash = 4114444863, (hex = 0xf53d723f)

./hashcompute rebal2
Name = rebal2, hash = 2825265154, (hex = 0xa8662002)

On x86:
./hascompute rebal1
Name = rebal1, hash = 3310016104, (hex = 0xc54ad668)

./hashcompute rebal2
Name = rebal2, hash = 244726431, (hex = 0x0e963a9f)

Comment 20 Nithya Balachandran 2018-11-12 12:11:14 UTC
Thank you. I will take a look and get back to you.

Comment 21 Nithya Balachandran 2019-02-13 09:00:03 UTC
Please provide the hashes of the filenames rebal1 to rebal10 on big-endian. I will change the filename in the test once we figure out a name that works on both architectures.

Comment 22 Nithya Balachandran 2019-02-13 09:54:17 UTC
Created attachment 1534338 [details]
Modified trash.t

Please see if this works for you.

Comment 23 abhays 2019-02-14 12:18:16 UTC
(In reply to Nithya Balachandran from comment #22)
> Created attachment 1534338 [details]
> Modified trash.t
> 
> Please see if this works for you.

No, it doesn't.
=========================
TEST 41 (line 210): Y wildcard_exists /d/backends/patchy1/.trashcan/internal_op/*
not ok 41 Got "N" instead of "Y", LINENUM:210
RESULT 41: 1
=========================
=========================
TEST 65 (line 264): Y wildcard_exists /d/backends/patchy1/abc/internal_op/rebal*
not ok 65 Got "N" instead of "Y", LINENUM:264
RESULT 65: 1
rm: cannot remove '/mnt/glusterfs/0/abc/internal_op': Operation not permitted
=========================
Failed 2/68 subtests

Test Summary Report
-------------------
./tests/features/trash.t (Wstat: 0 Tests: 68 Failed: 2)
  Failed tests:  41, 65
Files=1, Tests=68, 86 wallclock secs ( 0.09 usr  0.01 sys + 16.29 cusr  2.30 csys = 18.69 CPU)
Result: FAIL
End of test ./tests/features/trash.t
================================================================================


PFA the logs for the same.

Comment 24 abhays 2019-02-14 12:19:12 UTC
Created attachment 1534784 [details]
trash_logs

Comment 25 abhays 2019-07-24 05:40:06 UTC
@Jiffin, any updates on this bug?

Comment 26 Jiffin 2019-07-24 06:05:46 UTC
Actually, Nithya was looking into this issue, so I am reassigning the needinfo to her. For the time being, I am changing the assignee as well.

Comment 27 Nithya Balachandran 2019-07-24 06:59:16 UTC
(In reply to Jiffin from comment #26)
> Actually Nithya was looking into this issue, reassigning needinfo on her.
> For the time being, changing the assigne as well

I had asked Amar to find someone else to look into this quite some time ago as I would no longer have the time. Reassigning to Amar.

Comment 29 Worker Ant 2020-03-12 12:33:21 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/903 and will be tracked there from now on. Visit the GitHub issue URL for further details.