Bug 1233805 - [Backup]: Glusterfind pre after add-brick not successfully logging namespace changes in the newly added brick
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfind
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1223636
 
Reported: 2015-06-19 13:31 UTC by Sweta Anandpara
Modified: 2018-04-16 03:03 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-16 03:03:52 UTC
Embargoed:



Description Sweta Anandpara 2015-06-19 13:31:41 UTC
Description of problem:
In a 4-node cluster with a 4*2 distributed-replicate volume 'ozone', removed the replica brick pair residing on node3 and node4, making the volume 3*2. Rebalance ran successfully, with all files moving to the remaining brick pairs; verified at the backend that no remnant files were left. Deleted the directory entries on the removed bricks, then added the brick pair back to the volume 'ozone'.

After adding the bricks, ran rebalance manually, which completed successfully. Created a couple of files at the NFS/FUSE mountpoint, then ran glusterfind pre on a pre-existing session as well as on a newly created session. Neither session logged the NEW entries for 'file1' and 'file2'.

Created another file 'file3' and performed data/metadata operations on a couple of pre-existing files, and those were successfully recorded.

When checked at the backend, 'file3' had been created on a pre-existing brick, which is why it was logged in the outfile. 'file1' and 'file2' had been created on the newly added bricks, and those were the entries missing from the output file.
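For reference, the glusterfind outfile entries shown in the transcripts below (e.g. 'MODIFY newdir1%2Fdir2%2Fa') percent-encode '/' within paths. A minimal sketch of decoding such entries, assuming standard URL percent-encoding (the helper name parse_outfile_line is hypothetical, not part of glusterfind):

```python
from urllib.parse import unquote

def parse_outfile_line(line):
    """Split a glusterfind outfile entry into (operation, decoded path).

    Entries look like 'MODIFY newdir1%2Fdir2%2Fa'; the path portion
    percent-encodes '/', which unquote() restores.
    """
    op, _, encoded = line.strip().partition(" ")
    return op, unquote(encoded)

print(parse_outfile_line("MODIFY newdir1%2Fdir2%2Fa"))  # ('MODIFY', 'newdir1/dir2/a')
print(parse_outfile_line("NEW file3"))                  # ('NEW', 'file3')
```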

Version-Release number of selected component (if applicable):
glusterfs-3.7.1-4.el6rhs.x86_64

How reproducible: 1:1


Additional info:


#############         SERVER       ##############

#####    NODE 1


[root@dhcp43-191 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 9ef1ace8-505d-4d97-aa23-4296aa685f76
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.43.191:/rhs/thinbrick2/ozone
Brick4: 10.70.42.202:/rhs/thinbrick2/ozone
Brick5: 10.70.42.30:/rhs/thinbrick1/ozone
Brick6: 10.70.42.147:/rhs/thinbrick1/ozone
Brick7: 10.70.42.30:/rhs/thinbrick2/ozone
Brick8: 10.70.42.147:/rhs/thinbrick2/ozone
Options Reconfigured:
performance.readdir-ahead: on
storage.build-pgfid: on
changelog.changelog: on
changelog.capture-del-path: on
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v ^C
[root@dhcp43-191 ~]# gluster v remove-brick ozone replica 2 10.70.42.30:/rhs/thinbrick2/ozone 10.70.42.147:/rhs/thinbrick2/ozone start
volume remove-brick start: success
ID: 54455b81-cf52-4338-846b-7f2b8faefcac
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v remove-brick ozone replica 2 10.70.42.30:/rhs/thinbrick2/ozone 10.70.42.147:/rhs/thinbrick2/ozone status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                             10.70.42.30              186       361.1KB           389             0             0          in progress              10.00
                            10.70.42.147                0        0Bytes             0             0             0          in progress              11.00
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v remove-brick ozone replica 2 10.70.42.30:/rhs/thinbrick2/ozone 10.70.42.147:/rhs/thinbrick2/ozone status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                             10.70.42.30              286       826.9KB           612             0             0          in progress              20.00
                            10.70.42.147                0        0Bytes             0             0             0            completed              14.00
[root@dhcp43-191 ~]# gluster v remove-brick ozone replica 2 10.70.42.30:/rhs/thinbrick2/ozone 10.70.42.147:/rhs/thinbrick2/ozone status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                             10.70.42.30              393         3.2MB           791             0             0            completed              28.00
                            10.70.42.147                0        0Bytes             0             0             0            completed              14.00
[root@dhcp43-191 ~]# gluster v remove-brick ozone replica 2 10.70.42.30:/rhs/thinbrick2/ozone 10.70.42.147:/rhs/thinbrick2/ozone commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick. 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 9ef1ace8-505d-4d97-aa23-4296aa685f76
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.43.191:/rhs/thinbrick2/ozone
Brick4: 10.70.42.202:/rhs/thinbrick2/ozone
Brick5: 10.70.42.30:/rhs/thinbrick1/ozone
Brick6: 10.70.42.147:/rhs/thinbrick1/ozone
Options Reconfigured:
performance.readdir-ahead: on
storage.build-pgfid: on
changelog.changelog: on
changelog.capture-del-path: on
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-19 22:37:28      
sesso2                    ozone                     2015-06-19 22:44:40      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# glusterfind create sesso5 ozone
Session sesso5 created with volume ozone
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-19 22:37:28      
sesso5                    ozone                     2015-06-20 00:07:50      
sesso2                    ozone                     2015-06-19 22:44:40      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind pre sesso5 ozone /tmp/outo5.txt
10.70.43.191 - pre failed: [2015-06-19 18:37:55.934843] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-19 18:37:55.935459] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.43.191 - pre failed: /rhs/thinbrick1/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.30 - pre failed: [2015-06-19 18:37:56.764957] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-19 18:37:56.765241] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-06-19 18:37:56.765801] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
/rhs/thinbrick1/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.202 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.202 - pre failed: [2015-06-19 18:37:57.374882] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
/rhs/thinbrick1/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: [2015-06-19 18:37:56.994915] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
/rhs/thinbrick1/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo5.txt
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# ls -l /tmp/outo5.txt 
-rw-r--r--. 1 root root 0 Jun 20 00:07 /tmp/outo5.txt
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v add-brick ozone replica 2 10.70.42.30:/rhs/thinbrick2/ozone 10.70.42.147:/rhs/thinbrick2/ozone 
volume add-brick: success
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info
 
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: ced3ec30-654b-4bf5-956b-9e99bc51d445
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/brick1/gss
Brick2: 10.70.42.202:/rhs/brick1/gss
Brick3: 10.70.42.30:/rhs/brick1/gss
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 9ef1ace8-505d-4d97-aa23-4296aa685f76
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.43.191:/rhs/thinbrick2/ozone
Brick4: 10.70.42.202:/rhs/thinbrick2/ozone
Brick5: 10.70.42.30:/rhs/thinbrick1/ozone
Brick6: 10.70.42.147:/rhs/thinbrick1/ozone
Brick7: 10.70.42.30:/rhs/thinbrick2/ozone
Brick8: 10.70.42.147:/rhs/thinbrick2/ozone
Options Reconfigured:
performance.readdir-ahead: on
storage.build-pgfid: on
changelog.changelog: on
changelog.capture-del-path: on
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v rebalance ozone
Usage: volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}
[root@dhcp43-191 ~]# gluster v rebalance ozone start
volume rebalance: ozone: success: Rebalance on ozone has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 9b0a88a2-a3db-409d-bb86-77369f5def91

[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v rebalance ozone status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost              108       129.1KB           375             0             2          in progress               8.00
                            10.70.42.202                0        0Bytes             0             0             0          in progress               8.00
                             10.70.42.30               43       148.9KB           241             0             0          in progress               7.00
                            10.70.42.147                0        0Bytes             0             0             0          in progress               7.00
volume rebalance: ozone: success: 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v rebalance ozone status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost              278         2.4MB          1044             1            15            completed              30.00
                            10.70.42.202                0        0Bytes             0             0             0            completed              29.00
                             10.70.42.30              115       842.4KB           590             0             0            completed              28.00
                            10.70.42.147                0        0Bytes             0             0             0            completed              28.00
volume rebalance: ozone: success: 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-19 22:37:28      
sesso5                    ozone                     2015-06-20 00:07:50      
sesso2                    ozone                     2015-06-19 22:44:40      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# glusterfind post sesso5 ozone
Session sesso5 with volume ozone updated
[root@dhcp43-191 ~]# glusterfind pre sesso5 ozone /tmp/outo5.txt
10.70.42.30 - pre failed: [2015-06-19 18:41:11.435721] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-06-19 18:41:11.436424] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
[2015-06-19 18:41:11.437221] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-19 18:41:11.437373] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 4
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: [2015-06-19 18:41:11.563684] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo5.txt
[root@dhcp43-191 ~]# ls -l /tmp/outo5.txt 
-rw-r--r--. 1 root root 0 Jun 20 00:11 /tmp/outo5.txt
[root@dhcp43-191 ~]# glusterfind pre sesso5 ozone /tmp/outo5.txt --regenerate-outfile
10.70.42.30 - pre failed: [2015-06-19 18:41:34.474530] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo5.txt
[root@dhcp43-191 ~]# ls -l /tmp/outo5.txt 
-rw-r--r--. 1 root root 0 Jun 20 00:11 /tmp/outo5.txt
[root@dhcp43-191 ~]# cat /tmp/outo
outo1.txt  outo2.txt  outo5.txt  
[root@dhcp43-191 ~]# cat /tmp/outo5.txt 
[root@dhcp43-191 ~]# date
Sat Jun 20 00:12:05 IST 2015
[root@dhcp43-191 ~]# glusterfind pre sesso5 ozone /tmp/outo5.txt --regenerate-outfile
10.70.42.30 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo5.txt
[root@dhcp43-191 ~]# cat /tmp/outo5.txt 
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt 
Post command is not run after last pre, use --regenerate-outfile
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt ^C
[root@dhcp43-191 ~]# glusterfind post sesso1 ozone
Session sesso1 with volume ozone updated
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt 
10.70.42.30 - pre failed: [2015-06-19 18:43:07.983767] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 4
[2015-06-19 18:43:07.984738] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
[2015-06-19 18:43:07.985031] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-06-19 18:43:07.985072] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo1.txt
[root@dhcp43-191 ~]# cat /tmp/outo1.txt 
[root@dhcp43-191 ~]# date
Sat Jun 20 00:14:02 IST 2015
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind pre sesso2 ozone /tmp/outo2.txt 
Post command is not run after last pre, use --regenerate-outfile
[root@dhcp43-191 ~]# glusterfind pre sesso2 ozone /tmp/outo2.txt --regenerate-outfile
10.70.42.30 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: [2015-06-19 18:44:21.375093] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo2.txt
[root@dhcp43-191 ~]# ls -l /tmp/outo2.txt 
-rw-r--r--. 1 root root 46 Jun 20 00:14 /tmp/outo2.txt
[root@dhcp43-191 ~]# cat /tmp/outo2.txt 
MODIFY newdir1%2Fdir2%2Fa 
MODIFY level01%2F 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind post
usage: glusterfind post [-h] [--debug] session volume
glusterfind post: error: too few arguments
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# after doing a few more changes to teh existing files
-bash: after: command not found
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt 
Post command is not run after last pre, use --regenerate-outfile
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt --regenerate-outfile
10.70.42.30 - pre failed: [2015-06-19 18:47:44.492585] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-19 18:47:44.494214] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo1.txt
[root@dhcp43-191 ~]# cat /tmp/outo1.txt 
DELETE a 
DELETE b 
MODIFY level01%2F 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind pre sesso5 ozone /tmp/outo5.txt --regenerate-outfile
10.70.42.30 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: [2015-06-19 18:48:19.484521] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-19 18:48:19.484940] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
[2015-06-19 18:48:19.485062] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 4
[2015-06-19 18:48:19.484891] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo5.txt
[root@dhcp43-191 ~]# cat /tmp/outo5.txt 
DELETE a 
DELETE b 
MODIFY level01%2F 
[root@dhcp43-191 ~]# date
Sat Jun 20 00:18:44 IST 2015
[root@dhcp43-191 ~]# # after creawting a new file
[root@dhcp43-191 ~]# date
Sat Jun 20 00:18:55 IST 2015
[root@dhcp43-191 ~]# date
Sat Jun 20 00:19:21 IST 2015
[root@dhcp43-191 ~]# glusterfind post sesso5 ozone  
Session sesso5 with volume ozone updated
[root@dhcp43-191 ~]# glusterfind pre sesso5 ozone /tmp/outo5.txt 
Generated output file /tmp/outo5.txt
[root@dhcp43-191 ~]# cat /tmp/outo5.txt 
NEW file3 
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt --regenerate-outfile
10.70.42.30 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo1.txt
[root@dhcp43-191 ~]# cat /tmp/outo1.txt 
DELETE a 
DELETE b 
MODIFY level01%2F 
NEW file3 
[root@dhcp43-191 ~]# cd /rhs/thinbrick1/ozone/
5582baeb%%SNRP8ENTE9  file3                 level00/              level02/              newdir1/              
etc/                  .glusterfs/           level01/              level10/              .trashcan/            
[root@dhcp43-191 ~]# cd /rhs/thinbrick2/ozone/
5582ba18%%FTZ39AW4RV.tar.gz  .glusterfs/                  level01/                     level10/                     .trashcan/                   
etc/                         level00/                     level02/                     newdir1/                     
[root@dhcp43-191 ~]# cd /rhs/thinbrick2/ozone/^C
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 9ef1ace8-505d-4d97-aa23-4296aa685f76
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.43.191:/rhs/thinbrick2/ozone
Brick4: 10.70.42.202:/rhs/thinbrick2/ozone
Brick5: 10.70.42.30:/rhs/thinbrick1/ozone
Brick6: 10.70.42.147:/rhs/thinbrick1/ozone
Brick7: 10.70.42.30:/rhs/thinbrick2/ozone
Brick8: 10.70.42.147:/rhs/thinbrick2/ozone
Options Reconfigured:
performance.readdir-ahead: on
storage.build-pgfid: on
changelog.changelog: on
changelog.capture-del-path: on
[root@dhcp43-191 ~]# gluster v rebalance status
Usage: volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}
[root@dhcp43-191 ~]# cat /tmp/outo5.txt ^C
(reverse-i-search)`glus': ^Custer v rebalance status
[root@dhcp43-191 ~]# gluster v rebalance ozone status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost              278         2.4MB          1044             1            15            completed              30.00
                            10.70.42.202                0        0Bytes             0             0             0            completed              29.00
                             10.70.42.30              115       842.4KB           590             0             0            completed              28.00
                            10.70.42.147                0        0Bytes             0             0             0            completed              28.00
volume rebalance: ozone: success: 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 

################          NODE 3         ##################


[root@dhcp42-30 thinbrick2]# find ozone -type f | wc -l
10
[root@dhcp42-30 thinbrick2]# find ozone -type f 
ozone/.glusterfs/indices/xattrop/xattrop-91e2c857-ad2d-4d80-aaec-9ae3e201881d
ozone/.glusterfs/changelogs/htime/HTIME.1434735512
ozone/.glusterfs/changelogs/CHANGELOG.1434735708
ozone/.glusterfs/changelogs/CHANGELOG.1434735723
ozone/.glusterfs/changelogs/CHANGELOG.1434735768
ozone/.glusterfs/changelogs/CHANGELOG
ozone/.glusterfs/ozone.db
ozone/.glusterfs/health_check
ozone/.glusterfs/bc/8c/bc8c2154-1df2-4e6d-9b33-ec1424e52dbf
ozone/etc/selinux/targeted/modules/active/policy.kern
[root@dhcp42-30 thinbrick2]# ls -l ozone/etc/selinux/targeted/modules/active/policy.kern
---------T. 2 root root 8080641 Jun 19 23:18 ozone/etc/selinux/targeted/modules/active/policy.kern
[root@dhcp42-30 thinbrick2]# 
[root@dhcp42-30 thinbrick2]# rm -rf ozone
[root@dhcp42-30 thinbrick2]# cd /rhs/thinbrick1/ozone/
etc/        .glusterfs/ level00/    level01/    level02/    level10/    newdir1/    .trashcan/  
[root@dhcp42-30 thinbrick2]# cd /rhs/thinbrick2/ozone/
etc/          file1         file2         .glusterfs/   level00/      level01/      level02/      level10/      level20_sln2  newdir1/      .trashcan/    V6MO_newhdln  
[root@dhcp42-30 thinbrick2]# cd /rhs/thinbrick2/ozone/^C
[root@dhcp42-30 thinbrick2]# 


#######################             NODE 4           ###############


[root@dhcp42-147 thinbrick2]# find ozone -type f | wc -l
10
[root@dhcp42-147 thinbrick2]# find ozone -type f 
ozone/.glusterfs/indices/xattrop/xattrop-0ff80e69-5da7-4185-825e-3044ab71a375
ozone/.glusterfs/changelogs/htime/HTIME.1434735513
ozone/.glusterfs/changelogs/CHANGELOG.1434735708
ozone/.glusterfs/changelogs/CHANGELOG.1434735723
ozone/.glusterfs/changelogs/CHANGELOG.1434735768
ozone/.glusterfs/changelogs/CHANGELOG
ozone/.glusterfs/ozone.db
ozone/.glusterfs/health_check
ozone/.glusterfs/bc/8c/bc8c2154-1df2-4e6d-9b33-ec1424e52dbf
ozone/etc/selinux/targeted/modules/active/policy.kern
[root@dhcp42-147 thinbrick2]# ls -l ozone/etc/selinux/targeted/modules/active/policy.kern
---------T. 2 root root 8080641 Jun 19 23:18 ozone/etc/selinux/targeted/modules/active/policy.kern
[root@dhcp42-147 thinbrick2]# 
[root@dhcp42-147 thinbrick2]# 
[root@dhcp42-147 thinbrick2]# rm -rf ozone
[root@dhcp42-147 thinbrick2]# 



############           CLIENT LOGS        #################


#########    FUSE


[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# mount | grep ozone
10.70.42.202:/ozone on /mnt/ozone type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# pwd
/mnt/ozone
[root@dhcp43-71 ozone]# df -k .
Filesystem          1K-blocks   Used Available Use% Mounted on
10.70.42.202:/ozone  33511424 180608  33330816   1% /mnt/ozone
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# echo "hello world" > file1
[root@dhcp43-71 ozone]# ls
5582ba18%%FTZ39AW4RV.tar.gz  5582baeb%%SNRP8ENTE9  a  b  etc  file1  level00  level01  level02  level10  level20_sln2  newdir1  V6MO_newhdln
[root@dhcp43-71 ozone]# echo "hello world" > newdir1/file1
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# ls -lrt
total 333
-rw-r--r--.   1 root root 102728 Jun 18 18:07 5582ba18%%FTZ39AW4RV.tar.gz
-rwxr-xr-x.   1 u1   g1   102400 Jun 18 18:36 V6MO_newhdln
-rw-r--r--.   1 root root 102400 Jun 18 20:47 5582baeb%%SNRP8ENTE9
lrwxrwxrwx.   1 root root      7 Jun 18 21:00 level20_sln2 -> level20
-rw-r--r--.   1 root root      0 Jun 19 15:46 a
-rw-r--r--.   1 root root      0 Jun 19 15:47 b
drwxr-xr-x. 100 root root  16384 Jun 20 00:10 etc
drwxr-xr-x.   2 root root     92 Jun 20 00:10 level00
drwxr--r--.   3 u2   g1     8281 Jun 20 00:10 level01
drwxr-xr-x.   3 root root    148 Jun 20 00:10 level02
drwxr-xr-x.   3 root root    148 Jun 20 00:10 level10
-rw-r--r--.   1 root root     12 Jun 20 00:10 file1
drwxr-xr-x.   3 root root     80 Jun 20 00:10 newdir1
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# ls -l newdir1/
total 2
drwxr-xr-x. 2 root root 32 Jun 20 00:10 dir2
-rw-r--r--. 1 root root 12 Jun 20 00:10 file1
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# touch file2
[root@dhcp43-71 ozone]# ls -lrt
total 333
-rw-r--r--.   1 root root 102728 Jun 18 18:07 5582ba18%%FTZ39AW4RV.tar.gz
-rwxr-xr-x.   1 u1   g1   102400 Jun 18 18:36 V6MO_newhdln
-rw-r--r--.   1 root root 102400 Jun 18 20:47 5582baeb%%SNRP8ENTE9
lrwxrwxrwx.   1 root root      7 Jun 18 21:00 level20_sln2 -> level20
-rw-r--r--.   1 root root      0 Jun 19 15:46 a
-rw-r--r--.   1 root root      0 Jun 19 15:47 b
drwxr-xr-x. 100 root root  16384 Jun 20 00:10 etc
drwxr-xr-x.   2 root root     92 Jun 20 00:10 level00
drwxr--r--.   3 u2   g1     8281 Jun 20 00:10 level01
drwxr-xr-x.   3 root root    148 Jun 20 00:10 level02
drwxr-xr-x.   3 root root    148 Jun 20 00:10 level10
-rw-r--r--.   1 root root     12 Jun 20 00:10 file1
drwxr-xr-x.   3 root root     80 Jun 20 00:10 newdir1
-rw-r--r--.   1 root root      0 Jun 20 00:13 file2
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# echo "hello owrld" >> file3
[root@dhcp43-71 ozone]# 



###########   NFS

[root@dhcp43-59 ozone2]# 
[root@dhcp43-59 ozone2]# ls -lrt
total 308
-rw-r--r--.   1 root root 102728 Jun 18 18:07 5582ba18%%FTZ39AW4RV.tar.gz
-rwxr-xr-x.   1  501  500 102400 Jun 18 18:36 V6MO_newhdln
-rw-r--r--.   1 root root 102400 Jun 18 20:47 5582baeb%%SNRP8ENTE9
lrwxrwxrwx.   1 root root      7 Jun 18 21:00 level20_sln2 -> level20
-rw-r--r--.   1 root root      0 Jun 19 15:46 a
-rw-r--r--.   1 root root      0 Jun 19 15:47 b
drwxr-xr-x.   3 root root     20 Jun 19 18:04 level02
drwxr-xr-x.   3 root root     17 Jun 19 22:35 newdir1
drwxr-xr-x. 100 root root   4096 Jun 20 00:10 etc
drwxr--r--.   3  502  500     54 Jun 20 00:10 level01
drwxr-xr-x.   2 root root      6 Jun 20 00:10 level00
drwxr-xr-x.   3 root root     20 Jun 20 00:10 level10
-rw-r--r--.   1 root root     12 Jun 20 00:10 file1
-rw-r--r--.   1 root root      0 Jun 20 00:13 file2
[root@dhcp43-59 ozone2]# 
[root@dhcp43-59 ozone2]# 
[root@dhcp43-59 ozone2]# rm a
rm: remove regular empty file `a'? y
[root@dhcp43-59 ozone2]# rm b
rm: remove regular empty file `b'? y
[root@dhcp43-59 ozone2]# 
[root@dhcp43-59 ozone2]# chmod 744 level01
[root@dhcp43-59 ozone2]# ls -lrt
total 308
-rw-r--r--.   1 root root 102728 Jun 18 18:07 5582ba18%%FTZ39AW4RV.tar.gz
-rwxr-xr-x.   1  501  500 102400 Jun 18 18:36 V6MO_newhdln
-rw-r--r--.   1 root root 102400 Jun 18 20:47 5582baeb%%SNRP8ENTE9
lrwxrwxrwx.   1 root root      7 Jun 18 21:00 level20_sln2 -> level20
drwxr-xr-x.   3 root root     20 Jun 19 18:04 level02
drwxr-xr-x.   3 root root     17 Jun 19 22:35 newdir1
drwxr-xr-x. 100 root root   4096 Jun 20 00:10 etc
drwxr-xr-x.   2 root root      6 Jun 20 00:10 level00
drwxr--r--.   3  502  500     54 Jun 20 00:10 level01
drwxr-xr-x.   3 root root     20 Jun 20 00:10 level10
-rw-r--r--.   1 root root     12 Jun 20 00:10 file1
-rw-r--r--.   1 root root      0 Jun 20 00:13 file2
[root@dhcp43-59 ozone2]#

Comment 2 Sweta Anandpara 2015-06-23 04:45:19 UTC
Sosreports updated at: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1233805/

Comment 4 Amar Tumballi 2018-04-16 03:03:52 UTC
Closing this as WONTFIX since we are not working on this bug, and treating it as a 'TIMEOUT'. Feel free to reopen this bug if the issue still persists and you require a fix.

