Bug 1229664 - [Backup]: Glusterfind create/pre/post/delete prompts for password of the peer node
Summary: [Backup]: Glusterfind create/pre/post/delete prompts for password of the peer node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfind
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Aravinda VK
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On: 1224199
Blocks: 1202842 1223636
 
Reported: 2015-06-09 11:31 UTC by Sweta Anandpara
Modified: 2016-09-17 15:19 UTC (History)
8 users

Fixed In Version: glusterfs-3.7.0-7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 05:00:13 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Sweta Anandpara 2015-06-09 11:31:09 UTC
Description of problem:

While creating a glusterfind session or running the pre command, the user is prompted for the password of the peer node. This behaviour was not seen before; the only change that has happened is the underlying RHEL version, which moved from 6.6 to 6.7. The password is prompted for as many times as there are bricks (of the volume) on the peer node.


Version-Release number of selected component (if applicable):

glusterfs-3.7.0-3.el6rhs.x86_64
glusterfs-3.7.1-1.el6rhs.x86_64


How reproducible: Always


Steps to Reproduce:
1. Have a 2-node cluster and a volume 'ozone'
2. Create a glusterfind session 'sesso1' for 'ozone'
3. Run the pre and post commands
4. Delete the glusterfind session 'sesso1'
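
For reference, a minimal command sequence for these steps (a hypothetical sketch; <node1>/<node2> and the brick paths are placeholders, session and volume names as above):

gluster volume create ozone <node1>:/bricks/brick1/oz <node2>:/bricks/brick1/oz
gluster volume start ozone
glusterfind create sesso1 ozone            # step 2: prompts for the peer node's password
glusterfind pre sesso1 ozone /tmp/out.txt  # step 3: pre
glusterfind post sesso1 ozone              # step 3: post
glusterfind delete sesso1 ozone            # step 4: prompts again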

Actual results:
Steps 2, 3 and 4 prompt for the password of the peer node every time they are run.
The password is prompted for as many times as there are bricks of the volume residing on the peer node.
Running the pre command displays some other unexpected output as well.

Please note that steps 2, 3 and 4 do all succeed eventually (after submitting the correct password multiple times).

Expected results:
Steps 2, 3 and 4 should not require a password and should display the 'command completed successfully' output.
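
One way to manually verify the expected passwordless setup (a sketch; the key path follows the <session>_<volume>_secret.pem naming visible in the logs below, and <peer-node> is a placeholder):

ssh -i /var/lib/glusterd/glusterfind/sesso1/ozone/sesso1_ozone_secret.pem root@<peer-node> hostname
# should print the peer's hostname without any password prompt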

Additional info:

[root@dhcp43-93 ~]# glusterfind create sesso2 ozone
root.43.155's password: root.43.155's password: 


root.43.155's password: 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------   
sesso2                    ozone                     2015-06-09 18:49:04      
sesso1                    ozone                     2015-06-09 18:45:59      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso2                    ozone                     2015-06-09 18:49:04      
sesso1                    ozone                     2015-06-09 18:45:59      
[root@dhcp43-93 ~]# glusterfind pre sesso1 ozone /tmp/out.txt
root.43.155's password: root.43.155's password: 
The authenticity of host '10.70.43.93 (10.70.43.93)' can't be established.
RSA key fingerprint is 9f:46:28:90:a1:83:19:8c:a9:07:48:14:88:8a:37:19.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.43.93 (10.70.43.93)' can't be established.
RSA key fingerprint is 9f:46:28:90:a1:83:19:8c:a9:07:48:14:88:8a:37:19.
Are you sure you want to continue connecting (yes/no)? yes

root.43.155's password: root.43.155's password: Please type 'yes' or 'no': root.43.93's password: 


root.43.155's password: 
Please type 'yes' or 'no': 
root.43.93's password: 


10.70.43.93 - Copy command failed: Host key verification failed.

root.43.155's password: 

root.43.155's password: 
root.43.155's password: root.43.155's password: 
root.43.155's password: 

root.43.155's password: 
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cat /tmp/out.txt 
NEW test1 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post sesso1 ozone
root.43.155's password: root.43.155's password: 


root.43.155's password: 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind delete sesso2 ozone
root.43.155's password: root.43.155's password: 


root.43.155's password: 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso1                    ozone                     2015-06-09 20:14:02      
[root@dhcp43-93 ~]#

Comment 2 Sweta Anandpara 2015-06-10 04:51:17 UTC
Raising the priority, as the repeated password requests are blocking the automation being developed for this feature.

Comment 3 Aravinda VK 2015-06-10 06:03:37 UTC
If the home directory is badly labeled (SELinux attributes), SSH login is blocked.
The following command fixes the issue:

restorecon -R -v /root

BZ 499343 has more details about this SELinux issue.
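
To confirm the mislabeling before and after the fix, the contexts can be inspected directly (expected types per comment 4 below):

ls -dZ /root/.ssh /root/.ssh/authorized_keys   # shows admin_home_t when mislabeled
restorecon -R -v /root
ls -dZ /root/.ssh                              # should now show ssh_home_t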

Comment 4 Aravinda VK 2015-06-10 06:32:19 UTC
SELinux labels that were restored:

restorecon reset /root/.ssh context unconfined_u:object_r:admin_home_t:s0->unconfined_u:object_r:ssh_home_t:s0
restorecon reset /root/.ssh/authorized_keys context unconfined_u:object_r:admin_home_t:s0->unconfined_u:object_r:ssh_home_t:s0

As per the comment at https://bugzilla.redhat.com/show_bug.cgi?id=499343#c1:

    This home directory is badly labeled.

    restorecon -R -v /home

    Or where ever the home directory is located.


    home_root_t is the label of the /home directory 

    Individual directories under /home should be labeled user_home_dir_t

    The .ssh directory should be labeled user_ssh_home_t
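
A quick way to compare the actual label against what the policy expects (matchpathcon ships with policycoreutils; paths are examples):

ls -dZ /root/.ssh        # actual context
matchpathcon /root/.ssh  # context the policy expects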

Comment 6 Aravinda VK 2015-07-02 07:08:06 UTC
This issue is fixed as part of Geo-rep BZ 1224199.

Comment 7 Aravinda VK 2015-07-02 13:57:40 UTC
Based on Comment 6, moving this bug to ON_QA

Comment 8 Rejy M Cyriac 2015-07-02 13:59:48 UTC
Accepted as Blocker at RHGS 3.1 Blocker BZ Status Check meeting on 02 July 2015

Comment 10 Sweta Anandpara 2015-07-03 09:57:01 UTC
Are you certain that the fixed-in version is the 3.7.0-3 build?

That is the build in which I actually hit the issue, and also the next build, 3.7.1-1 (as mentioned in the description).

I will anyway create a new setup with the latest 3.7.1-7 and try out the same.

Comment 11 Sweta Anandpara 2015-07-13 11:17:56 UTC
Tested and verified this on a fresh RHEL 7.1 setup, with the build 3.7.1-8.

I do see the 'Host key verification failed' error the very *first* time 'glusterfind create' and 'pre' are executed. I do not hit it again afterwards, even for a newly created volume.
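
For anyone hitting the same first-run prompt, pre-populating root's known_hosts on the initiating node should avoid it, e.g. with the peer IPs from this setup:

ssh-keyscan 10.70.42.37 10.70.43.94 >> /root/.ssh/known_hosts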

Moving this BZ to fixed in 3.1 Everglades. Pasted below are the detailed logs:

SERVER
=======

[root@dhcp42-37 ~]# gluster v create ozone replica 2 10.70.42.37:/bricks/brick1/oz 10.70.43.94:/bricks/brick1/oz 10.70.42.37:/bricks/brick2/oz 10.70.43.94:/bricks/brick2/oz
volume create: ozone: success: please start the volume to access data
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 8b0064bf-bd2f-47bf-ae50-a225d2639887
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.37:/bricks/brick1/oz
Brick2: 10.70.43.94:/bricks/brick1/oz
Brick3: 10.70.42.37:/bricks/brick2/oz
Brick4: 10.70.43.94:/bricks/brick2/oz
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp42-37 ~]# gluster v start ozone
volume start: ozone: success
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 8b0064bf-bd2f-47bf-ae50-a225d2639887
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.37:/bricks/brick1/oz
Brick2: 10.70.43.94:/bricks/brick1/oz
Brick3: 10.70.42.37:/bricks/brick2/oz
Brick4: 10.70.43.94:/bricks/brick2/oz
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
No sessions found
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind create so1 ozone
The authenticity of host '10.70.43.94 (10.70.43.94)' can't be established.
ECDSA key fingerprint is d1:f0:8a:5a:a9:e8:23:3d:56:4a:56:f8:dc:d8:68:4e.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.43.94 (10.70.43.94)' can't be established.
ECDSA key fingerprint is d1:f0:8a:5a:a9:e8:23:3d:56:4a:56:f8:dc:d8:68:4e.
Are you sure you want to continue connecting (yes/no)? yes

10.70.43.94 - create failed: Host key verification failed.

Command create failed in 10.70.43.94:/bricks/brick2/oz
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so1                       ozone                     Session Corrupted        
[root@dhcp42-37 ~]# vi /var/log/glusterfs/glusterfind/
cli.log  so1/     
[root@dhcp42-37 ~]# vi /var/log/glusterfs/glusterfind/so1/ozone/cli.log 
[root@dhcp42-37 ~]# vi /var/log/glusterfs/glusterfind/
cli.log  so1/     
[root@dhcp42-37 ~]# vi /var/log/glusterfs/glusterfind/cli.log 
[root@dhcp42-37 ~]# vi /var/log/glusterfs/
bricks/                         cmd_history.log                 glusterfind/                    nfs.log                         
cli.log                         etc-glusterfs-glusterd.vol.log  glustershd.log                  snaps/                          
[root@dhcp42-37 ~]# vi /var/log/glusterfs/glustershd.log 
[root@dhcp42-37 ~]# gluster v status ozone
Status of volume: ozone
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.37:/bricks/brick1/oz         49152     0          Y       4653 
Brick 10.70.43.94:/bricks/brick1/oz         49152     0          Y       4651 
Brick 10.70.42.37:/bricks/brick2/oz         49153     0          Y       4671 
Brick 10.70.43.94:/bricks/brick2/oz         49153     0          Y       4669 
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       4699 
NFS Server on 10.70.43.94                   N/A       N/A        N       N/A  
Self-heal Daemon on 10.70.43.94             N/A       N/A        Y       4697 
 
Task Status of Volume ozone
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp42-37 ~]# vi /var/log/glusterfs/
bricks/                         cmd_history.log                 glusterfind/                    nfs.log                         
cli.log                         etc-glusterfs-glusterd.vol.log  glustershd.log                  snaps/                          
[root@dhcp42-37 ~]# vi /var/log/glusterfs/bricks/bricks-brick
bricks-brick1-oz.log  bricks-brick2-oz.log  
[root@dhcp42-37 ~]# vi /var/log/glusterfs/bricks/bricks-brick1-oz.log 
[root@dhcp42-37 ~]# vi /var/log/glusterfs/bricks/bricks-brick2-oz.log 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so1                       ozone                     Session Corrupted        
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so1                       ozone                     Session Corrupted        
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind pre so1 ozone /tmp/out.txt
Error Opening Session file /var/lib/glusterd/glusterfind/so1/ozone/status: [Errno 2] No such file or directory: '/var/lib/glusterd/glusterfind/so1/ozone/status'
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind delete so1 ozone
root.43.94's password: root.43.94's password: 


root.43.94's password: 
Session so1 with volume ozone deleted
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
No sessions found
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind create so2 ozone
Session so2 created with volume ozone
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so2                       ozone                     2015-07-13 19:50:55      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# ls /var/lib/glusterd/glusterfind/
so2
[root@dhcp42-37 ~]# ls /var/lib/glusterd/glusterfind/so2/
ozone
[root@dhcp42-37 ~]# ls /var/lib/glusterd/glusterfind/so2/ozone
%2Fbricks%2Fbrick1%2Foz.status  %2Fbricks%2Fbrick2%2Foz.status  so2_ozone_secret.pem  so2_ozone_secret.pem.pub  status
[root@dhcp42-37 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 8b0064bf-bd2f-47bf-ae50-a225d2639887
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.37:/bricks/brick1/oz
Brick2: 10.70.43.94:/bricks/brick1/oz
Brick3: 10.70.42.37:/bricks/brick2/oz
Brick4: 10.70.43.94:/bricks/brick2/oz
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  so1/     so2/     
[root@dhcp42-37 ~]# ls /var/log/glusterfs/glusterfind/so2/ozone/cli.log 
/var/log/glusterfs/glusterfind/so2/ozone/cli.log
[root@dhcp42-37 ~]# cat /var/log/glusterfs/glusterfind/so2/ozone/cli.log 
[2015-07-13 19:50:39,774] INFO [main - 285:ssh_setup] - Ssh key generated /var/lib/glusterd/glusterfind/so2/ozone/so2_ozone_secret.pem
[2015-07-13 19:50:39,818] INFO [main - 307:ssh_setup] - Distributed ssh key to all nodes of Volume
[2015-07-13 19:50:39,948] INFO [main - 320:ssh_setup] - Ssh key added to authorized_keys of Volume nodes
[2015-07-13 19:50:40,261] INFO [main - 355:mode_create] - Volume option set ozone, build-pgfid on
[2015-07-13 19:50:40,519] INFO [main - 362:mode_create] - Volume option set ozone, changelog.changelog on
[2015-07-13 19:50:40,792] INFO [main - 369:mode_create] - Volume option set ozone, changelog.capture-del-path on
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# # after creating 3-4 different new files/dirs
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind pre so1 ozone /tmp/out1
Invalid session so1
[root@dhcp42-37 ~]# glusterfind pre so1 ozonee /tmp/out1
Invalid session so1
[root@dhcp42-37 ~]# glusterfind pre so2 ozonee /tmp/out1
Session so2 not created with volume ozonee
[root@dhcp42-37 ~]# glusterfind pre so2 ozone /tmp/out2
The authenticity of host '10.70.42.37 (10.70.42.37)' can't be established.
ECDSA key fingerprint is b5:12:8b:10:2c:50:f0:f7:62:94:cc:3d:cb:0f:e4:32.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.42.37 (10.70.42.37)' can't be established.
ECDSA key fingerprint is b5:12:8b:10:2c:50:f0:f7:62:94:cc:3d:cb:0f:e4:32.
Are you sure you want to continue connecting (yes/no)? yes

10.70.42.37 - Copy command failed: Host key verification failed.

Generated output file /tmp/out2
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so2                       ozone                     2015-07-13 19:50:55      
[root@dhcp42-37 ~]# cat /tmp/out2
MODIFY .trashcan%2F 
NEW test1 
NEW test2 
NEW dir1 
NEW dir1%2F%2Fdir2 
NEW dir1%2Fdir2%2F%2Fdir3 
NEW dir1%2Fdir2%2Fdir3%2F%2Fdir4 
NEW dir1%2Fdir2%2Fdir3%2F%2Ftest3 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind pre so2 ozone /tmp/out2
Post command is not run after last pre, use --regenerate-outfile
[root@dhcp42-37 ~]# glusterfind pre so2 ozone /tmp/out2 --regenerate-outfile
Generated output file /tmp/out2
[root@dhcp42-37 ~]# cat /tmp/out2
MODIFY .trashcan%2F 
NEW test1 
NEW test2 
NEW dir1 
NEW dir1%2F%2Fdir2 
NEW dir1%2Fdir2%2F%2Fdir3 
NEW dir1%2Fdir2%2Fdir3%2F%2Fdir4 
NEW dir1%2Fdir2%2Fdir3%2F%2Ftest3 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# # after creating a link and deleting 2 dirs
[root@dhcp42-37 ~]# glusterfind post so2 ozonee
Session so2 not created with volume ozonee
[root@dhcp42-37 ~]# glusterfind post so2 ozone
Session so2 with volume ozone updated
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so2                       ozone                     2015-07-13 19:54:01      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind pre so2 ozone /tmp/out2 
Generated output file /tmp/out2
[root@dhcp42-37 ~]# cat /tmp/out2
NEW dir1%2Fdir2%2F%2Ftest1_link 
DELETE dir1%2Fdir2%2Fdir3%2Fdir4 
DELETE dir1%2Fdir2%2Fdir3%2Ftest3 
DELETE dir1%2Fdir2%2F%2Fdir3 
DELETE dir1%2Fdir2%2Fdir3 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind post so2 ozone
Session so2 with volume ozone updated
[root@dhcp42-37 ~]# glusterfind pre so2 ozone /tmp/out2 
Generated output file /tmp/out2
[root@dhcp42-37 ~]# cat /tmp/out2
RENAME test2 dir1%2Fdir2%2F%2Ftest2
RENAME test1 test1new
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind create so3 ozone
Failed to set volume option build-pgfid on: volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again

[root@dhcp42-37 ~]# rpm -qa | grep gluster
glusterfs-libs-3.7.1-8.el7rhgs.x86_64
glusterfs-3.7.1-8.el7rhgs.x86_64
glusterfs-fuse-3.7.1-8.el7rhgs.x86_64
glusterfs-server-3.7.1-8.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-8.el7rhgs.x86_64
glusterfs-api-3.7.1-8.el7rhgs.x86_64
glusterfs-cli-3.7.1-8.el7rhgs.x86_64
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind create so3 ozone
Session so3 created with volume ozone
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind pre so3 ozone /tmp/out3
Generated output file /tmp/out3
[root@dhcp42-37 ~]# cat /tmp/out3
DELETE dir1%2Fdir2%2Ftest1_link 
DELETE dir1%2Fdir2%2Ftest2 
DELETE dir1%2Fdir2 
DELETE dir1 
DELETE test1new 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so2                       ozone                     2015-07-13 19:56:11      
so3                       ozone                     2015-07-13 20:01:50      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v create nash 10.70.42.37:/bricks/brick1/nash 10.70.43.94:/bricks/brick1/nash 10.70.42.37:/bricks/brick2/nash 10.70.43.94:/bricks/brick2/nash
volume create: nash: success: please start the volume to access data
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v info nash
 
Volume Name: nash
Type: Distribute
Volume ID: a32cc427-f508-40dc-bdd5-293e17eaa757
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.37:/bricks/brick1/nash
Brick2: 10.70.43.94:/bricks/brick1/nash
Brick3: 10.70.42.37:/bricks/brick2/nash
Brick4: 10.70.43.94:/bricks/brick2/nash
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so2                       ozone                     2015-07-13 19:56:11      
so3                       ozone                     2015-07-13 20:01:50      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind create sn1 nash
Volume nash is not online
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# gluster v start nash
volume start: nash: success
[root@dhcp42-37 ~]# gluster v info nash
 
Volume Name: nash
Type: Distribute
Volume ID: a32cc427-f508-40dc-bdd5-293e17eaa757
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.37:/bricks/brick1/nash
Brick2: 10.70.43.94:/bricks/brick1/nash
Brick3: 10.70.42.37:/bricks/brick2/nash
Brick4: 10.70.43.94:/bricks/brick2/nash
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind create sn1 nash
Session sn1 created with volume nash
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so2                       ozone                     2015-07-13 19:56:11      
so3                       ozone                     2015-07-13 20:01:50      
sn1                       nash                      2015-07-13 20:10:49      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind post sn1 nash
Pre script is not run
[root@dhcp42-37 ~]# glusterfind pre sn1 nash /tmp/outn1
10.70.42.37 - pre failed: /bricks/brick1/nash Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.37 - pre failed: /bricks/brick2/nash Historical Changelogs not available: [Errno 2] No such file or directory

10.70.43.94 - pre failed: /bricks/brick2/nash Historical Changelogs not available: [Errno 2] No such file or directory

10.70.43.94 - pre failed: /bricks/brick1/nash Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outn1
[root@dhcp42-37 ~]# ls /tmp/outn1 
/tmp/outn1
[root@dhcp42-37 ~]# ll /tmp/outn1
-rw-r--r--. 1 root root 0 Jul 13 20:11 /tmp/outn1
[root@dhcp42-37 ~]# glusterfind pre sn1 nash /tmp/outn1
Post command is not run after last pre, use --regenerate-outfile
[root@dhcp42-37 ~]# glusterfind pre sn1 nash /tmp/outn1 --regenerate-outfile
Generated output file /tmp/outn1
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# ll /tmp/outn1
-rw-r--r--. 1 root root 0 Jul 13 20:11 /tmp/outn1
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind post sn1 nash
Session sn1 with volume nash updated
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so2                       ozone                     2015-07-13 19:56:11      
so3                       ozone                     2015-07-13 20:01:50      
sn1                       nash                      2015-07-13 20:11:24      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# ll /var/log/glusterfs/glusterfind/sn1/nash/
total 32
-rw-------. 1 root root 10486 Jul 13 20:11 changelog.58585c06d210455e415f2a55d8e2448ed930033a.log
-rw-------. 1 root root 10595 Jul 13 20:11 changelog.79229ff8265dce50389eb89c6eb5299f9ed8dab8.log
-rw-r--r--. 1 root root  1686 Jul 13 20:11 changelog.log
-rw-r--r--. 1 root root  2283 Jul 13 20:11 cli.log
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# ll /var/lib/glusterd/glusterfind/
.keys/ sn1/   so2/   so3/   
[root@dhcp42-37 ~]# ll /var/lib/glusterd/glusterfind/sn1/nash/
%2Fbricks%2Fbrick1%2Fnash.status  %2Fbricks%2Fbrick2%2Fnash.status  sn1_nash_secret.pem               sn1_nash_secret.pem.pub           status
[root@dhcp42-37 ~]# ll /var/lib/glusterd/glusterfind/sn1/nash/
total 20
-rw-r--r--. 1 root root   10 Jul 13 20:11 %2Fbricks%2Fbrick1%2Fnash.status
-rw-r--r--. 1 root root   10 Jul 13 20:11 %2Fbricks%2Fbrick2%2Fnash.status
-rw-------. 1 root root 1679 Jul 13 20:10 sn1_nash_secret.pem
-rw-r--r--. 1 root root  419 Jul 13 20:10 sn1_nash_secret.pem.pub
-rw-r--r--. 1 root root   10 Jul 13 20:11 status
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# cat /tmp/outn1 
[root@dhcp42-37 ~]# glusterfind pre sn1 nash /tmp/outn1 --regenerate-outfile
Generated output file /tmp/outn1
[root@dhcp42-37 ~]# cat /tmp/outn1 
MODIFY .trashcan%2F 
NEW test1 
NEW dir1 
NEW dir1%2F%2Fdir2 
NEW dir1%2Fdir2%2F%2Fdir3 
NEW dir1%2Fdir2%2Fdir3%2F%2Fdir4 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind post sn1 nash
Session sn1 with volume nash updated
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so2                       ozone                     2015-07-13 19:56:11      
so3                       ozone                     2015-07-13 20:01:50      
sn1                       nash                      2015-07-13 22:12:58      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind delete so2 ozone
root.43.94's password: root.43.94's password: 


root.43.94's password: 
Session so2 with volume ozone deleted
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so3                       ozone                     2015-07-13 20:01:50      
sn1                       nash                      2015-07-13 22:12:58      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# ll /var/lib/glusterd/glusterfind/
.keys/ sn1/   so3/   
[root@dhcp42-37 ~]# ll /var/lib/glusterd/glusterfind/
total 0
drwxr-xr-x. 3 root root 17 Jul 13 20:10 sn1
drwxr-xr-x. 3 root root 18 Jul 13 19:58 so3
[root@dhcp42-37 ~]# gluster v stop nash
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: nash: success
[root@dhcp42-37 ~]# gluster v delete nash
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: nash: success
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so3                       ozone                     2015-07-13 20:01:50      
sn1                       nash                      2015-07-13 22:12:58      
[root@dhcp42-37 ~]# glusterfind delete sn1 nash
Unable to get volume details
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so3                       ozone                     2015-07-13 20:01:50      
sn1                       nash                      2015-07-13 22:12:58      
[root@dhcp42-37 ~]# gluster v create nash 10.70.42.37:/bricks/brick1/nash 10.70.43.94:/bricks/brick1/nash 10.70.42.37:/bricks/brick2/nash 10.70.43.94:/bricks/brick2/nash
volume create: nash: failed: /bricks/brick1/nash is already part of a volume
[root@dhcp42-37 ~]# gluster v create nash 10.70.42.37:/bricks/brick1/nash 10.70.43.94:/bricks/brick1/nash 10.70.42.37:/bricks/brick2/nash 10.70.43.94:/bricks/brick2/nash force
volume create: nash: success: please start the volume to access data
[root@dhcp42-37 ~]# glusterfind delete sn1 nash
root.43.94's password: root.43.94's password: 


root.43.94's password: 
root.43.94's password: 
10.70.43.94 - delete failed: no such identity: /var/lib/glusterd/glusterfind/sn1/nash/sn1_nash_secret.pem: No such file or directory
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Command delete failed in 10.70.43.94:/bricks/brick2/nash
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
so3                       ozone                     2015-07-13 20:01:50      
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# rpm -qa | grep gluster
glusterfs-libs-3.7.1-8.el7rhgs.x86_64
glusterfs-3.7.1-8.el7rhgs.x86_64
glusterfs-fuse-3.7.1-8.el7rhgs.x86_64
glusterfs-server-3.7.1-8.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-8.el7rhgs.x86_64
glusterfs-api-3.7.1-8.el7rhgs.x86_64
glusterfs-cli-3.7.1-8.el7rhgs.x86_64
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 
[root@dhcp42-37 ~]# 

=========
CLIENT
=========

[root@dhcp43-71 ~]# 
[root@dhcp43-71 ~]# mount | grep ozone
10.70.43.140:/ozone on /mnt/ozone type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@dhcp43-71 ~]# mkdir /mnt/oz
mkdir: cannot create directory `/mnt/oz': File exists
[root@dhcp43-71 ~]# mount -t glusterfs 10.70.42.37:/ozone /mnt/oz
[root@dhcp43-71 ~]# cd /mnt/oz
[root@dhcp43-71 oz]# ls -a
.  ..  .trashcan
[root@dhcp43-71 oz]# echo "whatever" > test1
[root@dhcp43-71 oz]# echo "what a beautiful day" > test2
[root@dhcp43-71 oz]# mkdir -p dir1/dir2/dir3/dir4
[root@dhcp43-71 oz]# touch dir1/dir2/dir3/test3
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# ln test1 dir1/dir2/test1_link
[root@dhcp43-71 oz]# rm dir1/dir2/dir3/
rm: cannot remove `dir1/dir2/dir3/': Is a directory
[root@dhcp43-71 oz]# rm -r dir1/dir2/dir3/
rm: descend into directory `dir1/dir2/dir3'? y
rm: remove directory `dir1/dir2/dir3/dir4'? y
rm: remove regular empty file `dir1/dir2/dir3/test3'? y
rm: remove directory `dir1/dir2/dir3'? y
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# ls -lrt
total 2
-rw-r--r--. 2 root root  9 Jul 13 19:52 test1
-rw-r--r--. 1 root root 21 Jul 13 19:52 test2
drwxr-xr-x. 3 root root 34 Jul 13 19:52 dir1
[root@dhcp43-71 oz]# ls dir1/
dir2
[root@dhcp43-71 oz]# ls dir1/dir2/
test1_link
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# mv test2 dir1/dir2/
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# mv test1 test1new
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# rpm -qa | grep gluster
glusterfs-fuse-3.7.1-3.el6.x86_64
glusterfs-libs-3.7.1-3.el6.x86_64
glusterfs-client-xlators-3.7.1-3.el6.x86_64
glusterfs-3.7.1-3.el6.x86_64
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# cd
[root@dhcp43-71 ~]# umount /mnt/oz
[root@dhcp43-71 ~]# mount -t nfs 10.70.42.37:/ozone /mnt/oz

^C
[root@dhcp43-71 ~]# 
[root@dhcp43-71 ~]# 
[root@dhcp43-71 ~]# mount -t glusterfs 10.70.42.37:/ozone /mnt/oz
[root@dhcp43-71 ~]# mkdir /mnt/oz2
[root@dhcp43-71 ~]# mount -t nfs 10.70.42.37:/ozone /mnt/oz2
^C
[root@dhcp43-71 ~]# mount -t nfs 10.70.43.94:/ozone /mnt/oz2
mount.nfs: Connection timed out
[root@dhcp43-71 ~]# 
[root@dhcp43-71 ~]# 
[root@dhcp43-71 ~]# vi /var/log/glusterfs/
Display all 150 possibilities? (y or n)
[root@dhcp43-71 ~]# vi /var/log/glusterfs/mnt-
mnt-cross3-.log              mnt-disp.log-20150712        mnt-nash.log-20150426.gz     mnt-oz.log-20150621.gz       mnt-ozonem-.log-20150315     mnt-pluto.log-20150621
mnt-cross3.log               mnt-glusterfs.log            mnt-nash.log-20150503.gz     mnt-oz.log-20150708.gz       mnt-ozonem.log-20150315.gz   mnt-sp1.log
mnt-cross3.log-20150607      mnt-glusterfs.log-20150712   mnt-nash.log-20150603.gz     mnt-oz.log-20150712          mnt-ozonem.log-20150322      mnt-sp1.log-20150621
mnt-cross3-.log-20150628     mnt-master.log               mnt-nash.log-20150607.gz     mnt-ozone.log                mnt-ozones.log               mnt-test.log
mnt-cthree.log               mnt-master.log-20150628.gz   mnt-nash.log-20150614.gz     mnt-ozone.log-20150329.gz    mnt-ozones.log-20150315.gz   mnt-test.log-20150419
mnt-cthree.log-20150628      mnt-master.log-20150705.gz   mnt-nash.log-20150621.gz     mnt-ozone.log-20150405.gz    mnt-ozones.log-20150322      mnt-testvol.log
mnt-demolv-.log              mnt-master.log-20150712      mnt-nash.log-20150628        mnt-ozone.log-20150603.gz    mnt-pluto.log                mnt-testvol.log-20150607.gz
mnt-demolv-.log-20150315     mnt-mnt-.log                 mnt-nn.log                   mnt-ozone.log-20150607.gz    mnt-pluto.log-20150405.gz    mnt-testvol.log-20150614
mnt-disp2.log                mnt-mnt-.log-20150628        mnt-nn.log-20150621          mnt-ozone.log-20150614.gz    mnt-pluto.log-20150412.gz    mnt-tmp.log
mnt-disp2.log-20150614       mnt-nashh.log                mnt-om.log                   mnt-ozone.log-20150621.gz    mnt-pluto.log-20150419.gz    mnt-tmp.log-20150712
mnt-disperse.log             mnt-nashh.log-20150412       mnt-om.log-20150315          mnt-ozone.log-20150628.gz    mnt-pluto.log-20150426.gz    mnt-vol1.log
mnt-disperse.log-20150607    mnt-nash.log                 mnt-os.log                   mnt-ozone.log-20150705.gz    mnt-pluto.log-20150503.gz    mnt-vol1.log-20150621.gz
mnt-disp.log                 mnt-nash.log-20150405.gz     mnt-os.log-20150315          mnt-ozone.log-20150712       mnt-pluto.log-20150603.gz    mnt-vol1.log-20150628.gz
mnt-disp.log-20150628.gz     mnt-nash.log-20150412.gz     mnt-oz.log                   mnt-ozonem-.log              mnt-pluto.log-20150607.gz    mnt-vol1.log-20150705.gz
mnt-disp.log-20150705.gz     mnt-nash.log-20150419.gz     mnt-oz.log-20150614.gz       mnt-ozonem.log               mnt-pluto.log-20150614.gz    mnt-vol1.log-20150712
[root@dhcp43-71 ~]# vi /var/log/glusterfs/mnt-oz
mnt-oz.log                  mnt-oz.log-20150712         mnt-ozone.log-20150603.gz   mnt-ozone.log-20150628.gz   mnt-ozonem.log              mnt-ozones.log
mnt-oz.log-20150614.gz      mnt-ozone.log               mnt-ozone.log-20150607.gz   mnt-ozone.log-20150705.gz   mnt-ozonem-.log-20150315    mnt-ozones.log-20150315.gz
mnt-oz.log-20150621.gz      mnt-ozone.log-20150329.gz   mnt-ozone.log-20150614.gz   mnt-ozone.log-20150712      mnt-ozonem.log-20150315.gz  mnt-ozones.log-20150322
mnt-oz.log-20150708.gz      mnt-ozone.log-20150405.gz   mnt-ozone.log-20150621.gz   mnt-ozonem-.log             mnt-ozonem.log-20150322     
[root@dhcp43-71 ~]# vi /var/log/glusterfs/mnt-oz.log
[root@dhcp43-71 ~]# 
[root@dhcp43-71 ~]# 
[root@dhcp43-71 ~]# cd /mnt/oz
[root@dhcp43-71 oz]# ls -a
.  ..  dir1  test1new  .trashcan
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# rm -rf *
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# ls -lrt
total 0
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# 
[root@dhcp43-71 oz]# mkdir /mnt/nn
[root@dhcp43-71 oz]# mount -t nfs 10.70.43.94:/nash /mnt/nn
^C
[root@dhcp43-71 oz]# mount -t glusterfs 10.70.43.94:/nash /mnt/nn
[root@dhcp43-71 oz]# cd /mnt/nn
[root@dhcp43-71 nn]# ls -a
.  ..  .trashcan
[root@dhcp43-71 nn]# echo "whatever" > test1
[root@dhcp43-71 nn]# mkdir -p dir1/dir2/dir3/dir4
[root@dhcp43-71 nn]#

Comment 12 errata-xmlrpc 2015-07-29 05:00:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

