Bug 1484156 - Can't attach volume tier to create hot tier
Summary: Can't attach volume tier to create hot tier
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.11
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2017-08-22 21:15 UTC by Fidel Rodriguez
Modified: 2017-08-28 19:16 UTC
CC List: 2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-08-28 19:16:26 UTC
Regression: ---
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
HDD configuration (40.84 KB, image/png)
2017-08-28 12:42 UTC, Fidel Rodriguez
cli log (967 bytes, application/rtf)
2017-08-28 13:01 UTC, Fidel Rodriguez
cmd_history log (967 bytes, application/rtf)
2017-08-28 13:01 UTC, Fidel Rodriguez
glusterd log (2.43 KB, application/rtf)
2017-08-28 13:02 UTC, Fidel Rodriguez

Description Fidel Rodriguez 2017-08-22 21:15:59 UTC
Description of problem:

I can't attach a hot tier (to create a hot/cold tiered volume) using either of the following commands:

1.
gluster volume tier vmVolume attach replica 2 glusterfs1:/caching/.brickscaching  glusterfs2:/caching/.brickscaching  glusterfs3:/caching/.brickscaching glusterfs4:/caching/.brickscaching  force


or

2.
gluster volume attach-tier vmVolume replica 2  glusterfs1:/caching/.brickscaching  glusterfs2:/caching/.brickscaching  glusterfs3:/caching/.brickscaching glusterfs4:/caching/.brickscaching  force


The SSD drives are mounted at /caching on each node.
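
For reference, a minimal per-node sanity check (not part of my original steps, assuming the /caching mount layout described above):

df -h /caching                   # the SSD filesystem should be mounted here
ls -ld /caching/.brickscaching   # brick directory referenced in the attach command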

When I run the first (current-form) command, I get the following output in /var/log/glusterfs/cli.log:

[2017-08-22 19:39:16.905881] I [cli.c:757:main] 0-cli: Started running gluster with version 3.11.0rc0
[2017-08-22 19:39:17.035609] I [MSGID: 101190] [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-08-22 19:39:17.035712] I [socket.c:2426:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2017-08-22 19:39:17.035920] E [cli-cmd-parser.c:1771:cli_cmd_volume_add_brick_parse] 0-cli: Unable to parse add-brick CLI
[2017-08-22 19:39:17.036005] I [input.c:31:cli_batch] 0-: Exiting with: -1


gluster volume attach-tier vmVolume replica 2  glusterfs1:/caching/.brickscaching  glusterfs2:/caching/.brickscaching  glusterfs3:/caching/.brickscaching glusterfs4:/caching/.brickscaching  force
gluster volume attach-tier <VOLNAME> [<replica COUNT>] <NEW-BRICK>... is deprecated. Use the new command 'gluster volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force]'
Do you want to Continue? (y/n) y
internet address ' glusterfs1' does not conform to standards
volume attach-tier: failed: Pre-validation failed on localhost. Please check log file for details
Tier command failed
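
The error above quotes the host as ' glusterfs1' with a leading space, so the brick list does not seem to reach pre-validation exactly the way I typed it. As a hedged retry sketch (same bricks, single spacing with explicit line continuations; not verified to fix the problem):

gluster volume tier vmVolume attach replica 2 \
  glusterfs1:/caching/.brickscaching \
  glusterfs2:/caching/.brickscaching \
  glusterfs3:/caching/.brickscaching \
  glusterfs4:/caching/.brickscaching \
  force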


When I run the old (attach-tier) version of the tier command, I get:

[2017-08-22 19:37:51.439210]  : volume attach-tier vmVolume replica 2  glusterfs1:/caching/.brickscaching glusterfs2:/caching/.brickscaching glusterfs3:/caching/.brickscaching glusterfs4:/caching/.brickscaching force : FAILED : Pre-validation failed on localhost. Please check log file for details

I am running the Gluster volumes over a bond0 interface on each node. I want Gluster to have its own network for volume traffic, so it is connected to an unmanaged switch for now, while the other network (10.x.x.x) routes to the internet.

My /etc/hosts in each server contains

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

#node IPs (different network)
10.0.1.11 srv1.example.com
10.0.1.12 srv2.example.com
10.0.1.13 srv3.example.com
10.0.1.14 srv4.example.com


#glusterfs volume IPs (different network)
172.16.0.11 glusterfs1
172.16.0.12 glusterfs2
172.16.0.13 glusterfs3
172.16.0.14 glusterfs4
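
To rule out name resolution, a quick check (standard tools, not part of my original report) that each short name resolves to the 172.16.0.x network on every node:

getent hosts glusterfs1 glusterfs2 glusterfs3 glusterfs4   # should print the 172.16.0.x entries from /etc/hosts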

gluster peer status
Number of Peers: 3

Hostname: 172.16.0.13
Uuid: 87f1cfcb-bf65-4e3e-b1e6-a2859043e7bb
State: Peer in Cluster (Connected)
Other names:
glusterfs3

Hostname: 172.16.0.12
Uuid: d45ce727-4a3a-42f0-b82f-04887f81f227
State: Peer in Cluster (Connected)
Other names:
glusterfs2

Hostname: 172.16.0.14
Uuid: 2af53acc-d991-4d39-8db7-4053d6cb2a07
State: Peer in Cluster (Connected)
Other names:
glusterfs4

Version-Release number of selected component (if applicable):

Updated:
  glusterfs-server.x86_64 0:3.11.0-0.1.rc0.el7                                                                                       

Dependency Updated:
  glusterfs.x86_64 0:3.11.0-0.1.rc0.el7                             glusterfs-api.x86_64 0:3.11.0-0.1.rc0.el7                       
  glusterfs-cli.x86_64 0:3.11.0-0.1.rc0.el7                         glusterfs-client-xlators.x86_64 0:3.11.0-0.1.rc0.el7            
  glusterfs-extra-xlators.x86_64 0:3.11.0-0.1.rc0.el7               glusterfs-fuse.x86_64 0:3.11.0-0.1.rc0.el7                      
  glusterfs-geo-replication.x86_64 0:3.11.0-0.1.rc0.el7             glusterfs-libs.x86_64 0:3.11.0-0.1.rc0.el7                      
  python2-gluster.x86_64 0:3.11.0-0.1.rc0.el7                      


How reproducible:

Create a distributed-replicated volume across 4 nodes (two bricks on each, as shown in the volume info below), then attach a 4-brick SSD tier for caching.

Steps to Reproduce:
1. Create a distributed-replicated volume (vmVolume) across the 4 nodes.
2. Run the new-form attach command from the description (gluster volume tier vmVolume attach replica 2 <four SSD bricks> force).
3. Repeat with the deprecated attach-tier form; the same failure occurs.

Actual results:

The attach fails with "Pre-validation failed on localhost. Please check log file for details" followed by "Tier command failed".

Expected results:

The four SSD bricks are attached as a hot tier to vmVolume.

Additional info:

Please let me know if there is any other information I should provide.

Comment 1 Fidel Rodriguez 2017-08-28 12:39:25 UTC
gluster volume status:

Status of volume: vmVolume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.16.0.11:/vmVolume/.bricksvm       49154     0          Y       3230 
Brick 172.16.0.11:/vmVolume2/.bricksvm      49155     0          Y       3213 
Brick 172.16.0.12:/vmVolume/.bricksvm       49154     0          Y       2348 
Brick 172.16.0.12:/vmVolume2/.bricksvm      49155     0          Y       2375 
Brick 172.16.0.13:/vmVolume/.bricksvm       49154     0          Y       3216 
Brick 172.16.0.13:/vmVolume2/.bricksvm      49155     0          Y       3225 
Brick 172.16.0.14:/vmVolume/.bricksvm       49154     0          Y       3203 
Brick 172.16.0.14:/vmVolume2/.bricksvm      49155     0          Y       3209 
Self-heal Daemon on localhost               N/A       N/A        Y       3312 
Self-heal Daemon on 172.16.0.14             N/A       N/A        Y       3366 
Self-heal Daemon on 172.16.0.12             N/A       N/A        Y       3234 
Self-heal Daemon on 172.16.0.13             N/A       N/A        Y       3302 
 
Task Status of Volume vmVolume
------------------------------------------------------------------------------
There are no active volume tasks




gluster volume info vmVolume:

Volume Name: vmVolume
Type: Distributed-Replicate
Volume ID: 10a58c68-b042-4354-b3a5-cc20076bf0fd
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 172.16.0.11:/vmVolume/.bricksvm
Brick2: 172.16.0.11:/vmVolume2/.bricksvm
Brick3: 172.16.0.12:/vmVolume/.bricksvm
Brick4: 172.16.0.12:/vmVolume2/.bricksvm
Brick5: 172.16.0.13:/vmVolume/.bricksvm
Brick6: 172.16.0.13:/vmVolume2/.bricksvm
Brick7: 172.16.0.14:/vmVolume/.bricksvm
Brick8: 172.16.0.14:/vmVolume2/.bricksvm
Options Reconfigured:
nfs.disable: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-gid: 36
storage.owner-uid: 36


Please let me know if there are any other logs I can provide.

Comment 2 Fidel Rodriguez 2017-08-28 12:42:15 UTC
Created attachment 1319069 [details]
HDD configuration

Comment 3 Fidel Rodriguez 2017-08-28 13:01:09 UTC
Created attachment 1319074 [details]
cli log

Comment 4 Fidel Rodriguez 2017-08-28 13:01:29 UTC
Created attachment 1319075 [details]
cmd_history log

Comment 5 Fidel Rodriguez 2017-08-28 13:02:08 UTC
Created attachment 1319076 [details]
glusterd log

Comment 6 Fidel Rodriguez 2017-08-28 19:15:13 UTC
I solved the issue by recreating the volume with the bricks listed in the correct order, after which the tier attached successfully:

gluster volume create vmVolume  replica 2 \
172.16.0.11:/vmVolume/.bricksvm \
172.16.0.12:/vmVolume/.bricksvm \
172.16.0.13:/vmVolume/.bricksvm \
172.16.0.14:/vmVolume/.bricksvm \
172.16.0.11:/vmVolume2/.bricksvm \
172.16.0.12:/vmVolume2/.bricksvm \
172.16.0.13:/vmVolume2/.bricksvm \
172.16.0.14:/vmVolume2/.bricksvm \
force


gluster volume attach-tier vmVolume replica 2 \
172.16.0.11:/caching/.brickscaching \
172.16.0.12:/caching/.brickscaching \
172.16.0.13:/caching/.brickscaching \
172.16.0.14:/caching/.brickscaching \
force
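
To confirm the hot tier is in place, a hedged follow-up (output not captured here):

gluster volume info vmVolume          # should now list the hot tier bricks alongside the cold tier
gluster volume tier vmVolume status   # per-node tier status (promotion/demotion activity)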

reference:

https://www.redhat.com/cms/managed-files/st-RHGS-QCT-config-size-guide-technology-detail-INC0436676-201608-en.pdf (page 30)

