Bug 1346226 - [Tiering]: unable to mount a tiered volume from rhel-5 client
Summary: [Tiering]: unable to mount a tiered volume from rhel-5 client
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-06-14 10:12 UTC by krishnaram Karthick
Modified: 2019-04-25 10:56 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-16 13:48:36 UTC
Embargoed:



Description krishnaram Karthick 2016-06-14 10:12:37 UTC
Description of problem:

An attempt to mount a tiered volume from a RHEL-5 client fails.

mount -t glusterfs 10.70.37.167:/master /mnt/master
Mount failed. Please check the log file for more details.
[root@wingo ~]# tail -10 /var/log/glusterfs/mnt-master.log
[2016-06-12 08:05:21.995440] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.9 (args: /usr/sbin/glusterfs --volfile-server=10.70.37.167 --volfile-id=/master /mnt/master)
[2016-06-12 08:05:22.007520] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-06-12 08:05:22.013776] W [MSGID: 101095] [xlator.c:199:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/cluster/tier.so: cannot open shared object file: No such file or directory
[2016-06-12 08:05:22.013812] E [graph.y:212:volume_type] 0-parser: Volume 'master-tier-dht', line 178: type 'cluster/tier' is not valid or not found on this machine
[2016-06-12 08:05:22.013870] E [graph.y:321:volume_end] 0-parser: "type" not specified for volume master-tier-dht
[2016-06-12 08:05:22.014019] E [MSGID: 100026] [glusterfsd.c:2192:glusterfs_process_volfp] 0-: failed to construct the graph
[2016-06-12 08:05:22.014324] E [graph.c:944:glusterfs_graph_destroy] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x274) [0x40cf74] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x10e) [0x405c4e] -->/usr/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x7a) [0x2b65a295a11a] ) 0-graph: invalid argument: graph [Invalid argument]
[2016-06-12 08:05:22.014541] W [glusterfsd.c:1251:cleanup_and_exit] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x274) [0x40cf74] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x136) [0x405c76] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6a) [0x405a9a] ) 0-: received signum (1), shutting down
[2016-06-12 08:05:22.014584] I [fuse-bridge.c:5714:fini] 0-fuse: Unmounting '/mnt/master'.
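
(For context, a minimal sketch, not the actual glusterfs source: the volfile parser asks xlator_dynload() to dlopen() each translator named in the client volfile, and because the RHEL-5 client package does not ship cluster/tier.so the dlopen()/dlerror() failure above is what gets logged and the graph cannot be built. The path and error handling below are illustrative only.)

/* Illustrative only -- shows how a missing xlator shared object produces
 * the "cannot open shared object file" message seen in the log above. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    const char *so = "/usr/lib64/glusterfs/3.7.9/xlator/cluster/tier.so";
    void *handle = dlopen(so, RTLD_NOW);

    if (handle == NULL) {
        /* dlerror() returns e.g. ".../tier.so: cannot open shared object
         * file: No such file or directory" */
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    dlclose(handle);
    return 0;
}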

Version-Release number of selected component (if applicable):
3.7.9-10

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume
2. Mount the volume from a RHEL-5 client

Actual results:
Unable to mount the volume.

Expected results:
The mount should succeed.

Additional info:

Comment 2 Dan Lambright 2016-06-14 16:50:18 UTC
It appears the version of sqlite shipped in RHEL5 does not have certain interfaces we use in the gfdb library.

To fix this we would have to stop using those interfaces and fall back to the older equivalents. The ones I found were sqlite3_open_v2() and sqlite3_prepare_v2(). This may be disruptive to the current code base.
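
A minimal sketch of what such a fallback could look like, assuming gfdb only needs a plain read/write open and statement preparation; the helper names below are illustrative, not the actual gluster code, and whether this covers everything gfdb needs would have to be verified:

#include <sqlite3.h>

/* Newer calls used today (not available in the sqlite shipped with RHEL 5):
 *   sqlite3_open_v2(path, &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
 *   sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
 *
 * Possible legacy equivalents (illustrative helpers, not gluster functions): */

int gfdb_open_legacy(const char *path, sqlite3 **db)
{
    /* sqlite3_open() always opens read/write and creates the file if
     * missing, so the _v2 flags above are implied. */
    return sqlite3_open(path, db);
}

int gfdb_prepare_legacy(sqlite3 *db, const char *sql, sqlite3_stmt **stmt)
{
    /* sqlite3_prepare() has the same signature as the _v2 variant but
     * older error-reporting semantics. */
    return sqlite3_prepare(db, sql, -1, stmt, NULL);
}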

Comment 3 Dan Lambright 2016-06-16 13:48:36 UTC
We do not plan to fix this. The bug can be reopened if this support is needed.

Comment 5 Petr 2019-04-25 10:56:36 UTC
Hello.
I have this problem on CentOS 7.

[2019-04-25 10:45:49.631018] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 20374
[2019-04-25 10:45:49.642692] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2019-04-25 10:45:49.642763] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-04-25 10:45:49.650056] W [MSGID: 101095] [xlator.c:374:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/cluster/tier.so: cannot open shared object file: No such file or directory
[2019-04-25 10:45:49.650093] E [MSGID: 101002] [graph.y:213:volume_type] 0-parser: Volume 'freezer-tier-dht', line 680: type 'cluster/tier' is not valid or not found on this machine
[2019-04-25 10:45:49.650114] E [MSGID: 101019] [graph.y:321:volume_end] 0-parser: "type" not specified for volume freezer-tier-dht
[2019-04-25 10:45:49.650524] E [MSGID: 100026] [glusterfsd.c:2636:glusterfs_process_volfp] 0-: failed to construct the graph
[2019-04-25 10:45:49.650864] W [glusterfsd.c:1570:cleanup_and_exit] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x8a1) [0x5637acdb8ad1] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x249) [0x5637acdb1cb9] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5637acdb106b] ) 0-: received signum (-1), shutting down
[2019-04-25 10:45:49.650900] I [fuse-bridge.c:6807:fini] 0-fuse: Unmounting '/mnt'.
[2019-04-25 10:45:49.657254] I [fuse-bridge.c:6812:fini] 0-fuse: Closing fuse connection to '/mnt'.
[2019-04-25 10:45:49.657401] W [glusterfsd.c:1570:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f6d8dd11dd5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5637acdb1205] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5637acdb106b] ) 0-: received signum (15), shutting down

[root@dtln-ceph04 ~]# uname -a
Linux dtln-ceph04 3.10.0-957.10.1.el7.x86_64 #1 SMP Mon Mar 18 15:06:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

[root@dtln-ceph04 ~]# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core) 

[root@dtln-ceph04 ~]# gluster --version
glusterfs 6.1
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


Can this be fixed?

