Bug 1688226 - Brick Still Died After Restart Glusterd & Glusterfsd Services
Summary: Brick Still Died After Restart Glusterd & Glusterfsd Services
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 4.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-13 11:27 UTC by Eng Khalid Jamal
Modified: 2019-06-08 08:14 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-08 08:14:37 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
Volume Logs & Status (38.04 KB, text/plain), 2019-03-13 11:27 UTC, Eng Khalid Jamal

Description Eng Khalid Jamal 2019-03-13 11:27:38 UTC
Created attachment 1543584 [details]
Volume Logs & Status

Description of problem:

I have two nodes, each with 4 hard disks, and I created a volume with replica 2. My issue is that a brick died after I rebooted the system: one of the two servers entered emergency mode, and on inspection I found that the disk was corrupt and no longer working. I replaced the disk, added it back to the cluster, and then tried many things to bring the brick online, without success.

Version-Release number of selected component (if applicable):

Version release 4.1.6

How reproducible:



Steps to Reproduce:
1. gluster volume replace-brick gv0 gfs2:/sd2/gv0 gfs2:/sd5/gv0 commit force
2. The new brick comes online and is added to the cluster.
3. gluster volume heal gv0
4. The heal operation fails because one brick is down.
5. Restart the glusterd & glusterfsd services to bring the bricks online.
6. Reboot the system.
7. gluster volume start gv0 force, to restart the dead bricks with a new process ID (see the command sketch below).
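
For reference, a minimal shell sketch of the sequence above (the volume name and brick paths are the ones from this report; the glusterfsd systemd unit name can vary between distributions):

# Replace the failed brick with the one on the new disk (commit force skips data migration)
gluster volume replace-brick gv0 gfs2:/sd2/gv0 gfs2:/sd5/gv0 commit force

# Trigger self-heal from the surviving replica
gluster volume heal gv0

# Restart the management and brick services (assumes the stock systemd units)
systemctl restart glusterd glusterfsd

# Force-start the volume so glusterd respawns any brick process that is down
gluster volume start gv0 force

# Check brick PIDs and online status
gluster volume status gv0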


Actual results:
1. The heal is still unsuccessful.
2. The brick is still down.

Expected results:
After restarting glusterfsd, which is responsible for the brick processes, the brick should come back online and work correctly.
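
A quick way to verify whether the brick process actually came back after the restart (a sketch; the pidfile path follows the pattern visible in the attached glusterd log):

# Show brick status and PIDs as glusterd sees them
gluster volume status gv0

# Check whether glusterd has a PID recorded for the replaced brick
cat /var/run/gluster/vols/gv0/gfs2-sd5-gv0.pid

# Confirm a glusterfsd process is actually serving that brick
ps aux | grep '[g]lusterfsd' | grep 'sd5-gv0'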

Additional info:
I have attached my logs.

Comment 2 Atin Mukherjee 2019-03-14 04:45:25 UTC
Can you please share the glusterd and brick log?

Comment 3 Eng Khalid Jamal 2019-03-14 08:23:11 UTC
(In reply to Atin Mukherjee from comment #2)
> Can you please share the glusterd and brick log?

[root@gfs2 ~]# tailf -n 200 /var/log/glusterfs/glusterd.log
[2019-03-11 12:55:20.603709] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped
[2019-03-11 12:55:20.606580] E [MSGID: 106028] [glusterd-utils.c:8213:glusterd_brick_signal] 0-glusterd: Unable to open pidfile: /var/run/gluster/vols/gv0/gfs2-sd2-gv0.pid [No such file or directory]
[2019-03-11 12:55:20.771445] I [glusterd-utils.c:6090:glusterd_brick_start] 0-management: starting a fresh brick process for brick /sd5/gv0
[2019-03-11 12:55:20.776267] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-03-11 12:55:20.776544] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2019-03-11 12:55:20.776559] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped
[2019-03-11 12:55:20.776601] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed
[2019-03-11 12:55:20.778284] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: glustershd already stopped
[2019-03-11 12:55:20.778330] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: glustershd service is stopped
[2019-03-11 12:55:20.778444] I [MSGID: 106567] [glusterd-svc-mgmt.c:203:glusterd_svc_start] 0-management: Starting glustershd service
[2019-03-11 12:55:21.783574] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2019-03-11 12:55:21.783650] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped
[2019-03-11 12:55:21.783757] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2019-03-11 12:55:21.783786] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped
[2019-03-11 12:55:21.809237] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2019-03-11 12:55:21.809990] I [MSGID: 106005] [glusterd-handler.c:6131:__glusterd_brick_rpc_notify] 0-management: Brick gfs2:/sd5/gv0 has disconnected from glusterd.
[2019-03-11 12:55:21.810091] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/gv0/gfs2-sd5-gv0.pid
[2019-03-11 12:55:21.856725] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /sd5/gv0 on port 49155
[2019-03-11 12:57:44.489957] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2019-03-11 13:15:45.522170] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2019-03-11 13:16:31.187881] I [MSGID: 106533] [glusterd-volume-ops.c:938:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume gv0
[2019-03-11 13:16:31.191518] E [MSGID: 106152] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on gfs1.optimum.com. Please check log file for details.
[2019-03-11 13:28:08.770729] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2019-03-11 13:31:40.618390] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2019-03-11 13:38:07.458844] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2019-03-11 13:42:34.927344] I [MSGID: 106488] [glusterd-handler.c:1549:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2019-03-11 13:42:34.928596] I [MSGID: 106488] [glusterd-handler.c:1549:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2019-03-12 06:53:54.495956] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 06:53:54.495956] and [2019-03-12 06:53:54.496385]
[2019-03-12 08:33:22.035898] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2019-03-12 08:33:34.928645] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 08:33:22.035898] and [2019-03-12 08:33:22.036388]
[2019-03-12 10:24:38.124839] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2019-03-12 10:24:44.816838] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 10:24:38.124839] and [2019-03-12 10:24:38.125282]
[2019-03-12 19:46:34.197405] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2019-03-12 19:47:22.984644] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2019-03-12 19:47:41.638494] E [MSGID: 106061] [glusterd-utils.c:10171:glusterd_max_opversion_use_rsp_dict] 0-management: Maximum supported op-version not set in destination dictionary
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 19:46:34.197405] and [2019-03-12 19:46:34.197842]
[2019-03-12 19:53:40.160887] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 19:53:40.160887] and [2019-03-12 19:53:40.161339]
[2019-03-13 10:50:07.965388] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-13 10:50:07.965388] and [2019-03-13 10:50:07.965827]
[2019-03-13 11:14:52.585627] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2019-03-13 20:03:10.182845] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2019-03-13 20:03:50.979475] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-13 20:03:10.182845] and [2019-03-13 20:03:10.183295]
[2019-03-13 20:20:24.749941] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2019-03-13 20:33:54.334392] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2019-03-13 20:34:10.135421] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2019-03-13 20:34:17.716964] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2019-03-13 20:39:59.874639] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2019-03-13 20:41:06.476894] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2019-03-14 05:46:44.179862] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2019-03-14 05:47:42.658812] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-14 05:46:44.179862] and [2019-03-14 05:46:44.180315]
[2019-03-14 07:39:51.361002] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-14 07:39:51.361002] and [2019-03-14 07:39:51.361437]
[2019-03-14 07:52:50.623420] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0

-----------------------------
[root@gfs2 ~]# tailf -n 200 /var/log/glusterfs/bricks/sd3-gv0.log
693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:35.781523] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:35.781587] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620726: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:37.814421] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:37.814490] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620777: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:37.816570] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:37.816638] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620778: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:37.819059] W [MSGID: 113020] [posix-helpers.c:996:posix_gfid_set] 0-gv0-posix: setting GFID on /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox failed  [Read-only file system]
[2019-03-10 20:00:37.819127] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:37.819175] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620782: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:37.822139] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:37.822205] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620786: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:39.855014] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:39.855088] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620837: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:39.856901] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:39.856994] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620838: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:39.859475] W [MSGID: 113020] [posix-helpers.c:996:posix_gfid_set] 0-gv0-posix: setting GFID on /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox failed  [Read-only file system]
[2019-03-10 20:00:39.859545] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:39.859593] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620842: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:39.862682] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:39.862748] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620846: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:41.893161] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:41.893227] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620902: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:41.895424] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:41.895488] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620903: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:41.897991] W [MSGID: 113020] [posix-helpers.c:996:posix_gfid_set] 0-gv0-posix: setting GFID on /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox failed  [Read-only file system]
[2019-03-10 20:00:41.898058] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:41.898107] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620907: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:41.900809] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:41.900885] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620911: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:43.930204] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:43.930272] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620961: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:43.932317] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:43.932379] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620962: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:43.935121] W [MSGID: 113020] [posix-helpers.c:996:posix_gfid_set] 0-gv0-posix: setting GFID on /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox failed  [Read-only file system]
[2019-03-10 20:00:43.935187] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:43.935234] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620966: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:43.938270] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available]
[2019-03-10 20:00:43.938332] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620970: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available]
[2019-03-10 20:00:44.363460] W [MSGID: 113075] [posix-helpers.c:1895:posix_fs_health_check] 0-gv0-posix: open_for_write() on /sd3/gv0/.glusterfs/health_check returned [Read-only file system]
[2019-03-10 20:00:44.363629] M [MSGID: 113075] [posix-helpers.c:1962:posix_health_check_thread_proc] 0-gv0-posix: health-check failed, going down
[2019-03-10 20:00:44.363785] M [MSGID: 113075] [posix-helpers.c:1981:posix_health_check_thread_proc] 0-gv0-posix: still alive! -> SIGTERM
[2019-03-10 20:01:14.364221] W [glusterfsd.c:1514:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25) [0x7f490bec9e25] -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x5585a9df1d65] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x5585a9df1b8b] ) 0-: received signum (15), shutting down
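
The last few lines of the brick log show the posix health check failing because the brick filesystem went read-only, after which the brick process shut itself down (SIGTERM). A quick way to confirm the read-only condition on the affected mount (a sketch; /sd3 is the brick mount point from this report):

# Look for the "ro" flag on the brick mount
grep ' /sd3 ' /proc/mounts

# Look for recent disk or filesystem errors reported by the kernel
dmesg | tail -n 50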

Comment 4 Eng Khalid Jamal 2019-06-08 08:14:37 UTC
I think no one can solve this issue for me. When I checked my brick, I found the disk was completely offline, so I replaced the disk, ran gluster replace-brick, rebalanced the volume, and then healed it (the rebalance and heal commands are sketched below). Everything is working now, but is there any solution for this issue in the future?
Best regards
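
For reference, the rebalance and heal steps described above correspond roughly to the following commands (a sketch only; the volume name is the one used in this report):

# Rebalance the volume after the brick replacement and watch its progress
gluster volume rebalance gv0 start
gluster volume rebalance gv0 status

# Trigger a full heal and check what is still pending
gluster volume heal gv0 full
gluster volume heal gv0 info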

