Description of problem:

Version-Release number of selected component (if applicable):

How reproducible:
Got only once; unable to reproduce.

Steps to Reproduce:
1. [root@DVM6 ~]# gluster v start master1
   Connection failed. Please check if gluster daemon is operational.
   [root@DVM6 ~]# service glusterd status
   glusterd dead but subsys locked
2.
3.

Actual results:

Expected results:

Additional info:

glusterd log snippet:

[2013-08-23 03:03:38.292762] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-08-23 03:03:38.309275] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-08-23 03:03:38.309325] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2013-08-23 03:03:38.309391] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
[2013-08-23 03:03:38.324188] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-08-23 03:03:38.324248] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2013-08-23 03:03:38.324280] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
[2013-08-23 03:03:38.487008] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-08-23 03:03:38.487073] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2013-08-23 03:03:38.487103] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
[2013-08-23 03:03:38.487130] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-3
[2013-08-23 03:03:38.739632] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-08-23 03:03:38.739676] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2013-08-23 03:03:38.739693] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
[2013-08-23 03:03:38.739708] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-3
[2013-08-23 03:03:38.739723] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-4
[2013-08-23 03:03:38.739738] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-5
[2013-08-23 03:03:38.953731] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-08-23 03:03:38.953771] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2013-08-23 03:03:38.953787] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
[2013-08-23 03:03:38.961072] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-08-23 03:03:38.961128] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2013-08-23 03:03:38.961157] E [glusterd-store.c:1874:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
[2013-08-23 03:03:38.978778] I [glusterd-handler.c:2886:glusterd_friend_add] 0-management: connect returned 0
[2013-08-23 03:03:38.981261] I [glusterd-handler.c:2886:glusterd_friend_add] 0-management: connect returned 0
[2013-08-23 03:03:38.985587] I [glusterd-handler.c:2886:glusterd_friend_add] 0-management: connect returned 0
[2013-08-23 03:03:38.994050] I [glusterd-handler.c:2886:glusterd_friend_add] 0-management: connect returned 0
...
[2013-08-23 03:03:43.870272] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 989d267f-9b8c-43e7-841e-87859a8ffe27, host: 10.70.37.110
[2013-08-23 03:03:43.870328] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: d54fadac-a435-4f53-941e-0c46cddd952e, host: 10.70.37.192
[2013-08-23 03:03:43.870410] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 828edd1a-ee84-4c94-8af4-cb35e835a7da, host: 10.70.37.81
[2013-08-23 03:03:43.870444] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 6b7ec72c-3f0a-45c2-9cdb-656231b6c04d, host: 10.70.37.128
[2013-08-23 03:03:43.874151] I [glusterd-rpc-ops.c:560:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 828edd1a-ee84-4c94-8af4-cb35e835a7da
[2013-08-23 03:03:43.874216] I [glusterd-rpc-ops.c:560:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: d54fadac-a435-4f53-941e-0c46cddd952e
[2013-08-23 03:03:43.880004] I [glusterd-rpc-ops.c:560:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: d54fadac-a435-4f53-941e-0c46cddd952e
[2013-08-23 03:03:45.837882] I [glusterd-rpc-ops.c:363:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 989d267f-9b8c-43e7-841e-87859a8ffe27, host: 10.70.37.110, port: 0
[2013-08-23 03:03:45.844505] E [glusterd-utils.c:4135:_local_gsyncd_start] 0-: Unable to fetch conf file path.
[2013-08-23 03:03:45.844562] E [glusterd-utils.c:4135:_local_gsyncd_start] 0-: Unable to fetch conf file path.
[2013-08-23 03:03:45.851044] I [glusterd-rpc-ops.c:560:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 989d267f-9b8c-43e7-841e-87859a8ffe27
[2013-08-23 03:03:45.851970] I [glusterd-handler.c:2028:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 989d267f-9b8c-43e7-841e-87859a8ffe27
[2013-08-23 03:03:45.852349] I [glusterd-handler.c:3059:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.70.37.110 (0), ret: 0
[2013-08-23 03:03:45.857772] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 989d267f-9b8c-43e7-841e-87859a8ffe27, host: 10.70.37.110
[2013-08-23 03:03:45.857824] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: d54fadac-a435-4f53-941e-0c46cddd952e, host: 10.70.37.192
[2013-08-23 03:03:45.857851] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 828edd1a-ee84-4c94-8af4-cb35e835a7da, host: 10.70.37.81
[2013-08-23 03:03:45.857875] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 6b7ec72c-3f0a-45c2-9cdb-656231b6c04d, host: 10.70.37.128
It should be ensured that after a package update, every process on the system is returned to the *same state* it was in before the update. It is the sysadmin's prerogative to decide what state a process should be in on the system. If a running process blocks an update for some reason, one of the following approaches should be used:
1) Print an error message stating that the process must be stopped before the update, and prevent the update from proceeding.
2) If it is safe to stop the process during the update, it may be stopped, but it must be restored to its original state once the update completes.
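Approach 2 can be sketched as a pair of scriptlet-style functions: record whether the daemon was running before the update, and restart it afterwards only if it was. This is a minimal illustration, not the packaging fix itself; in a real RPM this logic would live in the %pre/%post (or %preun/%posttrans) scriptlets, and the `service` function below is a stub standing in for the real `service glusterd status|stop|start` commands so the sketch is self-contained.

```shell
#!/bin/sh
# Sketch: preserve a service's run state across a package update.
# The state-file path and the stubbed `service` command are
# illustrative assumptions, not taken from the bug report.
STATE_FILE=$(mktemp)

SVC_RUNNING=yes   # pretend glusterd is running before the update
service() {      # stub: service <name> <status|stop|start>
    case "$2" in
        status) [ "$SVC_RUNNING" = yes ] ;;
        stop)   SVC_RUNNING=no ;;
        start)  SVC_RUNNING=yes ;;
    esac
}

pre_update() {
    # %pre-style step: remember the daemon's state, then stop it.
    if service glusterd status; then
        echo running > "$STATE_FILE"
        service glusterd stop
    else
        echo stopped > "$STATE_FILE"
    fi
}

post_update() {
    # %post-style step: restart only if it was running beforehand.
    [ "$(cat "$STATE_FILE")" = running ] && service glusterd start
    rm -f "$STATE_FILE"
}

pre_update
echo "during update: running=$SVC_RUNNING"
post_update
echo "after update:  running=$SVC_RUNNING"
```

The key point is that the restore step is conditional on the recorded state: a daemon the sysadmin had deliberately stopped stays stopped, which is exactly the "same state as before the update" requirement above.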
The upstream 3.6 bug for this is https://bugzilla.redhat.com/show_bug.cgi?id=1113543. The upstream patches are http://review.gluster.org/#/c/8855 (for master) and http://review.gluster.org/#/c/8857/ (for 3.6).
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which you requested this review is now End of Life; please see https://access.redhat.com/support/policy/updates/rhs/. If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.