Bug 763619 (GLUSTER-1887)

Summary: logging mess
Product: [Community] GlusterFS
Component: logging
Reporter: Csaba Henk <csaba>
Assignee: Pranith Kumar K <pkarampu>
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: low
Version: mainline
CC: gluster-bugs, lakshmipathi, vijay
Hardware: All
OS: Linux
Doc Type: Bug Fix

Description Csaba Henk 2010-10-09 22:37:59 UTC
Glusterd starts up like:

# glusterfs -N -l /dev/stdout -f /tmp/glusterd.vol
[2010-10-10 00:12:22.335751] I [glusterd.c:274:init] management: Using /etc/glusterd as working directory
[2010-10-10 00:12:22.340714] E [rpc-transport.c:899:rpc_transport_load] rpc-transport: /meta/progs/glusterfs-git/usr/lib/glusterfs/3.1.0git/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2010-10-10 00:12:22.340745] E [rpc-transport.c:903:rpc_transport_load] rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
[2010-10-10 00:12:22.340920] C [glusterd-store.c:752:glusterd_retrieve_uuid] : Retrieving uuid failed ret = -1
[2010-10-10 00:12:22.341171] I [glusterd.c:95:glusterd_uuid_init] glusterd: generated UUID: 18780ddb-2e0c-446a-8754-9cbf26035837
Given volfile:
+------------------------------------------------------------------------------+

Client starts up like:

# glusterfs -N -l /dev/stdout  -s localhost --volfile-id=pop /mnt/gluster0/
[2010-10-10 00:10:41.170932] W [io-stats.c:1637:init] pop: dangling volume. check volfile
[2010-10-10 00:10:41.171210] C [dict.c:1204:data_to_str] dict: @data=(nil)
[2010-10-10 00:10:41.171230] C [dict.c:1204:data_to_str] dict: @data=(nil)
Given volfile:
+------------------------------------------------------------------------------+

It's not acceptable that a pile of bogus error, warning and critical (!) log messages is generated during normal operation. "Critical" in particular is definitely not acceptable. It should be reserved for cases where the software is facing an IRRECOVERABLE FATAL CONDITION which leaves termination as the only choice, or where DATA LOSS OCCURS; things which make you shout out loud.

And this is not just an under-the-hood detail. Using crit for debug messages simply does not fly with an enterprisey audience.

What comes after is not so... critical, but it still hurts the eye severely:

+------------------------------------------------------------------------------+
[2010-10-10 00:24:12.281388] I [glusterd-op-sm.c:5259:glusterd_op_set_cli_op] : Returning 0
[2010-10-10 00:24:12.281490] I [glusterd-handler.c:753:glusterd_handle_create_volume] glusterd: Received create volume req
[2010-10-10 00:24:12.484879] I [glusterd-utils.c:223:glusterd_lock] glusterd: Cluster lock held by 18780ddb-2e0c-446a-8754-9cbf26035837
[2010-10-10 00:24:12.484918] I [glusterd-handler.c:2612:glusterd_op_txn_begin] glusterd: Acquired local lock
[2010-10-10 00:24:12.484933] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_START_LOCK'
[2010-10-10 00:24:12.484946] I [glusterd-handler.c:2616:glusterd_op_txn_begin] glusterd: Returning 0
[2010-10-10 00:24:12.484982] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_START_LOCK'
[2010-10-10 00:24:12.485014] I [glusterd3_1-mops.c:1098:glusterd3_1_cluster_lock] glusterd: Sent lock req to 0 peers
[2010-10-10 00:24:12.485093] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.485119] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Default' to 'Lock sent' due to event 'GD_OP_EVENT_START_LOCK'
[2010-10-10 00:24:12.485134] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.490396] I [glusterd3_1-mops.c:1240:glusterd3_1_stage_op] glusterd: Sent op req to 0 peers
[2010-10-10 00:24:12.490416] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.490430] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Lock sent' to 'Stage op sent' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.490443] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.490455] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_STAGE_ACC'
[2010-10-10 00:24:12.490467] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.490479] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Stage op sent' to 'Stage op sent' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.490491] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_STAGE_ACC'
[2010-10-10 00:24:12.538913] I [glusterd3_1-mops.c:1330:glusterd3_1_commit_op] glusterd: Sent op req to 0 peers
[2010-10-10 00:24:12.539096] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.539114] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Stage op sent' to 'Commit op sent' due to event 'GD_OP_EVENT_STAGE_ACC'
[2010-10-10 00:24:12.539129] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.539143] I [glusterd3_1-mops.c:1152:glusterd3_1_cluster_unlock] glusterd: Sent unlock req to 0 peers
[2010-10-10 00:24:12.539155] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.539166] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Commit op sent' to 'Unlock sent' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.539179] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.539195] I [glusterd-op-sm.c:4617:glusterd_op_txn_complete] glusterd: Cleared local lock
[2010-10-10 00:24:12.539368] I [glusterd-op-sm.c:4581:glusterd_op_send_cli_response] : Returning 0
[2010-10-10 00:24:12.539430] I [glusterd-op-sm.c:4660:glusterd_op_txn_complete] glusterd: Returning 0
[2010-10-10 00:24:12.539445] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Unlock sent' to 'Default' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.539458] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.539470] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Default' to 'Default' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.539482] W [glusterd3_1-mops.c:1497:glusterd_handle_rpc_msg] : Returning 0
[2010-10-10 00:24:12.539800] I [glusterd-op-sm.c:5259:glusterd_op_set_cli_op] : Returning 0
[2010-10-10 00:24:12.539825] I [glusterd-handler.c:904:glusterd_handle_cli_start_volume] glusterd: Received start vol reqfor volume pop
[2010-10-10 00:24:12.539865] I [glusterd-utils.c:223:glusterd_lock] glusterd: Cluster lock held by 18780ddb-2e0c-446a-8754-9cbf26035837
[2010-10-10 00:24:12.539880] I [glusterd-handler.c:2612:glusterd_op_txn_begin] glusterd: Acquired local lock
[2010-10-10 00:24:12.539892] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_START_LOCK'
[2010-10-10 00:24:12.539905] I [glusterd-handler.c:2616:glusterd_op_txn_begin] glusterd: Returning 0
[2010-10-10 00:24:12.539942] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_START_LOCK'
[2010-10-10 00:24:12.539957] I [glusterd3_1-mops.c:1098:glusterd3_1_cluster_lock] glusterd: Sent lock req to 0 peers
[2010-10-10 00:24:12.539991] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.540029] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Default' to 'Lock sent' due to event 'GD_OP_EVENT_START_LOCK'
[2010-10-10 00:24:12.540045] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.543552] I [glusterd3_1-mops.c:1240:glusterd3_1_stage_op] glusterd: Sent op req to 0 peers
[2010-10-10 00:24:12.543575] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.543589] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Lock sent' to 'Stage op sent' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.543602] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.543615] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_STAGE_ACC'
[2010-10-10 00:24:12.543627] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.543638] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Stage op sent' to 'Stage op sent' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:12.543651] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_STAGE_ACC'
[2010-10-10 00:24:12.928238] I [glusterd-utils.c:959:glusterd_volume_start_glusterfs] : About to start glusterfs for brick 192.168.2.10:/tmp/ex0
[2010-10-10 00:24:13.46499] I [glusterd3_1-mops.c:1330:glusterd3_1_commit_op] glusterd: Sent op req to 0 peers
[2010-10-10 00:24:13.46682] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:13.46700] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Stage op sent' to 'Commit op sent' due to event 'GD_OP_EVENT_STAGE_ACC'
[2010-10-10 00:24:13.46715] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:13.46729] I [glusterd3_1-mops.c:1152:glusterd3_1_cluster_unlock] glusterd: Sent unlock req to 0 peers
[2010-10-10 00:24:13.46741] I [glusterd-op-sm.c:5116:glusterd_op_sm_inject_event] glusterd: Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:13.46753] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Commit op sent' to 'Unlock sent' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:13.46765] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:13.46784] I [glusterd-op-sm.c:4617:glusterd_op_txn_complete] glusterd: Cleared local lock
[2010-10-10 00:24:13.46985] I [glusterd-op-sm.c:4581:glusterd_op_send_cli_response] : Returning 0
[2010-10-10 00:24:13.47054] I [glusterd-op-sm.c:4660:glusterd_op_txn_complete] glusterd: Returning 0
[2010-10-10 00:24:13.47069] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Unlock sent' to 'Default' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:13.47082] I [glusterd-op-sm.c:5164:glusterd_op_sm] : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:13.47094] I [glusterd-op-sm.c:4796:glusterd_op_sm_transition_state] : Transitioning from 'Default' to 'Default' due to event 'GD_OP_EVENT_ALL_ACC'
[2010-10-10 00:24:13.47107] W [glusterd3_1-mops.c:1497:glusterd_handle_rpc_msg] : Returning 0
[2010-10-10 00:24:13.74197] I [glusterd-pmap.c:237:pmap_registry_bind] pmap: adding brick /tmp/ex0 on port 24009

What irregularity is a user supposed to infer from being warned about "Returning 0"?
And again, this is just _noise_. All these info messages should be debug ones; or maybe some of them could remain info, but with much more comprehensive wording. Let's do ourselves a favour and throw a big s/GF_LOG_NORMAL/GF_LOG_DEBUG/ party.

Comment 1 Vijay Bellur 2010-10-11 11:31:28 UTC
PATCH: http://patches.gluster.com/patch/5428 in master (Set correct logging level for log messages)

Comment 2 Anand Avati 2010-10-27 02:55:52 UTC
PATCH: http://patches.gluster.com/patch/5565 in master (fuse: "terminating upon EBADF" log message is confusing, set its level to DEBUG)

Comment 3 Lakshmipathi G 2010-11-22 07:16:38 UTC
Verified with 3.1.1qa9. Server logs are:

#glusterfs -N -l /dev/stdout -f repd.10.209.59.112.mnt-311qa9.vol 
Given volfile:
+------------------------------------------------------------------------------+
  1: volume repd-posix
  2:     type storage/posix
  3:     option directory /mnt/311qa9
  4: end-volume
  5: 
  6: volume repd-access-control
  7:     type features/access-control

but the client log starts with:

#glusterfs -N -l /dev/stdout -s 10.209.59.112 --volfile-id=repd /mnt
[2010-11-22 05:12:56.198936] W [io-stats.c:1644:init] repd: dangling volume. check volfile 
[2010-11-22 05:12:56.199079] W [dict.c:1204:data_to_str] dict: @data=(nil)
[2010-11-22 05:12:56.199122] W [dict.c:1204:data_to_str] dict: @data=(nil)
Given volfile: