Bug 762211 (GLUSTER-479) - Log level seems to be getting ignored
Summary: Log level seems to be getting ignored
Keywords:
Status: CLOSED NOTABUG
Alias: GLUSTER-479
Product: GlusterFS
Classification: Community
Component: logging
Version: mainline
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Vikas Gorur
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-12-16 08:00 UTC by Shehjar Tikoo
Modified: 2010-03-30 10:17 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: RTNR
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Shehjar Tikoo 2009-12-16 08:00:51 UTC
The log level seems to be getting ignored, resulting in all log messages being printed.

The log level is NONE, with the log file set to /dev/stdout.
I would have thought that setting the log level to NONE would cause
the log file to be ignored, but it is not. I am still getting log
messages at TRACE level. Any ideas what is going on here, or whether
this is the expected behaviour?

This happens even when I set the log file to a disk file.
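
For reference, the following is a minimal, purely illustrative C sketch (not GlusterFS source; the names log_level_t, current_level, and log_msg are hypothetical) of the severity-threshold filtering the report assumes: with the level set to NONE, the most restrictive value, no message should pass the check, so the TRACE and DEBUG lines in the output below are unexpected.

/* Illustrative sketch only -- not GlusterFS code. All names are hypothetical. */
#include <stdarg.h>
#include <stdio.h>

typedef enum {
        LOG_NONE = 0,   /* log nothing at all */
        LOG_CRITICAL,
        LOG_WARNING,
        LOG_DEBUG,
        LOG_TRACE       /* most verbose */
} log_level_t;

/* Configured threshold, e.g. what "-L NONE" would translate to. */
static log_level_t current_level = LOG_NONE;

static void
log_msg (log_level_t level, const char *fmt, ...)
{
        va_list ap;

        /* Messages more verbose than the threshold are dropped; with
         * current_level == LOG_NONE nothing is ever written. */
        if (current_level == LOG_NONE || level > current_level)
                return;

        va_start (ap, fmt);
        vfprintf (stdout, fmt, ap);
        va_end (ap);
        fputc ('\n', stdout);
}

int
main (void)
{
        /* Under this model the following call prints nothing. */
        log_msg (LOG_TRACE, "rpc-service: Data Read: %d", 4);
        return 0;
}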

root@indus:glusterfs# /usr/local/sbin/glusterfsd -f /home/shehjart/code/glusterfs/test.vol -L NONE -l /dev/stdout --no-daemon
================================================================================
Version      : glusterfs 3.0.0git built on Dec 15 2009 16:07:13
git: git://git.sv.gnu.org/gluster.git
Starting Time: 2009-12-16 13:22:44
Command line : /usr/local/sbin/glusterfsd -f /home/shehjart/code/glusterfs/test.vol -L NONE -l /dev/stdout --no-daemon
PID          : 6139
System name  : Linux
Nodename     : indus
Kernel Release : 2.6.30-2-amd64
Hardware Identifier: x86_64

Given volfile:
+------------------------------------------------------------------------------+
  1: volume pos1
  2:         type storage/posix
  3:         option directory /test/disk1
  4: end-volume
  5:
  6: #volume pos1-iot
  7: #        type performance/io-threads
  8: #        subvolumes pos1
  9: #end-volume
 10:
 11: volume pos1-locked
 12:         type features/locks
 13:         subvolumes pos1
 14: end-volume
 15:
 16: volume pos2
 17:         type storage/posix
 18:         option directory /test/disk2
 19: end-volume
 20:
 21: volume pos2-locked
 22:         type features/locks
 23:         subvolumes pos2
 24: end-volume
 25:
 26: volume pos3
 27:         type storage/posix
 28:         option directory /test/disk3
 29: end-volume
 30:
 31: volume pos3-locked
 32:         type features/locks
 33:         subvolumes pos3
 34: end-volume
 35:
 36: volume pos4
 37:         type storage/posix
 38:         option directory /test/disk4
 39: end-volume
 40:
 41: volume trace1
 42:         type debug/trace
 43:         subvolumes pos1-locked
 44: end-volume
 45:
 46: volume trace2
 47:         type debug/trace
 48:         subvolumes pos2-locked
 49: end-volume
 50:
 51: volume trace3
 52:         type debug/trace
 53:         subvolumes pos3-locked
 54: end-volume
 55:
 56: #volume repl
 57: #        type cluster/afr
 58: #        subvolumes pos1-locked pos2-locked pos3-locked
 59: #end-volume
 60:
 61: #volume trace-repl
 62: #        type debug/trace
 63: #        subvolumes repl
 64: #end-volume
 65:
 66: volume dist
 67:         type cluster/dht
 68:         subvolumes pos1 pos2 pos3
 69: end-volume
 70:
 71: volume trace-dist
 72:         type debug/trace
 73:         subvolumes dist
 74:         option force-log-level TRACE
 75: end-volume
 76:
 77: #volume client
 78: #        type protocol/client
 79: #        option transport-type tcp
 80: #        option remote-host 127.0.0.1
 81: #        option remote-port 6102
 82: #        option remote-subvolume localdisk
 83: #end-volume
 84:
 85: volume nfs
 86:         type nfs/server
 87:         subvolumes                      pos1
 88:         option mem-factor               21
 89:
 90: #        option nfs3.pos1.volume-access  read-only
 91: #        option nfs3.pos2.volume-access  read-only
 92: #        option nfs3.pos3.volume-access  read-only
 93:
 94:
 95:         option rpc-auth.auth-unix       enable
 96:
 97: #NYI
 98: #        option rpc-auth.pos1.uidmap    5=5,0=65535
 99: #        option rpc-auth.pos2.uidmap    5=5,0=65535
100: #        option rpc-auth.pos3.uidmap    5=5,0=65535
101:
102:
103: #       option nfs3.read-size           64Kb
104: #       option nfs3.write-size          64Kb
105: #       option nfs3.readdir-size        64Kb
106:
107: #       option portmapper               149
108: #       option nfs3.listen-port         2049
109:
110: #       option volume-alias.pos1        /exports/share1
111: #       option volume-alias.pos3        /exports/share3
112:
113: #       option nfs-vers.pos1            3
114: #       option nfs-vers.pos3            3
115:
116: end-volume

+------------------------------------------------------------------------------+
[2009-12-16 13:22:44] W [trace.c:2457:init] trace3: dangling volume. check volfile
[2009-12-16 13:22:44] C [dict.c:1142:data_to_str] dict: @data=(nil)
[2009-12-16 13:22:44] C [dict.c:1142:data_to_str] dict: @data=(nil)
[2009-12-16 13:22:44] W [trace.c:2457:init] trace2: dangling volume. check volfile
[2009-12-16 13:22:44] C [dict.c:1142:data_to_str] dict: @data=(nil)
[2009-12-16 13:22:44] C [dict.c:1142:data_to_str] dict: @data=(nil)
[2009-12-16 13:22:44] W [trace.c:2457:init] trace1: dangling volume. check volfile
[2009-12-16 13:22:44] C [dict.c:1142:data_to_str] dict: @data=(nil)
[2009-12-16 13:22:44] C [dict.c:1142:data_to_str] dict: @data=(nil)
[2009-12-16 13:22:44] W [posix.c:4791:init] pos4: Volume is dangling. Please check the volume file.
[2009-12-16 13:22:44] D [dht-diskusage.c:71:dht_du_info_cbk] dist: on subvolume 'pos1': avail_percent is: 86.00 and avail_space is: 209716654080
[2009-12-16 13:22:44] D [dht-diskusage.c:71:dht_du_info_cbk] dist: on subvolume 'pos1': avail_percent is: 86.00 and avail_space is: 209716654080
[2009-12-16 13:22:44] N [glusterfsd.c:1361:main] glusterfs: Successfully started
[2009-12-16 13:23:07] T [rpcsvc.c:227:rpcsvc_conn_init] rpc-service: tx pool: 735
[2009-12-16 13:23:07] T [rpcsvc.c:234:rpcsvc_conn_init] rpc-service: rx pool: 735
[2009-12-16 13:23:07] D [rpcsvc.c:250:rpcsvc_conn_init] rpc-service: New connection inited: sockfd: 10
[2009-12-16 13:23:07] D [rpcsvc.c:1563:rpcsvc_conn_listening_handler] rpc-service: New Connection: Program MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2009-12-16 13:23:07] T [rpcsvc.c:1413:rpcsvc_conn_data_poll_in] rpc-service: Data Read: 4
[2009-12-16 13:23:07] T [rpcsvc.c:555:rpcsvc_record_read_complete_fraghdr] rpc-service: Received fragment size: 40
[2009-12-16 13:23:07] T [rpcsvc.c:1413:rpcsvc_conn_data_poll_in] rpc-service: Data Read: 40
[2009-12-16 13:23:07] T [rpcsvc.c:635:rpcsvc_record_read_complete_frag] rpc-service: Fragment remaining: 0
[2009-12-16 13:23:07] T [rpcsvc.c:1373:rpcsvc_record_update_state] rpc-service: Full Record Received.
[2009-12-16 13:23:07] T [rpcsvc-auth.c:274:rpcsvc_auth_request_init] rpc-service: Auth handler: AUTH_NULL

Comment 1 Raghavendra G 2009-12-16 14:23:57 UTC
It seems to be working fine on my laptop on master (d049402c7336df73b1a). I started the client and server, did an ls on the mount point, killed the server, and did ls again, which returned an ENOTCONN error, but nothing was logged to the log file, as expected.

Comment 2 Shehjar Tikoo 2009-12-17 06:03:16 UTC
My tree had a patch that adds a new function to the trace translator. That patch caused the behaviour described here. Setting the bug to invalid.

