Bug 1320714 - [RFE] Allow the libgfapi logging level to be controlled.
Summary: [RFE] Allow the libgfapi logging level to be controlled.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jeff Cody
QA Contact: jingzhao
URL:
Whiteboard:
Depends On:
Blocks: 1151859
Reported: 2016-03-23 19:34 UTC by Ademar Reis
Modified: 2017-06-28 08:44 UTC
CC List: 12 users

Fixed In Version: qemu-kvm-rhev-2.6.0-13.el7
Doc Type: Enhancement
Doc Text:
Clone Of: 1151859
Environment:
Last Closed: 2016-11-07 21:02:28 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
  System: Red Hat Product Errata
  ID: RHBA-2016:2673
  Private: 0
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: qemu-kvm-rhev bug fix and enhancement update
  Last Updated: 2016-11-08 01:06:13 UTC

Description Ademar Reis 2016-03-23 19:34:37 UTC
+++ This bug was initially created as a clone of Bug #1151859 +++

1. Proposed title of this feature request  
[RFE] Allow the libgfapi logging level to be controlled.
  
2. Who is the customer behind the request?  
N/A - GSS is requesting this feature.
  
3. What is the nature and description of the request?  
 
The logging level of libgfapi is currently hard-coded within QEMU as LOG_WARNING. It would be useful to allow control of this via a command-line argument, to ease debugging of any Gluster-related issues.

4. Why does the customer need this? (List the business requirements here) 

It would ease debugging and remove the need to recompile QEMU just to increase the logging level.
  
5. How would the customer like to achieve this? (List the functional requirements here)  

A simple QEMU command-line argument that controls the logging level (see the sketch at the end of this description).

6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.  

Ensure that the level provided is reflected in the logs.

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?  

No.

8. Does the customer have any specific timeline dependencies and which release would they like to target (e.g. RHEL5, RHEL6)?
 
Both RHEL 6 and RHEL 7.

9. Is the sales team involved in this request and do they have any additional input?  

N/A.

10. List any affected packages or components.  

qemu-kvm
qemu-kvm-rhev

11. Would the customer be able to assist in testing this functionality if implemented?  

Yes, GSS will be able to test this.
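
To make the request concrete, here is a minimal, hypothetical C sketch (not the actual QEMU patch) of how a caller-supplied debug level could be passed through to libgfapi via glfs_set_logging() instead of a hard-coded constant; the connect_gluster() helper and its parameters are illustrative only.

/* Hypothetical sketch only -- not the QEMU implementation. It shows the
 * libgfapi calls involved in making the log level caller-configurable. */
#include <glusterfs/api/glfs.h>

static glfs_t *connect_gluster(const char *volume, const char *host,
                               int debug_level)  /* e.g. taken from a debug= option */
{
    glfs_t *fs = glfs_new(volume);
    if (!fs) {
        return NULL;
    }
    glfs_set_volfile_server(fs, "tcp", host, 24007);
    /* The point of the RFE: pass the caller's level here instead of a
     * fixed LOG_WARNING. "-" directs libgfapi log output to stderr. */
    glfs_set_logging(fs, "-", debug_level);
    if (glfs_init(fs) < 0) {
        glfs_fini(fs);
        return NULL;
    }
    return fs;
}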

Comment 2 Jeff Cody 2016-04-28 19:08:37 UTC
Patches are upstream for QEMU 2.7:

https://lists.gnu.org/archive/html/qemu-devel/2016-04/msg01310.html

Comment 3 Miroslav Rezanina 2016-07-12 08:48:26 UTC
Fix included in qemu-kvm-rhev-2.6.0-13.el7

Comment 5 jingzhao 2016-08-17 05:22:22 UTC
Reproduced it on qemu-kvm-rhev-2.3.0-31.el7_2.21.x86_64 and host-kernel-3.10.0-327.8.1.el7.x86_64:

[root@localhost home]# /usr/libexec/qemu-kvm -drive file=gluster://10.73.196.191/test-volume/rhel.img,file.debug=4
qemu-kvm: -drive file=gluster://10.73.196.191/test-volume/rhel.img,file.debug=4: Block protocol 'gluster' doesn't support the option 'debug'

Tested it on qemu-kvm-rhev-2.6.0-20.el7.x86_64 and host-kernel-3.10.0-492.el7.x86_64
[root@localhost bug]# /usr/libexec/qemu-kvm -drive file=gluster://10.73.196.191/test-volume/rhel.img,file.debug=4 -monitor stdio
QEMU 2.6.0 monitor - type 'help' for more information
(qemu) VNC server running on '::1;5900'

(qemu) 


Is this enough? I also wanted to check the log file, but that failed:
[root@localhost bug]# /usr/libexec/qemu-kvm -drive file=gluster://10.73.196.191/test-volume/rhel.img,file.debug=4,file.logfile=/var/log/glusterfs/gfpi.log -monitor stdio
qemu-kvm: -drive file=gluster://10.73.196.191/test-volume/rhel.img,file.debug=4,file.logfile=/var/log/glusterfs/gfpi.log: Block protocol 'gluster' doesn't support the option 'logfile'


Are there any other methods for checking the log file? Could you share them with me?

Thanks
Jing Zhao

Comment 6 Jeff Cody 2016-09-02 14:33:46 UTC
Hi,

The 'logfile' option is not part of this patch series; the proper way to test is with the debug= option that you are passing.  As you increase the debug argument, you will see increasing levels of libgfapi debug output on stderr/stdout.
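
For reference when reading the verification output below, here is a small illustrative C helper that names the numeric levels selected by debug=. The names are assumed to follow GlusterFS's gf_loglevel_t ordering (0 = NONE through 9 = TRACE) and should be checked against the installed glusterfs headers.

/* Illustrative helper: map the numeric debug= value to a level name.
 * Assumption: the numbering matches GlusterFS's gf_loglevel_t
 * (0 = NONE ... 9 = TRACE); confirm against the glusterfs headers. */
static const char *gluster_debug_name(int level)
{
    static const char *const names[] = {
        "NONE",     /* 0 */
        "EMERG",    /* 1 */
        "ALERT",    /* 2 */
        "CRITICAL", /* 3 */
        "ERROR",    /* 4 */
        "WARNING",  /* 5: the level this RFE says was previously hard-coded */
        "NOTICE",   /* 6 */
        "INFO",     /* 7: the 'I' lines in the output below */
        "DEBUG",    /* 8: the 'D' lines */
        "TRACE",    /* 9: the 'T' lines */
    };
    return (level >= 0 && level <= 9) ? names[level] : "UNKNOWN";
}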

Comment 7 jingzhao 2016-09-06 08:36:15 UTC
Verified it on qemu-kvm-rhev-2.6.0-22.el7.x86_64 and host-kernel-3.10.0-503.el7.x86_64

[root@jinzhao bug]# /usr/libexec/qemu-kvm -drive file=gluster://10.66.144.26/test-volume/1.qcow2,file.debug=0 -monitor stdio
[2016-09-06 08:30:00.990739] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 6a696e7a-6861-6f2d-3537-32392d323031 (0) coming up
[2016-09-06 08:30:00.990762] I [MSGID: 114020] [client.c:2106:notify] 0-test-volume-client-0: parent translators are ready, attempting connect on transport
[2016-09-06 08:30:00.994318] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-0: changing port to 49152 (from 0)
[2016-09-06 08:30:00.997264] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-09-06 08:30:00.998234] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-0: Connected to test-volume-client-0, attached to remote volume '/home/brick1'.
[2016-09-06 08:30:00.998254] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-09-06 08:30:01.005182] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-0: Server lk version = 1
[2016-09-06 08:30:01.007137] I [MSGID: 104041] [glfs-resolve.c:869:__glfs_active_subvol] 0-test-volume: switched to graph 6a696e7a-6861-6f2d-3537-32392d323031 (0)
QEMU 2.6.0 monitor - type 'help' for more information
(qemu) VNC server running on '127.0.0.1;5900'


[root@jinzhao bug]# /usr/libexec/qemu-kvm -drive file=gluster://10.66.144.26/test-volume/1.qcow2,file.debug=4 -monitor stdio
[2016-09-06 08:34:30.939682] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 6a696e7a-6861-6f2d-3538-37352d323031 (0) coming up
[2016-09-06 08:34:30.939703] I [MSGID: 114020] [client.c:2106:notify] 0-test-volume-client-0: parent translators are ready, attempting connect on transport
[2016-09-06 08:34:30.943113] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-0: changing port to 49152 (from 0)
[2016-09-06 08:34:30.946056] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-09-06 08:34:30.947034] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-0: Connected to test-volume-client-0, attached to remote volume '/home/brick1'.
[2016-09-06 08:34:30.947045] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-09-06 08:34:30.953660] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-0: Server lk version = 1
[2016-09-06 08:34:30.955661] I [MSGID: 104041] [glfs-resolve.c:869:__glfs_active_subvol] 0-test-volume: switched to graph 6a696e7a-6861-6f2d-3538-37352d323031 (0)
QEMU 2.6.0 monitor - type 'help' for more information
(qemu) VNC server running on '127.0.0.1;5900'


[root@jinzhao bug]# /usr/libexec/qemu-kvm -drive file=gluster://10.66.144.26/test-volume/1.qcow2,file.debug=9 -monitor stdio
[2016-09-06 08:35:09.201452] D [logging.c:1792:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5
[2016-09-06 08:35:09.201759] D [rpc-clnt.c:987:rpc_clnt_connection_init] 0-gfapi: defaulting frame-timeout to 30mins
[2016-09-06 08:35:09.201768] D [rpc-clnt.c:1001:rpc_clnt_connection_init] 0-gfapi: disable ping-timeout
[2016-09-06 08:35:09.201787] D [rpc-transport.c:288:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/3.7.9/rpc-transport/socket.so
[2016-09-06 08:35:09.203370] T [MSGID: 0] [options.c:91:xlator_option_validate_int] 0-gfapi: no range check required for 'option remote-port 24007'
[2016-09-06 08:35:09.203388] D [socket.c:3846:socket_init] 0-gfapi: Configued transport.tcp-user-timeout=0
[2016-09-06 08:35:09.203392] D [socket.c:3929:socket_init] 0-gfapi: SSL support on the I/O path is NOT enabled
[2016-09-06 08:35:09.203395] D [socket.c:3932:socket_init] 0-gfapi: SSL support for glusterd is NOT enabled
[2016-09-06 08:35:09.203397] D [socket.c:3949:socket_init] 0-gfapi: using system polling thread
[2016-09-06 08:35:09.203404] D [rpc-clnt.c:1475:rpcclnt_cbk_program_register] 0-gfapi: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1
[2016-09-06 08:35:09.203409] T [rpc-clnt.c:418:rpc_clnt_reconnect] 0-gfapi: attempting reconnect
[2016-09-06 08:35:09.203412] T [socket.c:2888:socket_connect] 0-gfapi: connecting 0x7fcff147a040, state=0 gen=0 sock=-1
[2016-09-06 08:35:09.203420] T [MSGID: 0] [common-utils.c:290:gf_resolve_ip6] 0-resolver: DNS cache not present, freshly probing hostname: 10.66.144.26
[2016-09-06 08:35:09.205258] D [MSGID: 0] [common-utils.c:333:gf_resolve_ip6] 0-resolver: returning ip-10.66.144.26 (port-24007) for hostname: 10.66.144.26 and port: 24007
[2016-09-06 08:35:09.205266] D [socket.c:2834:socket_fix_ssl_opts] 0-gfapi: disabling SSL for portmapper connection
[2016-09-06 08:35:09.205283] T [socket.c:806:__socket_nodelay] 0-gfapi: NODELAY enabled for socket 17
[2016-09-06 08:35:09.205288] T [socket.c:879:__socket_keepalive] 0-gfapi: Keep-alive enabled for socket 17, interval 2, idle: 20, timeout: 0
[2016-09-06 08:35:09.206023] T [rpc-clnt.c:1404:rpc_clnt_record] 0-gfapi: Auth Info: pid: 0, uid: 0, gid: 0, owner: 
[2016-09-06 08:35:09.206134] T [rpc-clnt.c:1261:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 148, payload: 84, rpc hdr: 64
[2016-09-06 08:35:09.206155] T [rpc-clnt.c:1598:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0x1 Program: GlusterFS Handshake, ProgVers: 2, Proc: 2) to rpc-transport (gfapi)
[2016-09-06 08:35:09.206173] D [rpc-clnt-ping.c:281:rpc_clnt_start_ping] 0-gfapi: ping timeout is 0, returning
[2016-09-06 08:35:09.207579] T [rpc-clnt.c:661:rpc_clnt_reply_init] 0-gfapi: received rpc message (RPC XID: 0x1 Program: GlusterFS Handshake, ProgVers: 2, Proc: 2) from rpc-transport (gfapi)
[2016-09-06 08:35:09.207951] T [graph.y:186:new_volume] 0-parser: New node for 'test-volume-client-0'
[2016-09-06 08:35:09.207961] T [MSGID: 0] [xlator.c:194:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/3.7.9/xlator/protocol/client.so
[2016-09-06 08:35:09.208586] T [graph.y:217:volume_type] 0-parser: Type:test-volume-client-0:protocol/client
[2016-09-06 08:35:09.208594] T [graph.y:252:volume_option] 0-parser: Option:test-volume-client-0:send-gids:true
[2016-09-06 08:35:09.208598] T [graph.y:252:volume_option] 0-parser: Option:test-volume-client-0:transport-type:tcp
[2016-09-06 08:35:09.208602] T [graph.y:252:volume_option] 0-parser: Option:test-volume-client-0:remote-subvolume:/home/brick1
[2016-09-06 08:35:09.208606] T [graph.y:252:volume_option] 0-parser: Option:test-volume-client-0:remote-host:10.66.144.26
[2016-09-06 08:35:09.208609] T [graph.y:252:volume_option] 0-parser: Option:test-volume-client-0:ping-timeout:42
[2016-09-06 08:35:09.208612] T [graph.y:324:volume_end] 0-parser: end:test-volume-client-0
[2016-09-06 08:35:09.208616] T [graph.y:186:new_volume] 0-parser: New node for 'test-volume-dht'
[2016-09-06 08:35:09.208620] T [MSGID: 0] [xlator.c:194:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/3.7.9/xlator/cluster/distribute.so
[2016-09-06 08:35:09.209313] T [graph.y:217:volume_type] 0-parser: Type:test-volume-dht:cluster/distribute
[2016-09-06 08:35:09.209320] T [graph.y:307:volume_sub] 0-parser: child:test-volume-dht->test-volume-client-0
[2016-09-06 08:35:09.209323] T [graph.y:324:volume_end] 0-parser: end:test-volume-dht
[2016-09-06 08:35:09.209327] T [graph.y:186:new_volume] 0-parser: New node for 'test-volume-write-behind'
[2016-09-06 08:35:09.209331] T [MSGID: 0] [xlator.c:194:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/3.7.9/xlator/performance/write-behind.so
[2016-09-06 08:35:09.209521] T [MSGID: 0] [xlator.c:249:xlator_dynload] 0-xlator: dlsym(notify) on /usr/lib64/glusterfs/3.7.9/xlator/performance/write-behind.so: undefined symbol: notify -- neglecting
[2016-09-06 08:35:09.209532] T [graph.y:217:volume_type] 0-parser: Type:test-volume-write-behind:performance/write-behind
[2016-09-06 08:35:09.209548] T [graph.y:307:volume_sub] 0-parser: child:test-volume-write-behind->test-volume-dht
[2016-09-06 08:35:09.209552] T [graph.y:324:volume_end] 0-parser: end:test-volume-write-behind
[2016-09-06 08:35:09.209556] T [graph.y:186:new_volume] 0-parser: New node for 'test-volume-read-ahead'
[2016-09-06 08:35:09.209569] T [MSGID: 0] [xlator.c:194:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/3.7.9/xlator/performance/read-ahead.so
[2016-09-06 08:35:09.209698] T [MSGID: 0] [xlator.c:249:xlator_dynload] 0-xlator: dlsym(notify) on /usr/lib64/glusterfs/3.7.9/xlator/performance/read-ahead.so: undefined symbol: notify -- neglecting
[2016-09-06 08:35:09.209704] T [graph.y:217:volume_type] 0-parser: Type:test-volume-read-ahead:performance/read-ahead
[2016-09-06 08:35:09.209708] T [graph.y:307:volume_sub] 0-parser: child:test-volume-read-ahead->test-volume-write-behind
[2016-09-06 08:35:09.209711] T [graph.y:324:volume_end] 0-parser: end:test-volume-read-ahead
[2016-09-06 08:35:09.209714] T [graph.y:186:new_volume] 0-parser: New node for 'test-volume-readdir-ahead'
[2016-09-06 08:35:09.209718] T [MSGID: 0] [xlator.c:194:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/3.7.9/xlator/performance/readdir-ahead.so
[2016-09-06 08:35:09.209789] T [MSGID: 0] [xlator.c:249:xlator_dynload] 0-xlator: dlsym(notify) on /usr/lib64/glusterfs/3.7.9/xlator/performance/readdir-ahead.so: undefined symbol: notify -- neglecting
[2016-09-06 08:35:09.209795] T [MSGID: 0] [xlator.c:256:xlator_dynload] 0-xlator: dlsym(dumpops) on /usr/lib64/glusterfs/3.7.9/xlator/performance/readdir-ahead.so: undefined symbol: dumpops -- neglecting
[2016-09-06 08:35:09.209800] T [graph.y:217:volume_type] 0-parser: Type:test-volume-readdir-ahead:performance/readdir-ahead
[2016-09-06 08:35:09.209804] T [graph.y:307:volume_sub] 0-parser: child:test-volume-readdir-ahead->test-volume-read-ahead
[2016-09-06 08:35:09.209807] T [graph.y:324:volume_end] 0-parser: end:test-volume-readdir-ahead
[2016-09-06 08:35:09.209810] T [graph.y:186:new_volume] 0-parser: New node for 'test-volume-io-cache'
[2016-09-06 08:35:09.209814] T [MSGID: 0] [xlator.c:194:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/3.7.9/xlator/performance/io-cache.so
[2016-09-06 08:35:09.209982] T [MSGID: 0] [xlator.c:249:xlator_dynload] 0-xlator: dlsym(notify) on /usr/lib64/glusterfs/3.7.9/xlator/performance/io-cache.so: undefined symbol: notify -- neglecting
[2016-09-06 08:35:09.209988] T [graph.y:217:volume_type] 0-parser: Type:test-volume-io-cache:performance/io-cache
[2016-09-06 08:35:09.209992] T [graph.y:307:volume_sub] 0-parser: child:test-volume-io-cache->test-volume-readdir-ahead
[2016-09-06 08:35:09.209995] T [graph.y:324:volume_end] 0-parser: end:test-volume-io-cache
[2016-09-06 08:35:09.210000] T [graph.y:186:new_volume] 0-parser: New node for 'test-volume-quick-read'
[2016-09-06 08:35:09.210004] T [MSGID: 0] [xlator.c:194:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/3.7.9/xlator/performance/quick-read.so
[2016-09-06 08:35:09.210133] T [MSGID: 0] [xlator.c:249:xlator_dynload] 0-xlator: dlsym(notify) on /usr/lib64/glusterfs/3.7.9/xlator/performance/quick-read.so: undefined symbol: notify -- neglecting
[2016-09-06 08:35:09.210139] T [graph.y:217:volume_type] 0-parser: Type:test-volume-quick-read:performance/quick-read
[2016-09-06 08:35:09.210143] T [graph.y:307:volume_sub] 0-parser: child:test-volume-quick-read->test-volume-io-cache
[2016-09-06 08:35:09.210146] T [graph.y:324:volume_end] 0-parser: end:test-volume-quick-read
[2016-09-06 08:35:09.215650] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-0: changing port to 49152 (from 0)
[2016-09-06 08:35:09.218494] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-09-06 08:35:09.219359] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-0: Connected to test-volume-client-0, attached to remote volume '/home/brick1'.
[2016-09-06 08:35:09.219368] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-09-06 08:35:09.226064] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-0: Server lk version = 1
[2016-09-06 08:35:09.227968] I [MSGID: 104041] [glfs-resolve.c:869:__glfs_active_subvol] 0-test-volume: switched to graph 6a696e7a-6861-6f2d-3538-39332d323031 (0)
QEMU 2.6.0 monitor - type 'help' for more information
(qemu) VNC server running on '127.0.0.1;5900'

According to comment 5 and comment 7, marking the bz as verified.


Thanks
Jing Zhao

Comment 8 jingzhao 2016-09-06 08:36:55 UTC
According to comment 5 and comment 7, marking the bz as verified.

Comment 9 jingzhao 2016-09-07 01:40:36 UTC
Hi Jeff,

 Regarding comment 7: as QE understands it, there are output messages even when file.debug=0 because of bz 1360535, and the output with file.debug=4 is the same as with file.debug=0 because there are no error messages, only debug messages (bz 1360535). Is that right?
 Could you help confirm comment 7?

Thanks
Jing Zhao

Comment 10 Jeff Cody 2016-09-07 06:42:14 UTC
Hi Jing,

This is correct.  The libgfapi library is printing out more debug messages than requested; however, this is out of QEMU's control.  The behavior seen in comment #7 indicates that this is working as currently expected (debug output increases greatly with debug=9).  I concur that this is verified.

Comment 12 errata-xmlrpc 2016-11-07 21:02:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2673.html

