Description of problem:

BEFORE:
posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 13635, owner=2dd2c3a11706dc8c, client=0x7f159012b000, connection-id=(null), granted at 2018-12-31 14:20:42

AFTER:
posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 10977, owner=b485e33df21bdaa2, client=0x7fa24c01ab90, connection-id=CTX_ID:68e12340-eed2-4386-bf5e-1f43cf8693d9-GRAPH_ID:0-PID:10901-HOST:dhcp35-215.lab.eng.blr.redhat.com-PC_NAME:patchy-client-0-RECON_NO:-0, granted at 2018-12-31 14:33:50

Printing the connection-id helps figure out which host the client is running on and what its pid is. This information was found to be lacking in one of the BZs where a deadlock occurred in a containerized workload because a lock was not released, possibly by heketi, and the client could not be easily located due to the missing connection-id info.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
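To illustrate why the connection-id is useful, here is a minimal sketch (not GlusterFS source code; the helper name extract_field is hypothetical) that parses a connection-id string of the format shown in the AFTER dump above to recover the client's host and pid:

    /* Sketch: extract HOST and PID from a statedump connection-id string. */
    #include <stdio.h>
    #include <string.h>

    /* Copy the value following `key`, stopping at the `stop` marker
     * (or end of string if `stop` is not found). */
    static void extract_field(const char *conn_id, const char *key,
                              const char *stop, char *out, size_t outlen)
    {
        const char *start = strstr(conn_id, key);
        if (!start) { out[0] = '\0'; return; }
        start += strlen(key);
        const char *end = strstr(start, stop);
        size_t len = end ? (size_t)(end - start) : strlen(start);
        if (len >= outlen) len = outlen - 1;
        memcpy(out, start, len);
        out[len] = '\0';
    }

    int main(void)
    {
        const char *conn_id =
            "CTX_ID:68e12340-eed2-4386-bf5e-1f43cf8693d9-GRAPH_ID:0-"
            "PID:10901-HOST:dhcp35-215.lab.eng.blr.redhat.com-"
            "PC_NAME:patchy-client-0-RECON_NO:-0";
        char pid[32], host[256];

        extract_field(conn_id, "PID:", "-HOST:", pid, sizeof(pid));
        extract_field(conn_id, "HOST:", "-PC_NAME:", host, sizeof(host));

        printf("client pid  = %s\n", pid);   /* 10901 */
        printf("client host = %s\n", host);  /* dhcp35-215.lab.eng.blr.redhat.com */
        return 0;
    }

With connection-id=(null) (the BEFORE output), neither field is recoverable from the statedump, which is exactly the gap this change fills.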
REVIEW: https://review.gluster.org/21968 (features/locks: Dump connection_id even for posix locks) posted (#1) for review on master by Krutika Dhananjay
REVIEW: https://review.gluster.org/21968 (features/locks: Dump connection_id even for posix locks) posted (#5) for review on master by Krutika Dhananjay
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/