Description of problem:
For a 2x2 volume with all bricks on a single node and the client mounted on the same node, running

  setfattr -n trusted.io-stats-dump -v /tmp/io-stats-pre.txt /mnt/gluster

generates files with garbage characters in their names:

-rw-rw-rw-. 1 root root 12839 Oct 17 15:40 -tmp-io-stats-pre.txt????et-req.vol
-rw-rw-rw-. 1 root root 12979 Oct 17 15:40 -tmp-io-stats-pre.txt????fid.vol2-i
-rw-rw-rw-. 1 root root 12979 Oct 17 15:40 -tmp-io-stats-pre.txt????o.vol2-io-

Version-Release number of selected component (if applicable):

How reproducible:
Consistently

Steps to Reproduce:
1. Create a 2x2 volume with all bricks on a single node
2. FUSE mount the volume on the same server node
3. Run: setfattr -n trusted.io-stats-dump -v /tmp/io-stats-pre.txt /mnt/gluster

Actual results:
The io-stats dump filenames contain garbage characters.

Expected results:
Dump filenames derived cleanly from the path passed to trusted.io-stats-dump.

Additional info:
REVIEW: https://review.gluster.org/21442 (debug/io-stats: io stats filenames contain garbage) posted (#1) for review on master by N Balachandran
RCA: dict_unserialize does not null-terminate values. conditional_dump() uses snprintf to build the filename from such a value, so bytes from beyond the end of the value are copied into the buffer as garbage. Fix: use memcpy with the known value length instead.
COMMIT: https://review.gluster.org/21442 committed in master by "Shyamsundar Ranganathan" <srangana> with a commit message:

debug/io-stats: io stats filenames contain garbage

As dict_unserialize does not null-terminate the value, using snprintf adds garbage characters to the buffer used to create the filename. The code also used this->name in the filename, which is the same for all bricks of a volume, so the files were overwritten if a node contained multiple bricks for the volume. The code now uses conf->unique instead, if available.

Change-Id: I2c72534b32634b87961d3b3f7d53c5f2ca2c068c
fixes: bz#1640165
Signed-off-by: N Balachandran <nbalacha>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/