Description of problem:

We are currently using gluster version 4.0.0. We have three gluster server nodes and a gluster volume, MyVol, with quota enabled and a usage limit set to an arbitrary value such as 1GB.

We see a lot of log messages at INFO level in /var/log/glusterfs/bricks/mnt-[xxx]-[VOLUME_NAME].log:

[<TIMESTAMP1>] I [dict.c:491:dict_get] ( -->/usr/lib64/glusterfs/4.0.0/xlator/storage/posix.so(+0x25953) [0x7f73e0c62953] --> /usr/lib64/glusterfs/4.0.0/xlator/storage/posix.so(+0xc132) [0x7f73e0c49132] --> /lib64/libglusterfs.so.0(dict_get+0x10c) [0x7f73e836487c] 0-dict: !this || key=trusted.glusterfs.protect.writes [Invalid argument]

[<TIMESTAMP2>] I [dict.c:491:dict_get] ( -->/usr/lib64/glusterfs/4.0.0/xlator/storage/posix.so(+0xc1b7) [0x7f73e0c491b7] --> /usr/lib64/glusterfs/4.0.0/xlator/storage/posix.so(+0xc132) [0x7f73e0c49132] --> /lib64/libglusterfs.so.0(dict_get+0x10c) [0x7f73e836487c] 0-dict: !this || key=glusterfs.avoid.overwrite [Invalid argument]

These two messages (with different hex addresses) are written to the same log file over and over, and filled up the disk within a few days.

Does anyone know what is going on, or of any known issue related to these messages?

Thank you for your support.

Additional info:

1. gluster v status

Status of volume: MyVol
Gluster Process                  TCP Port  RDMA Port  Online  PID
------------------------------------------------------------------
Brick server1:/mnt/xxx/MyVol     49152     0          Y       109
Brick server2:/mnt/xxx/MyVol     49152     0          Y       109
Brick server3:/mnt/xxx/MyVol     49152     0          Y       109
Self-heal Daemon on localhost    NA        NA         Y       91
Quota-Daemon on localhost        NA        NA         Y       100
Self-heal Daemon on server1      NA        NA         Y       91
Quota-Daemon on server1          NA        NA         Y       100
Self-heal Daemon on server2      NA        NA         Y       91
Quota-Daemon on server2          NA        NA
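For reference, the quota configuration described above corresponds to gluster CLI commands along these lines (the volume name and limit value are taken from this report; the quota path is illustrative):

gluster volume quota MyVol enable
gluster volume quota MyVol limit-usage / 1GB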
Here's the source code I found that is related to this report: https://github.com/gluster/glusterfs/blob/release-4.0/libglusterfs/src/dict.c
Closing this bug. Please raise an upstream bug for all the upstream gluster versions.
"Please raise an upstream bug for all the upstream gluster versions." I am confused and still do not understand why this bug report was closed. If this is not the right place to file this bug request, please let me know the correct place to do so. Thank you very much for your support.
(In reply to kcao22003 from comment #0)
> We see a lot of log messages at INFO level in
> /var/log/glusterfs/bricks/mnt-[xxx]-[VOLUME_NAME].log:
> [...]
> These two messages (with different hex addresses) are written to the same
> log file over and over, and filled up the disk within a few days.

Were you running a rebalance when you saw these messages?
No. We were not running any rebalance. Is there any additional information you need me to provide to determine the cause of this issue? If yes, please let me know. Thank you for your support.
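For reference, whether a rebalance is running on a volume can be confirmed with the standard CLI, e.g. for the volume in this report:

gluster volume rebalance MyVol status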
I am experiencing the exact same issue after upgrading from 3.12.6 to 4.1.0 on CentOS 7. I'm not sure what logs I can provide to help. Let me know.
(In reply to Chad Cropper from comment #7)
> I am experiencing the exact same issue after upgrading from 3.12.6 to 4.1.0
> on CentOS 7. I'm not sure what logs I can provide to help. Let me know.

I set my logs to WARNING instead of INFO just to keep the logs from filling /var every 5 minutes.
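For others needing the same workaround: the brick log level can be lowered per volume with the standard CLI. Assuming a volume named MyVol as in the original report:

gluster volume set MyVol diagnostics.brick-log-level WARNING

This only reduces the noise; the underlying fix landed later (see below).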
(In reply to Chad Cropper from comment #8)
> I set my logs to WARNING instead of INFO just to keep the logs from filling
> /var every 5 minutes.

Please paste the message here.
This has been fixed in master. We will backport it to the release branches. https://review.gluster.org/#/c/glusterfs/+/20250/
REVIEW: https://review.gluster.org/20694 (storage/posix: Fix excessive logging in WRITE fop path) posted (#1) for review on release-4.1 by Krutika Dhananjay
COMMIT: https://review.gluster.org/20694 committed in release-4.1 by "Krutika Dhananjay" <kdhananj> with a commit message:

    storage/posix: Fix excessive logging in WRITE fop path

    Backport of: https://review.gluster.org/#/c/glusterfs/+/20250

    I was running some write-intensive tests on my volume, and in a matter
    of 2 hrs, the 50GB space in my root partition was exhausted. On
    inspecting further, figured that excessive logging in bricks was the
    cause - specifically in posix write when posix_check_internal_writes()
    does dict_get() without a NULL-check on xdata.

    Change-Id: I89de57a3a90ca5c375e5b9477801a9e5ff018bbf
    fixes: bz#1596686
    Signed-off-by: Krutika Dhananjay <kdhananj>
    (cherry picked from commit 81701e4d92ae7b1d97e5bc955703719f2e9e773a)
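To illustrate the root cause named in the commit message: dict_get() in libglusterfs/src/dict.c validates its arguments and logs the "!this || key=..." line whenever it is handed a NULL dict, so calling it unconditionally on xdata (which is NULL for ordinary client writes) logs on every WRITE fop. Below is a minimal self-contained sketch of the before/after pattern; the dict_t stub, the log text, and the helper bodies are illustrative stand-ins, not the actual glusterfs sources.

#include <stdio.h>

/* Stand-in for glusterfs' dict_t; illustrative only. */
typedef struct dict { int unused; } dict_t;

/* Mimics dict_get()'s argument validation in libglusterfs/src/dict.c:
 * a NULL dict or key is logged and NULL is returned. */
static void *dict_get(dict_t *this, const char *key)
{
    if (!this || !key) {
        fprintf(stderr,
                "I [dict.c:dict_get] 0-dict: !this || key=%s [Invalid argument]\n",
                key ? key : "(null)");
        return NULL;
    }
    return NULL; /* real lookup elided in this sketch */
}

/* Before the fix: dict_get() was called unconditionally, so every
 * WRITE fop arriving with a NULL xdata logged the message above. */
static int check_internal_writes_old(dict_t *xdata)
{
    return dict_get(xdata, "trusted.glusterfs.protect.writes") != NULL;
}

/* After the fix: xdata is NULL-checked first, so nothing is logged. */
static int check_internal_writes_fixed(dict_t *xdata)
{
    return xdata && dict_get(xdata, "trusted.glusterfs.protect.writes") != NULL;
}

int main(void)
{
    check_internal_writes_old(NULL);   /* emits the noisy log line */
    check_internal_writes_fixed(NULL); /* silent */
    return 0;
}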
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.1.3, please open a new bug report.

glusterfs-4.1.3 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-August/000111.html
[2] https://www.gluster.org/pipermail/gluster-users/