Description of problem:
Architecture: LXC container running Ubuntu 14.04 (the physical host runs Ubuntu 14.04 as well). A Logical Volume is mounted directly into the LXC container through its fstab config (/var/lib/lxc/mylxc/fstab). The glusterfs daemon runs inside the LXC.

I am seeing continuously repeated log entries in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

-------------- snip --------------
[2014-06-20 08:26:32.273383] I [glusterd-handler.c:3260:__glusterd_handle_status_volume] 0-management: Received status volume req for volume applications
[2014-06-20 08:26:32.400642] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:32.400691] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:26:48.550989] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:48.551041] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:26:49.271236] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:49.271300] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2014-06-20 08:26:55.311658] I [glusterd-volume-ops.c:478:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume home
[2014-06-20 08:26:55.386682] I [glusterd-handler.c:3260:__glusterd_handle_status_volume] 0-management: Received status volume req for volume home
[2014-06-20 08:26:55.515313] E [glusterd-utils.c:4584:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2014-06-20 08:26:55.515364] E [glusterd-utils.c:4604:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
-------------- snip --------------

This seems to happen because tune2fs does not work within an LXC container:

# tune2fs -l /dev/lxc1/storage03-brick
tune2fs 1.42.9 (4-Feb-2014)
tune2fs: No such file or directory while trying to open /dev/lxc1/storage03-brick
Couldn't find valid filesystem superblock.
# echo $?
1

When I run the same command on the physical server, tune2fs works:

# tune2fs -l /dev/lxc1/storage03-brick
tune2fs 1.42.9 (4-Feb-2014)
Filesystem volume name:   <none>
Last mounted on:          /usr/lib/x86_64-linux-gnu/lxc
Filesystem UUID:          18f84853-4070-4c6a-af95-efb27fe3eace
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
[...]

How reproducible:

Steps to Reproduce:
1. Create an LV on the physical host
2. Attach/mount this LV into the LXC container through the LXC fstab file
3. Launch glusterfs in the LXC container (with peers on other LXCs as well)

Actual results:
After some time the log entries quoted above appear in the glusterd log. glusterd continuously tries to get inode information from the brick, but since tune2fs does not work in the LXC, it just retries over and over.

Expected results:
Give up ;-). Seriously, I don't know how severe it is for glusterd to be unable to run tune2fs on the brick. If it is serious, then output an error in the log stating that the LXC must have the correct permissions to run tune2fs on the mounted LV. If it is not serious, give up and log an entry like "failed to get inode size - unable to run tune2fs on XXX" (where XXX would be the brick).

Additional info:
See the mailing list thread: http://supercolony.gluster.org/pipermail/gluster-users/2014-June/040694.html
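A possible workaround (not verified as part of this report) would be to expose the underlying block device inside the container so that tune2fs can open it. A rough sketch, assuming an LXC 1.x setup as described above; the major:minor numbers (252:3) are only examples and must be taken from the ls output on the host:

On the host, look up the device numbers of the device-mapper node behind the LV:

# ls -lL /dev/lxc1/storage03-brick

In /var/lib/lxc/mylxc/config, allow the container to access that block device:

lxc.cgroup.devices.allow = b 252:3 rwm

Inside the (restarted) container, create the matching device node and retry:

# mkdir -p /dev/lxc1
# mknod /dev/lxc1/storage03-brick b 252 3
# tune2fs -l /dev/lxc1/storage03-brick

With the node present and the cgroup rule in place, glusterd's tune2fs call should be able to read the superblock instead of failing, which would make the repeated error messages go away.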
Kaleb KEITHLEY created a patch to address this issue (why check the inode size continuously - it won't change): http://review.gluster.org/#/c/8134/
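For context, the inode size glusterd is after is a mkfs-time property of the brick filesystem, so reading it once is enough. It is the "Inode size" field of the tune2fs output, e.g. (run on the host here, since tune2fs fails inside the container; 256 is just the common ext4 default):

# tune2fs -l /dev/lxc1/storage03-brick | grep -i "inode size"
Inode size:               256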
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases: the last two releases before 3.7 are still maintained, which at the moment are 3.6 and 3.5. This bug has been filed against the 3.4 release and will not get fixed in a 3.4 version any more.

Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs". If there is no response by the end of the month, this bug will be closed automatically.
See https://www.mail-archive.com/gluster-users@gluster.org/msg21415.html; according to my tests, the bug still exists in version 3.6.4.
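For anyone re-testing on a newer release: judging from the log above, the tune2fs call happens while glusterd handles a volume status request, so it can be triggered on demand from inside the container and the glusterd log checked afterwards (the volume name is the one from the log above; substitute your own):

# gluster volume status applications detail
# grep "tune2fs exited" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -5

New entries with current timestamps mean the release under test is still affected.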
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen this bug and change the version, or open a new bug.