+++ This bug was initially created as a clone of Bug #1193388 +++

Description of problem:
======================
The below error message is seen in the client mount log during a delete operation from the mount point.

[2015-02-17 09:56:16.205313] E [ec-common.c:1209:ec_update_size_version_done] 0-testvol-disperse-0: Failed to update version and size (error 2)

At the same time, the server brick logs show the following messages:

[2015-02-17 09:57:09.233520] E [posix.c:4456:_posix_handle_xattr_keyvalue_pair] 0-testvol-posix: getxattr failed on /rhs/brick1/b1/.glusterfs/6b/68/6b68da3a-6e48-483e-8aef-430c72908ef3 while doing xattrop: Key:trusted.ec.version (No such file or directory)
[2015-02-17 09:57:09.244803] I [server-rpc-fops.c:1797:server_xattrop_cbk] 0-testvol-server: 2565966: XATTROP <gfid:6b68da3a-6e48-483e-8aef-430c72908ef3> (6b68da3a-6e48-483e-8aef-430c72908ef3) ==> (No such file or directory)

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.7dev built on Feb 14 2015 01:05:51

Gluster volume options:
=======================
[root@dhcp37-120 tmp]# gluster volume get testvol all
Option                                  Value
------                                  -----
cluster.lookup-unhashed                 on
cluster.min-free-disk                   10%
cluster.min-free-inodes                 5%
cluster.rebalance-stats                 off
cluster.subvols-per-directory           (null)
cluster.readdir-optimize                off
cluster.rsync-hash-regex                (null)
cluster.extra-hash-regex                (null)
cluster.dht-xattr-name                  trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid    off
cluster.local-volume-name               (null)
cluster.weighted-rebalance              on
cluster.switch-pattern                  (null)
cluster.entry-change-log                on
cluster.read-subvolume                  (null)
cluster.read-subvolume-index            -1
cluster.read-hash-mode                  1
cluster.background-self-heal-count      16
cluster.metadata-self-heal              on
cluster.data-self-heal                  on
cluster.entry-self-heal                 on
cluster.self-heal-daemon                on
cluster.heal-timeout                    600
cluster.self-heal-window-size           1
cluster.data-change-log                 on
cluster.metadata-change-log             on
cluster.data-self-heal-algorithm        (null)
cluster.eager-lock                      on
cluster.quorum-type                     none
cluster.quorum-count                    (null)
cluster.choose-local                    true
cluster.self-heal-readdir-size          1KB
cluster.post-op-delay-secs              1
cluster.ensure-durability               on
cluster.stripe-block-size               128KB
cluster.stripe-coalesce                 true
diagnostics.latency-measurement         off
diagnostics.dump-fd-stats               off
diagnostics.count-fop-hits              off
diagnostics.brick-log-level             INFO
diagnostics.client-log-level            INFO
diagnostics.brick-sys-log-level         CRITICAL
diagnostics.client-sys-log-level        CRITICAL
diagnostics.brick-logger                (null)
diagnostics.client-logger               (null)
diagnostics.brick-log-format            (null)
diagnostics.client-log-format           (null)
diagnostics.brick-log-buf-size          5
diagnostics.client-log-buf-size         5
diagnostics.brick-log-flush-timeout     120
diagnostics.client-log-flush-timeout    120
performance.cache-max-file-size         0
performance.cache-min-file-size         0
performance.cache-refresh-timeout       1
performance.cache-priority
performance.cache-size                  32MB
performance.io-thread-count             16
performance.high-prio-threads           16
performance.normal-prio-threads         16
performance.low-prio-threads            16
performance.least-prio-threads          1
performance.enable-least-priority       on
performance.least-rate-limit            0
performance.cache-size                  128MB
performance.flush-behind                on
performance.nfs.flush-behind            on
performance.write-behind-window-size    1MB
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct             off
performance.nfs.strict-o-direct         off
performance.strict-write-ordering       off
performance.nfs.strict-write-ordering   off
performance.lazy-open                   yes
performance.read-after-open             no
performance.read-ahead-page-count       4
performance.md-cache-timeout            1
features.encryption                     off
encryption.master-key                   (null)
encryption.data-key-size                256
encryption.block-size                   4096
network.frame-timeout                   1800
network.ping-timeout                    42
network.tcp-window-size                 (null)
features.lock-heal                      off
features.grace-timeout                  10
network.remote-dio                      disable
client.event-threads                    2
network.tcp-window-size                 (null)
network.inode-lru-limit                 16384
auth.allow                              *
auth.reject                             (null)
transport.keepalive                     (null)
server.allow-insecure                   (null)
server.root-squash                      off
server.anonuid                          65534
server.anongid                          65534
server.statedump-path                   /var/run/gluster
server.outstanding-rpc-limit            64
features.lock-heal                      off
features.grace-timeout                  (null)
server.ssl                              (null)
auth.ssl-allow                          *
server.manage-gids                      off
client.send-gids                        on
server.gid-timeout                      2
server.own-thread                       (null)
server.event-threads                    2
performance.write-behind                on
performance.read-ahead                  on
performance.readdir-ahead               off
performance.io-cache                    on
performance.quick-read                  on
performance.open-behind                 on
performance.stat-prefetch               on
performance.client-io-threads           off
performance.nfs.write-behind            on
performance.nfs.read-ahead              off
performance.nfs.io-cache                off
performance.nfs.quick-read              off
performance.nfs.stat-prefetch           off
performance.nfs.io-threads              off
performance.force-readdirp              true
features.file-snapshot                  off
features.uss                            off
features.snapshot-directory             .snaps
features.show-snapshot-directory        off
network.compression                     off
network.compression.window-size         -15
network.compression.mem-level           8
network.compression.min-size            0
network.compression.compression-level   -1
network.compression.debug               false
features.limit-usage                    (null)
features.quota-timeout                  0
features.default-soft-limit             80%
features.soft-timeout                   60
features.hard-timeout                   5
features.alert-time                     86400
features.quota-deem-statfs              off
geo-replication.indexing                off
geo-replication.indexing                off
geo-replication.ignore-pid-check        off
geo-replication.ignore-pid-check        off
features.quota                          on
debug.trace                             off
debug.log-history                       no
debug.log-file                          no
debug.exclude-ops                       (null)
debug.include-ops                       (null)
debug.error-gen                         off
debug.error-failure                     (null)
debug.error-number                      (null)
debug.random-failure                    off
debug.error-fops                        (null)
nfs.enable-ino32                        no
nfs.mem-factor                          15
nfs.export-dirs                         on
nfs.export-volumes                      on
nfs.addr-namelookup                     off
nfs.dynamic-volumes                     off
nfs.register-with-portmap               on
nfs.outstanding-rpc-limit               16
nfs.port                                2049
nfs.rpc-auth-unix                       on
nfs.rpc-auth-null                       on
nfs.rpc-auth-allow                      all
nfs.rpc-auth-reject                     none
nfs.ports-insecure                      off
nfs.trusted-sync                        off
nfs.trusted-write                       off
nfs.volume-access                       read-write
nfs.export-dir
nfs.disable                             false
nfs.nlm                                 on
nfs.acl                                 on
nfs.mount-udp                           off
nfs.mount-rmtab                         /var/lib/glusterd/nfs/rmtab
nfs.rpc-statd                           /sbin/rpc.statd
nfs.server-aux-gids                     off
nfs.drc                                 off
nfs.drc-size                            0x20000
nfs.read-size                           (1 * 1048576ULL)
nfs.write-size                          (1 * 1048576ULL)
nfs.readdir-size                        (1 * 1048576ULL)
features.read-only                      off
features.worm                           off
storage.linux-aio                       off
storage.batch-fsync-mode                reverse-fsync
storage.batch-fsync-delay-usec          0
storage.owner-uid                       -1
storage.owner-gid                       -1
storage.node-uuid-pathinfo              off
storage.health-check-interval           30
storage.build-pgfid                     off
storage.bd-aio                          off
cluster.server-quorum-type              off
cluster.server-quorum-ratio             0
changelog.changelog                     off
changelog.changelog-dir                 (null)
changelog.encoding                      ascii
changelog.rollover-time                 15
changelog.fsync-interval                5
changelog.changelog-barrier-timeout     120
features.barrier                        disable
features.barrier-timeout                120
locks.trace                             disable
cluster.disperse-self-heal-daemon       enable
[root@dhcp37-120 tmp]#
[root@dhcp37-120 tmp]#

Gluster volume status:
======================
[root@dhcp37-120 tmp]# gluster volume status
Status of volume: testvol
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick dhcp37-208:/rhs/brick1/b1                 49170   Y       18379
Brick dhcp37-183:/rhs/brick1/b1                 49169   Y       2544
Brick dhcp37-178:/rhs/brick1/b1                 49169   Y       1152
Brick dhcp37-120:/rhs/brick1/b1                 49159   Y       4711
Brick dhcp37-208:/rhs/brick2/b2                 49171   Y       18392
Brick dhcp37-183:/rhs/brick2/b2                 49170   Y       2557
Brick dhcp37-178:/rhs/brick2/b2                 49170   Y       1167
Brick dhcp37-120:/rhs/brick2/b2                 49160   Y       4724
Brick dhcp37-208:/rhs/brick3/b3                 49172   Y       18405
Brick dhcp37-183:/rhs/brick3/b3                 49171   Y       2570
Brick dhcp37-178:/rhs/brick3/b3                 49171   Y       1180
NFS Server on localhost                         2049    Y       9516
Quota Daemon on localhost                       N/A     Y       9532
NFS Server on dhcp37-178                        2049    Y       5729
Quota Daemon on dhcp37-178                      N/A     Y       5746
NFS Server on dhcp37-208                        2049    Y       23074
Quota Daemon on dhcp37-208                      N/A     Y       23090
NFS Server on dhcp37-183                        2049    Y       7215
Quota Daemon on dhcp37-183                      N/A     Y       7231

Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks

Gluster volume info:
====================
[root@dhcp37-120 tmp]# gluster v info

Volume Name: testvol
Type: Disperse
Volume ID: 6ce3dc6f-85ca-4397-9b5c-fe3cc64e62c6
Status: Started
Number of Bricks: 1 x (8 + 3) = 11
Transport-type: tcp
Bricks:
Brick1: dhcp37-208:/rhs/brick1/b1
Brick2: dhcp37-183:/rhs/brick1/b1
Brick3: dhcp37-178:/rhs/brick1/b1
Brick4: dhcp37-120:/rhs/brick1/b1
Brick5: dhcp37-208:/rhs/brick2/b2
Brick6: dhcp37-183:/rhs/brick2/b2
Brick7: dhcp37-178:/rhs/brick2/b2
Brick8: dhcp37-120:/rhs/brick2/b2
Brick9: dhcp37-208:/rhs/brick3/b3
Brick10: dhcp37-183:/rhs/brick3/b3
Brick11: dhcp37-178:/rhs/brick3/b3
Options Reconfigured:
features.uss: off
features.quota: on
[root@dhcp37-120 tmp]#

How reproducible:
================
100%

Steps to Reproduce:
1. Untar a kernel tarball on the mount point
2. Delete the directory from the client
3. Check the client mount logs for the error messages

Actual results:
===============
"Failed to update version and size (error 2)" is seen in the client mount logs.

Expected results:
=================
No error messages should be seen.

Additional info:
================
sosreports of the client and servers will be attached.
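The "(error 2)" in the client-side message is a raw errno value: 2 is ENOENT, the same "No such file or directory" that the brick's posix translator reports for the trusted.ec.version getxattr. A quick check of that mapping:

```python
import errno
import os

# "error 2" in the ec_update_size_version_done message is a raw errno:
# 2 == ENOENT, matching the "No such file or directory" the brick logged
# for the trusted.ec.version getxattr during the xattrop.
assert errno.ENOENT == 2
print(os.strerror(errno.ENOENT))  # "No such file or directory" (C locale)
```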
--- Additional comment from Bhaskarakiran on 2015-02-17 05:01:21 EST ---

--- Additional comment from Bhaskarakiran on 2015-02-17 05:03:08 EST ---

--- Additional comment from Bhaskarakiran on 2015-02-17 05:03:38 EST ---

--- Additional comment from Bhaskarakiran on 2015-02-17 05:26:00 EST ---

--- Additional comment from Bhaskarakiran on 2015-02-17 05:26:18 EST ---

--- Additional comment from Bhaskarakiran on 2015-05-12 05:25:10 EDT ---

The ec.version and ec.size attributes do not get updated on all of the bricks when bricks go down and come back up, or when new bricks are added. This is not just a logging issue.

--- Additional comment from Anand Avati on 2015-05-14 07:40:05 EDT ---

REVIEW: http://review.gluster.org/10784 (Changing log level to DEBUG in case of ENOENT) posted (#1) for review on master by Ashish Pandey (aspandey)

--- Additional comment from Anand Avati on 2015-05-29 02:17:56 EDT ---

REVIEW: http://review.gluster.org/10784 (Changing log level to DEBUG in case of ENOENT) posted (#2) for review on master by Ashish Pandey (aspandey)

--- Additional comment from Anand Avati on 2015-05-29 03:12:42 EDT ---

REVIEW: http://review.gluster.org/10784 (Changing log level to DEBUG in case of ENOENT) posted (#3) for review on master by Ashish Pandey (aspandey)

--- Additional comment from Anand Avati on 2015-06-08 09:19:28 EDT ---

COMMIT: http://review.gluster.org/10784 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit f57c2d1ecbd547360137c9d3a36f64349e6e0fba
Author: Ashish Pandey <aspandey>
Date:   Thu May 14 17:06:25 2015 +0530

    Changing log level to DEBUG in case of ENOENT

    Change-Id: I264e47ca679d8b57cd8c80306c07514e826f92d8
    BUG: 1193388
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: http://review.gluster.org/10784
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Xavier Hernandez <xhernandez>
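The committed change does not alter behavior; it only demotes the xattrop failure message when the underlying error is ENOENT, since a file vanishing mid-delete is an expected race rather than a fault. A minimal sketch of that pattern (the function name is hypothetical, not the actual GlusterFS code):

```python
import errno
import logging

def xattrop_log_level(op_errno: int) -> int:
    """Pick a log level for a failed xattrop, mirroring the fix's pattern:
    ENOENT (file already gone, e.g. during a concurrent delete) is demoted
    to DEBUG; any other failure keeps its ERROR severity.
    Illustrative only -- this is not a GlusterFS API."""
    if op_errno == errno.ENOENT:
        return logging.DEBUG
    return logging.ERROR

# The delete-race case is logged quietly; real faults still stand out.
assert xattrop_log_level(errno.ENOENT) == logging.DEBUG
assert xattrop_log_level(errno.EIO) == logging.ERROR
```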
REVIEW: http://review.gluster.org/11132 (Changing log level to DEBUG in case of ENOENT) posted (#2) for review on release-3.7 by Ashish Pandey (aspandey)
COMMIT: http://review.gluster.org/11132 committed in release-3.7 by Xavier Hernandez (xhernandez)
------
commit 92bf6fcdfbaad7f20efa86eec34cd9e14233c6de
Author: Ashish Pandey <aspandey>
Date:   Thu May 14 17:06:25 2015 +0530

    Changing log level to DEBUG in case of ENOENT

    Change-Id: I264e47ca679d8b57cd8c80306c07514e826f92d8
    BUG: 1229563
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: http://review.gluster.org/10784
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Xavier Hernandez <xhernandez>
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: http://review.gluster.org/11132
    Tested-by: Gluster Build System <jenkins.com>
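For anyone verifying a disperse volume after this change, the trusted.ec.version xattr can be read directly from each brick's copy of a file to confirm the bricks agree. A hedged sketch (the brick path is hypothetical, the trusted.* namespace needs root, and this helper is not a GlusterFS API):

```python
import os

def read_ec_version(brick_path: str):
    """Return the raw trusted.ec.version xattr bytes for one brick's copy
    of a file, or None when the file or the xattr is missing -- the same
    ENOENT case this bug logged during deletes."""
    try:
        return os.getxattr(brick_path, "trusted.ec.version")
    except OSError:
        return None

# A path that no longer exists maps to the ENOENT case:
assert read_ec_version("/rhs/brick1/b1/no-such-file") is None
```

Comparing the returned bytes across all eleven bricks for the same file would show whether any brick was left behind.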
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user