Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. On a working U5 swift setup (ran functional test cases on it), took a backup of all config files.
2. swift-init main stop
3. gluster volume stop test
4. cd /etc/yum.repos.d/ ; wget http://rhsqe-repo.lab.eng.blr.redhat.com/rhs2.0-update6/rhs2.0-update6.repo
5. # yum update
Loaded plugins: aliases, changelog, downloadonly, fastestmirror, filter-data, keys, list-data, merge-conf, priorities, product-id, protectbase,
              : security, subscription-manager, tmprepo, tsflags, upgrade-helper, verify, versionlock
Updating certificate-based repositories.
Determining fastest mirrors
Skipping filters plugin, no data
0 packages excluded due to repository protections
Setting up Update Process
Resolving Dependencies
Skipping filters plugin, no data
--> Running transaction check
---> Package gluster-swift-plugin.noarch 0:1.0-5 will be updated
---> Package gluster-swift-plugin.noarch 0:1.0-6 will be an update
---> Package glusterfs.x86_64 0:3.3.0.11rhs-1.el6rhs will be updated
---> Package glusterfs.x86_64 0:3.3.0.12rhs-2.el6rhs will be an update
---> Package glusterfs-fuse.x86_64 0:3.3.0.11rhs-1.el6rhs will be updated
---> Package glusterfs-fuse.x86_64 0:3.3.0.12rhs-2.el6rhs will be an update
---> Package glusterfs-geo-replication.x86_64 0:3.3.0.11rhs-1.el6rhs will be updated
---> Package glusterfs-geo-replication.x86_64 0:3.3.0.12rhs-2.el6rhs will be an update
---> Package glusterfs-rdma.x86_64 0:3.3.0.11rhs-1.el6rhs will be updated
---> Package glusterfs-rdma.x86_64 0:3.3.0.12rhs-2.el6rhs will be an update
---> Package glusterfs-server.x86_64 0:3.3.0.11rhs-1.el6rhs will be updated
---> Package glusterfs-server.x86_64 0:3.3.0.12rhs-2.el6rhs will be an update
--> Finished Dependency Resolution

Dependencies Resolved
======================================================================================================================================================
 Package                            Arch         Version                    Repository          Size
======================================================================================================================================================
Updating:
 gluster-swift-plugin               noarch       1.0-6                      rhs2.0-update6      34 k
 glusterfs                          x86_64       3.3.0.12rhs-2.el6rhs       rhs2.0-update6      1.7 M
 glusterfs-fuse                     x86_64       3.3.0.12rhs-2.el6rhs       rhs2.0-update6      62 k
 glusterfs-geo-replication          x86_64       3.3.0.12rhs-2.el6rhs       rhs2.0-update6      110 k
 glusterfs-rdma                     x86_64       3.3.0.12rhs-2.el6rhs       rhs2.0-update6      41 k
 glusterfs-server                   x86_64       3.3.0.12rhs-2.el6rhs       rhs2.0-update6      552 k

Transaction Summary
======================================================================================================================================================
Upgrade       6 Package(s)

Total download size: 2.5 M
Is this ok [y/N]: y
Downloading Packages:
(1/6): gluster-swift-plugin-1.0-6.noarch.rpm                     |  34 kB     00:00
(2/6): glusterfs-3.3.0.12rhs-2.el6rhs.x86_64.rpm                 | 1.7 MB     00:03
(3/6): glusterfs-fuse-3.3.0.12rhs-2.el6rhs.x86_64.rpm            |  62 kB     00:00
(4/6): glusterfs-geo-replication-3.3.0.12rhs-2.el6rhs.x86_64.rpm | 110 kB     00:00
(5/6): glusterfs-rdma-3.3.0.12rhs-2.el6rhs.x86_64.rpm            |  41 kB     00:00
(6/6): glusterfs-server-3.3.0.12rhs-2.el6rhs.x86_64.rpm          | 552 kB     00:01
------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                   331 kB/s | 2.5 MB     00:07
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : glusterfs-3.3.0.12rhs-2.el6rhs.x86_64                        1/12
  Updating   : glusterfs-fuse-3.3.0.12rhs-2.el6rhs.x86_64                   2/12
  Updating   : glusterfs-server-3.3.0.12rhs-2.el6rhs.x86_64                 3/12
warning: /var/lib/glusterd/vols/test/test.10.65.207.97.mnt-lv2.vol saved as /var/lib/glusterd/vols/test/test.10.65.207.97.mnt-lv2.vol.rpmsave
warning: /var/lib/glusterd/vols/test/test.10.65.207.97.mnt-lv3.vol saved as /var/lib/glusterd/vols/test/test.10.65.207.97.mnt-lv3.vol.rpmsave
warning: /var/lib/glusterd/vols/test/test.10.65.207.97.mnt-lv1.vol saved as /var/lib/glusterd/vols/test/test.10.65.207.97.mnt-lv1.vol.rpmsave
warning: /var/lib/glusterd/vols/test/test.10.65.207.97.mnt-lv4.vol saved as /var/lib/glusterd/vols/test/test.10.65.207.97.mnt-lv4.vol.rpmsave
warning: /var/lib/glusterd/vols/test/test-fuse.vol saved as /var/lib/glusterd/vols/test/test-fuse.vol.rpmsave
warning: /var/lib/glusterd/vols/test/trusted-test-fuse.vol saved as /var/lib/glusterd/vols/test/trusted-test-fuse.vol.rpmsave
  Updating   : glusterfs-geo-replication-3.3.0.12rhs-2.el6rhs.x86_64        4/12
  Updating   : glusterfs-rdma-3.3.0.12rhs-2.el6rhs.x86_64                   5/12
  Updating   : gluster-swift-plugin-1.0-6.noarch                            6/12
  Cleanup    : glusterfs-server-3.3.0.11rhs-1.el6rhs.x86_64                 7/12
  Cleanup    : glusterfs-fuse-3.3.0.11rhs-1.el6rhs.x86_64                   8/12
  Cleanup    : glusterfs-rdma-3.3.0.11rhs-1.el6rhs.x86_64                   9/12
  Cleanup    : glusterfs-geo-replication-3.3.0.11rhs-1.el6rhs.x86_64       10/12
  Cleanup    : gluster-swift-plugin-1.0-5.noarch                           11/12
  Cleanup    : glusterfs-3.3.0.11rhs-1.el6rhs.x86_64                       12/12
Installed products updated.

Updated:
  gluster-swift-plugin.noarch 0:1.0-6
  glusterfs.x86_64 0:3.3.0.12rhs-2.el6rhs
  glusterfs-fuse.x86_64 0:3.3.0.12rhs-2.el6rhs
  glusterfs-geo-replication.x86_64 0:3.3.0.12rhs-2.el6rhs
  glusterfs-rdma.x86_64 0:3.3.0.12rhs-2.el6rhs
  glusterfs-server.x86_64 0:3.3.0.12rhs-2.el6rhs

Complete!
6. gluster volume start test
starting volume test has been successful
7. swift-init main start
Unable to locate config for proxy-server
Unable to locate config for container-server
Unable to locate config for account-server
Unable to locate config for object-server
8. Restore the old config files:
cd /etc/swift/ ; cp ~/swift-u5-config/swift.conf .
cp ~/swift-u5-config/object-server/1.conf object-server/1.conf
cp ~/swift-u5-config/account-server/1.conf account-server/1.conf
cp ~/swift-u5-config/container-server/1.conf container-server/1.conf
9. Start the swift services again:
swift-init main start
Starting proxy-server...(/etc/swift/proxy-server.conf)
Starting container-server...(/etc/swift/container-server/1.conf)
Starting account-server...(/etc/swift/account-server/1.conf)
Starting object-server...(/etc/swift/object-server/1.conf)
10. Hitting 503 for every request:
nosetests --exe ~/gluster-swift/test/functional/tests.py
EEEEEEEE
Aug 23 17:38:26 dhcp207-97 proxy-server Account GET returning 503 for [500] (txn: txc0d8b8d2db3d41139cf18817363d5e39) (client_ip: 127.0.0.1)
Aug 23 17:38:26 dhcp207-97 proxy-server 127.0.0.1 127.0.0.1 23/Aug/2013/17/38/26 GET /v1/AUTH_test HTTP/1.0 503 - - test%2CAUTH_tk9743d3f589ad47a3ba8768f3b911313c - - - txc0d8b8d2db3d41139cf18817363d5e39 - 0.0018 -
Aug 23 17:38:31 dhcp207-97 proxy-server ERROR 500 Traceback (most recent call last):#012  File "/usr/lib/python2.6/site-packages/eventlet/wsgi.py", line 336, in handle_one_response#012    result = self.application(self.environ, start_response)#012  File "/usr/lib/python2.6/site-packages/swift/common/middleware/gluster.py", line 39, in __call__#012    env['fs_object'] = fs_object()#012  File "/usr/lib/python2.6/site-packages/swift/plugins/Glusterfs.py", line 27, in __init__#012    self.mount_path = self.fs_conf.get('DEFAULT', 'mount_path', '/mnt/gluster-object')#012  File "/usr/lib64/python2.6/ConfigParser.py", line 541, in get#012    raise NoOptionError(option, section)#012NoOptionError: No option 'mount_path' in section: 'DEFAULT'#012 From Account Server 127.0.0.1:6012 (txn: tx49dbfd4d5e7d44d9a8fa13c5abd1a5f0)
11. Even tried:
a.) mount -t glusterfs localhost:test /mnt/gluster-object/test
[root@dhcp207-97 swift]# mount
/dev/mapper/vg_dhcp20797-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/vg-lv1 on /mnt/lv1 type xfs (rw)
/dev/mapper/vg-lv2 on /mnt/lv2 type xfs (rw)
/dev/mapper/vg-lv3 on /mnt/lv3 type xfs (rw)
/dev/mapper/vg-lv4 on /mnt/lv4 type xfs (rw)
/dev/mapper/vg-lv5 on /mnt/lv5 type xfs (rw)
/dev/mapper/vg-lv6 on /mnt/lv6 type xfs (rw)
/dev/mapper/vg-lv7 on /mnt/lv7 type xfs (rw)
/dev/mapper/vg-lv8 on /mnt/lv8 type xfs (rw)
/dev/mapper/vg-lv9 on /mnt/lv9 type xfs (rw)
localhost:test on /mnt/gluster-object/AUTH_test type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
localhost:test on /mnt/gluster-object/test type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
b.) [root@dhcp207-97 swift]# swift-init main restart
Signal proxy-server  pid: 6456  signal: 15
Signal container-server  pid: 6457  signal: 15
Signal account-server  pid: 6458  signal: 15
Signal object-server  pid: 6459  signal: 15
proxy-server (6456) appears to have stopped
container-server (6457) appears to have stopped
account-server (6458) appears to have stopped
object-server (6459) appears to have stopped
Starting proxy-server...(/etc/swift/proxy-server.conf)
Starting container-server...(/etc/swift/container-server/1.conf)
Starting account-server...(/etc/swift/account-server/1.conf)
Starting object-server...(/etc/swift/object-server/1.conf)
c.)
nosetests --exe ~/gluster-swift/test/functional/tests.py
E
Aug 23 17:40:46 dhcp207-97 proxy-server Account GET returning 503 for [500] (txn: tx9b6e79c227d048c2b8766d07ea86dea7) (client_ip: 127.0.0.1)
Aug 23 17:40:46 dhcp207-97 proxy-server 127.0.0.1 127.0.0.1 23/Aug/2013/17/40/46 GET /v1/AUTH_test HTTP/1.0 503 - - test%2CAUTH_tk9743d3f589ad47a3ba8768f3b911313c - - - tx9b6e79c227d048c2b8766d07ea86dea7 - 0.0021 -
Aug 23 17:40:51 dhcp207-97 proxy-server ERROR 500 Traceback (most recent call last):#012  File "/usr/lib/python2.6/site-packages/eventlet/wsgi.py", line 336, in handle_one_response#012    result = self.application(self.environ, start_response)#012  File "/usr/lib/python2.6/site-packages/swift/common/middleware/gluster.py", line 39, in __call__#012    env['fs_object'] = fs_object()#012  File "/usr/lib/python2.6/site-packages/swift/plugins/Glusterfs.py", line 27, in __init__#012    self.mount_path = self.fs_conf.get('DEFAULT', 'mount_path', '/mnt/gluster-object')#012  File "/usr/lib64/python2.6/ConfigParser.py", line 541, in get#012    raise NoOptionError(option, section)#012NoOptionError: No option 'mount_path' in section: 'DEFAULT'#012 From Account Server 127.0.0.1:6012 (txn: tx8d5c5f11a2944191aaf4f0b041c4aacb)

Actual results:
gluster-swift-account-1.4.8-5.el6rhs.noarch
gluster-swift-object-1.4.8-5.el6rhs.noarch
gluster-swift-container-1.4.8-5.el6rhs.noarch
gluster-swift-doc-1.4.8-5.el6rhs.noarch
gluster-swift-plugin-1.0-6.noarch
gluster-swift-1.4.8-5.el6rhs.noarch
gluster-swift-proxy-1.4.8-5.el6rhs.noarch

Expected results:


Additional info:
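A side note on the traceback above: Python 2's `ConfigParser.get()` has the signature `get(section, option, raw=0, vars=None)`, so the third positional argument in the Glusterfs.py call is interpreted as `raw`, not as a default value, and a missing `mount_path` option raises `NoOptionError` regardless. A minimal sketch of that behaviour (shown with Python 3's `configparser`, which does offer an explicit `fallback=` keyword):

```python
import configparser

# fs.conf as found after the upgrade: a DEFAULT section with no mount_path.
cp = configparser.ConfigParser()
cp.read_string("[DEFAULT]\n")

# Mirrors the failing call: '/mnt/gluster-object' passed positionally does
# not act as a default, so the lookup still raises NoOptionError.
try:
    mount_path = cp.get("DEFAULT", "mount_path")
except configparser.NoOptionError:
    mount_path = None  # this is the branch the traceback shows

# The behaviour the code apparently intended, spelled out explicitly:
fallback = cp.get("DEFAULT", "mount_path", fallback="/mnt/gluster-object")
print(mount_path, fallback)
```

Either defining `mount_path` in the config file the plugin reads, or passing a real fallback, would avoid the 500s.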
We need to fix this before shipping u6.
It seems that fix https://code.engineering.redhat.com/gerrit/#/c/8453/ for bug 969224 (https://bugzilla.redhat.com/show_bug.cgi?id=969224) did not actually resolve the issue, since the config files were still being deleted. We now ship the config files as part of the RPM and mark them with the %config(noreplace) directive in the RPM spec file, covering upgrades, installations, and removals. When the system is upgraded to a build containing this fix, any config files that were edited are kept verbatim. When the system is upgraded to RHS 2.1, the config files are saved as ${configfile}.rpmsave, since users will need to re-inspect them against the new Grizzly settings.
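As an illustration of the packaging change described above, a hypothetical spec-file fragment (the paths are examples, not the actual gluster-swift-plugin file list):

```spec
%files
# %config(noreplace): a locally edited config file is kept in place on
# upgrade and the packaged version is written alongside it as <file>.rpmnew;
# plain %config would instead install the new file and save the edited
# copy as <file>.rpmsave. Files a new package no longer owns are saved as
# <file>.rpmsave on removal of the old package if they were modified.
%config(noreplace) %{_sysconfdir}/swift/swift.conf
%config(noreplace) %{_sysconfdir}/swift/proxy-server.conf
```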
Adding keyword ZStream.
The change is awaiting review in GlusterFS upstream: http://review.gluster.org/#/c/5706/1
RPM available: https://brewweb.devel.redhat.com/buildinfo?buildID=292018
1. Upgrade from U4/U5 (gluster-swift-plugin-1.0-5.noarch.rpm) to U6 (gluster-swift-plugin-1.0-7.noarch.rpm) went well: all config files were kept as-is, and the new conf files were saved with a -gluster extension.
2. While upgrading from U6 to RHS 2.1:
a.) All modified config files were saved with a *.rpmsave extension, and all unmodified config files were erased.
b.) All ring-related files and folders (*ring.gz & *builders) were removed, as these need to be created afresh.
3. Proper upgrade documentation for RHS 2.1 is all that is needed. U6 seems good to go from the gluster-swift point of view.
Marking the BZ as verified based on the above observations.
Hi Luis,

Could you update the doc text for this bug? It will help me write the errata text.

Regards,
Anjana
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2013-1205.html