| Summary: | Promotions not happening at all after attaching a tier to a legacy volume with huge data (even on files where fix-layout was complete) | | |
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | tier | Assignee: | hari gowtham <hgowtham> |
| Status: | CLOSED WONTFIX | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | ||
| Version: | rhgs-3.1 | CC: | mchangir, mzywusko, nbalacha, rhs-bugs, smohan |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | tier-migration | ||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-02-06 17:43:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Thank you for your bug report. We are no longer working on any improvements for Tier. This bug will be set to CLOSED WONTFIX to reflect this. Please reopen if the RFE is deemed critical.
I see that no promotions are happening even after I heat files, when the legacy volume held a huge amount of data before attach-tier. After attach-tier, I know the fix-layout takes a long time, but even files for which fix-layout was complete are not getting promoted.

NOTE: the query binary file in /var/run/gluster itself is not getting created.

======== before heating file: sql query =========

```
# file: rhs/brick3/stress/kern.legacy/dir_rename.5/linux-4.3.3.tar.xz
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.bit-rot.version=0x0200000000000000567f9830000c6b92
trusted.ec.config=0x0000080602000200
trusted.ec.size=0x00000000052e5f9c
trusted.ec.version=0x00000000000002980000000000000298
trusted.gfid=0x3bf6f6c187b640d7bc982718505b9d0a
trusted.glusterfs.quota.ea4332aa-8cfe-448e-9da4-f76b7adcd5fc.contri.1=0x00000000014b98000000000000000001
trusted.pgfid.ea4332aa-8cfe-448e-9da4-f76b7adcd5fc=0x00000001

[root@zod stress-tier-dht]# ll /rhs/brick*/stress*/kern.legacy/dir_rename.5/linux-4.3.3.tar.xz
-rw-r--r--. 2 root root 21731328 Dec 27 13:35 /rhs/brick1/stress/kern.legacy/dir_rename.5/linux-4.3.3.tar.xz
-rw-r--r--. 2 root root 21731328 Dec 27 13:35 /rhs/brick2/stress/kern.legacy/dir_rename.5/linux-4.3.3.tar.xz
-rw-r--r--. 2 root root 21731328 Dec 27 13:35 /rhs/brick3/stress/kern.legacy/dir_rename.5/linux-4.3.3.tar.xz

[root@zod stress-tier-dht]# echo "===========Date====================="; date; \
  echo "=============ColdBrick#1 ========="; echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /rhs/brick1/stress/.glusterfs/stress.db | grep 3bf6f6c; \
  echo "=============ColdBrick#2 ========="; echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /rhs/brick2/stress/.glusterfs/stress.db | grep 3bf6f6c; \
  echo "=============ColdBrick#3 ========="; echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /rhs/brick3/stress/.glusterfs/stress.db | grep 3bf6f6c; \
  echo "=============ColdBrick#4 ========="; echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /rhs/brick4/stress/.glusterfs/stress.db | grep 3bf6f6c; \
  echo "=============ColdBrick#5 ========="; echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /rhs/brick5/stress/.glusterfs/stress.db | grep 3bf6f6c; \
  echo "=============ColdBrick#6 ========="; echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /rhs/brick6/stress/.glusterfs/stress.db | grep 3bf6f6c; \
  echo ">>>>>>>>>>>> HOTBRICK#1 <<<<<<<<=="; echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /rhs/brick5/stress_hot/.glusterfs/stress_hot.db | grep 3bf6f6c; \
  echo ">>>>>>>>>>>> HOTBRICK#2 <<<<<<<<=="; echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /rhs/brick7/stress_hot/.glusterfs/stress_hot.db | grep 3bf6f6c; \
  echo "###############################"; date; ll /*/brick*/stress*; du -sh /var/run/gluster/stress-tier-dht/*
===========Date=====================
Wed Dec 30 18:11:10 IST 2015
=============ColdBrick#1 =========
3bf6f6c1-87b6-40d7-bc98-2718505b9d0a|0|0|0|0|0|0|0|0|1|1
38b4b6a7-5802-469f-9ed0-1ff3bf6f6cc5|0|0|0|0|0|0|0|0|1|1
^CError: near line 1: interrupted
```

========== after heating file ======
```
===========Date=====================
Wed Dec 30 18:11:35 IST 2015
=============ColdBrick#1 =========
3bf6f6c1-87b6-40d7-bc98-2718505b9d0a|0|0|0|0|0|0|0|0|1|1
3bf6f6c1-87b6-40d7-bc98-2718505b9d0a|ea4332aa-8cfe-448e-9da4-f76b7adcd5fc|linux-4.3.3.tar.xz|0|0
=============ColdBrick#2 =========
3bf6f6c1-87b6-40d7-bc98-2718505b9d0a|0|0|0|0|0|0|0|0|1|1
3bf6f6c1-87b6-40d7-bc98-2718505b9d0a|ea4332aa-8cfe-448e-9da4-f76b7adcd5fc|linux-4.3.3.tar.xz|0|0
=============ColdBrick#3 =========
3bf6f6c1-87b6-40d7-bc98-2718505b9d0a|0|0|0|0|0|0|0|0|1|1
3bf6f6c1-87b6-40d7-bc98-2718505b9d0a|ea4332aa-8cfe-448e-9da4-f76b7adcd5fc|linux-4.3.3.tar.xz|0|0
=============ColdBrick#4 =========
=============ColdBrick#5 =========
=============ColdBrick#6 =========
>>>>>>>>>>>> HOTBRICK#1 <<<<<<<<==
>>>>>>>>>>>> HOTBRICK#2 <<<<<<<<==
###############################
Wed Dec 30 18:14:14 IST 2015
```

(Note: after heating, the file's gfid appears in gf_flink_tb on the cold bricks, but the hot-brick databases stay empty, i.e. no promotion has happened.)

==================================================================================

More information about the setup:

I created a dist-EC volume and ran the following I/O on it:

1) Create a parent dir; inside it, create a dir, copy the linux kernel tarball into it, and untar it. Again under the parent dir, create another dir, copy the kernel, untar it, and so on in a loop of about 1000, giving dir.1, dir.2, ..., dir.1000.

2) With a lag of an hour or so, start renaming/moving dir.1 to rename_dir.1, and so on for all the dirs.

In total I created about 100 GB of data. I kept this I/O pumping for about a day and then attached the tier. After attaching the tier, I changed some values w.r.t. watermarks and other settings (see the vol info).

Then I started to pump in more file creates and I/O. A few hours post attach, I enabled quota (but did not set any limits). Next, I copied a number of audio files from my mount point to the volume. Once done, I started to play them over an NFS mount from my desktop, using VLC player. While doing so, the first few files played well, but then errors started appearing and the files stopped playing (the quota devs are debugging this).

The volume was mounted on multiple clients (4 clients).
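For reproduction purposes, the create/untar/rename workload described above can be sketched roughly as below. This is only a sketch, not the exact script used: the mount point, tarball path, and loop count are arguments, and the two phases run back to back here, whereas the report started the renames with an hour's lag.

```shell
# run_workload MOUNT TARBALL COUNT
# Phase 1: under MOUNT/parent, create dir.1..dir.COUNT, copy TARBALL into
# each, and untar it there (the report used the linux kernel tarball).
# Phase 2: rename every dir.i to rename_dir.i, as in the report.
run_workload() {
    mount_point=$1; tarball=$2; count=$3
    mkdir -p "$mount_point/parent"
    i=1
    while [ "$i" -le "$count" ]; do
        mkdir -p "$mount_point/parent/dir.$i"
        cp "$tarball" "$mount_point/parent/dir.$i/"
        tar -xf "$mount_point/parent/dir.$i/$(basename "$tarball")" \
            -C "$mount_point/parent/dir.$i"
        i=$((i + 1))
    done
    i=1
    while [ "$i" -le "$count" ]; do
        mv "$mount_point/parent/dir.$i" "$mount_point/parent/rename_dir.$i"
        i=$((i + 1))
    done
}
```

Invoked on the fuse mount, the reported run would correspond to something like `run_workload /mnt/stress linux-4.3.3.tar.xz 1000` (paths hypothetical).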
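As a side note on reading the data above: the raw trusted.gfid xattr value from the getfattr dump is the same ID that appears in the gf_file_tb/gf_flink_tb rows, just without dashes. A small helper (a sketch, nothing Gluster-specific) to format the hex into the dashed UUID form used by the database:

```shell
# gfid_hex_to_uuid: turn a raw trusted.gfid value, e.g.
# 0x3bf6f6c187b640d7bc982718505b9d0a, into the dashed UUID form seen in
# the database rows, e.g. 3bf6f6c1-87b6-40d7-bc98-2718505b9d0a.
gfid_hex_to_uuid() {
    hex=${1#0x}    # strip the 0x prefix if present
    printf '%s-%s-%s-%s-%s\n' \
        "$(printf %s "$hex" | cut -c1-8)" \
        "$(printf %s "$hex" | cut -c9-12)" \
        "$(printf %s "$hex" | cut -c13-16)" \
        "$(printf %s "$hex" | cut -c17-20)" \
        "$(printf %s "$hex" | cut -c21-32)"
}
```

This makes it easy to grep a brick's .db file for the gfid taken straight from a getfattr dump.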