Bug 1228643 - I/O failure on attaching tier
Summary: I/O failure on attaching tier
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Dan Lambright
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Duplicates: 1230692
Depends On: 1214289 1230692 1259081 1263549
Blocks: qe_tracker_everglades 1202842 1219547 1260783 1260923
 
Reported: 2015-06-05 12:02 UTC by Dan Lambright
Modified: 2016-09-17 15:35 UTC (History)
10 users

Fixed In Version: glusterfs-3.7.5-0.3
Doc Type: Bug Fix
Doc Text:
Clone Of: 1214289
Environment:
Last Closed: 2016-03-01 05:25:31 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0193 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 update 2 2016-03-01 10:20:36 UTC

Description Dan Lambright 2015-06-05 12:02:31 UTC
+++ This bug was initially created as a clone of Bug #1214289 +++

Description of problem:
I/O failure on attaching tier

Version-Release number of selected component (if applicable):
glusterfs-server-3.7dev-0.994.git0d36d4f.el6.x86_64

How reproducible:


Steps to Reproduce:
1. Create a replica volume
2. Start 100% write I/O on the volume
3. Attach a tier while the I/O is in progress
4. Attach tier succeeds, but the I/O fails

Actual results:
The I/Os are failing. Here is the console output:

linux-2.6.31.1/arch/ia64/include/asm/sn/mspec.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/mspec.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/nodepda.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/nodepda.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/pcibr_provider.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/pcibr_provider.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/pcibus_provider_defs.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/pcibus_provider_defs.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/pcidev.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/pcidev.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/pda.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/pda.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/pic.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/pic.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/rw_mmr.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/rw_mmr.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/shub_mmr.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/shub_mmr.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/shubio.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/shubio.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/simulator.h
tar: linux-2.6.31.1/arch/ia64/include/asm/sn/simulator.h: Cannot open: Stale file handle
linux-2.6.31.1/arch/ia64/include/asm/sn/sn2/


Expected results:
I/O should continue normally while the tier is being added. Additionally, all new writes after the tier is attached should go to the hot tier.

Additional info:

--- Additional comment from Anoop on 2015-04-22 07:05:58 EDT ---

Volume info before attach:

Volume Name: vol1
Type: Replicate
Volume ID: b77d4050-7fdc-45ff-a084-f85eec2470fc
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.35.56:/rhs/brick1
Brick2: 10.70.35.67:/rhs/brick1

Volume Info post attach
Volume Name: vol1
Type: Tier
Volume ID: b77d4050-7fdc-45ff-a084-f85eec2470fc
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.35.67:/rhs/brick2
Brick2: 10.70.35.56:/rhs/brick2
Brick3: 10.70.35.56:/rhs/brick1
Brick4: 10.70.35.67:/rhs/brick1

--- Additional comment from Dan Lambright on 2015-04-22 15:46:08 EDT ---

When we attach a tier, the newly added translator has no cached subvolume for I/Os in flight, so I/Os to open files fail. The solution, I believe, is to recompute the cached subvolume for all open FDs with a lookup in tier_init; working on a fix.
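The failure mode described above can be modeled in a few lines. This is an illustrative Python sketch, not GlusterFS source; all class and method names here are hypothetical stand-ins for the translator's cached-subvolume bookkeeping.

```python
# Toy model of the bug: a freshly attached tier translator has no cached
# subvolume for fds that were already open, so writes through it fail
# until the cache is rebuilt at init (the proposed fix).

class Subvol:
    """Stand-in for a DHT subvolume (hypothetical name)."""
    def __init__(self, name):
        self.name = name

    def write(self, fd, data):
        return f"wrote {len(data)} bytes to {self.name}"

class TierXlator:
    """Toy tier translator; models only the cached-subvolume bookkeeping."""
    def __init__(self, subvols, open_fds=()):
        self.subvols = subvols
        self.cached_subvol = {}  # fd -> Subvol
        # The proposed fix: at init, run a lookup for every already-open fd
        # so that in-flight I/O finds a cached subvolume.
        for fd in open_fds:
            self.cached_subvol[fd] = self.lookup(fd)

    def lookup(self, fd):
        # Files live on the cold tier until the migration process moves them.
        return self.subvols["cold"]

    def write(self, fd, data):
        subvol = self.cached_subvol.get(fd)
        if subvol is None:
            # Without the fix, this is the path the bug hits.
            raise OSError("Stale file handle")
        return subvol.write(fd, data)
```

With the init-time lookups, writes on pre-existing fds succeed; without them, the write path raises, mirroring the "Stale file handle" errors in the tar output above.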

--- Additional comment from Anand Avati on 2015-04-28 16:28:27 EDT ---

REVIEW: http://review.gluster.org/10435 (cluster/tier: don't use hot tier until subvolumes ready (WIP)) posted (#1) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-29 16:22:55 EDT ---

REVIEW: http://review.gluster.org/10435 (cluster/tier: don't use hot tier until subvolumes ready (WIP)) posted (#2) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-29 18:05:44 EDT ---

REVIEW: http://review.gluster.org/10435 (cluster/tier: don't use hot tier until subvolumes ready (WIP)) posted (#3) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-05-04 14:55:52 EDT ---

REVIEW: http://review.gluster.org/10435 (cluster/tier: don't use hot tier until subvolumes ready) posted (#4) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Dan Lambright on 2015-05-04 14:57:34 EDT ---

There may still be a window where an I/O error can happen, but this fix should close most of them. The window can be completely closed once BZ 1156637 is resolved.

--- Additional comment from Anand Avati on 2015-05-05 11:36:32 EDT ---

COMMIT: http://review.gluster.org/10435 committed in master by Kaleb KEITHLEY (kkeithle) 
------
commit 377505a101eede8943f5a345e11a6901c4f8f420
Author: Dan Lambright <dlambrig>
Date:   Tue Apr 28 16:26:33 2015 -0400

    cluster/tier: don't use hot tier until subvolumes ready
    
    When we attach a tier, the hot tier becomes the hashed
    subvolume. But directories may not yet have been replicated by
    the fix layout process. Hence lookups to those directories
    will fail on the hot subvolume. We should only go to the hashed
    subvolume once the layout has been fixed. This is known if the
    layout for the parent directory does not have an error. If
    there is an error, the cold tier is considered the hashed
    subvolume. The exception to this rule is ENOCON, in which
    case we do not know where the file is and must abort.
    
    Note we may revalidate a lookup for a directory even if the
    inode has not yet been populated by FUSE. This case can
    happen in tiering (where one tier has completed a lookup
    but the other has not, in which case we revalidate one tier
    when we call lookup the second time). Such inodes are
    still invalid and should not be consulted for validation.
    
    Change-Id: Ia2bc62e1d807bd70590bd2a8300496264d73c523
    BUG: 1214289
    Signed-off-by: Dan Lambright <dlambrig>
    Reviewed-on: http://review.gluster.org/10435
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Reviewed-by: N Balachandran <nbalacha>
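The routing rule the commit message describes can be sketched as follows. This is illustrative Python, not the actual cluster/tier C code, and the function name and error strings are assumptions for the sketch (the commit's "ENOCON" case is treated here as a not-connected error).

```python
# Sketch of the commit's rule: the hot tier is the hashed subvolume only
# once fix-layout has replicated the parent directory; on a layout error
# fall back to the cold tier; if we are not connected, abort, since we
# cannot know where the file is.

ENOTCONN = "ENOTCONN"  # stand-in for the "ENOCON" case in the commit text

def pick_hashed_subvol(parent_layout_err):
    if parent_layout_err is None:
        return "hot"       # layout fixed: hot tier is the hashed subvolume
    if parent_layout_err == ENOTCONN:
        # We do not know where the file is; the operation must abort.
        raise OSError("cannot determine subvolume: not connected")
    return "cold"          # layout not yet fixed: cold tier is hashed
```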

--- Additional comment from Anoop on 2015-05-13 08:31:00 EDT ---

Reproduced this on the BETA2 build too, hence moving it to ASSIGNED.

--- Additional comment from nchilaka on 2015-06-02 11:56:53 EDT ---

Seeing the following issue on the latest downstream build.
Steps to reproduce:
1) Create a dist-rep volume:
  gluster v create tiervol2 replica 2 10.70.46.233:/rhs/brick1/tiervol2 10.70.46.236:/rhs/brick1/tiervol2 10.70.46.240:/rhs/brick1/tiervol2 10.70.46.243:/rhs/brick1/tiervol2
2) Start the volume and issue commands like info and status
3) Mount using NFS
4) Trigger some I/O on the volume
5) While I/O is happening, attach a tier

The tier gets attached successfully, but the I/O fails from then on.

Some observations worth noting:
1) This happens only when we mount using NFS; with a glusterfs mount it works fine (Anoop, comment if you see the issue even on a glusterfs mount)
2) It seems to be a problem with the tiering/NFS interaction, as the NFS ports are all down when I run the above scenario
3) The issue is hit only when I/O was in progress while attaching the tier (although this will be the most common case at a customer site)


[root@rhsqa14-vm1 ~]# gluster v status tiervol2
Status of volume: tiervol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.233:/rhs/brick1/tiervol2     49153     0          Y       1973 
Brick 10.70.46.236:/rhs/brick1/tiervol2     49154     0          Y       24453
Brick 10.70.46.240:/rhs/brick1/tiervol2     49154     0          Y       32272
Brick 10.70.46.243:/rhs/brick1/tiervol2     49153     0          Y       31759
NFS Server on localhost                     2049      0          Y       1992 
Self-heal Daemon on localhost               N/A       N/A        Y       2017 
NFS Server on 10.70.46.243                  2049      0          Y       31778
Self-heal Daemon on 10.70.46.243            N/A       N/A        Y       31790
NFS Server on 10.70.46.236                  2049      0          Y       24472
Self-heal Daemon on 10.70.46.236            N/A       N/A        Y       24482
NFS Server on 10.70.46.240                  2049      0          Y       32292
Self-heal Daemon on 10.70.46.240            N/A       N/A        Y       32312
 
Task Status of Volume tiervol2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@rhsqa14-vm1 ~]# gluster v info tiervol2
 
Volume Name: tiervol2
Type: Distributed-Replicate
Volume ID: a98f39c2-03ed-4ec7-909f-573b89a2a3e8
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/tiervol2
Brick2: 10.70.46.236:/rhs/brick1/tiervol2
Brick3: 10.70.46.240:/rhs/brick1/tiervol2
Brick4: 10.70.46.243:/rhs/brick1/tiervol2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]# #################Now i have mounted the regular dist-rep vol  https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.0.4.tar.xz
You have new mail in /var/spool/mail/root
[root@rhsqa14-vm1 ~]# #################Now i have mounted the regular dist-rep vol  tiervol2##########
[root@rhsqa14-vm1 ~]# ls /rhs/brick1/tiervol2
linux-4.0.4.tar.xz
[root@rhsqa14-vm1 ~]#  #################Next I will attach a tier while untaring the image, and will check status of vol, it will show nfs down###########
[root@rhsqa14-vm1 ~]# ls /rhs/brick1/tiervol2 ;gluster v attach-tier tiervol2 10.70.46.236:/rhs/brick2/tiervol2 10.70.46.240:/rhs/brick2/tiervol2
linux-4.0.4  linux-4.0.4.tar.xz
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: tiervol2: success: Rebalance on tiervol2 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 1e59a5cc-2ff0-48ce-a34e-0521cbe65d73

You have mail in /var/spool/mail/root
[root@rhsqa14-vm1 ~]# ls /rhs/brick1/tiervol2
linux-4.0.4  linux-4.0.4.tar.xz
[root@rhsqa14-vm1 ~]# gluster v info tiervol2
 
Volume Name: tiervol2
Type: Tier
Volume ID: a98f39c2-03ed-4ec7-909f-573b89a2a3e8
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.240:/rhs/brick2/tiervol2
Brick2: 10.70.46.236:/rhs/brick2/tiervol2
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.46.233:/rhs/brick1/tiervol2
Brick4: 10.70.46.236:/rhs/brick1/tiervol2
Brick5: 10.70.46.240:/rhs/brick1/tiervol2
Brick6: 10.70.46.243:/rhs/brick1/tiervol2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]# gluster v status tiervol2
Status of volume: tiervol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.46.240:/rhs/brick2/tiervol2     49155     0          Y       32411
Brick 10.70.46.236:/rhs/brick2/tiervol2     49155     0          Y       24590
Brick 10.70.46.233:/rhs/brick1/tiervol2     49153     0          Y       1973 
Brick 10.70.46.236:/rhs/brick1/tiervol2     49154     0          Y       24453
Brick 10.70.46.240:/rhs/brick1/tiervol2     49154     0          Y       32272
Brick 10.70.46.243:/rhs/brick1/tiervol2     49153     0          Y       31759
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on 10.70.46.236                  N/A       N/A        N       N/A  
NFS Server on 10.70.46.243                  N/A       N/A        N       N/A  
NFS Server on 10.70.46.240                  N/A       N/A        N       N/A  
 
Task Status of Volume tiervol2
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 1e59a5cc-2ff0-48ce-a34e-0521cbe65d73
Status               : in progress         
 



sosreport logs attached.

--- Additional comment from nchilaka on 2015-06-02 11:58:21 EDT ---



--- Additional comment from Anand Avati on 2015-06-04 14:01:07 EDT ---

REVIEW: http://review.gluster.org/11092 (cluster/tier: account for reordered layouts) posted (#1) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Dan Lambright on 2015-06-04 14:04:52 EDT ---

Will give Nag a special build with fix 11092 and we will try to confirm the problem is in a reasonable state.

Comment 4 Dan Lambright 2015-06-11 14:18:36 UTC
*** Bug 1214289 has been marked as a duplicate of this bug. ***


Comment 6 Triveni Rao 2015-06-12 09:13:54 UTC
I checked that after attaching a tier while I/O was running on the volume, I still see messages like:

linux-4.1-rc7/Documentation/devicetree/bindings/mfd/lp3943.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/lp3943.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max14577.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max14577.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max77686.txt

After this, attach-tier is successful and I/O continues as well.


I tried to verify whether the files reported lost, like mfd/lp3943.txt or max77686.txt, were present on the bricks; I found them on the bricks of either node.

The data is present on the bricks but not available on the mount point:


[root@rhsqa14-vm1 ~]# gluster v info mercury

Volume Name: mercury  
Type: Distributed-Replicate
Volume ID: 089ef306-7fcc-4780-a4d5-b6078128625f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp   
Bricks:
Brick1: 10.70.47.165:/rhs/brick1/t0
Brick2: 10.70.47.163:/rhs/brick1/t0
Brick3: 10.70.47.165:/rhs/brick2/t0
Brick4: 10.70.47.163:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]#



[root@rhsqa14-vm1 ~]# gluster v attach-tier mercury replica 2 10.70.47.165:/rhs/brick5/d0 10.70.47.163:/rhs/brick5/d0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: mercury: success: Rebalance on mercury has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 5adeab9f-f804-468d-889c-61928d8a7b7f

[root@rhsqa14-vm1 ~]# gluster v info mercury

Volume Name: mercury
Type: Tier
Volume ID: 089ef306-7fcc-4780-a4d5-b6078128625f
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: 10.70.47.163:/rhs/brick5/d0
Brick2: 10.70.47.165:/rhs/brick5/d0
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.47.165:/rhs/brick1/t0
Brick4: 10.70.47.163:/rhs/brick1/t0
Brick5: 10.70.47.165:/rhs/brick2/t0
Brick6: 10.70.47.163:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on





linux-4.1-rc7/Documentation/devicetree/bindings/mfd/lp3943.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/lp3943.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max14577.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max14577.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max77686.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max77686.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max77693.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max77693.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max8925.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max8925.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max8998.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max8998.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/mc13xxx.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/mc13xxx.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/mt6397.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/mt6397.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/omap-usb-host.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/omap-usb-host.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/omap-usb-tll.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/omap-usb-tll.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/palmas.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/palmas.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/qcom,spmi-pmic.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/qcom,spmi-pmic.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/qcom,tcsr.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/qcom,tcsr.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/qcom-pm8xxx.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/qcom-pm8xxx.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mfd/qcom-rpm.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mfd/qcom-rpm.txt: Cannot open: No such file or directory


tar: linux-4.1-rc7/Documentation/devicetree/bindings/mmc/sdhci-sirf.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/sdhci-spear.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mmc/sdhci-spear.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/sdhci-st.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mmc/sdhci-st.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/socfpga-dw-mshc.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mmc/socfpga-dw-mshc.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/sunxi-mmc.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mmc/sunxi-mmc.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/synopsys-dw-mshc.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mmc/synopsys-dw-mshc.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/ti-omap-hsmmc.txt
tar: linux-4.1-rc7/Documentation/devicetree/bindings/mmc/ti-omap-hsmmc.txt: Cannot open: No such file or directory
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/ti-omap.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/tmio_mmc.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/usdhi6rol0.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mmc/vt8500-sdmmc.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/arm-versatile.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/atmel-dataflash.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/atmel-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/davinci-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/denali-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/diskonchip.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/elm.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/flctl-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/fsl-quadspi.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/fsl-upm-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/fsmc-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/gpio-control-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/gpmc-nor.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/gpmc-onenand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/gpmi-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/hisi504-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/jedec,spi-nor.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/lpc32xx-mlc.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/lpc32xx-slc.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/mtd-physmap.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/mxc-nand.txt
linux-4.1-rc7/Documentation/devicetree/bindings/mtd/nand.txt


[root@rhsqa14-vm1 brick1]# find -name sdhci-sirf.txt
./m0/linux-4.1-rc7/Documentation/devicetree/bindings/mmc/sdhci-sirf.txt
[root@rhsqa14-vm1 brick1]# 
[root@rhsqa14-vm1 brick1]# 


[root@rhsqa14-vm2 brick1]# find -name max14577.txt
./m0/linux-4.1-rc7/Documentation/devicetree/bindings/mfd/max14577.txt
[root@rhsqa14-vm2 brick1]# cd /rhs/brick2
[root@rhsqa14-vm2 brick2]# find -name max14577.txt
[root@rhsqa14-vm2 brick2]# cd /rhs/brick3
[root@rhsqa14-vm2 brick3]# find -name max14577.txt
[root@rhsqa14-vm2 brick3]# 


root@rhsqa14-vm1 brick3]# find -name synopsys-dw-mshc.txt
[root@rhsqa14-vm1 brick3]# cd /rhs/brick1
[root@rhsqa14-vm1 brick1]# find -name synopsys-dw-mshc.txt
./m0/linux-4.1-rc7/Documentation/devicetree/bindings/mmc/synopsys-dw-mshc.txt
[root@rhsqa14-vm1 brick1]# 


Note: 

tar: linux-4.1-rc7/Documentation/devicetree/bindings/mmc/sdhci-st.txt: Cannot open: No such file or directory

Though this file sdhci-st.txt is present on the bricks, it is not available on the mount point.


[root@rhsqa14-vm5 qsnap_GMT-2015.06.12-08.24.03]# cd linux-4.1-rc7
[root@rhsqa14-vm5 linux-4.1-rc7]# ls -la
total 128
drwx------.  3 root root   164 Jun 12 04:23 .
drwxr-xr-x.  5 root root   263 Jun 12 04:23 ..
-rw-rw-r--.  1 root root 18693 Jun  7 23:23 COPYING
-rw-rw-r--.  1 root root 96960 Jun  7 23:23 CREDITS
drwx------. 30 root root  8192 Jun 12 04:24 Documentation
-rw-rw-r--.  1 root root  1226 Jun  7 23:23 .gitignore
-rw-rw-r--.  1 root root  5020 Jun  7 23:23 .mailmap
[root@rhsqa14-vm5 linux-4.1-rc7]# find -name synopsys-dw-mshc.txt
[root@rhsqa14-vm5 linux-4.1-rc7]# cd /disk1
[root@rhsqa14-vm5 disk1]# ls
linux-4.1-rc7  linux-4.1-rc7.tar.xz
[root@rhsqa14-vm5 disk1]# cd linux-4.1-rc7
[root@rhsqa14-vm5 linux-4.1-rc7]# find -name synopsys-dw-mshc.txt
[root@rhsqa14-vm5 linux-4.1-rc7]# cd Documentation/devicetree/bindings/mmc/
[root@rhsqa14-vm5 mmc]# find -name sdhci-st.txt
[root@rhsqa14-vm5 mmc]# ls
ti-omap.txt  tmio_mmc.txt  usdhi6rol0.txt  vt8500-sdmmc.txt
[root@rhsqa14-vm5 mmc]# pwd
/disk1/linux-4.1-rc7/Documentation/devicetree/bindings/mmc
[root@rhsqa14-vm5 mmc]#

Comment 7 Joseph Elwin Fernandes 2015-06-23 02:57:24 UTC
----- Original Message -----
From: "Dan Lambright" <dlambrig>
To: "Nagaprasad Sathyanarayana" <nsathyan>, "Joseph Fernandes" <josferna>, "Vivek Agarwal" <vagarwal>
Cc: "Alok Srivastava" <asrivast>
Sent: Tuesday, June 23, 2015 8:02:23 AM
Subject: attach tier

We had a meeting with Alok, Vijay, Satish, Shyam. We decided that for the tech preview, we will not support attach tier while I/O is running, because DHT will need some modifications to support this.

The decision has two consequences:

1. Any blocker bugs related to attach tier while I/O is running are deferred.

2. This should not be tested by QE.

We can discuss more in scrum.

Comment 12 Vivek Agarwal 2015-10-30 17:45:59 UTC
*** Bug 1230692 has been marked as a duplicate of this bug. ***

Comment 13 Nag Pavan Chilakam 2015-11-20 11:19:07 UTC
Did I/O during attach tier and found it passing.
Checked with both NFS and fuse mounts.
1) Created an EC volume
2) Mounted it
3) Initiated a copy of files with many directories, summing up to 24GB
4) While the copy was going on, attached a tier
5) Tier attach passed and no I/O error was seen
6) All files got copied.

Hence moving to VERIFIED.





[root@zod distrep]# rpm -qa|grep gluster
glusterfs-libs-3.7.5-6.el7rhgs.x86_64
glusterfs-fuse-3.7.5-6.el7rhgs.x86_64
glusterfs-3.7.5-6.el7rhgs.x86_64
glusterfs-server-3.7.5-6.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-6.el7rhgs.x86_64
glusterfs-cli-3.7.5-6.el7rhgs.x86_64
glusterfs-api-3.7.5-6.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-6.el7rhgs.x86_64




Note: there are other issues while attaching a tier which are tracked separately, e.g. bug 1275751 - Data Tiering: File create terminates with "Input/output error" as split brain is observed.

Comment 15 errata-xmlrpc 2016-03-01 05:25:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

