Description of problem:
========================
untar of the Linux kernel from an NFS mount failed on a replicate volume (1 x 2) with "Input/output error" when one of the bricks goes offline and comes back online.

Version-Release number of selected component (if applicable):
===========================================================
glusterfs 3.4.0.30rhs built on Aug 30 2013 08:15:37

How reproducible:
=================
Tried twice. Could observe it only once.

Steps to Reproduce:
=====================
1. Create a replicate volume (1 x 2) and start the volume.
2. Create an NFS mount.
3. From the mount point execute:
   a. "wget -c http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6-rc4.tar.gz"
   b. "mkdir linux_kernel ; tar -zxvf linux-3.6-rc4.tar.gz -C linux_kernel ;"
4. While the untar is in progress, bring down one of the bricks (kill -KILL <brick_pid>).
5. While the untar is in progress, bring the brick back (service glusterd restart). glusterd was restarted on both nodes.
(A consolidated script sketch of these steps follows the Additional info section below.)

Actual results:
=================
linux-3.6-rc4/drivers/scsi/aic7xxx/aic7xxx_proc.c
linux-3.6-rc4/drivers/scsi/aic7xxx/aic7xxx_reg.h_shipped
linux-3.6-rc4/drivers/scsi/aic7xxx/aic7xxx_reg_print.c_shipped
linux-3.6-rc4/drivers/scsi/aic7xxx/aic7xxx_seq.h_shipped
tar: linux-3.6-rc4/drivers/scsi/aic7xxx/aic7xxx_seq.h_shipped: Cannot change mode to rw-rw-r--: Input/output error
linux-3.6-rc4/drivers/scsi/aic7xxx/aicasm/
tar: linux-3.6-rc4/drivers/scsi/aic7xxx/aicasm: Cannot mkdir: Input/output error
linux-3.6-rc4/drivers/scsi/aic7xxx/aicasm/Makefile
tar: linux-3.6-rc4/drivers/scsi/aic7xxx/aicasm/Makefile: Cannot open: Input/output error
linux-3.6-rc4/drivers/scsi/aic7xxx/aicasm/aicasm.c
tar: linux-3.6-rc4/drivers/scsi/aic7xxx/aicasm/aicasm.c: Cannot open: Input/output error

Expected results:
=================
The untar shouldn't fail.

Additional info:
==================
root@hicks [Sep-04-2013-11:39:03] >gluster v info

Volume Name: vol_dis_1_rep_2
Type: Replicate
Volume ID: 8fd6bcd1-c3ee-472f-b4ca-10681876ab4a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hicks.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b0
Brick2: king.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b1
Options Reconfigured:
cluster.self-heal-daemon: on
performance.open-behind: off
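For reference, a consolidated sketch of the reproduction steps above. Hostnames and brick paths are taken from the volume info; the mount point, sleep timings, and the pgrep-based brick-pid lookup are illustrative assumptions, not part of the original run.

#!/bin/bash
# Reproduction sketch -- not a verified script. Steps run on different
# nodes in practice; see the comments for where each step belongs.

VOL=vol_dis_1_rep_2
SERVER=hicks.lab.eng.blr.redhat.com
MNT=/mnt/$VOL    # assumed mount point

# Step 1: create and start the 1 x 2 replicate volume (on a server node).
gluster volume create $VOL replica 2 \
    hicks.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b0 \
    king.lab.eng.blr.redhat.com:/rhs/bricks/vol_dis_1_rep_2_b1
gluster volume start $VOL

# Step 2: NFS-mount the volume (on the client; Gluster NFS speaks v3).
mkdir -p $MNT
mount -t nfs -o vers=3 $SERVER:/$VOL $MNT

# Step 3: fetch the tarball and start the untar in the background.
cd $MNT
wget -c http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6-rc4.tar.gz
mkdir -p linux_kernel
tar -zxvf linux-3.6-rc4.tar.gz -C linux_kernel &
TAR_PID=$!

# Step 4: while the untar runs, kill one brick process (on hicks).
# The glusterfsd command line includes the brick path, so pgrep -f on
# the path should find the pid; 'gluster volume status' shows it too.
sleep 30
kill -KILL "$(pgrep -f /rhs/bricks/vol_dis_1_rep_2_b0)"

# Step 5: while the untar is still running, bring the brick back by
# restarting glusterd (the original run restarted it on both nodes).
sleep 30
service glusterd restart

# The untar is expected to complete without Input/output errors.
wait $TAR_PID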
Able to recreate the issue once again.
SOSReports : http://rhsqe-repo.lab.eng.blr.redhat.com/bugs_necessary_info/1004363/
Tested with 3.1.2 (afrv2.0) and was not able to reproduce the reported problem. Per the developers, this is fixed as part of the AFR v2 implementation, so marking this bug as verified.
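A sanity check that can accompany the re-run (volume name from the report; the expectation that heal counters drain to zero is an assumption about a healthy run, not output from the original test):

# After the brick is brought back, pending heals should drain to zero
# and the untar should finish without Input/output errors.
gluster volume heal vol_dis_1_rep_2 info
gluster volume status vol_dis_1_rep_2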
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/. If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.