Bug 1127341 - raid image syncing stuck after what appears to be successful device failure recovery
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-06 17:00 UTC by Corey Marthaler
Modified: 2018-04-09 14:44 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-15 21:53:27 UTC


Attachments
messages file during test run (665.86 KB, text/plain)
2014-08-06 17:10 UTC, Corey Marthaler

Description Corey Marthaler 2014-08-06 17:00:04 UTC
Description of problem:

./black_bird -l /home/msp/cmarthal/work/sts/sts-root -r /usr/tests/sts-rhel6.6 -o host-076.virt.lab.msp.redhat.com

[...]

================================================================================
Iteration 2.14 started at Tue Aug  5 01:04:00 CDT 2014
================================================================================
Scenario kill_random_non_synced_raid1_3legs: Kill random leg of NON synced 3 leg raid1 volume(s)

********* RAID hash info for this scenario *********
* names:              non_synced_random_raid1_3legs_1 non_synced_random_raid1_3legs_2 non_synced_random_raid1_3legs_3
* sync:               0
* type:               raid1
* -m |-i value:       3
* leg devices:        /dev/sdd1 /dev/sdh1 /dev/sdg1 /dev/sda1
* spanned legs:        0
* failpv(s):          /dev/sdh1
* failnode(s):        host-076.virt.lab.msp.redhat.com
* lvmetad:            0
* raid fault policy:  warn
******************************************************

Creating raids(s) on host-076.virt.lab.msp.redhat.com...
host-076.virt.lab.msp.redhat.com: lvcreate --type raid1 -m 3 -n non_synced_random_raid1_3legs_1 -L 3G black_bird /dev/sdd1:0-2400 /dev/sdh1:0-2400 /dev/sdg1:0-2400 /dev/sda1:0-2400
host-076.virt.lab.msp.redhat.com: lvcreate --type raid1 -m 3 -n non_synced_random_raid1_3legs_2 -L 3G black_bird /dev/sdd1:0-2400 /dev/sdh1:0-2400 /dev/sdg1:0-2400 /dev/sda1:0-2400
host-076.virt.lab.msp.redhat.com: lvcreate --type raid1 -m 3 -n non_synced_random_raid1_3legs_3 -L 3G black_bird /dev/sdd1:0-2400 /dev/sdh1:0-2400 /dev/sdg1:0-2400 /dev/sda1:0-2400
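
The device structure listings that follow can be regenerated with a standard lvs query such as the one below; the harness's exact reporting command isn't shown in the log:

        lvs -a -o +devices black_bird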

Current mirror/raid device structure(s):
  LV                                         Attr       LSize   Cpy%Sync Devices
   non_synced_random_raid1_3legs_1            rwi-a-r--- 3.00g 2.73     non_synced_random_raid1_3legs_1_rimage_0(0),non_synced_random_raid1_3legs_1_rimage_1(0),non_synced_random_raid1_3legs_1_rimage_2(0),non_synced_random_raid1_3legs_1_rimage_3(0)
   [non_synced_random_raid1_3legs_1_rimage_0] Iwi-aor--- 3.00g          /dev/sdd1(1)
   [non_synced_random_raid1_3legs_1_rimage_1] Iwi-aor--- 3.00g          /dev/sdh1(1)
   [non_synced_random_raid1_3legs_1_rimage_2] Iwi-aor--- 3.00g          /dev/sdg1(1)
   [non_synced_random_raid1_3legs_1_rimage_3] Iwi-aor--- 3.00g          /dev/sda1(1)
   [non_synced_random_raid1_3legs_1_rmeta_0]  ewi-aor--- 4.00m          /dev/sdd1(0)
   [non_synced_random_raid1_3legs_1_rmeta_1]  ewi-aor--- 4.00m          /dev/sdh1(0)
   [non_synced_random_raid1_3legs_1_rmeta_2]  ewi-aor--- 4.00m          /dev/sdg1(0)
   [non_synced_random_raid1_3legs_1_rmeta_3]  ewi-aor--- 4.00m          /dev/sda1(0)
   non_synced_random_raid1_3legs_2            rwi-a-r--- 3.00g 1.82     non_synced_random_raid1_3legs_2_rimage_0(0),non_synced_random_raid1_3legs_2_rimage_1(0),non_synced_random_raid1_3legs_2_rimage_2(0),non_synced_random_raid1_3legs_2_rimage_3(0)
   [non_synced_random_raid1_3legs_2_rimage_0] Iwi-aor--- 3.00g          /dev/sdd1(770)
   [non_synced_random_raid1_3legs_2_rimage_1] Iwi-aor--- 3.00g          /dev/sdh1(770)
   [non_synced_random_raid1_3legs_2_rimage_2] Iwi-aor--- 3.00g          /dev/sdg1(770)
   [non_synced_random_raid1_3legs_2_rimage_3] Iwi-aor--- 3.00g          /dev/sda1(770)
   [non_synced_random_raid1_3legs_2_rmeta_0]  ewi-aor--- 4.00m          /dev/sdd1(769)
   [non_synced_random_raid1_3legs_2_rmeta_1]  ewi-aor--- 4.00m          /dev/sdh1(769)
   [non_synced_random_raid1_3legs_2_rmeta_2]  ewi-aor--- 4.00m          /dev/sdg1(769)
   [non_synced_random_raid1_3legs_2_rmeta_3]  ewi-aor--- 4.00m          /dev/sda1(769)
   non_synced_random_raid1_3legs_3            rwi-a-r--- 3.00g 0.00     non_synced_random_raid1_3legs_3_rimage_0(0),non_synced_random_raid1_3legs_3_rimage_1(0),non_synced_random_raid1_3legs_3_rimage_2(0),non_synced_random_raid1_3legs_3_rimage_3(0)
   [non_synced_random_raid1_3legs_3_rimage_0] Iwi-aor--- 3.00g          /dev/sdd1(1539)
   [non_synced_random_raid1_3legs_3_rimage_1] Iwi-aor--- 3.00g          /dev/sdh1(1539)
   [non_synced_random_raid1_3legs_3_rimage_2] Iwi-aor--- 3.00g          /dev/sdg1(1539)
   [non_synced_random_raid1_3legs_3_rimage_3] Iwi-aor--- 3.00g          /dev/sda1(1539)
   [non_synced_random_raid1_3legs_3_rmeta_0]  ewi-aor--- 4.00m          /dev/sdd1(1538)
   [non_synced_random_raid1_3legs_3_rmeta_1]  ewi-aor--- 4.00m          /dev/sdh1(1538)
   [non_synced_random_raid1_3legs_3_rmeta_2]  ewi-aor--- 4.00m          /dev/sdg1(1538)
   [non_synced_random_raid1_3legs_3_rmeta_3]  ewi-aor--- 4.00m          /dev/sda1(1538)

Creating ext on top of mirror(s) on host-076.virt.lab.msp.redhat.com...
mke2fs 1.41.12 (17-May-2010)
mke2fs 1.41.12 (17-May-2010)
mke2fs 1.41.12 (17-May-2010)

Mounting mirrored ext filesystems on host-076.virt.lab.msp.redhat.com...

PV=/dev/sdh1
        non_synced_random_raid1_3legs_1_rimage_1: 2
        non_synced_random_raid1_3legs_1_rmeta_1: 2
        non_synced_random_raid1_3legs_2_rimage_1: 2
        non_synced_random_raid1_3legs_2_rmeta_1: 2
        non_synced_random_raid1_3legs_3_rimage_1: 2
        non_synced_random_raid1_3legs_3_rmeta_1: 2

Writing verification files (checkit) to mirror(s) on...
        ---- host-076.virt.lab.msp.redhat.com ----

Verifying files (checkit) on mirror(s) on...
        ---- host-076.virt.lab.msp.redhat.com ----

Current sync percent just before failure
        ( 15.61% 10.63% 8.14% )
Disabling device sdh on host-076.virt.lab.msp.redhat.com
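
The log doesn't show the mechanism the harness uses to disable sdh; one common way to simulate this kind of device failure by hand (an assumption, not necessarily what black_bird does) is to offline the device through sysfs:

        echo offline > /sys/block/sdh/device/state    # further I/O to sdh now fails
        echo running > /sys/block/sdh/device/state    # undo: bring the device back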

Getting recovery check start time from /var/log/messages: Aug  5 01:06
Attempting I/O to cause mirror down conversion(s) on host-076.virt.lab.msp.redhat.com
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.380182 s, 110 MB/s
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.381459 s, 110 MB/s
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.621824 s, 67.5 MB/s
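
Each 42 MB copy above is consistent with a write of this shape (a reconstruction from the record and byte counts; the harness's exact command, flags, and target file are not shown):

        dd if=/dev/zero of=/mnt/<raid_mountpoint>/ddfile bs=4M count=10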

Verifying current sanity of lvm after the failure

Current mirror/raid device structure(s):
  /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error
  [...]
  /dev/sdh1: read failed after 0 of 512 at 4096: Input/output error
  Couldn't find device with uuid Y1ZV4l-kYj1-MOai-LkjR-pPrH-4oeu-y9ep0Y.
  LV                                         Attr       LSize   Cpy%Sync Devices
  non_synced_random_raid1_3legs_1            rwi-aor-p-   3.00g 26.04    non_synced_random_raid1_3legs_1_rimage_0(0),non_synced_random_raid1_3legs_1_rimage_1(0),non_synced_random_raid1_3legs_1_rimage_2(0),non_synced_random_raid1_3legs_1_rimage_3(0)
  [non_synced_random_raid1_3legs_1_rimage_0] iwi-aor---   3.00g          /dev/sdd1(1)
  [non_synced_random_raid1_3legs_1_rimage_1] Iwi-aor-p-   3.00g          unknown device(1)
  [non_synced_random_raid1_3legs_1_rimage_2] iwi-aor---   3.00g          /dev/sdg1(1)
  [non_synced_random_raid1_3legs_1_rimage_3] iwi-aor---   3.00g          /dev/sda1(1)
  [non_synced_random_raid1_3legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdd1(0)
  [non_synced_random_raid1_3legs_1_rmeta_1]  ewi-aor-p-   4.00m          unknown device(0)
  [non_synced_random_raid1_3legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdg1(0)
  [non_synced_random_raid1_3legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sda1(0)
  non_synced_random_raid1_3legs_2            rwi-aor-p-   3.00g 18.88    non_synced_random_raid1_3legs_2_rimage_0(0),non_synced_random_raid1_3legs_2_rimage_1(0),non_synced_random_raid1_3legs_2_rimage_2(0),non_synced_random_raid1_3legs_2_rimage_3(0)
  [non_synced_random_raid1_3legs_2_rimage_0] iwi-aor---   3.00g          /dev/sdd1(770)
  [non_synced_random_raid1_3legs_2_rimage_1] Iwi-aor-p-   3.00g          unknown device(770)
  [non_synced_random_raid1_3legs_2_rimage_2] iwi-aor---   3.00g          /dev/sdg1(770)
  [non_synced_random_raid1_3legs_2_rimage_3] iwi-aor---   3.00g          /dev/sda1(770)
  [non_synced_random_raid1_3legs_2_rmeta_0]  ewi-aor---   4.00m          /dev/sdd1(769)
  [non_synced_random_raid1_3legs_2_rmeta_1]  ewi-aor-p-   4.00m          unknown device(769)
  [non_synced_random_raid1_3legs_2_rmeta_2]  ewi-aor---   4.00m          /dev/sdg1(769)
  [non_synced_random_raid1_3legs_2_rmeta_3]  ewi-aor---   4.00m          /dev/sda1(769)
  non_synced_random_raid1_3legs_3            rwi-aor-p-   3.00g 8.59     non_synced_random_raid1_3legs_3_rimage_0(0),non_synced_random_raid1_3legs_3_rimage_1(0),non_synced_random_raid1_3legs_3_rimage_2(0),non_synced_random_raid1_3legs_3_rimage_3(0)
  [non_synced_random_raid1_3legs_3_rimage_0] iwi-aor---   3.00g          /dev/sdd1(1539)
  [non_synced_random_raid1_3legs_3_rimage_1] Iwi-aor-p-   3.00g          unknown device(1539)
  [non_synced_random_raid1_3legs_3_rimage_2] iwi-aor---   3.00g          /dev/sdg1(1539)
  [non_synced_random_raid1_3legs_3_rimage_3] iwi-aor---   3.00g          /dev/sda1(1539)
  [non_synced_random_raid1_3legs_3_rmeta_0]  ewi-aor---   4.00m          /dev/sdd1(1538)
  [non_synced_random_raid1_3legs_3_rmeta_1]  ewi-aor-p-   4.00m          unknown device(1538)
  [non_synced_random_raid1_3legs_3_rmeta_2]  ewi-aor---   4.00m          /dev/sdg1(1538)
  [non_synced_random_raid1_3legs_3_rmeta_3]  ewi-aor---   4.00m          /dev/sda1(1538)

Verifying FAILED device /dev/sdh1 is *NOT* in the volume(s)
Verifying IMAGE device /dev/sdd1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdg1 *IS* in the volume(s)
Verifying IMAGE device /dev/sda1 *IS* in the volume(s)
verify the rimage/rmeta dm devices remain after the failures
Checking EXISTENCE and STATE of non_synced_random_raid1_3legs_1_rimage_1 on: host-076.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of non_synced_random_raid1_3legs_1_rmeta_1 on: host-076.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of non_synced_random_raid1_3legs_2_rimage_1 on: host-076.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of non_synced_random_raid1_3legs_2_rmeta_1 on: host-076.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of non_synced_random_raid1_3legs_3_rimage_1 on: host-076.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of non_synced_random_raid1_3legs_3_rmeta_1 on: host-076.virt.lab.msp.redhat.com 

Verify the raid image order is what's expected based on raid fault policy
EXPECTED LEG ORDER: /dev/sdd1 unknown /dev/sdg1 /dev/sda1
ACTUAL LEG ORDER: /dev/sdd1 unknown /dev/sdg1 /dev/sda1
ACTUAL LEG ORDER: /dev/sdd1 unknown /dev/sdg1 /dev/sda1
ACTUAL LEG ORDER: /dev/sdd1 unknown /dev/sdg1 /dev/sda1

Waiting until all mirror|raid volumes become fully syncd...
   0/3 mirror(s) are fully synced: ( 76.00% 64.99% 8.72% )
   0/3 mirror(s) are fully synced: ( 78.85% 67.79% 8.72% )
   0/3 mirror(s) are fully synced: ( 85.15% 74.73% 8.72% )
   0/3 mirror(s) are fully synced: ( 89.41% 80.06% 8.72% )
   0/3 mirror(s) are fully synced: ( 92.70% 82.76% 8.72% )
   0/3 mirror(s) are fully synced: ( 95.59% 85.68% 8.72% )
sync percent for non_synced_random_raid1_3legs_3 hasn't changed in the past minute
HACKING AROUND BZ 464550 by dd'ing to this stale mirror in order to restart the sync
   0/3 mirror(s) are fully synced: ( 98.65% 88.76% 8.72% )
   1/3 mirror(s) are fully synced: ( 100.00% 91.24% 8.72% )
   1/3 mirror(s) are fully synced: ( 100.00% 94.90% 8.72% )
   1/3 mirror(s) are fully synced: ( 100.00% 98.25% 8.72% )
   2/3 mirror(s) are fully synced: ( 100.00% 100.00% 8.72% )
   2/3 mirror(s) are fully synced: ( 100.00% 100.00% 8.72% )
   2/3 mirror(s) are fully synced: ( 100.00% 100.00% 8.72% )
   2/3 mirror(s) are fully synced: ( 100.00% 100.00% 8.72% )
sync percent for non_synced_random_raid1_3legs_3 hasn't changed in the past 5 minutes
ADDITIONAL HACK AROUND BZ 474174 by suspending/resuming this mirror's
dm devices in order to restart the sync process
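
The suspend/resume hack amounts to cycling the top-level device-mapper device (the device name appears in the qarshd line further below):

        dmsetup suspend black_bird-non_synced_random_raid1_3legs_3
        dmsetup resume black_bird-non_synced_random_raid1_3legs_3

In this run the suspend itself never returns; the hung task trace follows.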

[root@host-076 ~]# lvs
  /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error
  [...]
  /dev/sdh1: read failed after 0 of 512 at 4096: Input/output error
  Couldn't find device with uuid Y1ZV4l-kYj1-MOai-LkjR-pPrH-4oeu-y9ep0Y.
  LV                              VG         Attr       LSize   Cpy%Sync
  non_synced_random_raid1_3legs_1 black_bird rwi-aor-p-   3.00g   100.00
  non_synced_random_raid1_3legs_2 black_bird rwi-aor-p-   3.00g   100.00
  non_synced_random_raid1_3legs_3 black_bird rwi-aor-p-   3.00g     8.59

qarshd[22104]: Running cmdline: dmsetup suspend black_bird-non_synced_random_raid1_3legs_3
kernel: INFO: task dmsetup:22105 blocked for more than 120 seconds.
kernel:      Not tainted 2.6.32-494.el6.x86_64 #1
kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kernel: dmsetup       D 0000000000000000     0 22105  22104 0x00000080
kernel: ffff88000a10f9b8 0000000000000082 ffff88000a10f958 ffffffff810097cc
kernel: ffff880038ff9538 0000000000000000 000000000010f968 ffff880002214400
kernel: ffff880002216928 ffff8800022168c0 ffff880014abd098 ffff88000a10ffd8
kernel: Call Trace:
kernel: [<ffffffff810097cc>] ? __switch_to+0x1ac/0x320
kernel: [<ffffffff8152918e>] ? thread_return+0x4e/0x7e0
kernel: [<ffffffff8152a0a5>] schedule_timeout+0x215/0x2e0
kernel: [<ffffffff810588b3>] ? set_next_buddy+0x43/0x50
kernel: [<ffffffff81529d23>] wait_for_common+0x123/0x180
kernel: [<ffffffff81064b90>] ? default_wake_function+0x0/0x20
kernel: [<ffffffff81529e3d>] wait_for_completion+0x1d/0x20
kernel: [<ffffffff8109e6db>] kthread_stop+0x4b/0xd0
kernel: [<ffffffff8123989e>] ? task_has_capability+0xfe/0x110
kernel: [<ffffffff814151f4>] md_unregister_thread+0x54/0xa0
kernel: [<ffffffff8141995d>] md_reap_sync_thread+0x1d/0x150
kernel: [<ffffffff81419ac0>] __md_stop_writes+0x30/0x90
kernel: [<ffffffff8141a455>] md_stop_writes+0x25/0x40
kernel: [<ffffffffa0232236>] raid_presuspend+0x16/0x20 [dm_raid]
kernel: [<ffffffffa00043a2>] suspend_targets+0x42/0x80 [dm_mod]
kernel: [<ffffffffa00043f5>] dm_table_presuspend_targets+0x15/0x20 [dm_mod]
kernel: [<ffffffffa0001343>] dm_suspend+0x63/0x1b0 [dm_mod]
kernel: [<ffffffffa0008e80>] ? dev_suspend+0x0/0x260 [dm_mod]
kernel: [<ffffffffa0008ef6>] dev_suspend+0x76/0x260 [dm_mod]
kernel: [<ffffffffa0008e80>] ? dev_suspend+0x0/0x260 [dm_mod]
kernel: [<ffffffffa0009e14>] ctl_ioctl+0x214/0x450 [dm_mod]
kernel: [<ffffffffa000a063>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
kernel: [<ffffffff811a3502>] vfs_ioctl+0x22/0xa0
kernel: [<ffffffff811a36a4>] do_vfs_ioctl+0x84/0x580
kernel: [<ffffffff811a3c21>] sys_ioctl+0x81/0xa0
kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
kernel: INFO: task dmsetup:22105 blocked for more than 120 seconds.
[... the identical hung task trace for dmsetup:22105 repeats ...]
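
Reading the trace: dmsetup suspend is blocked in kthread_stop(), waiting for the MD sync thread to exit (dm_suspend -> dm_table_presuspend_targets -> raid_presuspend -> md_stop_writes -> md_reap_sync_thread -> md_unregister_thread), so the suspend cannot complete until that thread stops. If the hang reproduces, the full set of blocked tasks can be captured with:

        echo w > /proc/sysrq-trigger    # dumps all uninterruptible (D-state) tasks to dmesg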



Version-Release number of selected component (if applicable):
2.6.32-494.el6.x86_64
lvm2-2.02.108-1.el6    BUILT: Thu Jul 24 10:29:50 CDT 2014
lvm2-libs-2.02.108-1.el6    BUILT: Thu Jul 24 10:29:50 CDT 2014
lvm2-cluster-2.02.108-1.el6    BUILT: Thu Jul 24 10:29:50 CDT 2014
udev-147-2.57.el6    BUILT: Thu Jul 24 08:48:47 CDT 2014
device-mapper-1.02.87-1.el6    BUILT: Thu Jul 24 10:29:50 CDT 2014
device-mapper-libs-1.02.87-1.el6    BUILT: Thu Jul 24 10:29:50 CDT 2014
device-mapper-event-1.02.87-1.el6    BUILT: Thu Jul 24 10:29:50 CDT 2014
device-mapper-event-libs-1.02.87-1.el6    BUILT: Thu Jul 24 10:29:50 CDT 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.108-1.el6    BUILT: Thu Jul 24 10:29:50 CDT 2014

Comment 2 Corey Marthaler 2014-08-06 17:10:05 UTC
Created attachment 924544 [details]
messages file during test run

Comment 3 Jonathan Earl Brassow 2014-08-27 04:46:38 UTC
I've been running black_bird for a few days now and have only been able to hit bug 1127231.  I'm moving this one to 6.7.

Comment 5 Jonathan Earl Brassow 2015-10-14 00:28:29 UTC
Corey, I haven't been able to reproduce this in the past (and I failed to set NEEDINFO). Can you try to reproduce it? I will close this bug if it can't be reproduced.

