Red Hat Bugzilla – Attachment 308986 Details for Bug 450939: panic in cluster_log_ser during resync of two legged core log mirrors
log from taft-01
Description: log from taft-01
Filename: taft-01.log
MIME Type: text/plain
Creator: Corey Marthaler
Created: 2008-06-11 20:19:11 UTC
Size: 1018.61 KB
Jun 11 14:04:51 taft-01 qarshd[19981]: Running cmdline: echo offline > /sys/block/sdf/device/state
Jun 11 14:04:51 taft-01 qarshd[19981]: That's enough
Jun 11 14:04:51 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:51 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:51 taft-01 last message repeated 2 times
Jun 11 14:04:51 taft-01 lvm[7565]: Mirror device, 253:6, has failed.
Jun 11 14:04:51 taft-01 lvm[7565]: Device failure in helter_skelter-syncd_secondary_core_2legs_2
Jun 11 14:04:51 taft-01 lvm[7565]: Parsing: vgreduce --config devices{ignore_suspended_devices=1} --removemissing helter_skelter
Jun 11 14:04:51 taft-01 lvm[7565]: Reloading config files
Jun 11 14:04:51 taft-01 lvm[7565]: Wiping internal VG cache
Jun 11 14:04:51 taft-01 lvm[7565]: Loading config file: /etc/lvm/lvm.conf
Jun 11 14:04:51 taft-01 lvm[7565]: WARNING: dev_open(/etc/lvm/lvm.conf) called while suspended
Jun 11 14:04:51 taft-01 lvm[7565]: Opened /etc/lvm/lvm.conf RO
Jun 11 14:04:51 taft-01 lvm[7565]: Closed /etc/lvm/lvm.conf
Jun 11 14:04:51 taft-01 lvm[7565]: Setting log/syslog to 1
Jun 11 14:04:51 taft-01 lvm[7565]: Setting log/level to 0
Jun 11 14:04:51 taft-01 qarshd[19968]: Nothing to do
Jun 11 14:04:51 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:51 taft-01 lvm[7565]: Setting log/verbose to 0
Jun 11 14:04:51 taft-01 qarshd[19969]: Nothing to do
Jun 11 14:04:51 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:51 taft-01 lvm[7565]: Setting log/indent to 1
Jun 11 14:04:51 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:51 taft-01 lvm[7565]: Setting log/prefix to
Jun 11 14:04:51 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:51 taft-01 lvm[7565]: Setting log/command_names to 0
Jun 11 14:04:51 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:51 taft-01 lvm[7565]: Another thread is handling an event. Waiting...
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: Setting global/test to 0
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Setting log/overwrite to 0
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: log/activation not found in config: defaulting to 0
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Logging initialised at Wed Jun 11 14:04:52 2008
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: Setting global/umask to 63
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: Setting devices/dir to /dev
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: Setting global/proc to /proc
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Setting global/activation to 1
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: global/suffix not found in config: defaulting to 1
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Setting global/units to h
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: Setting activation/readahead to auto
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: devices/preferred_names not found in config file: using built-in preferences
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Matcher built with 3 dfa states
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: Setting devices/ignore_suspended_devices to 1
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Setting devices/cache_dir to /etc/lvm/cache
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: Setting devices/write_cache_state to 1
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: Initialised format: lvm1
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: Initialised format: pool
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Initialised format: lvm2
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: global/format not found in config: defaulting to lvm2
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm1
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_pool
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm2
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: Initialised segtype: striped
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Initialised segtype: zero
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: Initialised segtype: error
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Setting dmeventd/snapshot_library to libdevmapper-event-lvm2snapshot.so
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: Setting global/library_dir to /usr/lib64
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: Initialised segtype: snapshot
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 678/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: Initialised segtype: mirror
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Processing: vgreduce --config devices{ignore_suspended_devices=1} --removemissing helter_skelter
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ
Jun 11 14:04:52 taft-01 lvm[7565]: O_DIRECT will be used
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 lvm[7565]: Setting global/locking_type to 3
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: Cluster locking selected.
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: Finding volume group "helter_skelter"
Jun 11 14:04:52 taft-01 clvmd[7681]: Got new connection on fd 5
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 lvm[7565]: Locking VG V_helter_skelter PW B (0x4)
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 clvmd[7681]: Read on local socket 5, len = 37
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV
Jun 11 14:04:52 taft-01 clvmd[7681]: creating pipe, [10, 11]
Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
Jun 11 14:04:52 taft-01 clvmd[7681]: Creating pre&post thread
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ
Jun 11 14:04:52 taft-01 clvmd[7681]: Created pre&post thread, state = 0
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ
Jun 11 14:04:52 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0
Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ
Jun 11 14:04:52 taft-01 clvmd[7681]: Sub thread ready for work.
>Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0) >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0 >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 679/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 680/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 last message repeated 2 times >Jun 11 14:04:52 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV >Jun 11 14:04:52 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device
>Jun 11 14:04:52 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ
>Jun 11 14:04:52 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 681/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 682/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 683/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:53 taft-01 last message repeated 2 times
>Jun 11 14:04:53 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:53 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 684/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 685/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:54 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:04:54 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:54 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:54 taft-01 last message repeated 2 times
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 686/90GcsfRZ
>Jun 11 14:04:55 taft-01 last message repeated 2 times
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 695/xoT7UjpV
>Jun 11 14:04:55 taft-01 last message repeated 2 times
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ
>Jun 11 14:04:55 taft-01 last message repeated 2 times
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV
>Jun 11 14:04:55 taft-01 last message repeated 2 times
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ
>Jun 11 14:04:55 taft-01 last message repeated 2 times
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV
>Jun 11 14:04:55 taft-01 last message repeated 2 times
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ
>Jun 11 14:04:55 taft-01 last message repeated 2 times
>Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:55 taft-01 
kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 
kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 687/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 
kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 696/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 
kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0 >Jun 11 14:04:55 taft-01 clvmd[7681]: process_work_item: remote >Jun 11 14:04:55 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 840 on node taft-02 >Jun 11 14:04:55 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:04:55 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0 >Jun 11 14:04:55 taft-01 clvmd[7681]: process_work_item: remote >Jun 11 14:04:55 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 841 on node taft-02 >Jun 11 14:04:55 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6t7pxggjJqW4fE9TNMl9cuWTtk7qZP825', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x85 (PARTIAL DMEVENTD_MONITOR ) >Jun 11 14:04:55 taft-01 clvmd[7681]: do_suspend_lv, lock held at -1 >Jun 11 14:04:55 taft-01 clvmd[7681]: Command return is 0 >Jun 11 14:04:55 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying 
server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying 
server(1) of sync change: 688/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 697/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying 
server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying 
server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:55 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0 >Jun 11 14:04:55 taft-01 clvmd[7681]: process_work_item: remote >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 842 on node taft-02 >Jun 11 14:04:55 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6NEXcjqIJhrYLUaU3FsTftcBptNteYCKU', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x85 (PARTIAL DMEVENTD_MONITOR ) >Jun 11 14:04:55 taft-01 clvmd[7681]: do_suspend_lv, lock held at -1 >Jun 11 14:04:55 taft-01 clvmd[7681]: Command return is 0 >Jun 11 14:04:55 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 
last message repeated 2 times >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:55 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:55 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:55 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0 >Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote >Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 843 on node taft-02 >Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6t7pxggjJqW4fE9TNMl9cuWTtk7qZP825', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR ) >Jun 11 14:04:56 taft-01 clvmd[7681]: do_resume_lv, lock not already held >Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0 >Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 689/90GcsfRZ >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 kernel: scsi1 
(0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 698/xoT7UjpV >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0 >Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote >Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 844 on node taft-02 >Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6NEXcjqIJhrYLUaU3FsTftcBptNteYCKU', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR ) >Jun 11 14:04:56 taft-01 clvmd[7681]: do_resume_lv, lock not already held >Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0 >Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 699/xoT7UjpV >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 699/xoT7UjpV >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 
kernel: dm-cmirror: Notifying server(1) of sync change: 699/xoT7UjpV >Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0 >Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote >Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 845 on node taft-02 >Jun 11 14:04:56 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 699/xoT7UjpV >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0 >Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 699/xoT7UjpV >Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 846 on node taft-02 >Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR ) >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 699/xoT7UjpV >Jun 11 14:04:56 taft-01 last message repeated 2 times >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: LOG INFO: >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: uuid: LVM-1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: uuid_ref : 1 >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: log type : core >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: ?region_count: 1600 >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: ?sync_count : 0 >Jun 11 14:04:56 taft-01 kernel: dm-cmirror: ?sync_search : 0 >Jun 11 14:04:56 taft-01 kernel: 
dm-cmirror: in_sync : YES
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: suspended : NO
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: recovery_halted : NO
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: server_id : 1
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: server_valid: YES
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 699/xoT7UjpV
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: cluster_presuspend: recovery halted on xoT7UjpV(1)
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 699/xoT7UjpV
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: cluster_postsuspend
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: LRT_MASTER_LEAVING(13): (xoT7UjpV)
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: co-ordinator: 0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV)
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: co-ordinator: 57005
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (xoT7UjpV)
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 847 on node taft-02
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: co-ordinator: 57005
>Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (xoT7UjpV)
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: co-ordinator: 57005
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Removing xoT7UjpV (1)
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: 0 region user structures freed
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 848 on node taft-02
>Jun 11 14:04:56 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 849 on node taft-02
>Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6t7pxggjJqW4fE9TNMl9cuWTtk7qZP825', cmd = 0x19 LCK_LV_ACTIVATE (READ|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6t7pxggjJqW4fE9TNMl9cuWTtk7qZP825' mode:1 flags=1
>Jun 11 14:04:56 taft-01 clvmd[7681]: sync_lock: returning lkid 102a7
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 690/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 850 on node taft-02
>Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6t7pxggjJqW4fE9TNMl9cuWTtk7qZP825', cmd = 0x18 LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: sync_unlock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6t7pxggjJqW4fE9TNMl9cuWTtk7qZP825' lkid:102a7
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 851 on node taft-02
>Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6t7pxggjJqW4fE9TNMl9cuWTtk7qZP825', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:04:56 taft-01 clvmd[7681]: do_resume_lv, lock not already held
>Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 691/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 852 on node taft-02
>Jun 11 14:04:56 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 853 on node taft-02
>Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6xafWpIF6VXGbSPTWeRFkKnyy7ibhiGjo', cmd = 0x19 LCK_LV_ACTIVATE (READ|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6xafWpIF6VXGbSPTWeRFkKnyy7ibhiGjo' mode:1 flags=1
>Jun 11 14:04:56 taft-01 clvmd[7681]: sync_lock: returning lkid 20351
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 854 on node taft-02
>Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6xafWpIF6VXGbSPTWeRFkKnyy7ibhiGjo', cmd = 0x18 LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: sync_unlock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6xafWpIF6VXGbSPTWeRFkKnyy7ibhiGjo' lkid:20351
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 692/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 855 on node taft-02
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6xafWpIF6VXGbSPTWeRFkKnyy7ibhiGjo', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: do_resume_lv, lock not already held
>Jun 11 14:04:56 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 856 on node taft-02
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 857 on node taft-02
>Jun 11 14:04:56 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:04:56 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 693/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:56 taft-01 last message repeated 2 times
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:56 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:56 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 858 on node taft-02
>Jun 11 14:04:57 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:04:56 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 last message repeated 2 times
>Jun 11 14:04:57 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:57 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:57 taft-01 last message repeated 2 times
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 last message repeated 2 times
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: LOG INFO:
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: uuid: LVM-1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: uuid_ref : 1
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: log type : core
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: ?region_count: 1600
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: ?sync_count : 0
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: ?sync_search : 0
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: in_sync : YES
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: suspended : NO
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: recovery_halted : NO
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: server_id : 1
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: server_valid: YES
>Jun 11 14:04:57 taft-01 qarshd[20007]: Talking to peer 10.15.80.47:47229
>Jun 11 14:04:57 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:57 taft-01 qarshd[20007]: Running cmdline: dd if=/dev/zero of=/mnt/syncd_secondary_core_2legs_1/ddfile count=10 bs=4M
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:57 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: cluster_presuspend: recovery halted on 90GcsfRZ(1)
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: Notifying server(1) of sync change: 694/90GcsfRZ
>Jun 11 14:04:57 taft-01 kernel: dm-cmirror: cluster_postsuspend
>Jun 11 14:04:57 taft-01 qarshd[20007]: That's enough
>Jun 11 14:04:57 taft-01 qarshd[20009]: Talking to peer 10.15.80.47:47230
>Jun 11 14:04:57 taft-01 qarshd[20009]: Running cmdline: sync
>Jun 11 14:04:57 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:04:57 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: LRT_MASTER_LEAVING(13): (90GcsfRZ)
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: co-ordinator: 0
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (90GcsfRZ)
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: co-ordinator: 57005
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (90GcsfRZ)
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: co-ordinator: 57005
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (90GcsfRZ)
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: co-ordinator: 57005
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:04:59 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:04:59 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:04:59 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 859 on node taft-02
>Jun 11 14:04:59 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:04:59 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: Removing 90GcsfRZ (1)
>Jun 11 14:04:59 taft-01 kernel: dm-cmirror: 0 region user structures freed
>Jun 11 14:05:00 taft-01 qarshd[20009]: Nothing to do
>Jun 11 14:05:00 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:05:00 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:05:03 taft-01 qarshd[20009]: Nothing to do
>Jun 11 14:05:03 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:05:03 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:05:06 taft-01 qarshd[20009]: Nothing to do
>Jun 11 14:05:06 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:05:06 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:05:09 taft-01 qarshd[20009]: Nothing to do
>Jun 11 14:05:09 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:05:09 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:05:12 taft-01 qarshd[20009]: Nothing to do
>Jun 11 14:05:12 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:05:12 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:05:14 taft-01 kernel: dm-cmirror: stop_server called
>Jun 11 14:05:15 taft-01 qarshd[20009]: Nothing to do
>Jun 11 14:05:15 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:05:15 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:05:18 taft-01 qarshd[20009]: Nothing to do
>Jun 11 14:05:18 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:05:18 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:05:19 taft-01 kernel: dm-cmirror: Closing socket on server side
>Jun 11 14:05:19 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 860 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 861 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6NEXcjqIJhrYLUaU3FsTftcBptNteYCKU', cmd = 0x19 LCK_LV_ACTIVATE (READ|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:05:19 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:05:19 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6NEXcjqIJhrYLUaU3FsTftcBptNteYCKU' mode:1 flags=1
>Jun 11 14:05:19 taft-01 clvmd[7681]: sync_lock: returning lkid 100a9
>Jun 11 14:05:19 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:05:19 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:05:19 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 qarshd[20009]: That's enough
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 862 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6NEXcjqIJhrYLUaU3FsTftcBptNteYCKU', cmd = 0x18 LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:05:19 taft-01 kernel: scsi1 (0:5): rejecting I/O to offline device
>Jun 11 14:05:19 taft-01 clvmd[7681]: sync_unlock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6NEXcjqIJhrYLUaU3FsTftcBptNteYCKU' lkid:100a9
>Jun 11 14:05:19 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 863 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6NEXcjqIJhrYLUaU3FsTftcBptNteYCKU', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:05:19 taft-01 clvmd[7681]: do_resume_lv, lock not already held
>Jun 11 14:05:19 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 864 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 865 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6kmKdzk1pa0Usep7oQhVUa3gkWHGFb3Pm', cmd = 0x19 LCK_LV_ACTIVATE (READ|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:05:19 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6kmKdzk1pa0Usep7oQhVUa3gkWHGFb3Pm' mode:1 flags=1
>Jun 11 14:05:19 taft-01 clvmd[7681]: sync_lock: returning lkid 1002e
>Jun 11 14:05:19 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 866 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6kmKdzk1pa0Usep7oQhVUa3gkWHGFb3Pm', cmd = 0x18 LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:05:19 taft-01 clvmd[7681]: sync_unlock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6kmKdzk1pa0Usep7oQhVUa3gkWHGFb3Pm' lkid:1002e
>Jun 11 14:05:19 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=85, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_LV (0x32) for clientid 0xa000000 XID 867 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6kmKdzk1pa0Usep7oQhVUa3gkWHGFb3Pm', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x86 (MIRROR_NOSYNC DMEVENTD_MONITOR )
>Jun 11 14:05:19 taft-01 clvmd[7681]: do_resume_lv, lock not already held
>Jun 11 14:05:19 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 868 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x56cb60, msg=0x7fbfffe810, len=37, csid=0x7fbfffe7f4, xid=0
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: remote
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_remote_command LOCK_VG (0x33) for clientid 0xa000000 XID 869 on node taft-02
>Jun 11 14:05:19 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: sync_lock: returning lkid 20122
>Jun 11 14:05:19 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:05:19 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:05:19 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:05:19 taft-01 clvmd[7681]: distribute command: XID = 733
>Jun 11 14:05:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=733
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:05:19 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:05:19 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:05:19 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:05:19 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:05:19 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:19 taft-01 clvmd[7681]: Waiting to do post command - state = 1
>Jun 11 14:05:19 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:05:19 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:05:19 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:05:19 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/arpd: Not a block device
>Jun 11 14:05:19 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cdrom: Added to device cache
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cdwriter: Aliased to /dev/cdrom in device cache
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/console: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/core: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/0/cpuid: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/0/msr: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/1/cpuid: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/1/msr: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/2/cpuid: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/2/msr: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/3/cpuid: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/3/msr: Not a block device
>Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/msr0: Not a block device
>Jun 11 14:05:19 
taft-01 lvm[7565]: /dev/cpu/msr1: Not a block device >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/msr2: Not a block device >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu/msr3: Not a block device >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu0: Not a block device >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu1: Not a block device >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu2: Not a block device >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/cpu3: Not a block device >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/device-mapper: Not a block device >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/diapered_dm-4: Added to device cache >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/diapered_dm-7: Added to device cache >Jun 11 14:05:19 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:00:1f.1-ide-0:0: Aliased to /dev/cdrom in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part2: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000-part1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000-part1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000-part1: Added to device cache >Jun 11 14:05:20 taft-01 
lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000-part1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000-part1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000-part1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000-part1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/dm-0: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/dm-1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/dm-4: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/dm-7: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/dnrtmsg: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/dvd: Aliased to /dev/cdrom in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/event0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/fd: Symbolic link to directory >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/full: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/fwmonitor: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/gpmctl: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/hda: Aliased to /dev/cdrom in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: 
/dev/helter_skelter/syncd_secondary_core_2legs_1: Aliased to /dev/dm-4 in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/helter_skelter/syncd_secondary_core_2legs_2: Aliased to /dev/dm-7 in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/hw_random: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/initctl: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/input/event0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/input/mice: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ip6_fw: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/kmsg: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/log: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/loop0: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/loop1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/loop2: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/loop3: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/loop4: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/loop5: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/loop6: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/loop7: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/lp0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/lp1: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/lp2: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/lp3: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/MAKEDEV: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/mapper/control: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/mapper/helter_skelter-syncd_secondary_core_2legs_1: Aliased to /dev/dm-4 in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/mapper/helter_skelter-syncd_secondary_core_2legs_2: Aliased to /dev/dm-7 in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/mapper/VolGroup00-LogVol00: Aliased to 
/dev/dm-0 in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/mapper/VolGroup00-LogVol01: Aliased to /dev/dm-1 in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/mcelog: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md0: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md1: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md10: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md11: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md12: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md13: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md14: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md15: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md16: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md17: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md18: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md19: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md2: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md20: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md21: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md22: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md23: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md24: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md25: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md26: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md27: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md28: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md29: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md3: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md30: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md31: Added to device cache >Jun 11 
14:05:20 taft-01 lvm[7565]: /dev/md4: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md5: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md6: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md7: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md8: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md9: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/mem: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/mice: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/misc/dlm_clvmd: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/misc/dlm-control: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/msr0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/msr1: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/msr2: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/msr3: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/net/tun: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/nflog: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/null: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/parport0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/parport1: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/parport2: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/parport3: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/port: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ppp: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ptmx: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/pts/0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram0: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram1: Aliased to /dev/ram in device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram10: Added to device cache >Jun 11 
14:05:20 taft-01 lvm[7565]: /dev/ram11: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram12: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram13: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram14: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram15: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram2: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram3: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram4: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram5: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram6: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram7: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram8: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ram9: Added to device cache >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ramdisk: Aliased to /dev/ram0 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/random: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/rawctl: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/root: Aliased to /dev/dm-0 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/route: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/route6: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/rtc: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sda: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sda1: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part1 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sda2: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part2 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdb: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000 in 
device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdb1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000-part1 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdc: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdc1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000-part1 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdd: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdd1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000-part1 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sde: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sde1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000-part1 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdf: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdf1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000-part1 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdg: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdg1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000-part1 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdh: Aliased to 
/dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sdh1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000-part1 in device cache (preferred name) >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg1: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg2: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg3: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg4: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg5: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg6: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg7: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg8: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sg9: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/skip: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/stderr: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/stdin: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/stdout: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/systty: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap1: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap10: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap11: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap12: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap13: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap14: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap15: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap2: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap3: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap4: Not a block 
device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap5: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap6: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap7: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap8: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tap9: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tcpdiag: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty1: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty10: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty11: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty12: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty13: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty14: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty15: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty16: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty17: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty18: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty19: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty2: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty20: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty21: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty22: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty23: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty24: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty25: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty26: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty27: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty28: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty29: Not a 
block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty3: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty30: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty31: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty32: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty33: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty34: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty35: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty36: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty37: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty38: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty39: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty4: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty40: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty41: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty42: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty43: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty44: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty45: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty46: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty47: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty48: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty49: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty5: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty50: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty51: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty52: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty53: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty54: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty55: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: 
/dev/tty56: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty57: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty58: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty59: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty6: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty60: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty61: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty62: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty63: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty7: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty8: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/tty9: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS0: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS1: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS10: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS11: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS12: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS13: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS14: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS15: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS16: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS17: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS18: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS19: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS2: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS20: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS21: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS22: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS23: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS24: Not a block device >Jun 11 
14:05:20 taft-01 lvm[7565]: /dev/ttyS25: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS26: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS27: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS28: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS29: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS3: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS30: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS31: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS32: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS33: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS34: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS35: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS36: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS37: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS38: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS39: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS4: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS40: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS41: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS42: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS43: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS44: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS45: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS46: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS47: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS48: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS49: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS5: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS50: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: 
/dev/ttyS51: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS52: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS53: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS54: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS55: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS56: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS57: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS58: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS59: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS6: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS60: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS61: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS62: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS63: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS64: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS65: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS66: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS67: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS7: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS8: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ttyS9: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/urandom: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/usersock: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcs: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcs1: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcs2: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcs3: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcs4: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcs5: Not a block device >Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcs6: Not a block device >Jun 11 
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcsa: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcsa1: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcsa2: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcsa3: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcsa4: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcsa5: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/vcsa6: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/VolGroup00/LogVol00: Aliased to /dev/root in device cache
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/VolGroup00/LogVol01: Aliased to /dev/dm-1 in device cache
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/xfrm: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/XOR: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/zero: Not a block device
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: dev_open(/dev/ramdisk) called while suspended
>Jun 11 14:05:20 taft-01 lvm[7565]: Opened /dev/ramdisk RO
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ramdisk: size is 32768 sectors
>Jun 11 14:05:20 taft-01 lvm[7565]: Closed /dev/ramdisk
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ramdisk: size is 32768 sectors
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: dev_open(/dev/ramdisk) called while suspended
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ramdisk: Not using O_DIRECT
>Jun 11 14:05:20 taft-01 lvm[7565]: Opened /dev/ramdisk RW
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ramdisk: block size is 1024 bytes
>Jun 11 14:05:20 taft-01 lvm[7565]: Closed /dev/ramdisk
>Jun 11 14:05:20 taft-01 lvm[7565]: Using /dev/ramdisk
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: dev_open(/dev/ramdisk) called while suspended
>Jun 11 14:05:20 taft-01 lvm[7565]: Opened /dev/ramdisk RW
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ramdisk: block size is 1024 bytes
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/ramdisk: No label detected
>Jun 11 14:05:20 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:20 taft-01 lvm[7565]: Closed /dev/ramdisk
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: dev_open(/dev/cdrom) called while suspended
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/cdrom: open failed: No medium found
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/cdrom: Skipping: open failed
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/loop0: Skipping (sysfs)
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda) called while suspended
>Jun 11 14:05:20 taft-01 lvm[7565]: Opened /dev/sda RO
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sda: size is 143114240 sectors
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: /dev/sda already opened read-only
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sda: Immediate close attempt while still referenced
>Jun 11 14:05:20 taft-01 lvm[7565]: Closed /dev/sda
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda) called while suspended
>Jun 11 14:05:20 taft-01 lvm[7565]: Opened /dev/sda RW O_DIRECT
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sda: block size is 4096 bytes
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/sda: Skipping: Partition table signature found
>Jun 11 14:05:20 taft-01 lvm[7565]: Closed /dev/sda
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: dev_open(/dev/md0) called while suspended
>Jun 11 14:05:20 taft-01 lvm[7565]: Opened /dev/md0 RO
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md0: size is 0 sectors
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/md0: Skipping: Too small to hold a PV
>Jun 11 14:05:20 taft-01 lvm[7565]: Closed /dev/md0
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/diapered_dm-4: Skipping: Unrecognised LVM device type 252
>Jun 11 14:05:20 taft-01 lvm[7565]: dm status (253:0) OF [16384]
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: dev_open(/dev/root) called while suspended
>Jun 11 14:05:20 taft-01 lvm[7565]: Opened /dev/root RO
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/root: size is 122355712 sectors
>Jun 11 14:05:20 taft-01 lvm[7565]: Closed /dev/root
>Jun 11 14:05:20 taft-01 lvm[7565]: /dev/root: size is 122355712 sectors
>Jun 11 14:05:20 taft-01 lvm[7565]: WARNING: dev_open(/dev/root) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/root RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/root: block size is 4096 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/root
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/root
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/root) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/root RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/root: block size is 4096 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/root: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/root
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/loop1: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sda1 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda1: size is 208782 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sda1
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda1: size is 208782 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sda1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda1: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sda1
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/sda1
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sda1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda1: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda1: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sda1
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md1: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/diapered_dm-7: Skipping: Unrecognised LVM device type 252
>Jun 11 14:05:21 taft-01 lvm[7565]: dm status (253:1) OF [16384]
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/dm-1 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-1: size is 20447232 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/dm-1
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-1: size is 20447232 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/dm-1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-1: block size is 4096 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/dm-1
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/dm-1
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/dm-1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-1: block size is 4096 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-1: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/dm-1
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram2) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram2 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram2: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram2
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram2: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram2) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram2: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram2 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram2: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram2
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram2
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram2) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram2 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram2: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram2: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram2
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/loop2: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda2) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sda2 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda2: size is 142898175 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sda2
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda2: size is 142898175 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda2) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sda2 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda2: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sda2
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/sda2
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda2) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sda2 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda2: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda2: lvm2 label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sda2: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sda2: Found metadata at 6656 size 1150 (in area at 4096 size 192512) for VolGroup00 (tvepdD-LeoF-D1GY-qhQL-Ld57-IVl0-y3N5pc)
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sda2: now in VG VolGroup00
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sda2: setting VolGroup00 VGID to tvepdDLeoFD1GYqhQLLd57IVl0y3N5pc
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sda2: VG VolGroup00: Set creation host to localhost.localdomain.
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sda2
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md2: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram3) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram3 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram3: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram3
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram3: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram3) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram3: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram3 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram3: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram3
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram3
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram3) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram3 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram3: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram3: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram3
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/loop3: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md3: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram4) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram4 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram4: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram4
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram4: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram4) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram4: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram4 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram4: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram4
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram4
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram4) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram4 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram4: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram4: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram4
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/loop4: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md4: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: dm status (253:4) OF [16384]
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-4) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/dm-4 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-4: size is 1638400 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/dm-4
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-4: size is 1638400 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-4) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/dm-4 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-4: block size is 4096 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/dm-4
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/dm-4
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-4) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/dm-4 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-4: block size is 4096 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-4: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/dm-4
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram5) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram5 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram5: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram5
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram5: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram5) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram5: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram5 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram5: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram5
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram5
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram5) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram5 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram5: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram5: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram5
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/loop5: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md5: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram6) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram6 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram6: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram6
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram6: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram6) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram6: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram6 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram6: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram6
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram6
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram6) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram6 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram6: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram6: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram6
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/loop6: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md6: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram7) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram7 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram7: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram7
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram7: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram7) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram7: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram7 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram7: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram7
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram7
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram7) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram7 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram7: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram7: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram7
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/loop7: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md7: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: dm status (253:7) OF [16384]
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-7) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/dm-7 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-7: size is 1638400 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/dm-7
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-7: size is 1638400 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-7) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/dm-7 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-7: block size is 4096 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/dm-7
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/dm-7
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-7) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/dm-7 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-7: block size is 4096 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/dm-7: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/dm-7
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram8) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram8 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram8: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram8
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram8: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram8) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram8: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram8 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram8: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram8
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram8
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram8) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram8 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram8: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram8: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram8
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md8: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram9) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram9 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram9: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram9
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram9: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram9) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram9: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram9 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram9: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram9
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram9
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram9) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram9 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram9: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram9: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram9
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md9: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram10) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram10 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram10: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram10
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram10: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram10) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram10: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram10 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram10: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram10
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram10
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram10) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram10 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram10: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram10: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram10
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md10: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram11) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram11 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram11: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram11
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram11: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram11) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram11: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram11 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram11: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram11
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram11
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram11) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram11 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram11: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram11: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram11
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md11: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram12) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram12 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram12: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram12
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram12: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram12) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram12: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram12 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram12: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram12
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram12
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram12) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram12 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram12: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram12: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram12
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md12: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram13) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram13 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram13: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram13
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram13: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram13) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram13: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram13 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram13: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram13
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram13
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram13) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram13 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram13: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram13: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram13
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md13: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram14) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram14 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram14: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram14
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram14: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram14) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram14: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram14 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram14: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram14
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram14
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram14) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram14 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram14: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram14: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram14
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md14: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram15) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram15 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram15: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram15
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram15: size is 32768 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram15) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram15: Not using O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram15 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram15: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram15
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/ram15
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram15) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/ram15 RW
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram15: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/ram15: No label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/ram15
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md15: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdb RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdb: size is 284511150 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: /dev/sdb already opened read-only
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdb: Immediate close attempt while still referenced
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdb
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdb RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdb: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdb: Skipping: Partition table signature found
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdb
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md16: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdb1 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdb1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdb1
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdb1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdb1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdb1: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdb1
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/sdb1
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdb1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdb1: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdb1: lvm2 label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sdb1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdb1
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md17: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md18: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md19: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md20: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md21: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md22: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md23: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md24: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md25: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md26: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md27: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md28: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md29: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md30: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/md31: Skipping (sysfs)
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdc RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdc: size is 284511150 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: /dev/sdc already opened read-only
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdc: Immediate close attempt while still referenced
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdc
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdc RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdc: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdc: Skipping: Partition table signature found
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdc
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdc1 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdc1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdc1
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdc1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdc1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdc1: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdc1
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/sdc1
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdc1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdc1: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdc1: lvm2 label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sdc1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdc1
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdd RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdd: size is 284511150 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: /dev/sdd already opened read-only
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdd: Immediate close attempt while still referenced
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdd
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdd RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdd: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdd: Skipping: Partition table signature found
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdd
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdd1 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdd1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdd1
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdd1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdd1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdd1: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdd1
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/sdd1
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdd1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdd1: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdd1: lvm2 label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sdd1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:21 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdd1
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sde RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde: size is 284511150 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: /dev/sde already opened read-only
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde: Immediate close attempt while still referenced
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sde
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sde RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde: Skipping: Partition table signature found
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sde
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sde1 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sde1
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sde1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde1: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sde1
>Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/sde1
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sde1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde1: block size is 512 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde1: lvm2 label detected
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sde1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde1: Found metadata at 22016 size 1497 (in area at 4096 size 192512) for helter_skelter (1pP81X-IQLO-yvZh-CW5V-ZqNy-FEbm-pMYLl6)
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sde1: now in VG helter_skelter
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sde1: setting helter_skelter VGID to 1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6
>Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sde1: VG helter_skelter: Set creation host to taft-02.
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdf) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdf: open failed: No such device or address
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdf: Skipping: open failed
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdf1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdf1: open failed: No such device or address
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdf1: Skipping: open failed
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdg RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg: size is 284511150 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: /dev/sdg already opened read-only
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg: Immediate close attempt while still referenced
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdg
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdg RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg: block size is 1024 bytes
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg: Skipping: Partition table signature found
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdg
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdg1 RO
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdg1
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg1: size is 284511087 sectors
>Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg1) called while suspended
>Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdg1 RW O_DIRECT
>Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg1: block size 
is 512 bytes >Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdg1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/sdg1 >Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg1) called while suspended >Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdg1 RW O_DIRECT >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg1: block size is 512 bytes >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg1: lvm2 label detected >Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sdg1: now in VG #orphans_lvm2 (#orphans_lvm2) >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg1: Found metadata at 48128 size 1497 (in area at 4096 size 192512) for helter_skelter (1pP81X-IQLO-yvZh-CW5V-ZqNy-FEbm-pMYLl6) >Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sdg1: now in VG helter_skelter (1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6) >Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdh) called while suspended >Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdh RO >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh: size is 284511150 sectors >Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: /dev/sdh already opened read-only >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh: Immediate close attempt while still referenced >Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdh >Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdh) called while suspended >Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdh RW O_DIRECT >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh: block size is 1024 bytes >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh: Skipping: Partition table signature found >Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdh >Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdh1) called while suspended >Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdh1 RO >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh1: size is 284511087 sectors >Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdh1 >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh1: size is 284511087 sectors >Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: 
dev_open(/dev/sdh1) called while suspended >Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdh1 RW O_DIRECT >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh1: block size is 512 bytes >Jun 11 14:05:21 taft-01 lvm[7565]: Closed /dev/sdh1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using /dev/sdh1 >Jun 11 14:05:21 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdh1) called while suspended >Jun 11 14:05:21 taft-01 lvm[7565]: Opened /dev/sdh1 RW O_DIRECT >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh1: block size is 512 bytes >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh1: lvm2 label detected >Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sdh1: now in VG #orphans_lvm2 (#orphans_lvm2) >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh1: Found metadata at 84992 size 1497 (in area at 4096 size 192512) for helter_skelter (1pP81X-IQLO-yvZh-CW5V-ZqNy-FEbm-pMYLl6) >Jun 11 14:05:21 taft-01 lvm[7565]: lvmcache: /dev/sdh1: now in VG helter_skelter (1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6) >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sde1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sdg1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sdh1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sdg1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sde1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sdh1 >Jun 11 14:05:21 taft-01 lvm[7565]: description not found in config: defaulting to >Jun 11 14:05:21 taft-01 lvm[7565]: Read helter_skelter metadata (209) from /dev/sde1 at 22016 size 1497 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sdg1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sde1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sdh1 >Jun 11 14:05:21 taft-01 lvm[7565]: description not found in config: defaulting to >Jun 11 14:05:21 taft-01 lvm[7565]: Read helter_skelter metadata (209) from /dev/sdg1 at 48128 size 1497 >Jun 11 14:05:21 taft-01 lvm[7565]: Using 
cached label for /dev/sdg1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sde1 >Jun 11 14:05:21 taft-01 lvm[7565]: Using cached label for /dev/sdh1 >Jun 11 14:05:21 taft-01 lvm[7565]: description not found in config: defaulting to >Jun 11 14:05:21 taft-01 lvm[7565]: Read helter_skelter metadata (209) from /dev/sdh1 at 84992 size 1497 >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg1 0: 0 200: syncd_secondary_core_2legs_1(0:0) >Jun 11 14:05:21 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg1 1: 200 200: syncd_secondary_core_2legs_2(0:0) >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdg1 2: 400 34330: NULL(0:0) >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sde1 0: 0 34730: NULL(0:0) >Jun 11 14:05:21 taft-01 lvm[7565]: /dev/sdh1 0: 0 34730: NULL(0:0) >Jun 11 14:05:21 taft-01 lvm[7565]: Volume group "helter_skelter" is already consistent >Jun 11 14:05:21 taft-01 lvm[7565]: Locking VG V_helter_skelter UN B (0x6) >Jun 11 14:05:21 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:05:21 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:05:21 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:21 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0) >Jun 11 14:05:21 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:20122 >Jun 11 14:05:22 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:05:22 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:22 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:22 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:22 taft-01 clvmd[7681]: distribute command: XID = 734 >Jun 11 14:05:22 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=734 >Jun 11 14:05:22 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:22 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:05:22 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:05:22 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:22 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:22 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:22 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:05:22 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:22 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:22 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:22 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:22 taft-01 lvm[7565]: Closed /dev/sde1 >Jun 11 14:05:22 taft-01 lvm[7565]: Closed /dev/sdg1 >Jun 11 14:05:22 taft-01 lvm[7565]: Closed /dev/sdh1 >Jun 11 14:05:22 taft-01 lvm[7565]: Reloading config files >Jun 11 14:05:22 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:05:22 taft-01 lvm[7565]: Wiping internal VG cache >Jun 11 14:05:22 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:05:22 taft-01 lvm[7565]: Loading config file: /etc/lvm/lvm.conf >Jun 11 14:05:22 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:05:22 taft-01 lvm[7565]: WARNING: dev_open(/etc/lvm/lvm.conf) called while suspended >Jun 11 14:05:22 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:22 taft-01 lvm[7565]: Opened /etc/lvm/lvm.conf RO >Jun 11 14:05:22 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:05:22 taft-01 lvm[7565]: Closed /etc/lvm/lvm.conf >Jun 11 14:05:22 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/syslog to 1 >Jun 11 14:05:22 taft-01 clvmd[7681]: ret == 0, errno = 9. 
removing client >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/level to 0 >Jun 11 14:05:22 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=734 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/verbose to 0 >Jun 11 14:05:22 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/indent to 1 >Jun 11 14:05:22 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/prefix to >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/command_names to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/test to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/overwrite to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: log/activation not found in config: defaulting to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: Logging initialised at Wed Jun 11 14:05:22 2008 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/umask to 63 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting devices/dir to /dev >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/proc to /proc >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/activation to 1 >Jun 11 14:05:22 taft-01 lvm[7565]: global/suffix not found in config: defaulting to 1 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/units to h >Jun 11 14:05:22 taft-01 lvm[7565]: Setting activation/readahead to auto >Jun 11 14:05:22 taft-01 lvm[7565]: devices/preferred_names not found in config file: using built-in preferences >Jun 11 14:05:22 taft-01 lvm[7565]: Matcher built with 3 dfa states >Jun 11 14:05:22 taft-01 lvm[7565]: Setting devices/ignore_suspended_devices to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting devices/cache_dir to /etc/lvm/cache >Jun 11 14:05:22 taft-01 lvm[7565]: Setting devices/write_cache_state to 1 >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised format: lvm1 >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised format: pool >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised format: lvm2 >Jun 11 14:05:22 taft-01 lvm[7565]: 
global/format not found in config: defaulting to lvm2 >Jun 11 14:05:22 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm1 >Jun 11 14:05:22 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_pool >Jun 11 14:05:22 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm2 >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: striped >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: zero >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: error >Jun 11 14:05:22 taft-01 lvm[7565]: Setting dmeventd/snapshot_library to libdevmapper-event-lvm2snapshot.so >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/library_dir to /usr/lib64 >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: snapshot >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: mirror >Jun 11 14:05:22 taft-01 lvm[7565]: Completed: vgreduce --config devices{ignore_suspended_devices=1} --removemissing helter_skelter >Jun 11 14:05:22 taft-01 lvm[7565]: Mirror device, 253:3, has failed. >Jun 11 14:05:22 taft-01 lvm[7565]: Device failure in helter_skelter-syncd_secondary_core_2legs_1 >Jun 11 14:05:22 taft-01 lvm[7565]: Parsing: vgreduce --config devices{ignore_suspended_devices=1} --removemissing helter_skelter >Jun 11 14:05:22 taft-01 lvm[7565]: Reloading config files >Jun 11 14:05:22 taft-01 lvm[7565]: Wiping internal VG cache >Jun 11 14:05:22 taft-01 lvm[7565]: Loading config file: /etc/lvm/lvm.conf >Jun 11 14:05:22 taft-01 lvm[7565]: WARNING: dev_open(/etc/lvm/lvm.conf) called while suspended >Jun 11 14:05:22 taft-01 lvm[7565]: Opened /etc/lvm/lvm.conf RO >Jun 11 14:05:22 taft-01 lvm[7565]: Closed /etc/lvm/lvm.conf >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/syslog to 1 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/level to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/verbose to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/indent to 1 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/prefix to >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/command_names to 0 >Jun 
11 14:05:22 taft-01 lvm[7565]: Setting global/test to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting log/overwrite to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: log/activation not found in config: defaulting to 0 >Jun 11 14:05:22 taft-01 lvm[7565]: Logging initialised at Wed Jun 11 14:05:22 2008 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/umask to 63 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting devices/dir to /dev >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/proc to /proc >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/activation to 1 >Jun 11 14:05:22 taft-01 lvm[7565]: global/suffix not found in config: defaulting to 1 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/units to h >Jun 11 14:05:22 taft-01 lvm[7565]: Setting activation/readahead to auto >Jun 11 14:05:22 taft-01 lvm[7565]: devices/preferred_names not found in config file: using built-in preferences >Jun 11 14:05:22 taft-01 lvm[7565]: Matcher built with 3 dfa states >Jun 11 14:05:22 taft-01 lvm[7565]: Setting devices/ignore_suspended_devices to 1 >Jun 11 14:05:22 taft-01 lvm[7565]: Setting devices/cache_dir to /etc/lvm/cache >Jun 11 14:05:22 taft-01 lvm[7565]: Setting devices/write_cache_state to 1 >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised format: lvm1 >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised format: pool >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised format: lvm2 >Jun 11 14:05:22 taft-01 lvm[7565]: global/format not found in config: defaulting to lvm2 >Jun 11 14:05:22 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm1 >Jun 11 14:05:22 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_pool >Jun 11 14:05:22 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm2 >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: striped >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: zero >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: error >Jun 11 14:05:22 taft-01 lvm[7565]: Setting dmeventd/snapshot_library to libdevmapper-event-lvm2snapshot.so >Jun 11 
14:05:22 taft-01 lvm[7565]: Setting global/library_dir to /usr/lib64 >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: snapshot >Jun 11 14:05:22 taft-01 lvm[7565]: Initialised segtype: mirror >Jun 11 14:05:22 taft-01 lvm[7565]: Processing: vgreduce --config devices{ignore_suspended_devices=1} --removemissing helter_skelter >Jun 11 14:05:22 taft-01 lvm[7565]: O_DIRECT will be used >Jun 11 14:05:22 taft-01 lvm[7565]: Setting global/locking_type to 3 >Jun 11 14:05:22 taft-01 lvm[7565]: Cluster locking selected. >Jun 11 14:05:22 taft-01 lvm[7565]: Finding volume group "helter_skelter" >Jun 11 14:05:22 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:05:22 taft-01 lvm[7565]: Locking VG V_helter_skelter PW B (0x4) >Jun 11 14:05:22 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:05:22 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:05:22 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:05:22 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:05:22 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0 >Jun 11 14:05:22 taft-01 clvmd[7681]: Sub thread ready for work. 
>Jun 11 14:05:22 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:05:22 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:05:24 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:05:24 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:05:25 taft-01 qarshd[20040]: Talking to peer 10.15.80.47:47257
>Jun 11 14:05:25 taft-01 qarshd[20040]: Running cmdline: lvs
>Jun 11 14:05:25 taft-01 clvmd[7681]: Got new connection on fd 12
>Jun 11 14:05:25 taft-01 clvmd[7681]: Read on local socket 12, len = 37
>Jun 11 14:05:25 taft-01 clvmd[7681]: creating pipe, [13, 14]
>Jun 11 14:05:25 taft-01 clvmd[7681]: Creating pre&post thread
>Jun 11 14:05:25 taft-01 clvmd[7681]: Created pre&post thread, state = 0
>Jun 11 14:05:25 taft-01 clvmd[7681]: in sub thread: client = 0x2a98503260
>Jun 11 14:05:25 taft-01 clvmd[7681]: Sub thread ready for work.
>Jun 11 14:05:25 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98503260)
>Jun 11 14:05:25 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0
>Jun 11 14:05:26 taft-01 clvmd[7681]: sync_lock: returning lkid 102d2
>Jun 11 14:05:26 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:05:26 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:05:26 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:05:26 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:05:26 taft-01 clvmd[7681]: distribute command: XID = 735
>Jun 11 14:05:26 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=735
>Jun 11 14:05:26 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:05:26 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a985033a0, msglen =37, client=0x2a98502dc0
>Jun 11 14:05:26 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:05:26 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:05:26 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:05:26 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:26 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:05:26 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:05:26 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:05:26 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:05:26 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/arpd: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cdrom: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cdwriter: Aliased to /dev/cdrom in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/console: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/core: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/0/cpuid: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/0/msr: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/1/cpuid: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/1/msr: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/2/cpuid: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/2/msr: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/3/cpuid: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/3/msr: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/msr0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/msr1: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/msr2: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu/msr3: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu1: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu2: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/cpu3: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/device-mapper: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/diapered_dm-4: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/diapered_dm-7: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:00:1f.1-ide-0:0: Aliased to /dev/cdrom in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part2: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000-part1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000-part1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000-part1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000-part1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000-part1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000-part1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000-part1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/dm-0: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/dm-1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/dm-4: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/dm-7: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/dnrtmsg: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/dvd: Aliased to /dev/cdrom in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/event0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/fd: Symbolic link to directory
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/full: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/fwmonitor: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/gpmctl: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/hda: Aliased to /dev/cdrom in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/helter_skelter/syncd_secondary_core_2legs_1: Aliased to /dev/dm-4 in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/helter_skelter/syncd_secondary_core_2legs_2: Aliased to /dev/dm-7 in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/hw_random: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/initctl: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/input/event0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/input/mice: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ip6_fw: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/kmsg: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/log: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/loop0: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/loop1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/loop2: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/loop3: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/loop4: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/loop5: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/loop6: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/loop7: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/lp0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/lp1: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/lp2: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/lp3: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/MAKEDEV: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/mapper/control: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/mapper/helter_skelter-syncd_secondary_core_2legs_1: Aliased to /dev/dm-4 in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/mapper/helter_skelter-syncd_secondary_core_2legs_2: Aliased to /dev/dm-7 in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/mapper/VolGroup00-LogVol00: Aliased to /dev/dm-0 in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/mapper/VolGroup00-LogVol01: Aliased to /dev/dm-1 in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/mcelog: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md0: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md1: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md10: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md11: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md12: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md13: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md14: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md15: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md16: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md17: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md18: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md19: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md2: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md20: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md21: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md22: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md23: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md24: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md25: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md26: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md27: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md28: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md29: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md3: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md30: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md31: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md4: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md5: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md6: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md7: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md8: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/md9: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/mem: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/mice: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/misc/dlm_clvmd: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/misc/dlm-control: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/msr0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/msr1: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/msr2: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/msr3: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/net/tun: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/nflog: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/null: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/parport0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/parport1: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/parport2: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/parport3: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/port: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ppp: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ptmx: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/pts/0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram0: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram1: Aliased to /dev/ram in device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram10: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram11: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram12: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram13: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram14: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram15: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram2: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram3: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram4: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram5: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram6: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram7: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram8: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ram9: Added to device cache
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/ramdisk: Aliased to /dev/ram0 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/random: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/rawctl: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/root: Aliased to /dev/dm-0 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/route: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/route6: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/rtc: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sda: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sda1: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part1 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sda2: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part2 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdb: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdb1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000-part1 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdc: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdc1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000-part1 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdd: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdd1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000-part1 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sde: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sde1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000-part1 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdf: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdf1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000-part1 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdg: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdg1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000-part1 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdh: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sdh1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000-part1 in device cache (preferred name)
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg1: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg2: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg3: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg4: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg5: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg6: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg7: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg8: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/sg9: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/skip: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/stderr: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/stdin: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/stdout: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/systty: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap0: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap1: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap10: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap11: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap12: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap13: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap14: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap15: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap2: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap3: Not a block device
>Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap4: Not a block
device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap5: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap6: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap7: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap8: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tap9: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tcpdiag: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty0: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty1: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty10: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty11: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty12: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty13: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty14: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty15: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty16: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty17: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty18: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty19: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty2: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty20: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty21: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty22: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty23: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty24: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty25: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty26: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty27: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty28: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty29: Not a 
block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty3: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty30: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty31: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty32: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty33: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty34: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty35: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty36: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty37: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty38: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty39: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty4: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty40: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty41: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty42: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty43: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty44: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty45: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty46: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty47: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty48: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty49: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty5: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty50: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty51: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty52: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty53: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty54: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty55: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: 
/dev/tty56: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty57: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty58: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty59: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty6: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty60: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty61: Not a block device >Jun 11 14:05:26 taft-01 lvm[7565]: /dev/tty62: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/tty63: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/tty7: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/tty8: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/tty9: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS0: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS1: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS10: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS11: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS12: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS13: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS14: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS15: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS16: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS17: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS18: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS19: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS2: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS20: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS21: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS22: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS23: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS24: Not a block device >Jun 11 
14:05:27 taft-01 lvm[7565]: /dev/ttyS25: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS26: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS27: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS28: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS29: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS3: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS30: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS31: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS32: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS33: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS34: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS35: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS36: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS37: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS38: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS39: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS4: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS40: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS41: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS42: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS43: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS44: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS45: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS46: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS47: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS48: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS49: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS5: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS50: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: 
/dev/ttyS51: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS52: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS53: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS54: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS55: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS56: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS57: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS58: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS59: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS6: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS60: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS61: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS62: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS63: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS64: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS65: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS66: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS67: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS7: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS8: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ttyS9: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/urandom: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/usersock: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcs: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcs1: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcs2: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcs3: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcs4: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcs5: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcs6: Not a block device >Jun 11 
14:05:27 taft-01 lvm[7565]: /dev/vcsa: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcsa1: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcsa2: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcsa3: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcsa4: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcsa5: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/vcsa6: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/VolGroup00/LogVol00: Aliased to /dev/root in device cache >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/VolGroup00/LogVol01: Aliased to /dev/dm-1 in device cache >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/xfrm: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/XOR: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/zero: Not a block device >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ramdisk) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ramdisk RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ramdisk: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ramdisk >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ramdisk: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ramdisk) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ramdisk: Not using O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ramdisk RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ramdisk: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ramdisk >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ramdisk >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ramdisk) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ramdisk RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ramdisk: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ramdisk: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed 
/dev/ramdisk >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/cdrom) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/cdrom: open failed: No medium found >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/cdrom: Skipping: open failed >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/loop0: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/sda RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda: size is 143114240 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: /dev/sda already opened read-only >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda: Immediate close attempt while still referenced >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/sda >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/sda RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda: block size is 4096 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda: Skipping: Partition table signature found >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/sda >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/md0) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/md0 RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md0: size is 0 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md0: Skipping: Too small to hold a PV >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/md0 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/diapered_dm-4: Skipping: Unrecognised LVM device type 252 >Jun 11 14:05:27 taft-01 lvm[7565]: dm status (253:0) OF [16384] >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/root) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/root RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/root: size is 122355712 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/root >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/root: size is 122355712 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: 
WARNING: dev_open(/dev/root) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/root RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/root: block size is 4096 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/root >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/root >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/root) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/root RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/root: block size is 4096 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/root: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/root >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram: Not using O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/loop1: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda1) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/sda1 RO >Jun 11 14:05:27 
taft-01 lvm[7565]: /dev/sda1: size is 208782 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/sda1 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda1: size is 208782 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda1) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/sda1 RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda1: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/sda1 >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/sda1 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda1) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/sda1 RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda1: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda1: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/sda1 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md1: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/diapered_dm-7: Skipping: Unrecognised LVM device type 252 >Jun 11 14:05:27 taft-01 lvm[7565]: dm status (253:1) OF [16384] >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-1) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/dm-1 RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-1: size is 20447232 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/dm-1 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-1: size is 20447232 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-1) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/dm-1 RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-1: block size is 4096 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/dm-1 >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/dm-1 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-1) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/dm-1 RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-1: 
block size is 4096 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-1: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/dm-1 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram2) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram2 RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram2: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram2 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram2: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram2) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram2: Not using O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram2 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram2: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram2 >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram2 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram2) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram2 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram2: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram2: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram2 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/loop2: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda2) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/sda2 RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda2: size is 142898175 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/sda2 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda2: size is 142898175 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda2) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/sda2 RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda2: block size is 512 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/sda2 >Jun 11 14:05:27 
taft-01 lvm[7565]: Using /dev/sda2 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/sda2) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/sda2 RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda2: block size is 512 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda2: lvm2 label detected >Jun 11 14:05:27 taft-01 lvm[7565]: lvmcache: /dev/sda2: now in VG #orphans_lvm2 (#orphans_lvm2) >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/sda2: Found metadata at 6656 size 1150 (in area at 4096 size 192512) for VolGroup00 (tvepdD-LeoF-D1GY-qhQL-Ld57-IVl0-y3N5pc) >Jun 11 14:05:27 taft-01 lvm[7565]: lvmcache: /dev/sda2: now in VG VolGroup00 >Jun 11 14:05:27 taft-01 lvm[7565]: lvmcache: /dev/sda2: setting VolGroup00 VGID to tvepdDLeoFD1GYqhQLLd57IVl0y3N5pc >Jun 11 14:05:27 taft-01 lvm[7565]: lvmcache: /dev/sda2: VG VolGroup00: Set creation host to localhost.localdomain. >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/sda2 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md2: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram3) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram3 RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram3: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram3 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram3: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram3) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram3: Not using O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram3 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram3: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram3 >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram3 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram3) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram3 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram3: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: 
/dev/ram3: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram3 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/loop3: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md3: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram4) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram4 RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram4: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram4 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram4: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram4) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram4: Not using O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram4 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram4: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram4 >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram4 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram4) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram4 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram4: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram4: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram4 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/loop4: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md4: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: dm status (253:4) OF [16384] >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-4) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/dm-4 RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-4: size is 1638400 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/dm-4 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-4: size is 1638400 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-4) called while suspended >Jun 11 14:05:27 
taft-01 lvm[7565]: Opened /dev/dm-4 RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-4: block size is 4096 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/dm-4 >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/dm-4 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-4) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/dm-4 RW O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-4: block size is 4096 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-4: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/dm-4 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram5) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram5 RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram5: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram5 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram5: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram5) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram5: Not using O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram5 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram5: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram5 >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram5 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram5) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram5 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram5: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram5: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram5 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/loop5: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md5: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram6) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram6 RO >Jun 11 
14:05:27 taft-01 lvm[7565]: /dev/ram6: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram6 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram6: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram6) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram6: Not using O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram6 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram6: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram6 >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram6 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram6) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram6 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram6: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram6: No label detected >Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace> >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram6 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/loop6: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md6: Skipping (sysfs) >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram7) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram7 RO >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram7: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram7 >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram7: size is 32768 sectors >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram7) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram7: Not using O_DIRECT >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram7 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram7: block size is 1024 bytes >Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram7 >Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram7 >Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram7) called while suspended >Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram7 RW >Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram7: block 
size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram7: No label detected
>Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram7
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/loop7: Skipping (sysfs)
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md7: Skipping (sysfs)
>Jun 11 14:05:27 taft-01 lvm[7565]: dm status (253:7) OF [16384]
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-7) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/dm-7 RO
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-7: size is 1638400 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/dm-7
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-7: size is 1638400 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-7) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/dm-7 RW O_DIRECT
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-7: block size is 4096 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/dm-7
>Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/dm-7
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/dm-7) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/dm-7 RW O_DIRECT
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-7: block size is 4096 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/dm-7: No label detected
>Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/dm-7
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram8) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram8 RO
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram8: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram8
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram8: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram8) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram8: Not using O_DIRECT
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram8 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram8: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram8
>Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram8
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram8) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram8 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram8: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram8: No label detected
>Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram8
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md8: Skipping (sysfs)
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram9) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram9 RO
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram9: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram9
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram9: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram9) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram9: Not using O_DIRECT
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram9 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram9: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram9
>Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram9
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram9) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram9 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram9: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram9: No label detected
>Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram9
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md9: Skipping (sysfs)
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram10) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram10 RO
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram10: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram10
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram10: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram10) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram10: Not using O_DIRECT
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram10 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram10: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram10
>Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram10
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram10) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram10 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram10: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram10: No label detected
>Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram10
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md10: Skipping (sysfs)
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram11) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram11 RO
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram11: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram11
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram11: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram11) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram11: Not using O_DIRECT
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram11 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram11: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram11
>Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram11
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram11) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram11 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram11: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram11: No label detected
>Jun 11 14:05:27 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram11
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/md11: Skipping (sysfs)
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram12) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram12 RO
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram12: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram12
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram12: size is 32768 sectors
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram12) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram12: Not using O_DIRECT
>Jun 11 14:05:27 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram12 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram12: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 lvm[7565]: Closed /dev/ram12
>Jun 11 14:05:27 taft-01 lvm[7565]: Using /dev/ram12
>Jun 11 14:05:27 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram12) called while suspended
>Jun 11 14:05:27 taft-01 lvm[7565]: Opened /dev/ram12 RW
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram12: block size is 1024 bytes
>Jun 11 14:05:27 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:05:27 taft-01 lvm[7565]: /dev/ram12: No label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram12
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md12: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram13) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/ram13 RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram13: size is 32768 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram13
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram13: size is 32768 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram13) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram13: Not using O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/ram13 RW
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram13: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram13
>Jun 11 14:05:28 taft-01 lvm[7565]: Using /dev/ram13
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram13) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/ram13 RW
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram13: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram13: No label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram13
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md13: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram14) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/ram14 RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram14: size is 32768 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram14
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram14: size is 32768 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram14) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram14: Not using O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/ram14 RW
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram14: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram14
>Jun 11 14:05:28 taft-01 lvm[7565]: Using /dev/ram14
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram14) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/ram14 RW
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram14: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram14: No label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram14
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md14: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram15) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/ram15 RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram15: size is 32768 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram15
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram15: size is 32768 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram15) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram15: Not using O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/ram15 RW
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram15: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram15
>Jun 11 14:05:28 taft-01 lvm[7565]: Using /dev/ram15
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/ram15) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/ram15 RW
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram15: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/ram15: No label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/ram15
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md15: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdb RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdb: size is 284511150 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: /dev/sdb already opened read-only
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdb: Immediate close attempt while still referenced
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdb
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdb RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdb: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdb: Skipping: Partition table signature found
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdb
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md16: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdb1 RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdb1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdb1
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdb1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdb1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdb1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdb1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using /dev/sdb1
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdb1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdb1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdb1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdb1: lvm2 label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sdb1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:28 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdb1
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md17: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md18: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md19: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md20: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md21: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md22: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md23: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md24: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md25: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md26: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md27: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md28: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md29: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md30: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/md31: Skipping (sysfs)
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdc RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdc: size is 284511150 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: /dev/sdc already opened read-only
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdc: Immediate close attempt while still referenced
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdc
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdc RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdc: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdc: Skipping: Partition table signature found
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdc
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdc1 RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdc1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdc1
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdc1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdc1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdc1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdc1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using /dev/sdc1
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdc1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdc1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdc1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdc1: lvm2 label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sdc1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:28 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdc1
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdd RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdd: size is 284511150 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: /dev/sdd already opened read-only
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdd: Immediate close attempt while still referenced
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdd
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdd RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdd: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdd: Skipping: Partition table signature found
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdd
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdd1 RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdd1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdd1
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdd1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdd1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdd1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdd1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using /dev/sdd1
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdd1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdd1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdd1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdd1: lvm2 label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sdd1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:28 taft-01 lvm[7565]: <backtrace>
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdd1
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sde RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde: size is 284511150 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: /dev/sde already opened read-only
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde: Immediate close attempt while still referenced
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sde
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sde RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde: Skipping: Partition table signature found
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sde
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sde1 RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sde1
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sde1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sde1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using /dev/sde1
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sde1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sde1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde1: lvm2 label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sde1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde1: Found metadata at 22016 size 1497 (in area at 4096 size 192512) for helter_skelter (1pP81X-IQLO-yvZh-CW5V-ZqNy-FEbm-pMYLl6)
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sde1: now in VG helter_skelter
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sde1: setting helter_skelter VGID to 1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sde1: VG helter_skelter: Set creation host to taft-02.
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdf) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdf: open failed: No such device or address
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdf: Skipping: open failed
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdf1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdf1: open failed: No such device or address
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdf1: Skipping: open failed
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdg RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg: size is 284511150 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: /dev/sdg already opened read-only
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg: Immediate close attempt while still referenced
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdg
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdg RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg: Skipping: Partition table signature found
>Jun 11 14:05:28 taft-01 qarshd[20040]: Nothing to do
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdg
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdg1 RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdg1
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdg1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdg1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using /dev/sdg1
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdg1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdg1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg1: lvm2 label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sdg1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg1: Found metadata at 48128 size 1497 (in area at 4096 size 192512) for helter_skelter (1pP81X-IQLO-yvZh-CW5V-ZqNy-FEbm-pMYLl6)
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sdg1: now in VG helter_skelter (1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6)
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdh) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdh RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh: size is 284511150 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: /dev/sdh already opened read-only
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh: Immediate close attempt while still referenced
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdh
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdh) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdh RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh: block size is 1024 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh: Skipping: Partition table signature found
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdh
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdh1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdh1 RO
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdh1
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh1: size is 284511087 sectors
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdh1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdh1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdh1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using /dev/sdh1
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/dev/sdh1) called while suspended
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /dev/sdh1 RW O_DIRECT
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh1: block size is 512 bytes
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh1: lvm2 label detected
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sdh1: now in VG #orphans_lvm2 (#orphans_lvm2)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh1: Found metadata at 84992 size 1497 (in area at 4096 size 192512) for helter_skelter (1pP81X-IQLO-yvZh-CW5V-ZqNy-FEbm-pMYLl6)
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: /dev/sdh1: now in VG helter_skelter (1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6)
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sde1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sdg1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sdh1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sdg1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sde1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sdh1
>Jun 11 14:05:28 taft-01 lvm[7565]: description not found in config: defaulting to
>Jun 11 14:05:28 taft-01 lvm[7565]: Read helter_skelter metadata (209) from /dev/sde1 at 22016 size 1497
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sdg1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sde1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sdh1
>Jun 11 14:05:28 taft-01 lvm[7565]: description not found in config: defaulting to
>Jun 11 14:05:28 taft-01 lvm[7565]: Read helter_skelter metadata (209) from /dev/sdg1 at 48128 size 1497
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sdg1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sde1
>Jun 11 14:05:28 taft-01 lvm[7565]: Using cached label for /dev/sdh1
>Jun 11 14:05:28 taft-01 lvm[7565]: description not found in config: defaulting to
>Jun 11 14:05:28 taft-01 lvm[7565]: Read helter_skelter metadata (209) from /dev/sdh1 at 84992 size 1497
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg1 0: 0 200: syncd_secondary_core_2legs_1(0:0)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg1 1: 200 200: syncd_secondary_core_2legs_2(0:0)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdg1 2: 400 34330: NULL(0:0)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sde1 0: 0 34730: NULL(0:0)
>Jun 11 14:05:28 taft-01 lvm[7565]: /dev/sdh1 0: 0 34730: NULL(0:0)
>Jun 11 14:05:28 taft-01 lvm[7565]: Volume group "helter_skelter" is already consistent
>Jun 11 14:05:28 taft-01 lvm[7565]: Locking VG V_helter_skelter UN B (0x6)
>Jun 11 14:05:28 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:05:28 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:05:28 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:05:28 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:102d2
>Jun 11 14:05:28 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:05:28 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:05:28 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:05:28 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:05:28 taft-01 clvmd[7681]: distribute command: XID = 736
>Jun 11 14:05:28 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=736
>Jun 11 14:05:28 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:05:28 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a985033a0, msglen =37, client=0x2a98502dc0
>Jun 11 14:05:28 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:05:28 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:05:28 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:05:28 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:28 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:05:28 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:05:28 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:05:28 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:05:28 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sde1
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdg1
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /dev/sdh1
>Jun 11 14:05:28 taft-01 lvm[7565]: Reloading config files
>Jun 11 14:05:28 taft-01 clvmd[7681]: Read on local socket 5, len = 0
>Jun 11 14:05:28 taft-01 lvm[7565]: Wiping internal VG cache
>Jun 11 14:05:28 taft-01 clvmd[7681]: EOF on local socket: inprogress=0
>Jun 11 14:05:28 taft-01 lvm[7565]: Loading config file: /etc/lvm/lvm.conf
>Jun 11 14:05:28 taft-01 clvmd[7681]: Waiting for child thread
>Jun 11 14:05:28 taft-01 lvm[7565]: WARNING: dev_open(/etc/lvm/lvm.conf) called while suspended
>Jun 11 14:05:28 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:05:28 taft-01 lvm[7565]: Opened /etc/lvm/lvm.conf RO
>Jun 11 14:05:28 taft-01 clvmd[7681]: Subthread finished
>Jun 11 14:05:28 taft-01 lvm[7565]: Closed /etc/lvm/lvm.conf
>Jun 11 14:05:28 taft-01 clvmd[7681]: Joined child thread
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting log/syslog to 1
>Jun 11 14:05:28 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting log/level to 0
>Jun 11 14:05:28 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=736
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting log/verbose to 0
>Jun 11 14:05:28 taft-01 clvmd[7681]: process_work_item: free fd 5
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting log/indent to 1
>Jun 11 14:05:28 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting log/prefix to
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting log/command_names to 0
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting global/test to 0
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting log/overwrite to 0
>Jun 11 14:05:28 taft-01 lvm[7565]: log/activation not found in config: defaulting to 0
>Jun 11 14:05:28 taft-01 lvm[7565]: Logging initialised at Wed Jun 11 14:05:28 2008
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting global/umask to 63
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting devices/dir to /dev
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting global/proc to /proc
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting global/activation to 1
>Jun 11 14:05:28 taft-01 lvm[7565]: global/suffix not found in config: defaulting to 1
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting global/units to h
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting activation/readahead to auto
>Jun 11 14:05:28 taft-01 lvm[7565]: devices/preferred_names not found in config file: using built-in preferences
>Jun 11 14:05:28 taft-01 lvm[7565]: Matcher built with 3 dfa states
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting devices/ignore_suspended_devices to 0
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting devices/cache_dir to /etc/lvm/cache
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting devices/write_cache_state to 1
>Jun 11 14:05:28 taft-01 lvm[7565]: Initialised format: lvm1
>Jun 11 14:05:28 taft-01 lvm[7565]: Initialised format: pool
>Jun 11 14:05:28 taft-01 lvm[7565]: Initialised format: lvm2
>Jun 11 14:05:28 taft-01 lvm[7565]: global/format not found in config: defaulting to lvm2
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm1
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_pool
>Jun 11 14:05:28 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm2
>Jun 11 14:05:28 taft-01 lvm[7565]: Initialised segtype: striped
>Jun 11 14:05:28 taft-01 lvm[7565]: Initialised segtype: zero
>Jun 11 14:05:28 taft-01 lvm[7565]: Initialised segtype: error
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting dmeventd/snapshot_library to libdevmapper-event-lvm2snapshot.so
>Jun 11 14:05:28 taft-01 lvm[7565]: Setting global/library_dir to /usr/lib64
>Jun 11 14:05:28 taft-01 lvm[7565]: Initialised segtype: snapshot
>Jun 11 14:05:28 taft-01 lvm[7565]: Initialised segtype: mirror
>Jun 11 14:05:28 taft-01 lvm[7565]: Completed: vgreduce --config devices{ignore_suspended_devices=1} --removemissing helter_skelter
>Jun 11 14:05:30 taft-01 lvm[7565]: No longer monitoring mirror device helter_skelter-syncd_secondary_core_2legs_1 for events
>Jun 11 14:05:30 taft-01 clvmd[7681]: sync_lock: returning lkid 10250
>Jun 11 14:05:30 taft-01 clvmd[7681]: Writing status 0 down pipe 14
>Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 13: 4 bytes: status: 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: distribute command: XID = 737
>Jun 11 14:05:30 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98503260, msg=0x2a98503370, len=37, csid=(nil), xid=737
>Jun 11 14:05:30 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:05:30 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a985033c0, msglen =37, client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:05:30 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:05:30 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:05:30 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:30 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 13: 4 bytes: status: 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:05:30 taft-01 clvmd[7681]: Read on local socket 12, len = 37
>Jun 11 14:05:30 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:05:30 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98503260)
>Jun 11 14:05:30 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:10250
>Jun 11 14:05:30 taft-01 clvmd[7681]: Writing status 0 down pipe 14
>Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 13: 4 bytes: status: 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: distribute command: XID = 738
>Jun 11 14:05:30 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98503260, msg=0x2a98503370, len=37, csid=(nil), xid=738
>Jun 11 14:05:30 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:05:30 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a985033c0, msglen =37, client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:05:30 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:05:30 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:05:30 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:30 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 13: 4 bytes: status: 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:05:30 taft-01 clvmd[7681]: Read on local socket 12, len = 33
>Jun 11 14:05:30 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:05:30 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 1 (client=0x2a98503260)
>Jun 11 14:05:30 taft-01 clvmd[7681]: sync_lock: 'V_VolGroup00' mode:3 flags=0
>Jun 11 14:05:30 taft-01 clvmd[7681]: sync_lock: returning lkid 10285
>Jun 11 14:05:30 taft-01 clvmd[7681]: Writing status 0 down pipe 14
>Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 13: 4 bytes: status: 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: distribute command: XID = 739
>Jun 11 14:05:30 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98503260, msg=0x2a98503370, len=33, csid=(nil), xid=739
>Jun 11 14:05:30 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:05:30 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a985033c0, msglen =33, client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00
>Jun 11 14:05:30 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:05:30 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:05:30 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:05:30 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 13: 4 bytes: status: 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:05:30 taft-01 clvmd[7681]: Read on local socket 12, len = 33
>Jun 11 14:05:30 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:05:30 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 6 (client=0x2a98503260)
>Jun 11 14:05:30 taft-01 clvmd[7681]: sync_unlock: 'V_VolGroup00' lkid:10285
>Jun 11 14:05:30 taft-01 clvmd[7681]: Writing status 0 down pipe 14
>Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 13: 4 bytes: status: 0
>Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503260
>Jun 11 14:05:30 taft-01 clvmd[7681]: distribute command: XID = 740
>Jun 11 14:05:30 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98503260, msg=0x2a98503370, len=33, csid=(nil), xid=740 >Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:30 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a985033c0, msglen =33, client=0x2a98503260 >Jun 11 14:05:30 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00 >Jun 11 14:05:30 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:30 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:30 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:30 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 13: 4 bytes: status: 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503260 >Jun 11 14:05:30 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:30 taft-01 clvmd[7681]: Read on local socket 12, len = 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:05:30 taft-01 qarshd[20040]: That's enough >Jun 11 14:05:30 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:30 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:05:30 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:05:30 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client >Jun 11 14:05:30 taft-01 qarshd[20044]: Talking to peer 10.15.80.47:47258 >Jun 11 14:05:30 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98503260, msg=(nil), len=0, csid=(nil), xid=740 >Jun 11 14:05:30 taft-01 qarshd[20044]: Running cmdline: lvs -a -o +devices >Jun 11 14:05:30 taft-01 clvmd[7681]: process_work_item: free fd 12 >Jun 11 14:05:30 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:30 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:05:30 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:05:30 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:05:30 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:05:30 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0 >Jun 11 14:05:30 taft-01 clvmd[7681]: Sub thread ready for work. >Jun 11 14:05:30 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98502dc0) >Jun 11 14:05:30 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0 >Jun 11 14:05:30 taft-01 clvmd[7681]: sync_lock: returning lkid 202ee >Jun 11 14:05:30 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:30 taft-01 clvmd[7681]: distribute command: XID = 741 >Jun 11 14:05:30 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=741 >Jun 11 14:05:30 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:30 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0 >Jun 11 14:05:30 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:05:30 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:30 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:30 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:30 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:30 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:05:30 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:30 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0) >Jun 11 14:05:30 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:202ee >Jun 11 14:05:30 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:30 taft-01 clvmd[7681]: distribute command: XID = 742 >Jun 11 14:05:30 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=742 >Jun 11 14:05:30 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:30 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503260, msglen =37, client=0x2a98502dc0 >Jun 11 14:05:30 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:05:30 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:30 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:30 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:30 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:05:30 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:30 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:30 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:31 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:31 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:05:31 taft-01 clvmd[7681]: Read on local socket 5, len = 33 >Jun 11 14:05:31 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:31 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 1 (client=0x2a98502dc0) >Jun 11 14:05:31 taft-01 clvmd[7681]: sync_lock: 'V_VolGroup00' mode:3 flags=0 >Jun 11 14:05:31 taft-01 clvmd[7681]: sync_lock: returning lkid 101e7 >Jun 11 14:05:31 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:31 taft-01 clvmd[7681]: distribute command: XID = 743 >Jun 11 14:05:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=33, csid=(nil), xid=743 >Jun 11 14:05:31 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:31 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =33, client=0x2a98502dc0 >Jun 11 14:05:31 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00 >Jun 11 14:05:31 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:31 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:31 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:31 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:31 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:31 taft-01 clvmd[7681]: Read on local socket 5, len = 33 >Jun 11 14:05:31 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:31 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 6 (client=0x2a98502dc0) >Jun 11 14:05:31 taft-01 clvmd[7681]: sync_unlock: 'V_VolGroup00' lkid:101e7 >Jun 11 14:05:31 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:31 taft-01 clvmd[7681]: distribute command: XID = 744 >Jun 11 14:05:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=33, csid=(nil), xid=744 >Jun 11 14:05:31 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:31 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =33, client=0x2a98502dc0 >Jun 11 14:05:31 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00 >Jun 11 14:05:31 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:31 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:31 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:31 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:05:31 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:31 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:05:31 taft-01 qarshd[20044]: That's enough >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:31 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:31 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:05:31 taft-01 qarshd[20047]: Talking to peer 10.15.80.47:47259 >Jun 11 14:05:31 taft-01 qarshd[20047]: Running cmdline: lvs -a -o +devices >Jun 11 14:05:31 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:05:31 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client >Jun 11 14:05:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=744 >Jun 11 14:05:31 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:05:31 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:05:31 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:31 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:05:31 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:05:31 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:05:31 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: in sub thread: client = 0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: Sub thread ready for work. >Jun 11 14:05:31 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98503020) >Jun 11 14:05:31 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0 >Jun 11 14:05:31 taft-01 clvmd[7681]: sync_lock: returning lkid 20078 >Jun 11 14:05:31 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: distribute command: XID = 745 >Jun 11 14:05:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98503020, msg=0x2a98503130, len=37, csid=(nil), xid=745 >Jun 11 14:05:31 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:31 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503260, msglen =37, client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:05:31 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:31 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:31 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:31 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:31 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:05:31 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:31 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98503020) >Jun 11 14:05:31 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:20078 >Jun 11 14:05:31 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: distribute command: XID = 746 >Jun 11 14:05:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98503020, msg=0x2a98502850, len=37, csid=(nil), xid=746 >Jun 11 14:05:31 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:31 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503130, msglen =37, client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:05:31 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:31 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:31 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:31 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:31 taft-01 clvmd[7681]: Read on local socket 5, len = 33 >Jun 11 14:05:31 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:31 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 1 (client=0x2a98503020) >Jun 11 14:05:31 taft-01 clvmd[7681]: sync_lock: 'V_VolGroup00' mode:3 flags=0 >Jun 11 14:05:31 taft-01 clvmd[7681]: sync_lock: returning lkid 10327 >Jun 11 14:05:31 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: distribute command: XID = 747 >Jun 11 14:05:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98503020, msg=0x2a98502850, len=33, csid=(nil), xid=747 >Jun 11 14:05:31 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:31 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503130, msglen =33, client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00 >Jun 11 14:05:31 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:31 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:31 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:31 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:31 taft-01 clvmd[7681]: Read on local socket 5, len = 33 >Jun 11 14:05:31 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:31 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 6 (client=0x2a98503020) >Jun 11 14:05:31 taft-01 clvmd[7681]: sync_unlock: 'V_VolGroup00' lkid:10327 >Jun 11 14:05:31 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: distribute command: XID = 748 >Jun 11 14:05:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98503020, msg=0x2a98502850, len=33, csid=(nil), xid=748 >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:05:31 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503130, msglen =33, client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00 >Jun 11 14:05:31 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:05:31 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:05:31 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:31 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:05:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020 >Jun 11 14:05:31 taft-01 clvmd[7681]: Send local reply >Jun 11 14:05:31 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:05:31 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:05:31 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:05:31 taft-01 qarshd[20047]: That's enough >Jun 11 14:05:31 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:05:31 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:05:31 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:05:31 taft-01 qarshd[20050]: Talking to peer 10.15.80.47:47260 >Jun 11 14:05:31 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client >Jun 11 14:05:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98503020, msg=(nil), len=0, csid=(nil), xid=748 >Jun 11 14:05:31 taft-01 qarshd[20050]: Running cmdline: dmsetup ls >Jun 11 14:05:31 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:05:31 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:05:31 taft-01 qarshd[20050]: That's enough >Jun 11 14:05:31 taft-01 qarshd[20052]: Talking to peer 10.15.80.47:47261 >Jun 11 14:05:31 taft-01 qarshd[20052]: Running cmdline: dmsetup ls >Jun 11 14:05:31 taft-01 qarshd[20052]: That's enough >Jun 11 14:05:32 taft-01 lvm[7565]: No longer monitoring mirror device helter_skelter-syncd_secondary_core_2legs_2 for events >Jun 11 14:05:32 taft-01 lvm[7565]: Unlocking memory >Jun 11 14:05:32 taft-01 lvm[7565]: memlock_count dec to 0 >Jun 11 14:05:32 taft-01 lvm[7565]: Internal persistent device cache empty - not writing to /etc/lvm/cache/.cache >Jun 11 14:05:32 taft-01 lvm[7565]: Wiping internal VG cache >Jun 11 14:05:33 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:34 taft-01 qarshd[19969]: Nothing to do >Jun 11 
14:05:36 taft-01 qarshd[20055]: Talking to peer 10.15.80.47:47268 >Jun 11 14:05:36 taft-01 qarshd[20055]: Running cmdline: /usr/tests/sts-rhel4.7/bin/checkit -w /mnt/syncd_secondary_core_2legs_1/checkit -f /tmp/checkit_syncd_secondary_core_2legs_1 -v >Jun 11 14:05:36 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:37 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:05:39 taft-01 qarshd[20055]: Nothing to do >Jun 11 14:05:39 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:40 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:05:41 taft-01 qarshd[20055]: That's enough >Jun 11 14:05:41 taft-01 qarshd[20058]: Talking to peer 10.15.80.47:47269 >Jun 11 14:05:41 taft-01 qarshd[20058]: Running cmdline: /usr/tests/sts-rhel4.7/bin/checkit -w /mnt/syncd_secondary_core_2legs_2/checkit -f /tmp/checkit_syncd_secondary_core_2legs_2 -v >Jun 11 14:05:42 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:43 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:05:44 taft-01 qarshd[20058]: Nothing to do >Jun 11 14:05:45 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:46 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:05:46 taft-01 qarshd[20058]: That's enough >Jun 11 14:05:48 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:49 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:05:51 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:52 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:05:54 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:55 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:05:57 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:05:58 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:06:00 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:06:01 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:06:03 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:06:04 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:06:06 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:06:07 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:06:09 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:06:10 taft-01 
qarshd[19969]: Nothing to do >Jun 11 14:06:12 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:06:13 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:06:15 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:06:16 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:06:18 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:06:19 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:06:19 taft-01 qarshd[20068]: Talking to peer 10.15.80.47:51671 >Jun 11 14:06:19 taft-01 qarshd[20068]: Running cmdline: echo running > /sys/block/sdf/device/state >Jun 11 14:06:19 taft-01 qarshd[20068]: That's enough >Jun 11 14:06:19 taft-01 qarshd[20070]: Talking to peer 10.15.80.47:51675 >Jun 11 14:06:19 taft-01 qarshd[20070]: Running cmdline: pvcreate /dev/sdf1 >Jun 11 14:06:19 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:06:19 taft-01 clvmd[7681]: Read on local socket 5, len = 31 >Jun 11 14:06:19 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:06:19 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:06:19 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:06:19 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:06:19 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0 >Jun 11 14:06:19 taft-01 clvmd[7681]: Sub thread ready for work. >Jun 11 14:06:19 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'P_#orphans' at 4 (client=0x2a98502dc0) >Jun 11 14:06:19 taft-01 clvmd[7681]: sync_lock: 'P_#orphans' mode:4 flags=0 >Jun 11 14:06:19 taft-01 clvmd[7681]: sync_lock: returning lkid 10063 >Jun 11 14:06:19 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:19 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:19 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:19 taft-01 clvmd[7681]: distribute command: XID = 749 >Jun 11 14:06:19 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=31, csid=(nil), xid=749 >Jun 11 14:06:19 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:06:19 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:19 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503260, msglen =31, client=0x2a98502dc0 >Jun 11 14:06:19 taft-01 clvmd[7681]: Dropping metadata for VG #orphans >Jun 11 14:06:19 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:06:19 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:06:19 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:06:19 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:06:19 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:06:19 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:06:19 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:19 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:06:19 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:19 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:19 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:06:19 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:19 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:19 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:19 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 31 >Jun 11 14:06:20 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:06:20 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:06:20 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'P_#orphans' at 6 (client=0x2a98502dc0) >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_unlock: 'P_#orphans' lkid:10063 >Jun 11 14:06:20 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: distribute command: XID = 750 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=31, csid=(nil), xid=750 >Jun 11 14:06:20 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:20 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =31, client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Dropping metadata for VG #orphans >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:06:20 taft-01 qarshd[20070]: That's enough >Jun 11 14:06:20 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:06:20 taft-01 qarshd[20073]: Talking to peer 10.15.80.47:51676 >Jun 11 14:06:20 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:06:20 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client >Jun 11 14:06:20 taft-01 qarshd[20073]: Running cmdline: vgextend helter_skelter /dev/sdf1 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=750 >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 31 >Jun 11 14:06:20 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:06:20 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:06:20 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:06:20 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Sub thread ready for work. 
>Jun 11 14:06:20 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'P_#orphans' at 4 (client=0x2a98502dc0) >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_lock: 'P_#orphans' mode:4 flags=0 >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_lock: returning lkid 1037f >Jun 11 14:06:20 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: distribute command: XID = 751 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=31, csid=(nil), xid=751 >Jun 11 14:06:20 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:20 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503260, msglen =31, client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Dropping metadata for VG #orphans >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:06:20 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0) >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0 >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_lock: returning lkid 20104 >Jun 11 14:06:20 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: distribute command: XID = 752 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=752 >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:20 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:06:20 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:06:20 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: distribute command: XID = 753 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=753 >Jun 11 14:06:20 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:20 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503260, msglen =37, client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:06:20 taft-01 
clvmd[7681]: Got post command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:06:20 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0) >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:20104 >Jun 11 14:06:20 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: distribute command: XID = 754 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=754 >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:20 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 31 >Jun 11 14:06:20 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:06:20 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'P_#orphans' at 6 (client=0x2a98502dc0) >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_unlock: 'P_#orphans' lkid:1037f >Jun 11 14:06:20 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: distribute command: XID = 755 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=31, csid=(nil), xid=755 >Jun 11 14:06:20 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:20 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503260, msglen =31, client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Dropping metadata for VG #orphans >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:06:20 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:06:20 taft-01 qarshd[20073]: That's enough >Jun 11 14:06:20 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:06:20 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=755 >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 qarshd[20076]: Talking to peer 10.15.80.47:51677 >Jun 11 14:06:20 taft-01 qarshd[20076]: Running cmdline: lvconvert --corelog -m 1 helter_skelter/syncd_secondary_core_2legs_1 /dev/sdg1:0-1000 /dev/sdf1:0-1000 >Jun 11 14:06:20 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:06:20 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:06:20 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:06:20 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Sub thread ready for work. >Jun 11 14:06:20 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0) >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0 >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_lock: returning lkid 10116 >Jun 11 14:06:20 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: distribute command: XID = 756 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=756 >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:20 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:06:20 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:06:20 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: distribute command: XID = 757 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=757 >Jun 11 14:06:20 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:20 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503260, msglen =37, client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:20 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:06:20 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:20 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:20 taft-01 clvmd[7681]: Read on local socket 5, len = 85 >Jun 11 14:06:20 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:06:20 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:06:20 taft-01 clvmd[7681]: pre_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR ) >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV' mode:4 flags=5 >Jun 11 14:06:20 taft-01 clvmd[7681]: sync_lock: returning lkid 103bf >Jun 11 14:06:20 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:20 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:20 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: distribute command: XID = 758 >Jun 11 14:06:20 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98502b30, len=85, csid=(nil), xid=758 >Jun 11 14:06:20 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:20 taft-01 clvmd[7681]: process_local_command: LOCK_LV (0x32) msg=0x2a98502b90, msglen =85, client=0x2a98502dc0 >Jun 11 14:06:20 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR ) >Jun 11 14:06:20 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:06:21 taft-01 kernel: dm-cmirror: Creating xoT7UjpV (1) >Jun 11 14:06:21 taft-01 kernel: dm-cmirror: start_server called >Jun 11 14:06:21 taft-01 kernel: dm-cmirror: cluster_log_serverd ready for work >Jun 11 14:06:21 taft-01 kernel: dm-cmirror: Node joining >Jun 11 14:06:21 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:06:21 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:06:21 taft-01 kernel: dm-cmirror: server_id=dead, server_valid=0, xoT7UjpV >Jun 11 14:06:21 taft-01 kernel: dm-cmirror: trigger = LRT_GET_SYNC_COUNT >Jun 11 14:06:21 taft-01 
clvmd[7681]: Command return is 0 >Jun 11 14:06:21 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:21 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:06:21 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:21 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:06:22 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:06:23 taft-01 kernel: dm-cmirror: Node joining >Jun 11 14:06:23 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:06:23 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:06:23 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:06:24 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:06:25 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:06:26 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: Node joining >Jun 11 14:06:27 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:06:27 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:06:27 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:06:27 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:06:27 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:27 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:27 taft-01 clvmd[7681]: Send local reply >Jun 11 14:06:27 taft-01 clvmd[7681]: Read on local socket 5, len = 85 >Jun 11 14:06:27 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:06:27 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:06:27 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:06:27 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:06:27 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:06:27 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:06:27 taft-01 clvmd[7681]: distribute command: XID = 759 >Jun 11 14:06:27 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98502850, len=85, csid=(nil), xid=759 >Jun 11 14:06:27 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:06:27 taft-01 clvmd[7681]: process_local_command: LOCK_LV (0x32) msg=0x2a985028b0, msglen =85, client=0x2a98502dc0 >Jun 11 14:06:27 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR ) >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: cluster_resume: Setting recovery_halted = 0 >Jun 11 14:06:27 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV) >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 2 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 2 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 0 >Jun 11 14:06:27 taft-01 [7565]: Monitoring mirror device helter_skelter-syncd_secondary_core_2legs_1 for events >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: Loading config file: /etc/lvm/lvm.conf >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 1 >Jun 11 14:06:27 taft-01 lvm[7565]: Opened /etc/lvm/lvm.conf RO >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 lvm[7565]: Closed /etc/lvm/lvm.conf >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting log/syslog to 1 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: Setting log/level to 0 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 4 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting log/verbose to 0 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting log/indent to 1 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 2 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting log/prefix to >Jun 
11 14:06:27 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: Setting log/command_names to 0 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 3 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting global/test to 0 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting log/overwrite to 0 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 3 >Jun 11 14:06:27 taft-01 lvm[7565]: log/activation not found in config: defaulting to 0 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: Logging initialised at Wed Jun 11 14:06:27 2008 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 2 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting global/umask to 63 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting devices/dir to /dev >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 4 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting global/proc to /proc >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: Setting global/activation to 1 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 1 >Jun 11 14:06:27 taft-01 lvm[7565]: global/suffix not found in config: defaulting to 1 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting global/units to h >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting activation/readahead to auto >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: devices/preferred_names not found in config file: using built-in preferences >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 4 >Jun 11 14:06:27 taft-01 lvm[7565]: Matcher built with 3 dfa states >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 
lvm[7565]: Setting devices/ignore_suspended_devices to 0 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 2 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting devices/cache_dir to /etc/lvm/cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: Setting devices/write_cache_state to 1 >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 3 >Jun 11 14:06:27 taft-01 lvm[7565]: Opened /etc/lvm/cache/.cache RO >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000-part1: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 3 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram11: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/VolGroup00/LogVol01: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 2 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/mapper/helter_skelter-syncd_secondary_core_2legs_2: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram10: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 4 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/helter_skelter/syncd_secondary_core_2legs_2: Aliased to /dev/mapper/helter_skelter-syncd_secondary_core_2legs_2 in device cache (preferred name) >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 1 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000-part1: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/VolGroup00/LogVol00: Added to device 
cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram12: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (xoT7UjpV) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram6: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: starter : 4 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000-part1: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: co-ordinator: 1 >Jun 11 14:06:27 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/mapper/helter_skelter-syncd_secondary_core_2legs_1: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:06:27 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram13: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: Received recovery work from 1: 708/xoT7UjpV >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram14: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 708/xoT7UjpV >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000-part1: Added to device cache >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: Received recovery work from 1: 709/xoT7UjpV >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/helter_skelter/syncd_secondary_core_2legs_1: Aliased to /dev/mapper/helter_skelter-syncd_secondary_core_2legs_1 in device cache (preferred name) >Jun 11 14:06:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 709/xoT7UjpV >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram5: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/mapper/VolGroup00-LogVol00: Aliased to /dev/VolGroup00/LogVol00 in device cache >Jun 11 14:06:27 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:06:27 taft-01 lvm[7565]: 
/dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part1: Added to device cache >Jun 11 14:06:27 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram1: Aliased to /dev/ram in device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/sda1: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part1 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/sdb1: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/sdc1: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/sdg1: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/sdd1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000-part1 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/sde1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000-part1 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/sdf1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000-part1 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/sdh1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000-part1 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ramdisk: Added to device cache >Jun 11 14:06:27 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/mapper/VolGroup00-LogVol01: Aliased to /dev/VolGroup00/LogVol01 in device cache >Jun 11 14:06:27 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/dm-4: Aliased to /dev/helter_skelter/syncd_secondary_core_2legs_1 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/dm-7: Aliased to /dev/helter_skelter/syncd_secondary_core_2legs_2 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram0: Aliased to /dev/ramdisk in device cache >Jun 11 14:06:27 taft-01 
lvm[7565]: /dev/root: Aliased to /dev/VolGroup00/LogVol00 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/dm-1: Aliased to /dev/VolGroup00/LogVol01 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part2: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram2: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/sda2: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part2 in device cache (preferred name) >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram8: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram9: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/dm-0: Aliased to /dev/root in device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram15: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000-part1: Aliased to /dev/sdb1 in device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000-part1: Aliased to /dev/sdg1 in device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram3: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram4: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/ram7: Added to device cache >Jun 11 14:06:27 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000-part1: Aliased to /dev/sdc1 in device cache >Jun 11 14:06:27 taft-01 lvm[7565]: Loaded persistent filter cache from /etc/lvm/cache/.cache >Jun 11 14:06:27 taft-01 lvm[7565]: Closed /etc/lvm/cache/.cache >Jun 11 14:06:27 taft-01 lvm[7565]: Setting activation/reserved_stack to 256 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting activation/reserved_memory to 8192 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting activation/process_priority to -18 >Jun 11 14:06:27 taft-01 lvm[7565]: Initialised format: lvm1 >Jun 11 14:06:27 taft-01 lvm[7565]: 
Initialised format: pool >Jun 11 14:06:27 taft-01 lvm[7565]: Initialised format: lvm2 >Jun 11 14:06:27 taft-01 lvm[7565]: global/format not found in config: defaulting to lvm2 >Jun 11 14:06:27 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm1 >Jun 11 14:06:27 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_pool >Jun 11 14:06:27 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm2 >Jun 11 14:06:27 taft-01 lvm[7565]: Initialised segtype: striped >Jun 11 14:06:27 taft-01 lvm[7565]: Initialised segtype: zero >Jun 11 14:06:27 taft-01 lvm[7565]: Initialised segtype: error >Jun 11 14:06:27 taft-01 lvm[7565]: Setting dmeventd/snapshot_library to libdevmapper-event-lvm2snapshot.so >Jun 11 14:06:27 taft-01 lvm[7565]: Setting global/library_dir to /usr/lib64 >Jun 11 14:06:27 taft-01 lvm[7565]: Initialised segtype: snapshot >Jun 11 14:06:27 taft-01 lvm[7565]: Initialised segtype: mirror >Jun 11 14:06:27 taft-01 lvm[7565]: Setting backup/retain_days to 30 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting backup/retain_min to 10 >Jun 11 14:06:27 taft-01 lvm[7565]: Setting backup/archive_dir to /etc/lvm/archive >Jun 11 14:06:27 taft-01 lvm[7565]: Setting backup/backup_dir to /etc/lvm/backup >Jun 11 14:06:27 taft-01 lvm[7565]: Locking memory >Jun 11 14:06:27 taft-01 lvm[7565]: memlock_count inc to 1 >Jun 11 14:06:27 taft-01 lvm[7565]: Parsing: _memlock_inc >Jun 11 14:06:27 taft-01 clvmd[7681]: Command return is 0 >Jun 11 14:06:27 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:06:27 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:06:27 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:06:27 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:06:27 taft-01 clvmd[7681]: post_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:06:27 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV' mode:1 flags=4
>Jun 11 14:06:27 taft-01 clvmd[7681]: sync_lock: returning lkid 103bf
>Jun 11 14:06:27 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:27 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:27 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:06:27 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:06:27 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:06:27 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:06:27 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:06:27 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:10116
>Jun 11 14:06:27 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:06:27 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:27 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:27 taft-01 clvmd[7681]: distribute command: XID = 760
>Jun 11 14:06:27 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=760
>Jun 11 14:06:27 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:06:27 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:06:27 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:06:27 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:06:27 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:06:27 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:06:27 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:06:27 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:06:27 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:27 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:27 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:06:27 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:06:27 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:28 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:29 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:30 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:31 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:32 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:32 taft-01 kernel: dm-cmirror: Received recovery work from 1: 779/xoT7UjpV
>Jun 11 14:06:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 779/xoT7UjpV
>Jun 11 14:06:32 taft-01 kernel: dm-cmirror: Received recovery work from 1: 713/xoT7UjpV
>Jun 11 14:06:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 713/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 716/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 716/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 84/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 84/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 717/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 717/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 718/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 718/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 786/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 786/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 803/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 803/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 87/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 87/xoT7UjpV
>Jun 11 14:06:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 720/xoT7UjpV
>Jun 11 14:06:33 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 720/xoT7UjpV
>Jun 11 14:06:34 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 787/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 787/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 721/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 721/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 788/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 788/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 723/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 723/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 806/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 806/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 90/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 90/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 724/xoT7UjpV
>Jun 11 14:06:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 724/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 794/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 794/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 728/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 728/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 96/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 96/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 731/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 731/xoT7UjpV
>Jun 11 14:06:35 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 732/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 732/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 799/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 799/xoT7UjpV
>Jun 11 14:06:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 99/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 99/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 736/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 736/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 734/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 734/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 101/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 101/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 672/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 672/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 739/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 739/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 818/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 818/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 674/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 674/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 741/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 741/xoT7UjpV
>Jun 11 14:06:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 103/xoT7UjpV
>Jun 11 14:06:36 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:37 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 103/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Received recovery work from 1: 742/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 742/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Received recovery work from 1: 678/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 678/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Received recovery work from 1: 679/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 679/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Received recovery work from 1: 680/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 680/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Received recovery work from 1: 681/xoT7UjpV
>Jun 11 14:06:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 681/xoT7UjpV
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Received recovery work from 1: 109/xoT7UjpV
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Client finishing recovery: 109/xoT7UjpV
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Received recovery work from 1: 683/xoT7UjpV
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Client finishing recovery: 683/xoT7UjpV
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Received recovery work from 1: 684/xoT7UjpV
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Client finishing recovery: 684/xoT7UjpV
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Received recovery work from 1: 686/xoT7UjpV
>Jun 11 14:06:38 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Client finishing recovery: 686/xoT7UjpV
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Received recovery work from 1: 687/xoT7UjpV
>Jun 11 14:06:38 taft-01 kernel: dm-cmirror: Client finishing recovery: 687/xoT7UjpV
>Jun 11 14:06:39 taft-01 kernel: dm-cmirror: Received recovery work from 1: 690/xoT7UjpV
>Jun 11 14:06:39 taft-01 kernel: dm-cmirror: Client finishing recovery: 690/xoT7UjpV
>Jun 11 14:06:39 taft-01 kernel: dm-cmirror: Received recovery work from 1: 117/xoT7UjpV
>Jun 11 14:06:39 taft-01 kernel: dm-cmirror: Client finishing recovery: 117/xoT7UjpV
>Jun 11 14:06:39 taft-01 kernel: dm-cmirror: Received recovery work from 1: 118/xoT7UjpV
>Jun 11 14:06:39 taft-01 kernel: dm-cmirror: Client finishing recovery: 118/xoT7UjpV
>Jun 11 14:06:39 taft-01 kernel: dm-cmirror: Received recovery work from 1: 692/xoT7UjpV
>Jun 11 14:06:39 taft-01 kernel: dm-cmirror: Client finishing recovery: 692/xoT7UjpV
>Jun 11 14:06:39 taft-01 kernel: dm-cmirror: Received recovery work from 1: 119/xoT7UjpV
>Jun 11 14:06:39 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:40 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 119/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Received recovery work from 1: 693/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 693/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Received recovery work from 1: 120/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 120/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Received recovery work from 1: 694/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 694/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Received recovery work from 1: 695/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 695/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Received recovery work from 1: 696/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 696/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Received recovery work from 1: 123/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 123/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Received recovery work from 1: 124/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 124/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Received recovery work from 1: 697/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 697/xoT7UjpV
>Jun 11 14:06:40 taft-01 kernel: dm-cmirror: Received recovery work from 1: 126/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Client finishing recovery: 126/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Received recovery work from 1: 698/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Client finishing recovery: 698/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Received recovery work from 1: 127/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Client finishing recovery: 127/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Received recovery work from 1: 128/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Client finishing recovery: 128/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Received recovery work from 1: 699/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Client finishing recovery: 699/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Received recovery work from 1: 131/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Client finishing recovery: 131/xoT7UjpV
>Jun 11 14:06:41 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Received recovery work from 1: 702/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Client finishing recovery: 702/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Received recovery work from 1: 134/xoT7UjpV
>Jun 11 14:06:41 taft-01 kernel: dm-cmirror: Client finishing recovery: 134/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Received recovery work from 1: 705/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Client finishing recovery: 705/xoT7UjpV
>Jun 11 14:06:42 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:06:42 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:06:42 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:06:42 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:06:42 taft-01 clvmd[7681]: sync_lock: returning lkid 10126
>Jun 11 14:06:42 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:06:42 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:06:42 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:42 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:42 taft-01 clvmd[7681]: distribute command: XID = 761
>Jun 11 14:06:42 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=761
>Jun 11 14:06:42 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:06:42 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:06:42 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:06:42 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:06:42 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:06:42 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:06:42 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:06:42 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:06:42 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:42 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:42 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Received recovery work from 1: 142/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Client finishing recovery: 142/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Received recovery work from 1: 707/xoT7UjpV
>Jun 11 14:06:42 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:06:42 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:06:42 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:06:42 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:10126
>Jun 11 14:06:42 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:06:42 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:42 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:42 taft-01 clvmd[7681]: distribute command: XID = 762
>Jun 11 14:06:42 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=762
>Jun 11 14:06:42 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:06:42 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:06:42 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:06:42 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:06:42 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:06:42 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:06:42 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:06:42 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:06:42 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:06:42 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:42 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:42 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Client finishing recovery: 707/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Received recovery work from 1: 143/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Client finishing recovery: 143/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Received recovery work from 1: 144/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Client finishing recovery: 144/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Received recovery work from 1: 145/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Client finishing recovery: 145/xoT7UjpV
>Jun 11 14:06:42 taft-01 kernel: dm-cmirror: Received recovery work from 1: 146/xoT7UjpV
>Jun 11 14:06:42 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:43 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Client finishing recovery: 146/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Received recovery work from 1: 147/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Client finishing recovery: 147/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Received recovery work from 1: 148/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Client finishing recovery: 148/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Received recovery work from 1: 149/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Client finishing recovery: 149/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Received recovery work from 1: 150/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Client finishing recovery: 150/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Received recovery work from 1: 151/xoT7UjpV
>Jun 11 14:06:43 taft-01 kernel: dm-cmirror: Client finishing recovery: 151/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Received recovery work from 1: 156/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 156/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Received recovery work from 1: 157/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 157/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Received recovery work from 1: 158/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 158/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Received recovery work from 1: 159/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 159/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Received recovery work from 1: 160/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 160/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Received recovery work from 1: 161/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 161/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Received recovery work from 1: 162/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 162/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Received recovery work from 1: 163/xoT7UjpV
>Jun 11 14:06:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 163/xoT7UjpV
>Jun 11 14:06:44 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Received recovery work from 1: 175/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Client finishing recovery: 175/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Received recovery work from 1: 176/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Client finishing recovery: 176/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Received recovery work from 1: 177/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Client finishing recovery: 177/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Received recovery work from 1: 180/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Client finishing recovery: 180/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Received recovery work from 1: 181/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Client finishing recovery: 181/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Received recovery work from 1: 182/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Client finishing recovery: 182/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Received recovery work from 1: 183/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Client finishing recovery: 183/xoT7UjpV
>Jun 11 14:06:45 taft-01 kernel: dm-cmirror: Received recovery work from 1: 184/xoT7UjpV
>Jun 11 14:06:45 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:46 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 184/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Received recovery work from 1: 185/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 185/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Received recovery work from 1: 186/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 186/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Received recovery work from 1: 187/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 187/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Received recovery work from 1: 188/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 188/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Received recovery work from 1: 189/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 189/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Received recovery work from 1: 190/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 190/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Received recovery work from 1: 191/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 191/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Received recovery work from 1: 192/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 192/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Received recovery work from 1: 193/xoT7UjpV
>Jun 11 14:06:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 193/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Received recovery work from 1: 208/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 208/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Received recovery work from 1: 209/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 209/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Received recovery work from 1: 210/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 210/xoT7UjpV
>Jun 11 14:06:47 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Received recovery work from 1: 213/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 213/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Received recovery work from 1: 214/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 214/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Received recovery work from 1: 215/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 215/xoT7UjpV
>Jun 11 14:06:47 taft-01 kernel: dm-cmirror: Received recovery work from 1: 216/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Client finishing recovery: 216/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Received recovery work from 1: 217/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Client finishing recovery: 217/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Received recovery work from 1: 218/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Client finishing recovery: 218/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Received recovery work from 1: 219/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Client finishing recovery: 219/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Received recovery work from 1: 220/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Client finishing recovery: 220/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Received recovery work from 1: 221/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Client finishing recovery: 221/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Received recovery work from 1: 226/xoT7UjpV
>Jun 11 14:06:48 taft-01 kernel: dm-cmirror: Client finishing recovery: 226/xoT7UjpV
>Jun 11 14:06:48 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:49 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:49 taft-01 kernel: dm-cmirror: Received recovery work from 1: 231/xoT7UjpV
>Jun 11 14:06:49 taft-01 kernel: dm-cmirror: Client finishing recovery: 231/xoT7UjpV
>Jun 11 14:06:49 taft-01 kernel: dm-cmirror: Received recovery work from 1: 232/xoT7UjpV
>Jun 11 14:06:49 taft-01 kernel: dm-cmirror: Client finishing recovery: 232/xoT7UjpV
>Jun 11 14:06:49 taft-01 kernel: dm-cmirror: Received recovery work from 1: 233/xoT7UjpV
>Jun 11 14:06:49 taft-01 kernel: dm-cmirror: Client finishing recovery: 233/xoT7UjpV
>Jun 11 14:06:50 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:51 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:52 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:53 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:54 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:55 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 275/xoT7UjpV
>Jun 11 14:06:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 275/xoT7UjpV
>Jun 11 14:06:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 276/xoT7UjpV
>Jun 11 14:06:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 276/xoT7UjpV
>Jun 11 14:06:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 277/xoT7UjpV
>Jun 11 14:06:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 277/xoT7UjpV
>Jun 11 14:06:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 279/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 279/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 280/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 280/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 281/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 281/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 282/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 282/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 286/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 286/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 287/xoT7UjpV
>Jun 11 14:06:56 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 287/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 288/xoT7UjpV
>Jun 11 14:06:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 288/xoT7UjpV
>Jun 11 14:06:57 taft-01 kernel: dm-cmirror: Received recovery work from 1: 293/xoT7UjpV
>Jun 11 14:06:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 293/xoT7UjpV
>Jun 11 14:06:57 taft-01 kernel: dm-cmirror: Received recovery work from 1: 294/xoT7UjpV
>Jun 11 14:06:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 294/xoT7UjpV
>Jun 11 14:06:57 taft-01 kernel: dm-cmirror: Received recovery work from 1: 295/xoT7UjpV
>Jun 11 14:06:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 295/xoT7UjpV
>Jun 11 14:06:57 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:06:57 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:06:57 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:06:57 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:06:57 taft-01 clvmd[7681]: sync_lock: returning lkid 1038b
>Jun 11 14:06:57 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:06:57 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:06:57 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:57 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:57 taft-01 clvmd[7681]: distribute command: XID = 763
>Jun 11 14:06:57 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=763
>Jun 11 14:06:57 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:06:57 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:06:57 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:06:57 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:06:57 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:06:57 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:06:57 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:06:57 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:06:57 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:57 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:57 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:06:57 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:06:57 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:06:57 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:06:57 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:1038b
>Jun 11 14:06:57 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:06:57 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:57 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:06:57 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:57 taft-01 clvmd[7681]: distribute command: XID = 764
>Jun 11 14:06:57 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=764
>Jun 11 14:06:57 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:06:57 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:06:57 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:06:57 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:06:57 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:06:57 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:06:57 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:06:57 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:06:57 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:06:57 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:06:57 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:06:57 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:06:58 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:06:59 taft-01 kernel: dm-cmirror: Received recovery work from 1: 311/xoT7UjpV
>Jun 11 14:06:59 taft-01 kernel: dm-cmirror: Client finishing recovery: 311/xoT7UjpV
>Jun 11 14:06:59 taft-01 kernel: dm-cmirror: Received recovery work from 1: 312/xoT7UjpV
>Jun 11 14:06:59 taft-01 kernel: dm-cmirror: Client finishing recovery: 312/xoT7UjpV
>Jun 11 14:06:59 taft-01 kernel: dm-cmirror: Received recovery work from 1: 313/xoT7UjpV
>Jun 11 14:06:59 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:06:59 taft-01 kernel: dm-cmirror: Client finishing recovery: 313/xoT7UjpV
>Jun 11 14:06:59 taft-01 kernel: dm-cmirror: Received recovery work from 1: 316/xoT7UjpV
>Jun 11 14:07:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 316/xoT7UjpV
>Jun 11 14:07:00 taft-01 kernel: dm-cmirror: Received recovery work from 1: 321/xoT7UjpV
>Jun 11 14:07:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 321/xoT7UjpV
>Jun 11 14:07:00 taft-01 kernel: dm-cmirror: Received recovery work from 1: 322/xoT7UjpV
>Jun 11 14:07:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 322/xoT7UjpV
>Jun 11 14:07:00 taft-01 kernel: dm-cmirror: Received recovery work from 1: 323/xoT7UjpV
>Jun 11 14:07:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 323/xoT7UjpV
>Jun 11 14:07:00 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:01 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:01 taft-01 kernel: dm-cmirror: Received recovery work from 1: 328/xoT7UjpV
>Jun 11 14:07:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 328/xoT7UjpV
>Jun 11 14:07:01 taft-01 kernel: dm-cmirror: Received recovery work from 1: 329/xoT7UjpV
>Jun 11 14:07:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 329/xoT7UjpV
>Jun 11 14:07:01 taft-01 kernel: dm-cmirror: Received recovery work from 1: 330/xoT7UjpV
>Jun 11 14:07:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 330/xoT7UjpV
>Jun 11 14:07:02 taft-01 kernel: dm-cmirror: Received recovery work from 1: 344/xoT7UjpV
>Jun 11 14:07:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 344/xoT7UjpV
>Jun 11 14:07:02 taft-01 kernel: dm-cmirror: Received recovery work from 1: 345/xoT7UjpV
>Jun 11 14:07:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 345/xoT7UjpV
>Jun 11 14:07:02 taft-01 kernel: dm-cmirror: Received recovery work from 1: 346/xoT7UjpV
>Jun 11 14:07:02 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 346/xoT7UjpV
>Jun 11 14:07:02 taft-01 kernel: dm-cmirror: Received recovery work from 1: 347/xoT7UjpV
>Jun 11 14:07:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 347/xoT7UjpV
>Jun 11 14:07:03 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:04 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:05 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:06 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:07 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Received recovery work from 1: 375/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 375/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Received recovery work from 1: 376/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 376/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Received recovery work from 1: 377/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 377/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Received recovery work from 1: 378/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 378/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Received recovery work from 1: 379/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 379/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Received recovery work from 1: 380/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 380/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Received recovery work from 1: 381/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 381/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Received recovery work from 1: 382/xoT7UjpV
>Jun 11 14:07:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 382/xoT7UjpV
>Jun 11 14:07:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 390/xoT7UjpV
>Jun 11 14:07:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 390/xoT7UjpV
>Jun 11 14:07:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 391/xoT7UjpV
>Jun 11 14:07:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 391/xoT7UjpV
>Jun 11 14:07:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 392/xoT7UjpV
>Jun 11 14:07:08 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 392/xoT7UjpV
>Jun 11 14:07:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 393/xoT7UjpV
>Jun 11
14:07:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 393/xoT7UjpV >Jun 11 14:07:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 394/xoT7UjpV >Jun 11 14:07:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 394/xoT7UjpV >Jun 11 14:07:09 taft-01 kernel: dm-cmirror: Received recovery work from 1: 402/xoT7UjpV >Jun 11 14:07:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 402/xoT7UjpV >Jun 11 14:07:09 taft-01 kernel: dm-cmirror: Received recovery work from 1: 403/xoT7UjpV >Jun 11 14:07:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 403/xoT7UjpV >Jun 11 14:07:09 taft-01 kernel: dm-cmirror: Received recovery work from 1: 409/xoT7UjpV >Jun 11 14:07:09 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:07:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 409/xoT7UjpV >Jun 11 14:07:10 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:07:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 413/xoT7UjpV >Jun 11 14:07:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 413/xoT7UjpV >Jun 11 14:07:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 414/xoT7UjpV >Jun 11 14:07:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 414/xoT7UjpV >Jun 11 14:07:11 taft-01 kernel: dm-cmirror: Received recovery work from 1: 426/xoT7UjpV >Jun 11 14:07:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 426/xoT7UjpV >Jun 11 14:07:11 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:07:12 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:07:12 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:07:12 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:07:12 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:07:12 taft-01 clvmd[7681]: sync_lock: returning lkid 1034b
>Jun 11 14:07:12 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:07:12 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:12 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:07:12 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:12 taft-01 clvmd[7681]: distribute command: XID = 765
>Jun 11 14:07:12 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=765
>Jun 11 14:07:12 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:07:12 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:07:12 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:07:12 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:07:12 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:07:12 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:07:12 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:07:12 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:07:12 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:12 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:12 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:07:12 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:07:12 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:07:12 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:07:12 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:1034b
>Jun 11 14:07:12 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:07:12 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:07:12 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:12 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:12 taft-01 clvmd[7681]: distribute command: XID = 766
>Jun 11 14:07:12 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=766
>Jun 11 14:07:12 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:07:12 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:07:12 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:07:12 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:07:12 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:07:12 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:07:12 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:07:12 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:07:12 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:12 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:12 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:07:12 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:13 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:13 taft-01 kernel: dm-cmirror: Received recovery work from 1: 447/xoT7UjpV
>Jun 11 14:07:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 447/xoT7UjpV
>Jun 11 14:07:13 taft-01 kernel: dm-cmirror: Received recovery work from 1: 448/xoT7UjpV
>Jun 11 14:07:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 448/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Received recovery work from 1: 455/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 455/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Received recovery work from 1: 456/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 456/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Received recovery work from 1: 457/xoT7UjpV
>Jun 11 14:07:14 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 457/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Received recovery work from 1: 458/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 458/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Received recovery work from 1: 459/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 459/xoT7UjpV
>Jun 11 14:07:14 taft-01 kernel: dm-cmirror: Received recovery work from 1: 460/xoT7UjpV
>Jun 11 14:07:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 460/xoT7UjpV
>Jun 11 14:07:15 taft-01 kernel: dm-cmirror: Received recovery work from 1: 461/xoT7UjpV
>Jun 11 14:07:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 461/xoT7UjpV
>Jun 11 14:07:15 taft-01 kernel: dm-cmirror: Received recovery work from 1: 462/xoT7UjpV
>Jun 11 14:07:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 462/xoT7UjpV
>Jun 11 14:07:15 taft-01 kernel: dm-cmirror: Received recovery work from 1: 474/xoT7UjpV
>Jun 11 14:07:15 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:16 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 474/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Received recovery work from 1: 475/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 475/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Received recovery work from 1: 478/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 478/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Received recovery work from 1: 479/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 479/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Received recovery work from 1: 480/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 480/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Received recovery work from 1: 481/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 481/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Received recovery work from 1: 482/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 482/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Received recovery work from 1: 483/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 483/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Received recovery work from 1: 484/xoT7UjpV
>Jun 11 14:07:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 484/xoT7UjpV
>Jun 11 14:07:17 taft-01 kernel: dm-cmirror: Received recovery work from 1: 873/xoT7UjpV
>Jun 11 14:07:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 873/xoT7UjpV
>Jun 11 14:07:17 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:18 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:19 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:20 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:21 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:22 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:23 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:23 taft-01 kernel: dm-cmirror: Received recovery work from 1: 586/xoT7UjpV
>Jun 11 14:07:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 586/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Received recovery work from 1: 596/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 596/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Received recovery work from 1: 597/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 597/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Received recovery work from 1: 598/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 598/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Received recovery work from 1: 599/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 599/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Received recovery work from 1: 600/xoT7UjpV
>Jun 11 14:07:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 600/xoT7UjpV
>Jun 11 14:07:24 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:25 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:25 taft-01 kernel: dm-cmirror: Received recovery work from 1: 609/xoT7UjpV
>Jun 11 14:07:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 609/xoT7UjpV
>Jun 11 14:07:25 taft-01 kernel: dm-cmirror: Received recovery work from 1: 610/xoT7UjpV
>Jun 11 14:07:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 610/xoT7UjpV
>Jun 11 14:07:26 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:27 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:07:27 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:07:27 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:07:27 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:07:27 taft-01 clvmd[7681]: sync_lock: returning lkid 1039f
>Jun 11 14:07:27 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:07:27 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:07:27 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:27 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:27 taft-01 clvmd[7681]: distribute command: XID = 767
>Jun 11 14:07:27 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=767
>Jun 11 14:07:27 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:07:27 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:07:27 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:07:27 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:07:27 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:07:27 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:07:27 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:07:27 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:07:27 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:27 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:27 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:07:27 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:07:27 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:07:27 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:07:27 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:1039f
>Jun 11 14:07:27 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:07:27 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:07:27 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:27 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:27 taft-01 clvmd[7681]: distribute command: XID = 768
>Jun 11 14:07:27 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=768
>Jun 11 14:07:27 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:07:27 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:07:27 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:07:27 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:07:27 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:07:27 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:07:27 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:07:27 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:27 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:27 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:07:27 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:07:27 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:28 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:29 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:30 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:31 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Received recovery work from 1: 978/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Client finishing recovery: 978/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Received recovery work from 1: 979/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Client finishing recovery: 979/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Received recovery work from 1: 980/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Client finishing recovery: 980/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Received recovery work from 1: 981/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Client finishing recovery: 981/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Received recovery work from 1: 982/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Client finishing recovery: 982/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Received recovery work from 1: 983/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Client finishing recovery: 983/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Received recovery work from 1: 984/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Client finishing recovery: 984/xoT7UjpV
>Jun 11 14:07:31 taft-01 kernel: dm-cmirror: Received recovery work from 1: 985/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 985/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Received recovery work from 1: 986/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 986/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Received recovery work from 1: 987/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 987/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Received recovery work from 1: 991/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 991/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Received recovery work from 1: 992/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 992/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Received recovery work from 1: 993/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 993/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Received recovery work from 1: 994/xoT7UjpV
>Jun 11 14:07:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 994/xoT7UjpV
>Jun 11 14:07:32 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1010/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1010/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1011/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1011/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1012/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1012/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1013/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1013/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1014/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1014/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1015/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1015/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1016/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1016/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1017/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1017/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1018/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1018/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1019/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1019/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1020/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 1020/xoT7UjpV
>Jun 11 14:07:33 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1021/xoT7UjpV
>Jun 11 14:07:33 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:34 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 1021/xoT7UjpV
>Jun 11 14:07:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1022/xoT7UjpV
>Jun 11 14:07:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 1022/xoT7UjpV
>Jun 11 14:07:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1023/xoT7UjpV
>Jun 11 14:07:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 1023/xoT7UjpV
>Jun 11 14:07:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1025/xoT7UjpV
>Jun 11 14:07:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 1025/xoT7UjpV
>Jun 11 14:07:34 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1026/xoT7UjpV
>Jun 11 14:07:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 1026/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1038/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1038/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1039/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1039/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1040/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1040/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1041/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1041/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1042/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1042/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1043/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1043/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1044/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1044/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1045/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1045/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1046/xoT7UjpV
>Jun 11 14:07:35 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1046/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1047/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1047/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1048/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1048/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1049/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1049/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1050/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1050/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1051/xoT7UjpV
>Jun 11 14:07:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 1051/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1061/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 1061/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1062/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 1062/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1063/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 1063/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1064/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 1064/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1065/xoT7UjpV
>Jun 11 14:07:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 1065/xoT7UjpV
>Jun 11 14:07:36 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:37 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:38 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:39 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:40 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:41 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:42 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:07:42 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:07:42 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:07:42 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:07:42 taft-01 clvmd[7681]: sync_lock: returning lkid 10146
>Jun 11 14:07:42 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:07:42 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:07:42 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:42 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:42 taft-01 clvmd[7681]: distribute command: XID = 769
>Jun 11 14:07:42 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=769
>Jun 11 14:07:42 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:07:42 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:07:42 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:07:42 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:07:42 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:07:42 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:07:42 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:07:42 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:07:42 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:42 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:42 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:07:42 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:43 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:07:43 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:07:43 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:07:43 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:10146
>Jun 11 14:07:43 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:07:43 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:07:43 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:43 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:43 taft-01 clvmd[7681]: distribute command: XID = 770
>Jun 11 14:07:43 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=770
>Jun 11 14:07:43 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:07:43 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:07:43 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:07:43 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:07:43 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:07:43 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:07:43 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:07:43 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:07:43 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:07:43 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:07:43 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:07:43 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:43 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1250/xoT7UjpV
>Jun 11 14:07:43 taft-01 kernel: dm-cmirror: Client finishing recovery: 1250/xoT7UjpV
>Jun 11 14:07:44 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:46 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:46 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:47 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:49 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:49 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:49 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1344/xoT7UjpV
>Jun 11 14:07:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 1344/xoT7UjpV
>Jun 11 14:07:50 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1346/xoT7UjpV
>Jun 11 14:07:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 1346/xoT7UjpV
>Jun 11 14:07:50 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1347/xoT7UjpV
>Jun 11 14:07:50 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 1347/xoT7UjpV
>Jun 11 14:07:51 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1351/xoT7UjpV
>Jun 11 14:07:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 1351/xoT7UjpV
>Jun 11 14:07:51 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1352/xoT7UjpV
>Jun 11 14:07:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 1352/xoT7UjpV
>Jun 11 14:07:51 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1353/xoT7UjpV
>Jun 11 14:07:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 1353/xoT7UjpV
>Jun 11 14:07:51 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1354/xoT7UjpV
>Jun 11 14:07:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 1354/xoT7UjpV
>Jun 11 14:07:52 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:52 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:53 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1375/xoT7UjpV
>Jun 11 14:07:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 1375/xoT7UjpV
>Jun 11 14:07:53 taft-01 qarshd[20076]: Nothing to do
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1384/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 1384/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1385/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 1385/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1386/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 1386/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1387/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 1387/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1388/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 1388/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1389/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 1389/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1390/xoT7UjpV
>Jun 11 14:07:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 1390/xoT7UjpV
>Jun 11 14:07:55 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:07:55 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1392/xoT7UjpV
>Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 1392/xoT7UjpV
>Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1393/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 1393/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1394/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 1394/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1395/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 1395/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1396/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 1396/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1397/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 1397/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1400/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 1400/xoT7UjpV >Jun 11 14:07:55 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1401/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1401/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1402/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1402/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1403/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1403/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1404/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1404/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1406/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1406/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Received recovery 
work from 1: 1407/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1407/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1408/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1408/xoT7UjpV >Jun 11 14:07:56 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1413/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1413/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1414/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1414/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1415/xoT7UjpV >Jun 11 14:07:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 1415/xoT7UjpV >Jun 11 14:07:57 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1428/xoT7UjpV >Jun 11 14:07:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 1428/xoT7UjpV >Jun 11 14:07:57 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1429/xoT7UjpV >Jun 11 14:07:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 1429/xoT7UjpV >Jun 11 14:07:57 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1430/xoT7UjpV >Jun 11 14:07:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 1430/xoT7UjpV >Jun 11 14:07:57 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1431/xoT7UjpV >Jun 11 14:07:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 1431/xoT7UjpV >Jun 11 14:07:58 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:07:58 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:07:58 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:07:58 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:07:58 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0) >Jun 11 14:07:58 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0 >Jun 11 14:07:58 taft-01 clvmd[7681]: sync_lock: returning lkid 20253 >Jun 11 14:07:58 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:07:58 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:07:58 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:07:58 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:07:58 taft-01 clvmd[7681]: distribute command: XID = 771 >Jun 11 14:07:58 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=771 >Jun 11 14:07:58 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:07:58 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0 >Jun 11 14:07:58 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:07:58 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:07:58 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:07:58 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:07:58 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:07:58 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:07:58 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:07:58 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:07:58 taft-01 clvmd[7681]: Send local reply >Jun 11 14:07:58 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:07:58 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:07:58 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0) >Jun 11 14:07:58 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:20253 >Jun 11 14:07:58 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:07:58 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:07:58 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:07:58 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:07:58 taft-01 clvmd[7681]: distribute command: XID = 772 >Jun 11 14:07:58 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=772 >Jun 11 14:07:58 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:07:58 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0 >Jun 11 14:07:58 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:07:58 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:07:58 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:07:58 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:07:58 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:07:58 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:07:58 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:07:58 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:07:58 taft-01 clvmd[7681]: Send local reply >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1439/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Client finishing recovery: 1439/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1440/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Client finishing recovery: 1440/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1444/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Client finishing recovery: 1444/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1445/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Client finishing recovery: 1445/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1446/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Client finishing recovery: 1446/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1447/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Client finishing recovery: 1447/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1448/xoT7UjpV >Jun 11 14:07:58 taft-01 kernel: dm-cmirror: Client finishing recovery: 1448/xoT7UjpV >Jun 11 14:07:59 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1460/xoT7UjpV >Jun 11 14:07:59 taft-01 kernel: dm-cmirror: Client finishing recovery: 1460/xoT7UjpV >Jun 11 14:07:59 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1461/xoT7UjpV >Jun 11 14:07:59 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:07:59 taft-01 kernel: dm-cmirror: Client finishing recovery: 1461/xoT7UjpV >Jun 11 14:07:59 taft-01 kernel: dm-cmirror: Received recovery work 
from 1: 1462/xoT7UjpV >Jun 11 14:07:59 taft-01 kernel: dm-cmirror: Client finishing recovery: 1462/xoT7UjpV >Jun 11 14:07:59 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1465/xoT7UjpV >Jun 11 14:08:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 1465/xoT7UjpV >Jun 11 14:08:00 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1466/xoT7UjpV >Jun 11 14:08:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 1466/xoT7UjpV >Jun 11 14:08:01 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:08:01 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:08:02 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:08:04 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:08:04 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:08:05 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:08:07 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:08:07 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1544/xoT7UjpV >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 1544/xoT7UjpV >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1545/xoT7UjpV >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 1545/xoT7UjpV >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1546/xoT7UjpV >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 1546/xoT7UjpV >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1547/xoT7UjpV >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 1547/xoT7UjpV >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1548/xoT7UjpV >Jun 11 14:08:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 1548/xoT7UjpV >Jun 11 14:08:08 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1570/xoT7UjpV >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 
1570/xoT7UjpV >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1571/xoT7UjpV >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 1571/xoT7UjpV >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1575/xoT7UjpV >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 1575/xoT7UjpV >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1576/xoT7UjpV >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 1576/xoT7UjpV >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1577/xoT7UjpV >Jun 11 14:08:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 1577/xoT7UjpV >Jun 11 14:08:10 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:08:10 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1580/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 1580/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1581/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 1581/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1582/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 1582/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1583/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 1583/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1584/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 1584/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1585/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 1585/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1586/xoT7UjpV >Jun 11 14:08:10 taft-01 
kernel: dm-cmirror: Client finishing recovery: 1586/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1587/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 1587/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1590/xoT7UjpV >Jun 11 14:08:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 1590/xoT7UjpV >Jun 11 14:08:11 taft-01 kernel: dm-cmirror: Received recovery work from 1: 1594/xoT7UjpV >Jun 11 14:08:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 1594/xoT7UjpV >Jun 11 14:08:11 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:08:11 taft-01 lvm[7565]: helter_skelter-syncd_secondary_core_2legs_1 is now in-sync >Jun 11 14:08:13 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:08:13 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:08:13 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:08:13 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:08:13 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0) >Jun 11 14:08:13 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0 >Jun 11 14:08:13 taft-01 clvmd[7681]: sync_lock: returning lkid 10342 >Jun 11 14:08:13 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:08:13 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:08:13 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:08:13 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:08:13 taft-01 clvmd[7681]: distribute command: XID = 773 >Jun 11 14:08:13 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=773 >Jun 11 14:08:13 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:08:13 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0 >Jun 11 14:08:13 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:08:13 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:08:13 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:08:13 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:08:13 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:08:13 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:08:13 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:08:13 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:08:13 taft-01 clvmd[7681]: Send local reply >Jun 11 14:08:13 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:08:13 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:08:13 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:08:13 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:08:13 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:08:13 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:08:13 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:08:13 taft-01 clvmd[7681]: distribute command: XID = 774 >Jun 11 14:08:13 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=774 >Jun 11 14:08:13 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:08:13 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:08:13 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:08:13 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:08:13 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0 >Jun 11 14:08:13 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:08:13 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:08:13 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:08:13 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:08:13 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:08:13 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:08:13 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:08:13 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:08:13 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:08:13 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:08:13 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:08:13 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:08:13 taft-01 clvmd[7681]: Send local reply >Jun 11 14:08:13 taft-01 clvmd[7681]: Read on local socket 5, len = 85 >Jun 11 14:08:13 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:08:13 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:08:13 taft-01 clvmd[7681]: pre_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR ) >Jun 11 14:08:13 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV' mode:4 flags=5 >Jun 11 14:08:13 taft-01 clvmd[7681]: sync_lock: returning lkid 103bf >Jun 11 14:08:13 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:08:13 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:08:13 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:08:13 taft-01 clvmd[7681]: distribute command: XID = 775 >Jun 11 14:08:13 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98502850, len=85, csid=(nil), xid=775 >Jun 11 14:08:13 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:08:13 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:08:13 taft-01 clvmd[7681]: process_local_command: LOCK_LV (0x32) msg=0x2a985028b0, msglen =85, client=0x2a98502dc0 >Jun 11 14:08:13 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR ) >Jun 11 14:08:13 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: LOG INFO: >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: uuid: LVM-1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: uuid_ref : 1 >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: log type : core >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: ?region_count: 1600 >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: ?sync_count : 0 >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: ?sync_search : 0 >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: in_sync : YES >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: suspended 
: NO >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: recovery_halted : NO >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: server_id : 1 >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: server_valid: YES >Jun 11 14:08:13 taft-01 lvm[7565]: No longer monitoring mirror device helter_skelter-syncd_secondary_core_2legs_1 for events >Jun 11 14:08:13 taft-01 lvm[7565]: Unlocking memory >Jun 11 14:08:13 taft-01 lvm[7565]: memlock_count dec to 0 >Jun 11 14:08:13 taft-01 lvm[7565]: Dumping persistent device cache to /etc/lvm/cache/.cache >Jun 11 14:08:13 taft-01 lvm[7565]: Locking /etc/lvm/cache/.cache (F_WRLCK, 1) >Jun 11 14:08:13 taft-01 lvm[7565]: Unlocking fd 11 >Jun 11 14:08:13 taft-01 lvm[7565]: Wiping internal VG cache >Jun 11 14:08:13 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes >Jun 11 14:08:13 taft-01 clvmd[7681]: Got 1 replies, expecting: 4 >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: cluster_presuspend: recovery halted on xoT7UjpV(1) >Jun 11 14:08:13 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes >Jun 11 14:08:13 taft-01 clvmd[7681]: Got 2 replies, expecting: 4 >Jun 11 14:08:13 taft-01 clvmd[7681]: Command return is 0 >Jun 11 14:08:13 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:08:13 taft-01 kernel: dm-cmirror: cluster_postsuspend >Jun 11 14:08:13 taft-01 clvmd[7681]: Got 3 replies, expecting: 4 >Jun 11 14:08:13 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:08:14 taft-01 qarshd[20076]: Nothing to do >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_MASTER_LEAVING(13): (xoT7UjpV) >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 0 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV) >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 57005 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:08:15 
taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (xoT7UjpV) >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 57005 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (xoT7UjpV) >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 57005 >Jun 11 14:08:15 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:08:15 taft-01 clvmd[7681]: Got 4 replies, expecting: 4 >Jun 11 14:08:15 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:08:15 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:08:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:08:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:08:15 taft-01 clvmd[7681]: Send local reply >Jun 11 14:08:15 taft-01 clvmd[7681]: Read on local socket 5, len = 85 >Jun 11 14:08:15 taft-01 clvmd[7681]: check_all_clvmds_running >Jun 11 14:08:15 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:08:15 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:08:15 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:08:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:08:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:08:15 taft-01 clvmd[7681]: distribute command: XID = 776 >Jun 11 14:08:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. 
client=0x2a98502dc0, msg=0x2a98502b30, len=85, csid=(nil), xid=776 >Jun 11 14:08:15 taft-01 clvmd[7681]: Sending message to all cluster nodes >Jun 11 14:08:15 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:08:15 taft-01 clvmd[7681]: process_local_command: LOCK_LV (0x32) msg=0x2a98502b90, msglen =85, client=0x2a98502dc0 >Jun 11 14:08:15 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR ) >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV) >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 4 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 4 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 2 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV) >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 3 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 3 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 3 >Jun 11 14:08:15 taft-01 [7565]: Monitoring mirror device helter_skelter-syncd_secondary_core_2legs_1 for events >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (xoT7UjpV) >Jun 11 14:08:15 taft-01 lvm[7565]: Loading config file: /etc/lvm/lvm.conf >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 4 >Jun 11 14:08:15 taft-01 lvm[7565]: Opened /etc/lvm/lvm.conf RO >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 3 >Jun 11 14:08:15 taft-01 lvm[7565]: Closed /etc/lvm/lvm.conf >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 2 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting log/syslog to 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (xoT7UjpV) >Jun 11 14:08:15 taft-01 lvm[7565]: Setting log/level to 0 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 3 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting log/verbose to 0 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 3 >Jun 11 14:08:15 
taft-01 lvm[7565]: Setting log/indent to 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 3 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting log/prefix to >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (xoT7UjpV) >Jun 11 14:08:15 taft-01 lvm[7565]: Setting log/command_names to 0 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 4 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting global/test to 0 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 3 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting log/overwrite to 0 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:08:15 taft-01 lvm[7565]: log/activation not found in config: defaulting to 0 >Jun 11 14:08:15 taft-01 lvm[7565]: Logging initialised at Wed Jun 11 14:08:15 2008 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting global/umask to 63 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting devices/dir to /dev >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (xoT7UjpV) >Jun 11 14:08:15 taft-01 lvm[7565]: Setting global/proc to /proc >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 3 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting global/activation to 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 3 >Jun 11 14:08:15 taft-01 lvm[7565]: global/suffix not found in config: defaulting to 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 1 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting global/units to h >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LOG INFO: >Jun 11 14:08:15 taft-01 lvm[7565]: Setting activation/readahead to auto >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: uuid: LVM-1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV >Jun 11 14:08:15 taft-01 lvm[7565]: devices/preferred_names not found in config file: using built-in preferences >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: uuid_ref : 1 >Jun 11 14:08:15 taft-01 lvm[7565]: Matcher built with 3 dfa states >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: log type : core >Jun 
11 14:08:15 taft-01 lvm[7565]: Setting devices/ignore_suspended_devices to 0 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: ?region_count: 1600 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting devices/cache_dir to /etc/lvm/cache >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: ?sync_count : 0 >Jun 11 14:08:15 taft-01 lvm[7565]: Setting devices/write_cache_state to 1 >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: ?sync_search : 0 >Jun 11 14:08:15 taft-01 lvm[7565]: Opened /etc/lvm/cache/.cache RO >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: in_sync : YES >Jun 11 14:08:15 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000-part1: Added to device cache >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: suspended : YES >Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram11: Added to device cache >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: recovery_halted : YES >Jun 11 14:08:15 taft-01 lvm[7565]: /dev/VolGroup00/LogVol01: Added to device cache >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: server_id : 57005 >Jun 11 14:08:15 taft-01 lvm[7565]: /dev/mapper/helter_skelter-syncd_secondary_core_2legs_2: Added to device cache >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: server_valid: NO >Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram10: Added to device cache >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: cluster_resume: Setting recovery_halted = 0 >Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram: Added to device cache >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: server_id=dead, server_valid=1, xoT7UjpV >Jun 11 14:08:15 taft-01 lvm[7565]: /dev/helter_skelter/syncd_secondary_core_2legs_2: Aliased to /dev/mapper/helter_skelter-syncd_secondary_core_2legs_2 in device cache (preferred name) >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: trigger = LRT_GET_RESYNC_WORK >Jun 11 14:08:15 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000-part1: Added to device cache >Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV) 
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/VolGroup00/LogVol00: Added to device cache
>Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 2
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram12: Added to device cache
>Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 2
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram6: Added to device cache
>Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 0
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000-part1: Added to device cache
>Jun 11 14:08:15 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (xoT7UjpV)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/mapper/helter_skelter-syncd_secondary_core_2legs_1: Added to device cache
>Jun 11 14:08:15 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram13: Added to device cache
>Jun 11 14:08:15 taft-01 kernel: dm-cmirror: co-ordinator: 1
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram14: Added to device cache
>Jun 11 14:08:15 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000-part1: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/helter_skelter/syncd_secondary_core_2legs_1: Aliased to /dev/mapper/helter_skelter-syncd_secondary_core_2legs_1 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram5: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/mapper/VolGroup00-LogVol00: Aliased to /dev/VolGroup00/LogVol00 in device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part1: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram1: Aliased to /dev/ram in device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/sda1: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part1 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/sdb1: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/sdc1: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/sdg1: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/sdd1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0003000000000000-part1 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/sde1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0004000000000000-part1 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/sdf1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0005000000000000-part1 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/sdh1: Aliased to /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0007000000000000-part1 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ramdisk: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/mapper/VolGroup00-LogVol01: Aliased to /dev/VolGroup00/LogVol01 in device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/dm-4: Aliased to /dev/helter_skelter/syncd_secondary_core_2legs_1 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/dm-7: Aliased to /dev/helter_skelter/syncd_secondary_core_2legs_2 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram0: Aliased to /dev/ramdisk in device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/root: Aliased to /dev/VolGroup00/LogVol00 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/dm-1: Aliased to /dev/VolGroup00/LogVol01 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part2: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram2: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/sda2: Aliased to /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:2:0:0-part2 in device cache (preferred name)
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram8: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram9: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/dm-0: Aliased to /dev/root in device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram15: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0001000000000000-part1: Aliased to /dev/sdb1 in device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0006000000000000-part1: Aliased to /dev/sdg1 in device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram3: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram4: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/ram7: Added to device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: /dev/disk/by-path/pci-0000:0b:02.0-fc-0x500805f3000a05b1:0x0002000000000000-part1: Aliased to /dev/sdc1 in device cache
>Jun 11 14:08:15 taft-01 lvm[7565]: Loaded persistent filter cache from /etc/lvm/cache/.cache
>Jun 11 14:08:15 taft-01 lvm[7565]: Closed /etc/lvm/cache/.cache
>Jun 11 14:08:15 taft-01 lvm[7565]: Setting activation/reserved_stack to 256
>Jun 11 14:08:15 taft-01 lvm[7565]: Setting activation/reserved_memory to 8192
>Jun 11 14:08:15 taft-01 lvm[7565]: Setting activation/process_priority to -18
>Jun 11 14:08:15 taft-01 lvm[7565]: Initialised format: lvm1
>Jun 11 14:08:15 taft-01 lvm[7565]: Initialised format: pool
>Jun 11 14:08:15 taft-01 lvm[7565]: Initialised format: lvm2
>Jun 11 14:08:15 taft-01 lvm[7565]: global/format not found in config: defaulting to lvm2
>Jun 11 14:08:15 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm1
>Jun 11 14:08:15 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_pool
>Jun 11 14:08:15 taft-01 lvm[7565]: lvmcache: initialised VG #orphans_lvm2
>Jun 11 14:08:15 taft-01 lvm[7565]: Initialised segtype: striped
>Jun 11 14:08:15 taft-01 lvm[7565]: Initialised segtype: zero
>Jun 11 14:08:15 taft-01 lvm[7565]: Initialised segtype: error
>Jun 11 14:08:15 taft-01 lvm[7565]: Setting dmeventd/snapshot_library to libdevmapper-event-lvm2snapshot.so
>Jun 11 14:08:15 taft-01 lvm[7565]: Setting global/library_dir to /usr/lib64
>Jun 11 14:08:15 taft-01 lvm[7565]: Initialised segtype: snapshot
>Jun 11 14:08:15 taft-01 lvm[7565]: Initialised segtype: mirror
>Jun 11 14:08:15 taft-01 lvm[7565]: Setting backup/retain_days to 30
>Jun 11 14:08:15 taft-01 lvm[7565]: Setting backup/retain_min to 10
>Jun 11 14:08:15 taft-01 lvm[7565]: Setting backup/archive_dir to /etc/lvm/archive
>Jun 11 14:08:15 taft-01 lvm[7565]: Setting backup/backup_dir to /etc/lvm/backup
>Jun 11 14:08:15 taft-01 lvm[7565]: Locking memory
>Jun 11 14:08:15 taft-01 lvm[7565]: memlock_count inc to 1
>Jun 11 14:08:15 taft-01 lvm[7565]: Parsing: _memlock_inc
>Jun 11 14:08:15 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got 1 replies, expecting: 4
>Jun 11 14:08:15 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:15 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got 2 replies, expecting: 4
>Jun 11 14:08:15 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got 3 replies, expecting: 4
>Jun 11 14:08:15 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got 4 replies, expecting: 4
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:15 taft-01 clvmd[7681]: post_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:08:15 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6W7S06cTNgT3HYd4eor9U75sdxoT7UjpV' mode:1 flags=4
>Jun 11 14:08:15 taft-01 clvmd[7681]: sync_lock: returning lkid 103bf
>Jun 11 14:08:15 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:15 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:15 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:15 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:08:15 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:10342
>Jun 11 14:08:15 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:15 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:15 taft-01 clvmd[7681]: distribute command: XID = 777
>Jun 11 14:08:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=777
>Jun 11 14:08:15 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:15 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0
>Jun 11 14:08:15 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:08:15 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:08:15 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:15 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:15 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:15 taft-01 clvmd[7681]: Read on local socket 5, len = 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: EOF on local socket: inprogress=0
>Jun 11 14:08:15 taft-01 clvmd[7681]: Waiting for child thread
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:15 taft-01 clvmd[7681]: Subthread finished
>Jun 11 14:08:15 taft-01 qarshd[20076]: That's enough
>Jun 11 14:08:15 taft-01 clvmd[7681]: Joined child thread
>Jun 11 14:08:15 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client
>Jun 11 14:08:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=777
>Jun 11 14:08:15 taft-01 clvmd[7681]: process_work_item: free fd 5
>Jun 11 14:08:15 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:15 taft-01 qarshd[20131]: Talking to peer 10.15.80.47:51709
>Jun 11 14:08:15 taft-01 qarshd[20131]: Running cmdline: lvconvert --corelog -m 1 helter_skelter/syncd_secondary_core_2legs_2 /dev/sdg1:0-1000 /dev/sdf1:0-1000
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got new connection on fd 5
>Jun 11 14:08:15 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:08:15 taft-01 clvmd[7681]: creating pipe, [10, 11]
>Jun 11 14:08:15 taft-01 clvmd[7681]: Creating pre&post thread
>Jun 11 14:08:15 taft-01 clvmd[7681]: Created pre&post thread, state = 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0
>Jun 11 14:08:15 taft-01 clvmd[7681]: Sub thread ready for work.
>Jun 11 14:08:15 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:08:15 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:08:15 taft-01 clvmd[7681]: sync_lock: returning lkid 2039d
>Jun 11 14:08:15 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:15 taft-01 clvmd[7681]: distribute command: XID = 778
>Jun 11 14:08:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=778
>Jun 11 14:08:15 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:15 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0
>Jun 11 14:08:15 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:08:15 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:08:15 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:15 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:15 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:15 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:16 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:16 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:16 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:08:16 taft-01 clvmd[7681]: check_all_clvmds_running
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:16 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:16 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: distribute command: XID = 779
>Jun 11 14:08:16 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=779
>Jun 11 14:08:16 taft-01 clvmd[7681]: Sending message to all cluster nodes
>Jun 11 14:08:16 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:16 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 1 replies, expecting: 4
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 2 replies, expecting: 4
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 3 replies, expecting: 4
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 4 replies, expecting: 4
>Jun 11 14:08:16 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:16 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:16 taft-01 clvmd[7681]: Read on local socket 5, len = 85
>Jun 11 14:08:16 taft-01 clvmd[7681]: check_all_clvmds_running
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:16 taft-01 clvmd[7681]: pre_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:08:16 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ' mode:4 flags=5
>Jun 11 14:08:16 taft-01 clvmd[7681]: sync_lock: returning lkid 20389
>Jun 11 14:08:16 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:16 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: distribute command: XID = 780
>Jun 11 14:08:16 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98502b30, len=85, csid=(nil), xid=780
>Jun 11 14:08:16 taft-01 clvmd[7681]: Sending message to all cluster nodes
>Jun 11 14:08:16 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:16 taft-01 clvmd[7681]: process_local_command: LOCK_LV (0x32) msg=0x2a98502b90, msglen =85, client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (none)
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 4
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 4
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 2
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (none)
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 1
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (none)
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 4
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 3
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 2
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (none)
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 3
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 1
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 3
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (none)
>Jun 11 14:08:16 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 1
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 1 replies, expecting: 4
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: Creating 90GcsfRZ (1)
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 2 replies, expecting: 4
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: server_id=dead, server_valid=0, 90GcsfRZ
>Jun 11 14:08:16 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: trigger = LRT_GET_SYNC_COUNT
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (90GcsfRZ)
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 3 replies, expecting: 4
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 2
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 2
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 4 replies, expecting: 4
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (90GcsfRZ)
>Jun 11 14:08:16 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 4
>Jun 11 14:08:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 3
>Jun 11 14:08:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:08:16 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (90GcsfRZ)
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 3
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 1
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 3
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (90GcsfRZ)
>Jun 11 14:08:16 taft-01 clvmd[7681]: Read on local socket 5, len = 85
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:08:16 taft-01 clvmd[7681]: check_all_clvmds_running
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 1
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:08:16 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (90GcsfRZ)
>Jun 11 14:08:16 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: starter : 2
>Jun 11 14:08:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: co-ordinator: 1
>Jun 11 14:08:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: node_count : 4
>Jun 11 14:08:16 taft-01 clvmd[7681]: distribute command: XID = 781
>Jun 11 14:08:16 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98502850, len=85, csid=(nil), xid=781
>Jun 11 14:08:16 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:16 taft-01 clvmd[7681]: process_local_command: LOCK_LV (0x32) msg=0x2a985028b0, msglen =85, client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: cluster_resume: Setting recovery_halted = 0
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 0/90GcsfRZ
>Jun 11 14:08:16 taft-01 clvmd[7681]: Sending message to all cluster nodes
>Jun 11 14:08:16 taft-01 lvm[7565]: Monitoring mirror device helter_skelter-syncd_secondary_core_2legs_2 for events
>Jun 11 14:08:16 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 1 replies, expecting: 4
>Jun 11 14:08:16 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 2 replies, expecting: 4
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 3 replies, expecting: 4
>Jun 11 14:08:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 0/90GcsfRZ
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 4 replies, expecting: 4
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:16 taft-01 clvmd[7681]: post_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:08:16 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ' mode:1 flags=4
>Jun 11 14:08:16 taft-01 clvmd[7681]: sync_lock: returning lkid 20389
>Jun 11 14:08:16 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:16 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:16 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:08:16 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:2039d
>Jun 11 14:08:16 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:16 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: distribute command: XID = 782
>Jun 11 14:08:16 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=782
>Jun 11 14:08:16 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:16 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:08:16 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:08:16 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:16 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:16 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:16 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:18 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:19 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:19 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:20 taft-01 kernel: dm-cmirror: Received recovery work from 3: 679/90GcsfRZ
>Jun 11 14:08:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 679/90GcsfRZ
>Jun 11 14:08:20 taft-01 kernel: dm-cmirror: Received recovery work from 3: 72/90GcsfRZ
>Jun 11 14:08:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 72/90GcsfRZ
>Jun 11 14:08:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 762/90GcsfRZ
>Jun 11 14:08:21 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 762/90GcsfRZ
>Jun 11 14:08:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 74/90GcsfRZ
>Jun 11 14:08:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 74/90GcsfRZ
>Jun 11 14:08:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 763/90GcsfRZ
>Jun 11 14:08:22 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:22 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 763/90GcsfRZ
>Jun 11 14:08:22 taft-01 kernel: dm-cmirror: Received recovery work from 3: 686/90GcsfRZ
>Jun 11 14:08:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 686/90GcsfRZ
>Jun 11 14:08:22 taft-01 kernel: dm-cmirror: Received recovery work from 3: 687/90GcsfRZ
>Jun 11 14:08:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 687/90GcsfRZ
>Jun 11 14:08:22 taft-01 kernel: dm-cmirror: Received recovery work from 3: 689/90GcsfRZ
>Jun 11 14:08:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 689/90GcsfRZ
>Jun 11 14:08:23 taft-01 kernel: dm-cmirror: Received recovery work from 3: 180/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 180/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 181/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 181/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 182/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 182/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 183/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 183/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 184/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 184/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 185/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 185/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 186/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 186/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 187/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 187/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 188/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 188/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 189/xoT7UjpV
>Jun 11 14:08:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 189/xoT7UjpV
>Jun 11 14:08:24 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:25 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:25 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Received recovery work from 3: 203/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 203/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Received recovery work from 3: 204/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 204/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Received recovery work from 3: 205/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Received recovery work from 3: 698/90GcsfRZ
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 205/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 698/90GcsfRZ
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Received recovery work from 3: 206/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 206/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Received recovery work from 3: 207/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 207/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Received recovery work from 3: 208/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Received recovery work from 3: 87/90GcsfRZ
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 208/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Received recovery work from 3: 209/xoT7UjpV
>Jun 11 14:08:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 87/90GcsfRZ
>Jun 11 14:08:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 209/xoT7UjpV
>Jun 11 14:08:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 210/xoT7UjpV
>Jun 11 14:08:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 210/xoT7UjpV
>Jun 11 14:08:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 211/xoT7UjpV
>Jun 11 14:08:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 211/xoT7UjpV
>Jun 11 14:08:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 212/xoT7UjpV
>Jun 11 14:08:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 212/xoT7UjpV
>Jun 11 14:08:27 taft-01 kernel: dm-cmirror: Received recovery work from 3: 820/90GcsfRZ
>Jun 11 14:08:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 820/90GcsfRZ
>Jun 11 14:08:27 taft-01 kernel: dm-cmirror: Received recovery work from 3: 703/90GcsfRZ
>Jun 11 14:08:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 703/90GcsfRZ
>Jun 11 14:08:27 taft-01 kernel: dm-cmirror: Received recovery work from 3: 704/90GcsfRZ
>Jun 11 14:08:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 704/90GcsfRZ
>Jun 11 14:08:27 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:28 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:28 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:28 taft-01 kernel: dm-cmirror: Received recovery work from 3: 708/90GcsfRZ
>Jun 11 14:08:28 taft-01 kernel: dm-cmirror: Client finishing recovery: 708/90GcsfRZ
>Jun 11 14:08:29 taft-01 kernel: dm-cmirror: Received recovery work from 3: 709/90GcsfRZ
>Jun 11 14:08:29 taft-01 kernel: dm-cmirror: Client finishing recovery: 709/90GcsfRZ
>Jun 11 14:08:29 taft-01 kernel: dm-cmirror: Received recovery work from 3: 710/90GcsfRZ
>Jun 11 14:08:29 taft-01 kernel: dm-cmirror: Client finishing recovery: 710/90GcsfRZ
>Jun 11 14:08:29 taft-01 kernel: dm-cmirror: Received recovery work from 3: 711/90GcsfRZ
>Jun 11 14:08:29 taft-01 kernel: dm-cmirror: Client finishing recovery: 711/90GcsfRZ
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Received recovery work from 3: 273/xoT7UjpV
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Client finishing recovery: 273/xoT7UjpV
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Received recovery work from 3: 274/xoT7UjpV
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Client finishing recovery: 274/xoT7UjpV
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Received recovery work from 3: 275/xoT7UjpV
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Client finishing recovery: 275/xoT7UjpV
>Jun 11 14:08:30 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Received recovery work from 3: 714/90GcsfRZ
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Received recovery work from 3: 276/xoT7UjpV
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Client finishing recovery: 714/90GcsfRZ
>Jun 11 14:08:30 taft-01 kernel: dm-cmirror: Client finishing recovery: 276/xoT7UjpV
>Jun 11 14:08:31 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:31 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:31 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:08:31 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:31 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:08:31 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:08:31 taft-01 clvmd[7681]: sync_lock: returning lkid 200dd
>Jun 11 14:08:31 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:31 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:31 taft-01 clvmd[7681]: distribute command: XID = 783
>Jun 11 14:08:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=783
>Jun 11 14:08:31 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:31 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:08:31 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:08:31 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:31 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:08:31 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:31 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:31 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:31 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:31 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:08:31 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:31 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:08:31 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:200dd
>Jun 11 14:08:31 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:31 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:31 taft-01 clvmd[7681]: distribute command: XID = 784
>Jun 11 14:08:31 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=784
>Jun 11 14:08:31 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:31 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:08:31 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:08:31 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:31 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:08:31 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:31 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:31 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:31 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:31 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:31 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:31 taft-01 kernel: dm-cmirror: Received recovery work from 3: 716/90GcsfRZ
>Jun 11 14:08:31 taft-01 kernel: dm-cmirror: Client finishing recovery: 716/90GcsfRZ
>Jun 11 14:08:31 taft-01 kernel: dm-cmirror: Received recovery work from 3: 717/90GcsfRZ
>Jun 11 14:08:31 taft-01 kernel: dm-cmirror: Client finishing recovery: 717/90GcsfRZ
>Jun 11 14:08:32 taft-01 kernel: dm-cmirror: Received recovery work from 3: 718/90GcsfRZ
>Jun 11 14:08:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 718/90GcsfRZ
>Jun 11 14:08:32 taft-01 kernel: dm-cmirror: Received recovery work from 3: 719/90GcsfRZ
>Jun 11 14:08:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 719/90GcsfRZ
>Jun 11 14:08:32 taft-01 kernel: dm-cmirror: Received recovery work from 3: 720/90GcsfRZ
>Jun 11 14:08:32 taft-01 kernel: dm-cmirror: Client finishing recovery: 720/90GcsfRZ
>Jun 11 14:08:33 taft-01 kernel: dm-cmirror: Received recovery work from 3: 722/90GcsfRZ
>Jun 11 14:08:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 722/90GcsfRZ
>Jun 11 14:08:33 taft-01 kernel: dm-cmirror: Received recovery work from 3: 723/90GcsfRZ
>Jun 11 14:08:33 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 723/90GcsfRZ
>Jun 11 14:08:33 taft-01 kernel: dm-cmirror: Received recovery work from 3: 841/90GcsfRZ
>Jun 11 14:08:33 taft-01 kernel: dm-cmirror: Client finishing recovery: 841/90GcsfRZ
>Jun 11 14:08:34 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:34 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:34 taft-01 kernel: dm-cmirror: Received recovery work from 3: 724/90GcsfRZ
>Jun 11 14:08:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 724/90GcsfRZ
>Jun 11 14:08:34 taft-01 kernel: dm-cmirror: Received recovery work from 3: 843/90GcsfRZ
>Jun 11 14:08:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 843/90GcsfRZ
>Jun 11 14:08:34 taft-01 kernel: dm-cmirror: Received recovery work from 3: 903/90GcsfRZ
>Jun 11 14:08:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 903/90GcsfRZ
>Jun 11 14:08:34 taft-01 kernel: dm-cmirror: Received recovery work from 3: 110/90GcsfRZ
>Jun 11 14:08:34 taft-01 kernel: dm-cmirror: Client finishing recovery: 110/90GcsfRZ
>Jun 11 14:08:34 taft-01 kernel: dm-cmirror: Received recovery work from 3: 727/90GcsfRZ
>Jun 11 14:08:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 727/90GcsfRZ
>Jun 11 14:08:35 taft-01 kernel: dm-cmirror: Received recovery work from 3: 845/90GcsfRZ
>Jun 11 14:08:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 845/90GcsfRZ
>Jun 11 14:08:35 taft-01 kernel: dm-cmirror: Received recovery work from 3: 728/90GcsfRZ
>Jun 11 14:08:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 728/90GcsfRZ
>Jun 11 14:08:35 taft-01 kernel: dm-cmirror: Received recovery work from 3: 847/90GcsfRZ
>Jun 11 14:08:35 taft-01 kernel: dm-cmirror: Client finishing recovery: 847/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Received recovery work from 3: 908/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 908/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Received recovery work from 3: 731/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 731/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Received recovery work from 3: 849/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 849/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Received recovery work from 3: 850/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 850/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Received recovery work from 3: 125/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 125/90GcsfRZ
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Received recovery work from 3: 733/90GcsfRZ
>Jun 11 14:08:36 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:36 taft-01 kernel: dm-cmirror: Client finishing recovery: 733/90GcsfRZ
>Jun 11 14:08:37 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:37 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Received recovery work from 3: 734/90GcsfRZ
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 734/90GcsfRZ
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Received recovery work from 3: 735/90GcsfRZ
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 735/90GcsfRZ
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Received recovery work from 3: 672/90GcsfRZ
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 672/90GcsfRZ
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Received recovery work from 3: 139/90GcsfRZ
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 139/90GcsfRZ
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Received recovery work from 3: 854/90GcsfRZ
>Jun 11 14:08:37 taft-01 kernel: dm-cmirror: Client finishing recovery: 854/90GcsfRZ
>Jun 11 14:08:38 taft-01 kernel: dm-cmirror: Received recovery work from 3: 916/90GcsfRZ
>Jun 11 14:08:38 taft-01 kernel: dm-cmirror: Client finishing recovery: 916/90GcsfRZ
>Jun 11 14:08:38 taft-01 kernel: dm-cmirror: Received recovery work from 3: 677/90GcsfRZ
>Jun 11 14:08:39 taft-01 kernel: dm-cmirror: Client finishing recovery: 677/90GcsfRZ
>Jun 11 14:08:39 taft-01 kernel: dm-cmirror: Received recovery work from 3: 154/90GcsfRZ
>Jun 11 14:08:39 taft-01 kernel: dm-cmirror: Client finishing recovery: 154/90GcsfRZ
>Jun 11 14:08:39 taft-01 kernel: dm-cmirror: Received recovery work from 3: 155/90GcsfRZ
>Jun 11 14:08:39 taft-01 kernel: dm-cmirror: Client finishing recovery: 155/90GcsfRZ
>Jun 11 14:08:39 taft-01 kernel: dm-cmirror: Received recovery work from 3: 156/90GcsfRZ
>Jun 11 14:08:39 taft-01 kernel: dm-cmirror: Client finishing recovery: 156/90GcsfRZ
>Jun 11 14:08:39 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:40 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:40 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Received recovery work from 3: 466/xoT7UjpV
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 466/xoT7UjpV
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Received recovery work from 3: 467/xoT7UjpV
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 467/xoT7UjpV
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Received recovery work from 3: 468/xoT7UjpV
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 468/xoT7UjpV
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Received recovery work from 3: 469/xoT7UjpV
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 469/xoT7UjpV
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Received recovery work from 3: 470/xoT7UjpV
>Jun 11 14:08:40 taft-01 kernel: dm-cmirror: Client finishing recovery: 470/xoT7UjpV
>Jun 11 14:08:41 taft-01 kernel: dm-cmirror: Received recovery work from 3: 501/xoT7UjpV
>Jun 11 14:08:41 taft-01 kernel: dm-cmirror: Client finishing recovery: 501/xoT7UjpV
>Jun 11 14:08:42 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:42 taft-01 kernel: dm-cmirror: Received recovery work from 3: 532/xoT7UjpV
>Jun 11 14:08:42 taft-01 kernel: dm-cmirror: Client finishing recovery: 532/xoT7UjpV
>Jun 11 14:08:42 taft-01 kernel: dm-cmirror: Received recovery work from 3: 533/xoT7UjpV
>Jun 11 14:08:42 taft-01 kernel: dm-cmirror: Client finishing recovery: 533/xoT7UjpV
>Jun 11 14:08:42 taft-01 kernel: dm-cmirror: Received recovery work from 3: 534/xoT7UjpV
>Jun 11 14:08:42 taft-01 kernel: dm-cmirror: Client finishing recovery: 534/xoT7UjpV
>Jun 11 14:08:43 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:43 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:44 taft-01 kernel: dm-cmirror: Received recovery work from 3: 561/xoT7UjpV
>Jun 11 14:08:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 561/xoT7UjpV
>Jun 11 14:08:44 taft-01 kernel: dm-cmirror: Received recovery work from 3: 564/xoT7UjpV
>Jun 11 14:08:44 taft-01 kernel: dm-cmirror: Client finishing recovery: 564/xoT7UjpV
>Jun 11 14:08:45 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:45 taft-01 kernel: dm-cmirror: Received recovery work from 3: 292/90GcsfRZ
>Jun 11 14:08:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 292/90GcsfRZ
>Jun 11 14:08:46 taft-01 kernel: dm-cmirror: Received recovery work from 3: 293/90GcsfRZ
>Jun 11 14:08:46 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:46 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:46 taft-01 kernel: dm-cmirror: Client finishing recovery: 293/90GcsfRZ
>Jun 11 14:08:46 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:08:46 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:46 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:08:46 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:08:46 taft-01 clvmd[7681]: sync_lock: returning lkid 10137
>Jun 11 14:08:46 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:46 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:46 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:46 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:46 taft-01 clvmd[7681]: distribute command: XID = 785
>Jun 11 14:08:46 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=785
>Jun 11 14:08:46 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:46 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:08:46 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:08:46 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:46 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:08:46 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:46 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:46 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:46 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:46 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:46 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:46 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:08:46 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:08:46 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:08:46 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:10137
>Jun 11 14:08:46 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:08:46 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:46 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:08:46 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:46 taft-01 clvmd[7681]: distribute command: XID = 786
>Jun 11 14:08:46 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=786
>Jun 11 14:08:46 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:08:46 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:08:46 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:08:46 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:08:46 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:08:46 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:08:46 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:08:46 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:08:46 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:08:46 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:08:46 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Received recovery work from 3: 621/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 621/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Received recovery work from 3: 622/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 622/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Received recovery work from 3: 623/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Received recovery work from 3: 318/90GcsfRZ
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 318/90GcsfRZ
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Received recovery work from 3: 319/90GcsfRZ
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 623/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Received recovery work from 3: 624/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 624/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 319/90GcsfRZ
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Received recovery work from 3: 625/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Received recovery work from 3: 320/90GcsfRZ
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 625/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Received recovery work from 3: 626/xoT7UjpV
>Jun 11 14:08:47 taft-01 kernel: dm-cmirror: Client finishing recovery: 320/90GcsfRZ
>Jun 11 14:08:48 taft-01 kernel: dm-cmirror: Client finishing recovery: 626/xoT7UjpV
>Jun 11 14:08:48 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:49 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:49 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Received recovery work from 3: 338/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Client finishing recovery: 338/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Received recovery work from 3: 339/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Client finishing recovery: 339/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Received recovery work from 3: 343/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Client finishing recovery: 343/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Received recovery work from 3: 344/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Client finishing recovery: 344/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Received recovery work from 3: 345/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Client finishing recovery: 345/90GcsfRZ
>Jun 11 14:08:49 taft-01 kernel: dm-cmirror: Received recovery work from 3: 346/90GcsfRZ
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 346/90GcsfRZ
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Received recovery work from 3: 669/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 669/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Received recovery work from 3: 670/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 670/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Received recovery work from 3: 671/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 671/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Received recovery work from 3: 672/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 672/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Received recovery work from 3: 673/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 673/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Received recovery work from 3: 674/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Client finishing recovery: 674/xoT7UjpV
>Jun 11 14:08:50 taft-01 kernel: dm-cmirror: Received recovery work from 3: 675/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 675/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 676/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 676/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 677/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 677/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 679/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 679/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 680/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 680/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 367/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 367/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 368/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 368/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 369/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 369/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 370/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 370/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 371/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 688/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 371/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 372/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 688/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 372/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 373/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 373/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 374/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 374/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 375/90GcsfRZ
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 375/90GcsfRZ
>Jun 11 14:08:51 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 700/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 700/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 701/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 701/xoT7UjpV
>Jun 11 14:08:51 taft-01 kernel: dm-cmirror: Received recovery work from 3: 702/xoT7UjpV
>Jun 11 14:08:52 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:52 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 702/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 703/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 703/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 391/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 391/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 392/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 392/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 393/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 393/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 394/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 394/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 395/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 716/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 395/90GcsfRZ
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 716/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 717/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 717/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 718/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 718/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 719/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 719/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 720/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 720/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 721/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 721/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 722/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 722/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 723/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 723/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 726/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 726/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Received recovery work from 3: 727/xoT7UjpV
>Jun 11 14:08:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 727/xoT7UjpV
>Jun 11 14:08:54 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:54 taft-01 kernel: dm-cmirror: Received recovery work from 3: 446/90GcsfRZ
>Jun 11 14:08:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 446/90GcsfRZ
>Jun 11 14:08:54 taft-01 kernel: dm-cmirror: Received recovery work from 3: 447/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 447/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 448/90GcsfRZ
>Jun 11 14:08:55 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:55 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 448/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 449/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 449/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 450/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 450/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 451/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 451/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 452/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 452/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 453/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 453/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 454/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 454/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 455/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 455/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 458/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 458/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 459/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 459/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 460/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 460/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 461/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 461/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Received recovery work from 3: 462/90GcsfRZ
>Jun 11 14:08:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 462/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 790/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 790/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 791/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 791/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 792/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 792/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 793/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 793/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 794/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 794/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 795/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 795/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 796/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 482/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 482/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 483/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 796/xoT7UjpV
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 483/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 484/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 484/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 485/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 485/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 486/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 486/90GcsfRZ
>Jun 11 14:08:56 taft-01 kernel: dm-cmirror: Received recovery work from 3: 487/90GcsfRZ
>Jun 11 14:08:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 487/90GcsfRZ
>Jun 11 14:08:57 taft-01 kernel: dm-cmirror: Received recovery work from 3: 488/90GcsfRZ
>Jun 11 14:08:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 488/90GcsfRZ
>Jun 11 14:08:57 taft-01 kernel: dm-cmirror: Received recovery work from 3: 489/90GcsfRZ
>Jun 11 14:08:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 489/90GcsfRZ
>Jun 11 14:08:57 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:08:58 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:08:58 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:00 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:01 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:01 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:01 taft-01 kernel: dm-cmirror: Received recovery work from 3: 846/xoT7UjpV
>Jun 11 14:09:01 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:01 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:01 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:09:01 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:09:01 taft-01 clvmd[7681]: sync_lock: returning lkid 1007a
>Jun 11 14:09:01 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:01 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:01 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:01 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:01 taft-01 clvmd[7681]: distribute command: XID = 787
>Jun 11 14:09:01 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=787
>Jun 11 14:09:01 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:01 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b70, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:01 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:01 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:01 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:01 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:01 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:01 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:01 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:01 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:01 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 846/xoT7UjpV
>Jun 11 14:09:01 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:01 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:01 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:09:01 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:1007a
>Jun 11 14:09:01 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:01 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:01 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:01 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:01 taft-01 clvmd[7681]: distribute command: XID = 788
>Jun 11 14:09:01 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=788
>Jun 11 14:09:01 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:01 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:01 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:01 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:01 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:01 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:01 taft-01 kernel: dm-cmirror: Received recovery work from 3: 537/90GcsfRZ
>Jun 11 14:09:01 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:01 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:01 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:01 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:01 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 537/90GcsfRZ
>Jun 11 14:09:01 taft-01 kernel: dm-cmirror: Received recovery work from 3: 538/90GcsfRZ
>Jun 11 14:09:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 538/90GcsfRZ
>Jun 11 14:09:03 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:03 taft-01 kernel: dm-cmirror: Received recovery work from 3: 877/xoT7UjpV
>Jun 11 14:09:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 877/xoT7UjpV
>Jun 11 14:09:03 taft-01 kernel: dm-cmirror: Received recovery work from 3: 878/xoT7UjpV
>Jun 11 14:09:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 878/xoT7UjpV
>Jun 11 14:09:03 taft-01 kernel: dm-cmirror: Received recovery work from 3: 879/xoT7UjpV
>Jun 11 14:09:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 879/xoT7UjpV
>Jun 11 14:09:04 taft-01 kernel: dm-cmirror: Received recovery work from 3: 880/xoT7UjpV
>Jun 11 14:09:04 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:04 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 880/xoT7UjpV
>Jun 11 14:09:04 taft-01 kernel: dm-cmirror: Received recovery work from 3: 881/xoT7UjpV
>Jun 11 14:09:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 881/xoT7UjpV
>Jun 11 14:09:06 taft-01 kernel: dm-cmirror: Received recovery work from 3: 616/90GcsfRZ
>Jun 11 14:09:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 616/90GcsfRZ
>Jun 11 14:09:06 taft-01 kernel: dm-cmirror: Received recovery work from 3: 617/90GcsfRZ
>Jun 11 14:09:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 617/90GcsfRZ
>Jun 11 14:09:06 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:07 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:07 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 637/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 637/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 638/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 638/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 639/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 639/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 640/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 640/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 641/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 641/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 642/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 642/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 643/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 643/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 644/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 644/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 645/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 645/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 646/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 646/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 647/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 647/90GcsfRZ
>Jun 11 14:09:07 taft-01 kernel: dm-cmirror: Received recovery work from 3: 648/90GcsfRZ
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 648/90GcsfRZ
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Received recovery work from 3: 649/90GcsfRZ
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 649/90GcsfRZ
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Received recovery work from 3: 919/xoT7UjpV
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 919/xoT7UjpV
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Received recovery work from 3: 920/xoT7UjpV
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 920/xoT7UjpV
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Received recovery work from 3: 921/xoT7UjpV
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 921/xoT7UjpV
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Received recovery work from 3: 922/xoT7UjpV
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 922/xoT7UjpV
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Received recovery work from 3: 923/xoT7UjpV
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Received recovery work from 3: 668/90GcsfRZ
>Jun 11 14:09:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 668/90GcsfRZ
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 923/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 924/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 924/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 925/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 925/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 926/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 926/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 927/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 927/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 928/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 928/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 929/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 929/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 930/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 943/90GcsfRZ
>Jun 11 14:09:09 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 930/xoT7UjpV
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 943/90GcsfRZ
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 944/90GcsfRZ
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 944/90GcsfRZ
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 945/90GcsfRZ
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 945/90GcsfRZ
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Received recovery work from 3: 946/90GcsfRZ
>Jun 11 14:09:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 946/90GcsfRZ
>Jun 11 14:09:10 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:10 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Received recovery work from 3: 951/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 951/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Received recovery work from 3: 952/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 952/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Received recovery work from 3: 953/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 953/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Received recovery work from 3: 954/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 954/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Received recovery work from 3: 955/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 955/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Received recovery work from 3: 956/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 956/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Received recovery work from 3: 957/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 957/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Received recovery work from 3: 958/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 958/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Received recovery work from 3: 959/xoT7UjpV
>Jun 11 14:09:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 959/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 960/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 960/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 961/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 961/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 962/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 962/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 963/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 963/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 964/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 964/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 965/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 965/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 966/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 966/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 967/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 967/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 968/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 968/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 969/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 969/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Received recovery work from 3: 970/xoT7UjpV
>Jun 11 14:09:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 970/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Received recovery work from 3: 988/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 988/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Received recovery work from 3: 989/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 989/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Received recovery work from 3: 990/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 990/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Received recovery work from 3: 991/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 991/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1003/xoT7UjpV
>Jun 11 14:09:12 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 1003/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1004/xoT7UjpV
>Jun 11 14:09:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 1004/xoT7UjpV
>Jun 11 14:09:13 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:13 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1016/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1016/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1017/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1017/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1018/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1018/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1019/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1019/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1020/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1020/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1021/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1021/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1022/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1022/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1023/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1023/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1024/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1024/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1025/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1025/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1026/xoT7UjpV
>Jun 11 14:09:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 1026/xoT7UjpV
>Jun 11 14:09:14 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1041/xoT7UjpV
>Jun 11 14:09:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 1041/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1054/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 1054/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1055/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 1055/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1056/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 1056/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1057/xoT7UjpV
>Jun 11 14:09:15 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 1057/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1058/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 1058/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1059/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 1059/xoT7UjpV
>Jun 11 14:09:15 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1060/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1060/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1061/xoT7UjpV
>Jun 11 14:09:16 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:16 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1061/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1062/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1062/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1063/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1063/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1064/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1064/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1065/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1065/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1066/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1066/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1067/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1067/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1068/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1068/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1069/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1069/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1070/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1070/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1071/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1071/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1072/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1072/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1073/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1073/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1074/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1074/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1075/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1075/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1076/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1076/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1077/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1077/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1078/xoT7UjpV
>Jun 11 14:09:16 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:16 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:16 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:09:16 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:09:16 taft-01 clvmd[7681]: sync_lock: returning lkid 100a3
>Jun 11 14:09:16 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:16 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:16 taft-01 clvmd[7681]: distribute command: XID = 789
>Jun 11 14:09:16 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=789
>Jun 11 14:09:16 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:16 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502890, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:16 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:16 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:16 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:16 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:16 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:16 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:16 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1078/xoT7UjpV
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1079/xoT7UjpV
>Jun 11 14:09:16 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:16 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:16 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:09:16 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:100a3
>Jun 11 14:09:16 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:16 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:16 taft-01 clvmd[7681]: distribute command: XID = 790
>Jun 11 14:09:16 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=790
>Jun 11 14:09:16 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:16 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:16 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:16 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:16 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:16 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:16 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:16 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 1079/xoT7UjpV
>Jun 11 14:09:16 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:16 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1080/xoT7UjpV
>Jun 11 14:09:16 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:17 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1080/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1081/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1081/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1082/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1082/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1083/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1083/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1084/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1084/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1085/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1085/xoT7UjpV
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1098/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1098/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1099/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1099/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1100/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1100/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1101/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1101/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1102/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1102/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1103/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1103/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1104/90GcsfRZ
>Jun 11 14:09:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 1104/90GcsfRZ
>Jun 11 14:09:18 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:18 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1122/90GcsfRZ
>Jun 11 14:09:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 1122/90GcsfRZ
>Jun 11 14:09:18 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1123/90GcsfRZ
>Jun 11 14:09:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 1123/90GcsfRZ
>Jun 11 14:09:19 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:19 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:19 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1126/90GcsfRZ
>Jun 11 14:09:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 1126/90GcsfRZ
>Jun 11 14:09:19 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1132/90GcsfRZ
>Jun 11 14:09:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 1132/90GcsfRZ
>Jun 11 14:09:19 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1133/90GcsfRZ
>Jun 11 14:09:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 1133/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1164/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 1164/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1165/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 1165/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1166/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1161/xoT7UjpV
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 1166/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1167/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 1167/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1168/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 1161/xoT7UjpV
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 1168/90GcsfRZ
>Jun 11 14:09:21 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1173/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 1173/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1174/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 1174/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1175/90GcsfRZ
>Jun 11 14:09:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 1175/90GcsfRZ
>Jun 11 14:09:22 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:22 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:22 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1186/90GcsfRZ
>Jun 11 14:09:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 1186/90GcsfRZ
>Jun 11 14:09:22 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1195/90GcsfRZ
>Jun 11 14:09:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 1195/90GcsfRZ
>Jun 11 14:09:22 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1196/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 1196/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1197/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 1197/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1198/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 1198/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1199/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 1199/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1200/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 1200/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1201/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 1201/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1202/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 1202/90GcsfRZ
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1205/xoT7UjpV
>Jun 11 14:09:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 1205/xoT7UjpV
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1218/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 1218/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1219/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 1219/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1220/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 1220/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1221/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 1221/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1222/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 1222/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1223/90GcsfRZ
>Jun 11 14:09:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 1223/90GcsfRZ
>Jun 11 14:09:24 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:25 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:25 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1247/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 1247/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1248/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 1248/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1249/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 1249/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1250/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 1250/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1251/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 1251/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1252/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 1252/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1253/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 1253/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1254/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 1254/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1255/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 1255/90GcsfRZ
>Jun 11 14:09:26 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1256/90GcsfRZ
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 1256/90GcsfRZ
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1257/90GcsfRZ
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 1257/90GcsfRZ
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1266/xoT7UjpV
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 1266/xoT7UjpV
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1267/xoT7UjpV
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 1267/xoT7UjpV
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1268/xoT7UjpV
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 1268/xoT7UjpV
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Received recovery work from 3: 1269/xoT7UjpV
>Jun 11 14:09:27 taft-01 kernel: dm-cmirror: Client finishing recovery: 1269/xoT7UjpV
>Jun 11 14:09:27 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:28 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:28 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:30 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:31 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:31 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:32 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:32 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:32 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:09:32 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:09:32 taft-01 clvmd[7681]: sync_lock: returning lkid 10381
>Jun 11 14:09:32 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:32 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:32 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:32 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:32 taft-01 clvmd[7681]: distribute command: XID = 791
>Jun 11 14:09:32 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=791
>Jun 11 14:09:32 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:32 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b70, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:32 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:32 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:32 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:32 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:32 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:32 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:32 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:32 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:32 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:32 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:32 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:32 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:09:32 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:10381
>Jun 11 14:09:32 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:32 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:32 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:32 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:32 taft-01 clvmd[7681]: distribute command: XID = 792
>Jun 11 14:09:32 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=792
>Jun 11 14:09:32 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:32 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:32 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:32 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:32 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:32 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:32 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:32 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:32 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:32 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:32 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:33 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:34 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:34 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:36 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:37 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:37 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:39 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:40 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:40 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:42 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:43 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:43 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:45 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:46 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:46 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:47 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:47 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 4 (client=0x2a98502dc0)
>Jun 11 14:09:47 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:4 flags=0
>Jun 11 14:09:47 taft-01 clvmd[7681]: sync_lock: returning lkid 20224
>Jun 11 14:09:47 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:47 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:47 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:47 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:47 taft-01 clvmd[7681]: distribute command: XID = 793
>Jun 11 14:09:47 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=793
>Jun 11 14:09:47 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:47 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502890, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:47 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:47 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:47 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:47 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:47 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:47 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:47 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:47 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:47 taft-01 clvmd[7681]: check_all_clvmds_running
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:47 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:47 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:47 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:47 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:47 taft-01 clvmd[7681]: distribute command: XID = 794
>Jun 11 14:09:47 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=794
>Jun 11 14:09:47 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:47 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502890, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:47 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:47 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got 1 replies, expecting: 4
>Jun 11 14:09:47 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:47 taft-01 clvmd[7681]: Sending message to all cluster nodes
>Jun 11 14:09:47 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got 2 replies, expecting: 4
>Jun 11 14:09:47 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got 3 replies, expecting: 4
>Jun 11 14:09:47 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got 4 replies, expecting: 4
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:47 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:47 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:47 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:47 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:47 taft-01 clvmd[7681]: Read on local socket 5, len = 85
>Jun 11 14:09:47 taft-01 clvmd[7681]: check_all_clvmds_running
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:47 taft-01 clvmd[7681]: pre_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:09:47 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ' mode:4 flags=5
>Jun 11 14:09:47 taft-01 clvmd[7681]: sync_lock: returning lkid 20389
>Jun 11 14:09:47 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:47 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:47 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:47 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:47 taft-01 clvmd[7681]: distribute command: XID = 795
>Jun 11 14:09:47 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98502890, len=85, csid=(nil), xid=795
>Jun 11 14:09:47 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:47 taft-01 clvmd[7681]: process_local_command: LOCK_LV (0x32) msg=0x2a985028f0, msglen =85, client=0x2a98502dc0
>Jun 11 14:09:47 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1c LCK_LV_SUSPEND (WRITE|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:09:47 taft-01 clvmd[7681]: Sending message to all cluster nodes
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: LOG INFO:
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: uuid: LVM-1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: uuid_ref : 1
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: log type : core
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: ?region_count: 1600
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: ?sync_count : 0
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: ?sync_search : 0
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: in_sync : YES
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: suspended : NO
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: recovery_halted : NO
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: server_id : 3
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: server_valid: YES
>Jun 11 14:09:47 taft-01 lvm[7565]: No longer monitoring mirror device helter_skelter-syncd_secondary_core_2legs_2 for events
>Jun 11 14:09:47 taft-01 kernel: dm-cmirror: cluster_presuspend: recovery halted on 90GcsfRZ(1)
>Jun 11 14:09:47 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got 1 replies, expecting: 4
>Jun 11 14:09:47 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes
>Jun 11 14:09:47 taft-01 clvmd[7681]: Got 2 replies, expecting: 4
>Jun 11 14:09:48 taft-01 qarshd[20131]: Nothing to do
>Jun 11 14:09:49 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:49 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:49 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:09:49 taft-01 kernel: dm-cmirror: cluster_postsuspend
>Jun 11 14:09:49 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:49 taft-01 clvmd[7681]: Got 3 replies, expecting: 4
>Jun 11 14:09:49 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_MASTER_LEAVING(13): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 3
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 3
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 3
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 57005
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 3
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 3
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 57005
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 3
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 3
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 57005
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:09:51 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got 4 replies, expecting: 4
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:51 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:51 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:51 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:51 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:51 taft-01 clvmd[7681]: Read on local socket 5, len = 85
>Jun 11 14:09:51 taft-01 clvmd[7681]: check_all_clvmds_running
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:51 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:51 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:51 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:51 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:51 taft-01 clvmd[7681]: distribute command: XID = 796
>Jun 11 14:09:51 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985034f0. client=0x2a98502dc0, msg=0x2a98502890, len=85, csid=(nil), xid=796
>Jun 11 14:09:51 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:51 taft-01 clvmd[7681]: process_local_command: LOCK_LV (0x32) msg=0x2a985028f0, msglen =85, client=0x2a98502dc0
>Jun 11 14:09:51 taft-01 clvmd[7681]: do_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LOG INFO:
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: uuid: LVM-1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: Sending message to all cluster nodes
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: uuid_ref : 1
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: log type : core
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: ?region_count: 1600
>Jun 11 14:09:51 taft-01 lvm[7565]: Monitoring mirror device helter_skelter-syncd_secondary_core_2legs_2 for events
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: ?sync_count : 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: ?sync_search : 0
>Jun 11 14:09:51 taft-01 clvmd[7681]: Command return is 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: in_sync : YES
>Jun 11 14:09:51 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: suspended : YES
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got 1 replies, expecting: 4
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: recovery_halted : YES
>Jun 11 14:09:51 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: server_id : 57005
>Jun 11 14:09:51 taft-01 clvmd[7681]: Reply from node taft-03: 0 bytes
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: server_valid: NO
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got 2 replies, expecting: 4
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: cluster_resume: Setting recovery_halted = 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: server_id=dead, server_valid=1, 90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: Reply from node taft-04: 0 bytes
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: trigger = LRT_GET_RESYNC_WORK
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got 3 replies, expecting: 4
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 2
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 2
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 2
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 2
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 4
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_SELECTION(11): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 2
>Jun 11 14:09:51 taft-01 clvmd[7681]: Reply from node taft-02: 0 bytes
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 2
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got 4 replies, expecting: 4
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 4
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_MASTER_ASSIGN(12): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 clvmd[7681]: post_lock_lv: resource '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ', cmd = 0x1e LCK_LV_RESUME (UNLOCK|LV|NONBLOCK), flags = 0x84 (DMEVENTD_MONITOR )
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 2
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 2
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: I'm the cluster mirror log server for 90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Disk Resume:: 90GcsfRZ (active)
>Jun 11 14:09:51 taft-01 clvmd[7681]: sync_lock: '1pP81XIQLOyvZhCW5VZqNyFEbmpMYLl6RL3MI566E1izekdbU0TJaDFu90GcsfRZ' mode:1 flags=4
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Live nodes :: 4
>Jun 11 14:09:51 taft-01 clvmd[7681]: sync_lock: returning lkid 20389
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: In-Use Regions :: 0
>Jun 11 14:09:51 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Good IUR's :: 0
>Jun 11 14:09:51 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Bad IUR's :: 0
>Jun 11 14:09:51 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Sync count :: 0
>Jun 11 14:09:51 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Disk Region count :: 0
>Jun 11 14:09:51 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Region count :: 1600
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: NOTE: Mapping has changed.
>Jun 11 14:09:51 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Marked regions::
>Jun 11 14:09:51 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:20224
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: 0 - 1600
>Jun 11 14:09:51 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Total = 1600
>Jun 11 14:09:51 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Out-of-sync regions::
>Jun 11 14:09:51 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: 0 - 1600
>Jun 11 14:09:51 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Total = 1600
>Jun 11 14:09:51 taft-01 clvmd[7681]: distribute command: XID = 797
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=797
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Received recovery work from 2: 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 4
>Jun 11 14:09:51 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 4
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 2
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 last message repeated 2 times
>Jun 11 14:09:51 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 1
>Jun 11 14:09:51 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 1
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 1
>Jun 11 14:09:51 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 last message repeated 3 times
>Jun 11 14:09:51 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: Read on local socket 5, len = 0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: EOF on local socket: inprogress=0
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: LRT_ELECTION(10): (90GcsfRZ)
>Jun 11 14:09:51 taft-01 clvmd[7681]: Waiting for child thread
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: starter : 3
>Jun 11 14:09:51 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: co-ordinator: 1
>Jun 11 14:09:51 taft-01 qarshd[20131]: That's enough
>Jun 11 14:09:51 taft-01 clvmd[7681]: Subthread finished
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: node_count : 3
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: Joined child thread
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=797
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: process_work_item: free fd 5
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Resync work completed by 2: 0/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 1/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Received recovery work from 2: 1/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 1/90GcsfRZ
>Jun 11 14:09:51 taft-01 last message repeated 26 times
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 1/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Resync work completed by 2: 1/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 2/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Received recovery work from 2: 2/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 2/90GcsfRZ
>Jun 11 14:09:51 taft-01 last message repeated 36 times
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 2/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Resync work completed by 2: 2/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 3/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Received recovery work from 2: 3/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 3/90GcsfRZ
>Jun 11 14:09:51 taft-01 last message repeated 129 times
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 3/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Resync work completed by 2: 3/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 4/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Received recovery work from 2: 4/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 4/90GcsfRZ
>Jun 11 14:09:51 taft-01 last message repeated 71 times
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Client finishing recovery: 4/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Resync work completed by 2: 4/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 5/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Received recovery work from 2: 5/90GcsfRZ
>Jun 11 14:09:51 taft-01 kernel: dm-cmirror: Someone is already recovering region 5/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 55 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 5/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 5/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 6/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 6/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 6/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 6 times
>Jun 11 14:09:52 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 6/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 2 times
>Jun 11 14:09:52 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 6/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 69 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 6/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 6/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 7/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 7/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 7/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 148 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 7/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 7/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 8/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 8/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 8/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 102 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 8/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 8/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 9/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 9/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 9/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 55 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 9/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 9/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 10/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 10/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 10/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 119 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 10/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 10/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 11/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 11/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 11/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 46 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 11/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 11/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 12/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 12/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 12/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 39 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 12/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 12/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 13/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 13/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 13/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 96 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 13/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 13/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 14/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 14/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 14/90GcsfRZ
>Jun 11 14:09:52 taft-01 last message repeated 50 times
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Client finishing recovery: 14/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Resync work completed by 2: 14/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 15/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Received recovery work from 2: 15/90GcsfRZ
>Jun 11 14:09:52 taft-01 kernel: dm-cmirror: Someone is already recovering region 15/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 113 times
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 15/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 15/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 16/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 16/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 16/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 61 times
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 16/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 16/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 17/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 17/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 17/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 71 times
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 17/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 17/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 17/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 18/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 18/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 18/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 48 times
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 18/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 18/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 19/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 19/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 19/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 68 times
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 19/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 19/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 20/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 20/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 20/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 49 times
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 20/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 20/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 21/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 21/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 21/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 53 times
>Jun 11 14:09:53 taft-01 qarshd[20178]: Talking to peer 10.15.80.47:51764
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 21/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 2 times
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 21/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 21/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 qarshd[20178]: Running cmdline: lvs -a -o +devices
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 4 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got new connection on fd 5
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 23 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:53 taft-01 clvmd[7681]: creating pipe, [10, 11]
>Jun 11 14:09:53 taft-01 clvmd[7681]: Creating pre&post thread
>Jun 11 14:09:53 taft-01 clvmd[7681]: Created pre&post thread, state = 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0
>Jun 11 14:09:53 taft-01 clvmd[7681]: Sub thread ready for work.
>Jun 11 14:09:53 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98502dc0)
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_lock: returning lkid 10365
>Jun 11 14:09:53 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: distribute command: XID = 798
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=798
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 3 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 16 times
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 22/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 3 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:53 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:10365
>Jun 11 14:09:53 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 clvmd[7681]: distribute command: XID = 799
>Jun 11 14:09:53 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=799
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 3 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 3 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Read on local socket 5, len = 33
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 1 (client=0x2a98502dc0)
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_lock: 'V_VolGroup00' mode:3 flags=0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 5 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_lock: returning lkid 202ea
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: distribute command: XID = 800
>Jun 11 14:09:53 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=33, csid=(nil), xid=800
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =33, client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 23/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Read on local socket 5, len = 33
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 6 (client=0x2a98502dc0)
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_unlock: 'V_VolGroup00' lkid:202ea
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: distribute command: XID = 801
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=33, csid=(nil), xid=801
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =33, client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Read on local socket 5, len = 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: EOF on local socket: inprogress=0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 qarshd[20178]: That's enough
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting for child thread
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Subthread finished
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Joined child thread
>Jun 11 14:09:53 taft-01 qarshd[20181]: Talking to peer 10.15.80.47:51765
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client
>Jun 11 14:09:53 taft-01 qarshd[20181]: Running cmdline: lvs -a -o +devices
>Jun 11 14:09:53 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=801
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_work_item: free fd 5
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 4 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got new connection on fd 5
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 16 times
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 24/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 8 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: creating pipe, [10, 11]
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 9 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Creating pre&post thread
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Created pre&post thread, state = 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 4 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0
>Jun 11 14:09:53 taft-01 clvmd[7681]: Sub thread ready for work.
>Jun 11 14:09:53 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98502dc0)
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_lock: returning lkid 1020e
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 clvmd[7681]: distribute command: XID = 802
>Jun 11 14:09:53 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=802
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 25/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 26/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 26/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 26/90GcsfRZ
>Jun 11 14:09:53 taft-01 last message repeated 26 times
>Jun 11 14:09:53 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:53 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:1020e
>Jun 11 14:09:53 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 clvmd[7681]: distribute command: XID = 803
>Jun 11 14:09:53 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=803
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 26/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 26/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 26/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 26/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 26/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 26/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Read on local socket 5, len = 33
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 1 (client=0x2a98502dc0)
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_lock: 'V_VolGroup00' mode:3 flags=0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: sync_lock: returning lkid 1005c
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Client finishing recovery: 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Resync work completed by 2: 27/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: distribute command: XID = 804
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=33, csid=(nil), xid=804
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Received recovery work from 2: 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b60, msglen =33, client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:53 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:53 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:53 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:09:53 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 7 times
>Jun 11 14:09:54 taft-01 clvmd[7681]: Read on local socket 5, len = 33
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_VolGroup00' at 6 (client=0x2a98502dc0)
>Jun 11 14:09:54 taft-01 clvmd[7681]: sync_unlock: 'V_VolGroup00' lkid:1005c
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 28/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 28/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 28/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: distribute command: XID = 805
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=33, csid=(nil), xid=805
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =33, client=0x2a98502dc0
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: Dropping metadata for VG VolGroup00
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ
>Jun 11 14:09:54 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 29/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Send local reply >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 29/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 29/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 30/90GcsfRZ >Jun 11 14:09:54 taft-01 qarshd[20181]: That's enough >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 30/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 30/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 30/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:09:54 taft-01 qarshd[20184]: Talking to peer 10.15.80.47:51766 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 30/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: ret == 0, errno = 9. 
removing client >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 30/90GcsfRZ >Jun 11 14:09:54 taft-01 qarshd[20184]: Running cmdline: lvs -o copy_percent --noheadings helter_skelter/syncd_secondary_core_2legs_1 >Jun 11 14:09:54 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=805 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Sub thread ready for work. 
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98502dc0) >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: sync_lock: returning lkid 100ae >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 31/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: distribute command: XID = 806 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=806 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Send local reply >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 32/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 32/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 32/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 33/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 33/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 33/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 33/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 33/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 33/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 34/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 34/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 8 times >Jun 11 14:09:54 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: 
Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 9 times >Jun 11 14:09:54 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0) >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:100ae >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 2 times >Jun 11 14:09:54 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 2 times >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 clvmd[7681]: distribute command: XID = 807 >Jun 11 14:09:54 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=807 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:09:54 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:09:54 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 2 times >Jun 11 14:09:54 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 3 times >Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Send local reply >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 qarshd[20184]: That's enough >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 2 times >Jun 11 14:09:54 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 qarshd[20187]: Talking to peer 10.15.80.47:51767 >Jun 11 14:09:54 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 3 times >Jun 11 14:09:54 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=807 >Jun 11 14:09:54 taft-01 qarshd[20187]: Running cmdline: lvs -o copy_percent --noheadings helter_skelter/syncd_secondary_core_2legs_2 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 11 times >Jun 11 14:09:54 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:09:54 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:09:54 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:09:54 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:09:54 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 7 times >Jun 11 14:09:54 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0 >Jun 11 
14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Sub thread ready for work. >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98502dc0) >Jun 11 14:09:54 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 6 times >Jun 11 14:09:54 taft-01 clvmd[7681]: sync_lock: returning lkid 10359 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 34/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 2 times >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 34/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 34/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 35/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 35/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: distribute command: XID = 
808 >Jun 11 14:09:54 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=808 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:09:54 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Send local reply >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 35/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 56 times >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 35/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 35/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 36/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 36/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 36/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 31 times >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 36/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 36/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 37/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 37/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 37/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 16 times >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 37/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 37/90GcsfRZ >Jun 11 14:09:54 taft-01 
kernel: dm-cmirror: Assigning recovery work to 2: 38/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 38/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:09:54 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:09:54 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0) >Jun 11 14:09:54 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:10359 >Jun 11 14:09:54 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 38/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: distribute command: XID = 809 >Jun 11 14:09:54 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=809 >Jun 11 14:09:54 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:09:54 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:09:54 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 38/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 38/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 38/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 38/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:09:54 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 38/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 38/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 39/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Send local reply >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:09:54 taft-01 qarshd[20187]: That's enough >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=809 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 3 times >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 39/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 39/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 39/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 40/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 40/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 40/90GcsfRZ >Jun 11 14:09:54 taft-01 last message repeated 10 times >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 40/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 40/90GcsfRZ >Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 
41/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 41/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 41/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 15 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 41/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 41/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 42/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 42/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 42/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 16 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 42/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 42/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 43/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 43/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 43/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 12 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 43/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 43/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 44/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 44/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 44/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 8 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 44/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 44/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 45/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 45/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 45/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 11 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 45/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 45/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 46/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 46/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 46/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 14 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 46/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 46/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 47/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 47/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 47/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 61 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 47/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 47/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 48/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 48/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 48/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 94 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 48/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 48/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 49/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 49/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 49/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 22 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 49/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 49/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 50/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 50/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 50/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 25 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 50/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 50/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 51/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Received recovery work from 2: 51/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 51/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 8 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Client finishing recovery: 51/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 2: 51/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 52/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 52/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 16 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 3: 52/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 53/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 53/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 10 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 53/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 54/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 54/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 8 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 54/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 54/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 55/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 55/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 15 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 55/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 56/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 56/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 4 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 56/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 57/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 57/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 14 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 57/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 58/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 58/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 9 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 58/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 59/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 59/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 17 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 59/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 60/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 60/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 12 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 60/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 61/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 61/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 10 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 61/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 62/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 62/90GcsfRZ
>Jun 11 14:09:54 taft-01 last message repeated 80 times
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Resync work completed by 4: 62/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 62/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 63/90GcsfRZ
>Jun 11 14:09:54 taft-01 kernel: dm-cmirror: Someone is already recovering region 63/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 26 times
>Jun 11 14:09:55 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 63/90GcsfRZ
>Jun 11 14:09:55 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 63/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 23 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 4: 63/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 64/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 64/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 44 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 4: 64/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 65/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 65/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 22 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 3: 65/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 66/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 66/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 19 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 3: 66/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 67/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 67/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 12 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 3: 67/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 68/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 68/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 22 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 68/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 69/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 69/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 5 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 69/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 70/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 70/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 19 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 70/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 71/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 71/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 12 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 71/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 71/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 72/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 72/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 11 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 72/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 72/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 73/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 73/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 11 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 73/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 74/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 74/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 39 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 74/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 75/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 75/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 37 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 75/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 75/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 75/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 76/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 76/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 58 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 76/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 77/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 77/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 36 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 77/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 78/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 78/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 34 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 78/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 79/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 79/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 66 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 79/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 80/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 80/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 15 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 80/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 81/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 81/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 18 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 81/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 82/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 82/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 9 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 82/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 83/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 83/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 15 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 83/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 84/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 84/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 14 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 84/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 85/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 85/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 14 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 85/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 86/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 86/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 5 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 86/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 87/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 87/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 13 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 1: 87/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 88/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Received recovery work from 2: 88/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 88/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 9 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 88/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 2: 88/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 89/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Received recovery work from 2: 89/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 89/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 8 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 89/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 2: 89/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 90/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Received recovery work from 2: 90/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 90/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 16 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 90/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 2: 90/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 91/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Received recovery work from 2: 91/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 91/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 13 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 91/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 2: 91/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 92/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Received recovery work from 2: 92/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 92/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 10 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 92/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 2: 92/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 93/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Received recovery work from 2: 93/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 93/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 32 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 93/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 2: 93/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 94/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Received recovery work from 2: 94/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 94/90GcsfRZ
>Jun 11 14:09:55 taft-01 last message repeated 39 times
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Client finishing recovery: 94/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Resync work completed by 2: 94/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 95/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Received recovery work from 2: 95/90GcsfRZ
>Jun 11 14:09:55 taft-01 kernel: dm-cmirror: Someone is already recovering region 95/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 43 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 95/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 95/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 96/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 96/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 96/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 24 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 96/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 96/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 96/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 97/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 97/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 97/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 51 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 97/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 97/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 98/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 98/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 98/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 32 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 98/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 98/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 99/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 99/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 99/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 14 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 99/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 99/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 100/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 100/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 100/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 20 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 100/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 100/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 101/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 101/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 101/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 10 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 101/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 101/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 102/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 102/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 102/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 10 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 102/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 102/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 103/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 103/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 103/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 15 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 103/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 103/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 104/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 104/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 104/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 20 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 104/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 104/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 105/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 105/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 105/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 7 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 105/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 105/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 106/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 106/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 106/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 13 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 106/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 106/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 106/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 107/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 107/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 107/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 107/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 5 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 107/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 107/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 108/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 108/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 108/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 14 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 108/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 108/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 109/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 109/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 109/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 30 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 109/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 109/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 109/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 110/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 110/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 110/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 51 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 110/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 110/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 111/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 111/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 111/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 42 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 111/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 111/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 112/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 112/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 112/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 14 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 112/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 112/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 113/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 113/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 113/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 113/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 27 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 113/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 113/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 114/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 114/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 114/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 14 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 114/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 114/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 115/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 115/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 115/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 16 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 115/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 115/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 116/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 116/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 116/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 7 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 116/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 116/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 116/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 117/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 117/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 117/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 6 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 117/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 117/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 118/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 118/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 118/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 8 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 118/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 118/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 119/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 119/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 119/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 7 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 119/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 119/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 120/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 120/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 120/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 11 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 120/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 120/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 121/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 121/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 121/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 7 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 121/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 121/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 122/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 122/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 122/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 14 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 122/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 122/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 123/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 123/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 123/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 23 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 123/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 123/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 124/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 124/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 124/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 14 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 124/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 124/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 125/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 125/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 125/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 14 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 125/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 125/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 126/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 126/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 126/90GcsfRZ
>Jun 11 14:09:56 taft-01 last message repeated 17 times
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Client finishing recovery: 126/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Resync work completed by 2: 126/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 127/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Received recovery work from 2: 127/90GcsfRZ
>Jun 11 14:09:56 taft-01 kernel: dm-cmirror: Someone is already recovering region 127/90GcsfRZ
>Jun 11 14:09:57 taft-01 last message repeated 107 times
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 127/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 127/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 128/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 128/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 128/90GcsfRZ
>Jun 11 14:09:57 taft-01 last message repeated 34 times
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 128/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 128/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 128/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 129/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 129/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 129/90GcsfRZ
>Jun 11 14:09:57 taft-01 last message repeated 7 times
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 129/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 129/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 130/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 130/90GcsfRZ
>Jun 11 14:09:57 taft-01 last message repeated 11 times
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 1: 130/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 131/90GcsfRZ
>Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 131/90GcsfRZ
>Jun
11 14:09:57 taft-01 last message repeated 12 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 1: 131/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 132/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 132/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 18 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 3: 132/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 133/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 133/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 11 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 3: 133/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 134/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 134/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 9 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 3: 134/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 135/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 135/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 6 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 1: 135/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 136/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 136/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 136/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 13 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 136/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 136/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery 
work to 2: 137/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 137/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 137/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 18 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 137/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 137/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 138/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 138/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 138/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 44 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 138/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 138/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 139/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 139/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 139/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 45 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 139/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 139/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 140/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 140/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 140/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 40 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 140/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 140/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already 
recovering region 140/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 141/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 141/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 141/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 18 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 141/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 141/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 142/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 142/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 142/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 11 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 142/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 142/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 143/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 143/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 143/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 3 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 143/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 143/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 144/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 144/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 11 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 3: 144/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 145/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is 
already recovering region 145/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 14 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 3: 145/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 146/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 146/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 10 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 1: 146/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 147/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 147/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 21 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 1: 147/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 148/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 148/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 5 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 1: 148/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 149/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 149/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 9 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 4: 149/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 150/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 150/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 6 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 4: 150/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 151/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 151/90GcsfRZ >Jun 11 
14:09:57 taft-01 last message repeated 15 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 4: 151/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 152/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 152/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 33 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 4: 152/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 153/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 153/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 62 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 4: 153/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 154/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 154/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 154/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 13 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 154/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 154/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 155/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 155/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 155/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 9 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 155/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 155/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 156/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 156/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: 
dm-cmirror: Someone is already recovering region 156/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 9 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 156/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 156/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 157/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 157/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 157/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 18 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 157/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 157/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 158/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 158/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 158/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 9 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 158/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 158/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 159/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Received recovery work from 2: 159/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 159/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Client finishing recovery: 159/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 2: 159/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 160/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 160/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 11 times >Jun 11 14:09:57 taft-01 kernel: 
dm-cmirror: Resync work completed by 4: 160/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 161/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 161/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 6 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 4: 161/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 162/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 162/90GcsfRZ >Jun 11 14:09:57 taft-01 last message repeated 22 times >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Resync work completed by 4: 162/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 163/90GcsfRZ >Jun 11 14:09:57 taft-01 kernel: dm-cmirror: Someone is already recovering region 163/90GcsfRZ >Jun 11 14:09:58 taft-01 last message repeated 75 times >Jun 11 14:09:58 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 163/90GcsfRZ >Jun 11 14:09:58 taft-01 last message repeated 7 times >Jun 11 14:09:58 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 163/90GcsfRZ >Jun 11 14:09:58 taft-01 last message repeated 33 times >Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 163/90GcsfRZ >Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 164/90GcsfRZ >Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 164/90GcsfRZ >Jun 11 14:09:58 taft-01 last message repeated 74 times >Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 164/90GcsfRZ >Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 165/90GcsfRZ >Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 165/90GcsfRZ >Jun 11 14:09:58 taft-01 last message repeated 26 times 
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 165/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 166/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 166/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 16 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 166/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 167/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 167/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 19 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 167/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 168/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 168/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 8 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 168/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 169/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 169/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 11 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 169/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 169/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 170/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 170/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 15 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 170/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 171/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 171/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 6 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 171/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 172/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 172/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 3 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 172/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 173/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 173/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 16 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 173/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 174/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 174/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 10 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 174/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 175/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 175/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 9 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 175/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 176/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 176/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 53 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 176/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 177/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 177/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 47 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 3: 177/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 178/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 178/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 91 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 178/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 179/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 179/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 84 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 179/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 180/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 180/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 45 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 180/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 181/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 181/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 15 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 181/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 182/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 182/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 18 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 182/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 183/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 183/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 16 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 183/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 184/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 184/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 14 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 184/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 185/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 185/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 14 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 185/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 186/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 186/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 5 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 186/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 187/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 187/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 12 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 187/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 188/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 188/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 188/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 4: 188/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 189/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 189/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 18 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 189/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 190/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 190/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 7 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 1: 190/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 191/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 191/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 13 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 3: 191/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 192/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 192/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 10 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 3: 192/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 193/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 193/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 12 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 3: 193/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 194/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 194/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 10 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 3: 194/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 195/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 195/90GcsfRZ
Jun 11 14:09:58 taft-01 last message repeated 9 times
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Resync work completed by 3: 195/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 196/90GcsfRZ
Jun 11 14:09:58 taft-01 kernel: dm-cmirror: Someone is already recovering region 196/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 50 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 196/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 197/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 197/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 19 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 197/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 198/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 198/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 9 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 198/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 198/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 199/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 199/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 23 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 199/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 200/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 200/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 43 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 200/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 201/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 201/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 8 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 201/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 202/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 202/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 11 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 202/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 203/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 203/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 6 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 203/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 204/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 204/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 4 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 204/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 205/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 205/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 29 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 205/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 206/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 206/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 82 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 206/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 207/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 207/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 40 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 207/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 208/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 208/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 33 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 208/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 209/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 209/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 15 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 209/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 210/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 210/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 14 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 210/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 211/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 211/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 10 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 211/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 212/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 212/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 19 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 212/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 213/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 213/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 8 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 213/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 214/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 214/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 9 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 214/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 215/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 215/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 18 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 215/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 216/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 216/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 8 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 216/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 217/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 217/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 17 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 217/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 218/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 218/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 11 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 218/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 218/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 219/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 219/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 7 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 219/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 220/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 220/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 12 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 220/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 221/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 221/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 14 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 221/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 222/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 222/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 14 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 222/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 223/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 223/90GcsfRZ
Jun 11 14:09:59 taft-01 last message repeated 92 times
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Resync work completed by 3: 223/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 224/90GcsfRZ
Jun 11 14:09:59 taft-01 kernel: dm-cmirror: Someone is already recovering region 224/90GcsfRZ
Jun 11 14:10:00 taft-01 last message repeated 63 times
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 3: 224/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 225/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 225/90GcsfRZ
Jun 11 14:10:00 taft-01 last message repeated 66 times
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 3: 225/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 226/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 226/90GcsfRZ
Jun 11 14:10:00 taft-01 last message repeated 46 times
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 3: 226/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 226/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 227/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 227/90GcsfRZ
Jun 11 14:10:00 taft-01 last message repeated 17 times
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 3: 227/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 227/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 228/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 228/90GcsfRZ
Jun 11 14:10:00 taft-01 last message repeated 10 times
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 1: 228/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 229/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 229/90GcsfRZ
Jun 11 14:10:00 taft-01 last message repeated 8 times
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 1: 229/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 230/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 230/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 230/90GcsfRZ
Jun 11 14:10:00 taft-01 last message repeated 20 times
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 230/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 230/90GcsfRZ
Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 
231/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 231/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 231/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 12 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 231/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 231/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 232/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 232/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 232/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 13 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 232/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 232/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 233/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 233/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 233/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 233/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 233/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 233/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 234/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 234/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 234/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 19 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 234/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 234/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: 
dm-cmirror: Assigning recovery work to 2: 235/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 235/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 235/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 12 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 235/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 235/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 236/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 236/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 236/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 9 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 236/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 236/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 237/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 237/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 237/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 8 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 237/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 237/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 238/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 238/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 238/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 9 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 238/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 238/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: 
dm-cmirror: Assigning recovery work to 2: 239/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 239/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 239/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 11 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 239/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 239/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 240/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 240/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 240/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 11 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 240/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 240/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 240/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 241/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 241/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 241/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 7 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 241/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 241/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 242/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 242/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 242/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 7 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 242/90GcsfRZ >Jun 11 14:10:00 taft-01 
kernel: dm-cmirror: Resync work completed by 2: 242/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 243/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 243/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 243/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 46 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 243/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 243/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 243/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 244/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 244/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 244/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 65 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 244/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 244/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 245/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 245/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 245/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 24 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 245/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 245/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 246/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 246/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 246/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 19 times >Jun 11 14:10:00 
taft-01 kernel: dm-cmirror: Client finishing recovery: 246/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 246/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 247/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 247/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 247/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 16 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 247/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 247/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 248/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 248/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 248/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 11 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 248/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 248/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 248/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 249/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 249/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 249/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 249/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 8 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 249/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 249/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 250/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work 
from 2: 250/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 250/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 7 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 250/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 250/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 250/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 251/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 251/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 251/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 16 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 251/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 251/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 252/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 252/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 252/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 8 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 252/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 252/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 253/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 253/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 253/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 24 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 253/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 253/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery 
work to 2: 254/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 254/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 254/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 67 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 254/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 254/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 255/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 255/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 255/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 33 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 255/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 255/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 255/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 256/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 256/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 256/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 15 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 256/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 256/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 257/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 257/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 257/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 19 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 257/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work 
completed by 2: 257/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 258/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 258/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 258/90GcsfRZ >Jun 11 14:10:00 taft-01 last message repeated 10 times >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Client finishing recovery: 258/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Resync work completed by 2: 258/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 259/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Received recovery work from 2: 259/90GcsfRZ >Jun 11 14:10:00 taft-01 kernel: dm-cmirror: Someone is already recovering region 259/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 22 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 259/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 259/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 260/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 260/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 260/90GcsfRZ >Jun 11 14:10:01 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 260/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 2 times >Jun 11 14:10:01 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 260/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 62 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 260/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 260/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 261/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: 
dm-cmirror: Received recovery work from 2: 261/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 261/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 93 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 261/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 261/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 262/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 262/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 262/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 56 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 262/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 262/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 263/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 263/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 263/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 24 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 263/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 263/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 264/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 264/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 264/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 9 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 264/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 264/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 265/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: 
dm-cmirror: Received recovery work from 2: 265/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 265/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 14 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 265/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 265/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 266/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 266/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 266/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 10 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 266/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 266/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 267/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 267/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 267/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 16 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 267/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 267/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 268/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 268/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 268/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 16 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 268/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 268/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 268/90GcsfRZ >Jun 11 14:10:01 taft-01 
kernel: dm-cmirror: Assigning recovery work to 2: 269/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 269/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 269/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 48 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 269/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 269/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 270/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 270/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 270/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 27 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 270/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 270/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 271/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 271/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 15 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 4: 271/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 272/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 272/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 14 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 4: 272/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 273/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 273/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 16 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 1: 273/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: 
Assigning recovery work to 1: 274/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 274/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 10 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 1: 274/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 275/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 275/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 8 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 1: 275/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 276/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 276/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 21 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 1: 276/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 277/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 277/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 23 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 1: 277/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 278/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 278/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 278/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 33 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 278/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 278/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 279/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 279/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already 
recovering region 279/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 13 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 279/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 279/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 279/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 280/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 280/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 280/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 20 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 280/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 280/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 281/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 281/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 281/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 10 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 281/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 281/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 282/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 282/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 282/90GcsfRZ >Jun 11 14:10:01 taft-01 last message repeated 9 times >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 282/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 282/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 283/90GcsfRZ >Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery 
work from 2: 283/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 283/90GcsfRZ
>Jun 11 14:10:01 taft-01 last message repeated 36 times
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 283/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 283/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 283/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 283/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 284/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 284/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 284/90GcsfRZ
>Jun 11 14:10:01 taft-01 last message repeated 76 times
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Client finishing recovery: 284/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Resync work completed by 2: 284/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 285/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Received recovery work from 2: 285/90GcsfRZ
>Jun 11 14:10:01 taft-01 kernel: dm-cmirror: Someone is already recovering region 285/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 55 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 285/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 285/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 286/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 286/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 286/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 32 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 286/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 286/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 287/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 287/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 287/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 29 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 287/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 287/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 288/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 288/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 288/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 21 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 288/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 288/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 289/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 289/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 289/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 16 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 289/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 289/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 290/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 290/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 290/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 16 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 290/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 290/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 291/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 291/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 291/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 17 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 291/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 291/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 292/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 292/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 15 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 3: 292/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 293/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 293/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 10 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 3: 293/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 293/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 294/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 294/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 294/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 14 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 294/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 294/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 295/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 295/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 295/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 10 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 295/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 295/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 296/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 296/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 296/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 34 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 296/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 296/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 297/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 297/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 297/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 26 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 297/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 297/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 298/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 298/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 298/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 18 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 298/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 298/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 299/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 299/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 299/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 9 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 299/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 299/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 300/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 300/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 300/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 7 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 300/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 300/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 301/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 301/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 301/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 13 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 301/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 301/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 302/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 302/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 302/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 19 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 302/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 302/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 303/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 303/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 303/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 5 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 303/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 303/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 304/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 304/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 304/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 6 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 304/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 304/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 305/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 305/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 305/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 15 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 305/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 305/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 306/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 306/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 306/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 26 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 306/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 306/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 307/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 307/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 307/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 31 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 307/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 307/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 308/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 308/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 308/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 19 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 308/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 308/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 309/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 309/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 309/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 15 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 309/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 309/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 309/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 310/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 310/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 310/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 10 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 310/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 310/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 311/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 311/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 311/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 15 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 311/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 311/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 312/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 312/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 312/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 11 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 312/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 312/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 313/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 313/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 313/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 9 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 313/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 313/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 314/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 314/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 314/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 17 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 314/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 314/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 315/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 315/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 315/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 21 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 315/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 315/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 316/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 316/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 316/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 15 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 316/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 316/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 317/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 317/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 317/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 14 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 317/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 317/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 318/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 318/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 318/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 15 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 318/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 318/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 319/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 319/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 319/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 14 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 319/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 319/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 320/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 320/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 320/90GcsfRZ
>Jun 11 14:10:02 taft-01 last message repeated 36 times
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Client finishing recovery: 320/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Resync work completed by 2: 320/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 321/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 321/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Received recovery work from 2: 321/90GcsfRZ
>Jun 11 14:10:02 taft-01 kernel: dm-cmirror: Someone is already recovering region 321/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 43 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 321/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 321/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 322/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 322/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 322/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 110 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 322/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 322/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 323/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 323/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 323/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 8 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 323/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 323/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 324/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 324/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 324/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 19 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 324/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 324/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 325/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 325/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 325/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 10 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 325/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 325/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 326/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 326/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 326/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 7 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 326/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 326/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 327/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 327/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 327/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 10 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 327/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 327/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 328/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 328/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 328/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 13 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 328/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 328/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 329/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 329/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 329/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 13 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 329/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 329/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 329/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 329/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 330/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 330/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 330/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 16 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 330/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 330/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 331/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 331/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 331/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 78 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 331/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 331/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 332/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 332/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 332/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 34 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 332/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 332/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 333/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 333/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 333/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 20 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 333/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 333/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 334/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 334/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 334/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 12 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 334/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 334/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 335/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 335/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 335/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 15 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 335/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 335/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 336/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 336/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 336/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 14 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 336/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 336/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 337/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 337/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 337/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 14 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 337/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 337/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 338/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 338/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 338/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 3 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 338/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 338/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 339/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 339/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 339/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 18 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 339/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 339/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 340/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 340/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 340/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 16 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 340/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 340/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 341/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 341/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 341/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 23 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 341/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 341/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 342/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 342/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 342/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 23 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 342/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 342/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 343/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 343/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 343/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 12 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 343/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 343/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 344/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 344/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 344/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 12 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 344/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 344/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 345/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 345/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 345/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 21 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 345/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 345/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 346/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 346/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 346/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 6 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 346/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 346/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 347/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 347/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 347/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 82 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 347/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 347/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 348/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 348/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 348/90GcsfRZ
>Jun 11 14:10:03 taft-01 last message repeated 94 times
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Client finishing recovery: 348/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Resync work completed by 2: 348/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 349/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Received recovery work from 2: 349/90GcsfRZ
>Jun 11 14:10:03 taft-01 kernel: dm-cmirror: Someone is already recovering region 349/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 20 times
>Jun 11 14:10:04 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 349/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 2 times
>Jun 11 14:10:04 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 349/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 349/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 349/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 350/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 350/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 350/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 34 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 350/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 350/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 351/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 351/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 351/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 16 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 351/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 351/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 352/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 352/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 352/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 19 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 352/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 352/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 353/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 353/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 353/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 19 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 353/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 353/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 354/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 354/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 354/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 20 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 354/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 354/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 355/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 355/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 355/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 34 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 355/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 355/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 356/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 356/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 356/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 52 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 356/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 356/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 357/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 357/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 357/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 17 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 357/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 357/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 358/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 358/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 358/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 12 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 358/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 358/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 4 times
>Jun 11 14:10:04 taft-01 qarshd[20192]: Talking to peer 10.15.80.47:51769
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 qarshd[20192]: Running cmdline: lvs -o copy_percent --noheadings helter_skelter/syncd_secondary_core_2legs_1
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 5 times
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got new connection on fd 5
>Jun 11 14:10:04 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:10:04 taft-01 clvmd[7681]: creating pipe, [10, 11]
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 2 times
>Jun 11 14:10:04 taft-01 clvmd[7681]: Creating pre&post thread
>Jun 11 14:10:04 taft-01 clvmd[7681]: Created pre&post thread, state = 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 2 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 359/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 2 times
>Jun 11 14:10:04 taft-01 clvmd[7681]: Sub thread ready for work.
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 3 times
>Jun 11 14:10:04 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98502dc0)
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: sync_lock: returning lkid 203b7
>Jun 11 14:10:04 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:04 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:10:04 taft-01 clvmd[7681]: distribute command: XID = 810
>Jun 11 14:10:04 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=810 >Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ >Jun 11 14:10:04 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ >Jun 11 14:10:04 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ >Jun 11 14:10:04 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ >Jun 11 14:10:04 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ >Jun 11 14:10:04 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ >Jun 11 14:10:04 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ >Jun 11 14:10:04 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 39 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 360/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 361/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Received recovery work from 2: 361/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 361/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 78 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Client finishing recovery: 361/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 2: 361/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 362/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 362/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 23 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 4: 362/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 363/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 363/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 48 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 4: 363/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 364/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 364/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 32 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 4: 364/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 365/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 365/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 19 times
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 4: 365/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 366/90GcsfRZ
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 366/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 3 times
>Jun 11 14:10:04 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:10:04 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:10:04 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:203b7
>Jun 11 14:10:04 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:10:04 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:04 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:10:04 taft-01 clvmd[7681]: distribute command: XID = 811
>Jun 11 14:10:04 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=811
>Jun 11 14:10:04 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:10:04 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0
>Jun 11 14:10:04 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 366/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 366/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 366/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 366/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 366/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 4: 366/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Read on local socket 5, len = 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: EOF on local socket: inprogress=0
>Jun 11 14:10:04 taft-01 qarshd[20192]: That's enough
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting for child thread
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 qarshd[20195]: Talking to peer 10.15.80.47:51770
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Subthread finished
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 qarshd[20195]: Running cmdline: lvs -o copy_percent --noheadings helter_skelter/syncd_secondary_core_2legs_2
>Jun 11 14:10:04 taft-01 clvmd[7681]: Joined child thread
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=811
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: process_work_item: free fd 5
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got new connection on fd 5
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: creating pipe, [10, 11]
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Creating pre&post thread
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Created pre&post thread, state = 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 4: 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: in sub thread: client = 0x2a98503020
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 367/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Sub thread ready for work.
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98503020)
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: sync_lock: returning lkid 102f6
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: distribute command: XID = 812
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98503020, msg=0x2a98503130, len=37, csid=(nil), xid=812
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98503020
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 4: 368/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 8 times
>Jun 11 14:10:04 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:10:04 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98503020)
>Jun 11 14:10:04 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:102f6
>Jun 11 14:10:04 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 last message repeated 5 times
>Jun 11 14:10:04 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: distribute command: XID = 813
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Resync work completed by 4: 369/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98503020, msg=0x2a98502850, len=37, csid=(nil), xid=813
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:10:04 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503130, msglen =37, client=0x2a98503020
>Jun 11 14:10:04 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:10:04 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 qarshd[20195]: That's enough
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Read on local socket 5, len = 0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: EOF on local socket: inprogress=0
>Jun 11 14:10:04 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:04 taft-01 clvmd[7681]: Waiting for child thread
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:05 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:05 taft-01 clvmd[7681]: Subthread finished
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:05 taft-01 clvmd[7681]: Joined child thread
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:05 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:05 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98503020, msg=(nil), len=0, csid=(nil), xid=813
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:05 taft-01 clvmd[7681]: process_work_item: free fd 5
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:05 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 370/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 274 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 370/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 371/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 371/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 39 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 371/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 372/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 372/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 29 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 372/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 373/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 373/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 46 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 373/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 374/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 374/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 44 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 374/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 375/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 375/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 60 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 375/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 375/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 376/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 376/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 44 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 376/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 377/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 377/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 69 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 377/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 378/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 378/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 29 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 378/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 379/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 379/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 38 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 379/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 380/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 380/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 28 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 380/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 381/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 381/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 16 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 381/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 382/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 382/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 14 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 382/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 383/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 383/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 15 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 383/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 384/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 384/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 16 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 384/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 384/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 384/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 385/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 385/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 18 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 385/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 386/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 386/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 22 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 386/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 387/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 387/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 66 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 387/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 387/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 388/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 388/90GcsfRZ
>Jun 11 14:10:05 taft-01 last message repeated 24 times
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Resync work completed by 4: 388/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 389/90GcsfRZ
>Jun 11 14:10:05 taft-01 kernel: dm-cmirror: Someone is already recovering region 389/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 35 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 389/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 390/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 390/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 16 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 390/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 391/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 391/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 10 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 391/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 392/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 392/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 14 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 392/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 392/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 393/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 393/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 23 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 393/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 394/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 394/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 15 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 394/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 395/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 395/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 23 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 395/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 396/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 396/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 17 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 396/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 397/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 397/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 5 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 397/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 398/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 398/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 16 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 4: 398/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 399/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 399/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 16 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 1: 399/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 400/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 400/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 400/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 14 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 400/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 400/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 401/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 401/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 401/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 25 times
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 401/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 401/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 402/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 402/90GcsfRZ
>Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 402/90GcsfRZ
>Jun 11 14:10:06 taft-01 last message repeated 17 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 402/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 402/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 403/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 403/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 403/90GcsfRZ >Jun 11 14:10:06 taft-01 last message repeated 29 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 403/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 403/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 404/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 404/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 404/90GcsfRZ >Jun 11 14:10:06 taft-01 last message repeated 38 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 404/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 404/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 405/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 405/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 405/90GcsfRZ >Jun 11 14:10:06 taft-01 last message repeated 20 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 405/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 405/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 406/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 406/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 406/90GcsfRZ 
>Jun 11 14:10:06 taft-01 last message repeated 13 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 406/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 406/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 407/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 407/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 407/90GcsfRZ >Jun 11 14:10:06 taft-01 last message repeated 38 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 407/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 407/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 408/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 408/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 408/90GcsfRZ >Jun 11 14:10:06 taft-01 last message repeated 26 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 408/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 408/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 409/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 409/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 409/90GcsfRZ >Jun 11 14:10:06 taft-01 last message repeated 36 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 409/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 409/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 410/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 410/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 410/90GcsfRZ 
>Jun 11 14:10:06 taft-01 last message repeated 77 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 410/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 410/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 411/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 411/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 411/90GcsfRZ >Jun 11 14:10:06 taft-01 last message repeated 28 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 411/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 411/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 412/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 412/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 412/90GcsfRZ >Jun 11 14:10:06 taft-01 last message repeated 85 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 412/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 412/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 413/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 413/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 413/90GcsfRZ >Jun 11 14:10:06 taft-01 last message repeated 111 times >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Client finishing recovery: 413/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Resync work completed by 2: 413/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 414/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Received recovery work from 2: 414/90GcsfRZ >Jun 11 14:10:06 taft-01 kernel: dm-cmirror: Someone is already recovering region 414/90GcsfRZ 
>Jun 11 14:10:07 taft-01 last message repeated 35 times >Jun 11 14:10:07 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 414/90GcsfRZ >Jun 11 14:10:07 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 414/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 28 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 414/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 414/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 415/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 415/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 415/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 24 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 415/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 415/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 416/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 416/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 416/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 24 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 416/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 416/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 417/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 417/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 417/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 14 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 417/90GcsfRZ >Jun 11 
14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 417/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 417/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 418/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 418/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 46 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 1: 418/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 419/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 419/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 54 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 1: 419/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 420/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 420/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 24 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 1: 420/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 421/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 421/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 35 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 1: 421/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 422/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 422/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 68 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 1: 422/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 423/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 423/90GcsfRZ >Jun 11 14:10:07 taft-01 
kernel: dm-cmirror: Someone is already recovering region 423/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 45 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 423/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 423/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 424/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 424/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 424/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 16 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 424/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 424/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 425/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 425/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 425/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 14 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 425/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 425/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 426/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 426/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 426/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 16 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 426/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 426/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 427/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 427/90GcsfRZ >Jun 11 14:10:07 taft-01 
kernel: dm-cmirror: Someone is already recovering region 427/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 21 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 427/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 427/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 428/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 428/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 428/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 15 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 428/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 428/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 429/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 429/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 429/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 15 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 429/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 429/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 430/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 430/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 430/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 57 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 430/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 430/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 431/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 431/90GcsfRZ >Jun 11 14:10:07 taft-01 
kernel: dm-cmirror: Someone is already recovering region 431/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 33 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 431/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 431/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 432/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 432/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 432/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 35 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 432/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 432/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 433/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 433/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 433/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 12 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 433/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 433/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 434/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 434/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 434/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 18 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 434/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 434/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 435/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 435/90GcsfRZ >Jun 11 14:10:07 taft-01 
kernel: dm-cmirror: Someone is already recovering region 435/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 16 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 435/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 435/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 436/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 436/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 436/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 17 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 436/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 436/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 437/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 437/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 437/90GcsfRZ >Jun 11 14:10:07 taft-01 last message repeated 9 times >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Client finishing recovery: 437/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Resync work completed by 2: 437/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 438/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Received recovery work from 2: 438/90GcsfRZ >Jun 11 14:10:07 taft-01 kernel: dm-cmirror: Someone is already recovering region 438/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 18 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Client finishing recovery: 438/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 2: 438/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 439/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 439/90GcsfRZ >Jun 11 14:10:08 
taft-01 last message repeated 35 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 439/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 440/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 440/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 19 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 440/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 441/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 441/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 13 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 441/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 442/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 442/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 6 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 442/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 443/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 443/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 13 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 443/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 444/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 444/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 11 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 444/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 445/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 445/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 17 times >Jun 11 
14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 445/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 446/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 446/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 18 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 446/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 447/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 447/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 53 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 447/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 448/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 448/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 21 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 1: 448/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 449/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 449/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 14 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 1: 449/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 450/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 450/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 36 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 1: 450/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 451/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 451/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 34 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work 
completed by 1: 451/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 452/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 452/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 40 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 1: 452/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 453/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 453/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 26 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 1: 453/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 454/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 454/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 38 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 1: 454/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 455/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 455/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 27 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 4: 455/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 456/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 456/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 39 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 4: 456/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 457/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 457/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 77 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 4: 457/90GcsfRZ >Jun 11 14:10:08 
taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 458/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 458/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 58 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 1: 458/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 459/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 459/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 33 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 459/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 460/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 460/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 12 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 3: 460/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 461/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 461/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 17 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 4: 461/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 462/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 462/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 9 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work completed by 4: 462/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 462/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 463/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 463/90GcsfRZ >Jun 11 14:10:08 taft-01 last message repeated 19 times >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Resync work 
completed by 4: 463/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 464/90GcsfRZ >Jun 11 14:10:08 taft-01 kernel: dm-cmirror: Someone is already recovering region 464/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 67 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 3: 464/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 465/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 465/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 52 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 3: 465/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 466/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 466/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 33 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 3: 466/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 467/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 467/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 43 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 3: 467/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 468/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 468/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 42 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 3: 468/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 469/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 469/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 10 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 3: 469/90GcsfRZ >Jun 11 14:10:09 
taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 470/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 470/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 11 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 3: 470/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 471/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 471/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 15 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 3: 471/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 472/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 472/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 17 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 4: 472/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 473/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 473/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 15 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 1: 473/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 474/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 474/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 19 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 1: 474/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 475/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 475/90GcsfRZ >Jun 11 14:10:09 taft-01 last message repeated 47 times >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 3: 475/90GcsfRZ >Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery 
work to 1: 476/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 476/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 35 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 1: 476/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 477/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 477/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 16 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 1: 477/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 478/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 478/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 11 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 1: 478/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 479/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 479/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 15 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 1: 479/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 480/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 480/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 18 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 1: 480/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 480/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 2 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 481/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 481/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 481/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 13 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 481/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 2: 481/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 482/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 482/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 482/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 10 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 482/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 2: 482/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 483/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 483/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 483/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 11 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 483/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 2: 483/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 484/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 484/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 484/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 484/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 484/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 2: 484/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 485/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 485/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 485/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 90 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 485/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 2: 485/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 486/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 486/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 486/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 41 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 486/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 2: 486/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 487/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 487/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 487/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 39 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 487/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 2: 487/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 488/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 488/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 488/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 39 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 488/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 2: 488/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 489/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 489/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 489/90GcsfRZ
>Jun 11 14:10:09 taft-01 last message repeated 52 times
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Client finishing recovery: 489/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Resync work completed by 2: 489/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 489/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 490/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Received recovery work from 2: 490/90GcsfRZ
>Jun 11 14:10:09 taft-01 kernel: dm-cmirror: Someone is already recovering region 490/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 14 times
>Jun 11 14:10:10 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 490/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 3 times
>Jun 11 14:10:10 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 490/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 20 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 490/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 490/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 491/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 491/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 491/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 67 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 491/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 491/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 492/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 492/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 21 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 4: 492/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 493/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 493/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 32 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 4: 493/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 494/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 494/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 31 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 4: 494/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 495/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 495/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 57 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 4: 495/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 496/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 496/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 40 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 4: 496/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 497/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 497/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 33 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 4: 497/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 498/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 498/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 41 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 4: 498/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 499/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 499/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 69 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 4: 499/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 500/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 500/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 500/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 38 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 500/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 500/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 501/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 501/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 501/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 17 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 501/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 501/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 502/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 502/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 502/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 19 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 502/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 502/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 503/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 503/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 503/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 36 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 503/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 503/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 504/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 504/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 504/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 84 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 504/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 504/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 505/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 505/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 505/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 23 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 505/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 505/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 506/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 506/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 506/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 40 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 506/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 506/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 507/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 507/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Someone is already recovering region 507/90GcsfRZ
>Jun 11 14:10:10 taft-01 last message repeated 46 times
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Client finishing recovery: 507/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Resync work completed by 2: 507/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 508/90GcsfRZ
>Jun 11 14:10:10 taft-01 kernel: dm-cmirror: Received recovery work from 2: 508/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 508/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 9 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 508/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 508/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 509/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 509/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 509/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 10 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 509/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 509/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 510/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 510/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 510/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 15 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 510/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 510/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 511/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 511/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 511/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 9 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 511/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 511/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 512/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 512/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 512/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 46 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 512/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 512/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 513/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 513/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 513/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 23 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 513/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 513/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 514/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 514/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 514/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 32 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 514/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 514/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 515/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 515/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 515/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 18 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 515/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 515/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 516/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 516/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 516/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 14 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 516/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 516/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 517/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 517/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 517/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 9 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 517/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 517/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 518/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 518/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 518/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 14 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 518/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 518/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 519/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 519/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 519/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 40 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 519/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 519/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 520/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 520/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 520/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 31 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 520/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 520/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 521/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 521/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 521/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 126 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 521/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 521/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 522/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 522/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 522/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 25 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 522/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 522/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 523/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 523/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 523/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 46 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 523/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 523/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 524/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 524/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 524/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 10 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 524/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 524/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 525/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 525/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 525/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 6 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 525/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 525/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 526/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 526/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 526/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 11 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 526/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 526/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 527/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 527/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 527/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 57 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 527/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 527/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 528/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 528/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 528/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 25 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 528/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 528/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 529/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 529/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 529/90GcsfRZ
>Jun 11 14:10:11 taft-01 last message repeated 53 times
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Client finishing recovery: 529/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Resync work completed by 2: 529/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 530/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Received recovery work from 2: 530/90GcsfRZ
>Jun 11 14:10:11 taft-01 kernel: dm-cmirror: Someone is already recovering region 530/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 72 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 530/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 2: 530/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 531/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Received recovery work from 2: 531/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 531/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 5 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 531/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 2: 531/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 532/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Received recovery work from 2: 532/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 532/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 6 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 532/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 2: 532/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 533/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Received recovery work from 2: 533/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 533/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 17 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 533/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 2: 533/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 534/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Received recovery work from 2: 534/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 534/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 14 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 534/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 2: 534/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 535/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Received recovery work from 2: 535/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 535/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 50 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 535/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 2: 535/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 536/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Received recovery work from 2: 536/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 536/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 18 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Client finishing recovery: 536/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 2: 536/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 537/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 537/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 22 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 537/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 538/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 538/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 5 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 538/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 539/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 539/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 9 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 539/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 540/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 540/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 29 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 540/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 541/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 541/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 8 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 541/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 542/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 542/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 13 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 542/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 543/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 543/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 24 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 543/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 544/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 544/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 13 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 544/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 545/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 545/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 12 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 545/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 546/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 546/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 25 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 546/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 547/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 547/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 14 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 547/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 548/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 548/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 16 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 548/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 549/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 549/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 38 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 549/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 550/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 550/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 27 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 550/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 551/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 551/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 47 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 551/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 552/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 552/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 20 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 552/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 553/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 553/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 10 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 553/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 554/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 554/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 49 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 554/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 555/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 555/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 24 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 555/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 556/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 556/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 14 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 556/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 557/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 557/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 26 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 557/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 558/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 558/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 32 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 558/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 559/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 559/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 13 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 559/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 560/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 560/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 18 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 560/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 561/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 561/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 6 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 561/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 562/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 562/90GcsfRZ
>Jun 11 14:10:12 taft-01 last message repeated 20 times
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Resync work completed by 4: 562/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 563/90GcsfRZ
>Jun 11 14:10:12 taft-01 kernel: dm-cmirror: Someone is already recovering region 563/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 29 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 4: 563/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 564/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 564/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 3 times
>Jun 11 14:10:13 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:10:13 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 564/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 38 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 4: 564/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 564/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 565/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 565/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 43 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 4: 565/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 566/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 566/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 54 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 4: 566/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 567/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 567/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 55 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 4: 567/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 568/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 568/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 31 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 4: 568/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 569/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 569/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 27 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 4: 569/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 570/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 570/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 18 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 4: 570/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 571/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 571/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 13 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 4: 571/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 572/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 572/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 12 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 3: 572/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 573/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 573/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 12 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 3: 573/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 574/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 574/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 36 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 3: 574/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 575/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 575/90GcsfRZ
>Jun 11 14:10:13 taft-01 last message repeated 14 times
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 3: 575/90GcsfRZ
>Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 576/90GcsfRZ
>Jun 11 14:10:13 
taft-01 kernel: dm-cmirror: Someone is already recovering region 576/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 21 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 3: 576/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 576/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 577/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 577/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 14 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 3: 577/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 578/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 578/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 14 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 1: 578/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 579/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 579/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 579/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 32 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 579/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 2: 579/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 580/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 580/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 580/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 50 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 580/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 2: 580/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: 
dm-cmirror: Assigning recovery work to 2: 581/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 581/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 581/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 16 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 581/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 2: 581/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 582/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 582/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 582/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 8 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 582/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 2: 582/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 583/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 583/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 583/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 17 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 583/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 2: 583/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 584/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 584/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 584/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 8 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 584/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 2: 584/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: 
dm-cmirror: Assigning recovery work to 2: 585/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 585/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 585/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 11 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 585/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 2: 585/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 586/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 586/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 586/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 37 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 586/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 2: 586/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 587/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 587/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 587/90GcsfRZ >Jun 11 14:10:13 taft-01 last message repeated 24 times >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Client finishing recovery: 587/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Resync work completed by 2: 587/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 588/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Received recovery work from 2: 588/90GcsfRZ >Jun 11 14:10:13 taft-01 kernel: dm-cmirror: Someone is already recovering region 588/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 31 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 588/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 588/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: 
dm-cmirror: Assigning recovery work to 2: 589/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 589/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 589/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 43 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 589/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 589/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 590/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 590/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 590/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 22 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 590/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 590/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 591/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 591/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 591/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 5 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 591/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 591/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 592/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 592/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 592/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 19 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 592/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 592/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: 
dm-cmirror: Assigning recovery work to 3: 593/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 593/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 28 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 3: 593/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 594/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 594/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 21 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 3: 594/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 595/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 595/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 9 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 3: 595/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 596/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 596/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 14 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 3: 596/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 597/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 597/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 15 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 3: 597/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 598/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 598/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 13 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 3: 598/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 
599/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 599/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 14 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 3: 599/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 600/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 600/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 29 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 3: 600/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 601/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 601/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 41 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 3: 601/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 602/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 602/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 31 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 4: 602/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 603/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 603/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 65 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 4: 603/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 604/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 604/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 604/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 64 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 604/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: 
dm-cmirror: Resync work completed by 2: 604/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 605/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 605/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 605/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 18 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 605/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 605/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 606/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 606/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 606/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 14 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 606/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 606/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 607/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 607/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 607/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 11 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 607/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 607/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 608/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 608/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 608/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 7 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 608/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: 
dm-cmirror: Resync work completed by 2: 608/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 609/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 609/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 609/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 9 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 609/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 609/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 610/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 610/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 610/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 14 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 610/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 610/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 611/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 611/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 611/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 32 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 611/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 611/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 612/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 612/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 612/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 22 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 612/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: 
dm-cmirror: Resync work completed by 2: 612/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 612/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 613/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 613/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 613/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 58 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 613/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 613/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 614/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 614/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 614/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 47 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 614/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 614/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 614/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 615/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 615/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 615/90GcsfRZ >Jun 11 14:10:14 taft-01 last message repeated 52 times >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Client finishing recovery: 615/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Resync work completed by 2: 615/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 616/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Received recovery work from 2: 616/90GcsfRZ >Jun 11 14:10:14 taft-01 kernel: dm-cmirror: Someone is already recovering region 
616/90GcsfRZ >Jun 11 14:10:15 taft-01 last message repeated 27 times >Jun 11 14:10:15 taft-01 qarshd[20200]: Talking to peer 10.15.80.47:51771 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 616/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 616/90GcsfRZ >Jun 11 14:10:15 taft-01 qarshd[20200]: Running cmdline: lvs -o copy_percent --noheadings helter_skelter/syncd_secondary_core_2legs_1 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 616/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 616/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 616/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 616/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 617/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:10:15 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:10:15 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:10:15 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 last message repeated 2 times >Jun 11 14:10:15 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: Sub thread ready for work. 
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98502dc0) >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: sync_lock: returning lkid 102c6 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 last message repeated 3 times >Jun 11 14:10:15 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: distribute command: XID = 814 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=814 >Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:10:15 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:10:15 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0 >Jun 11 14:10:15 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 clvmd[7681]: Send local reply >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 617/90GcsfRZ >Jun 11 14:10:15 taft-01 last message repeated 4 times >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 617/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 617/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 618/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 618/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 618/90GcsfRZ >Jun 11 14:10:15 taft-01 last message repeated 20 times >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 618/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 618/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 619/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 619/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 619/90GcsfRZ >Jun 11 14:10:15 taft-01 last message repeated 16 times >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 619/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 619/90GcsfRZ >Jun 11 
14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 620/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 620/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 620/90GcsfRZ >Jun 11 14:10:15 taft-01 last message repeated 15 times >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 620/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 620/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 621/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 621/90GcsfRZ >Jun 11 14:10:15 taft-01 last message repeated 13 times >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 4: 621/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 621/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 621/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 622/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 622/90GcsfRZ >Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ >Jun 11 14:10:15 taft-01 last message repeated 7 times >Jun 11 14:10:15 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:10:15 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:10:15 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0)
>Jun 11 14:10:15 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:102c6
>Jun 11 14:10:15 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:10:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:10:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:10:15 taft-01 clvmd[7681]: distribute command: XID = 815
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=815
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98502dc0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Read on local socket 5, len = 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: EOF on local socket: inprogress=0
>Jun 11 14:10:15 taft-01 qarshd[20200]: That's enough
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting for child thread
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 qarshd[20203]: Talking to peer 10.15.80.47:51772
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:10:15 taft-01 qarshd[20203]: Running cmdline: lvs -o copy_percent --noheadings helter_skelter/syncd_secondary_core_2legs_2
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Subthread finished
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Joined child thread
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=815
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 622/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: process_work_item: free fd 5
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got new connection on fd 5
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 3 times
>Jun 11 14:10:15 taft-01 clvmd[7681]: creating pipe, [10, 11]
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Creating pre&post thread
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Created pre&post thread, state = 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: in sub thread: client = 0x2a98503020
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Sub thread ready for work.
>Jun 11 14:10:15 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98503020)
>Jun 11 14:10:15 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: sync_lock: returning lkid 2018a
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: distribute command: XID = 816
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98503020, msg=0x2a98503130, len=37, csid=(nil), xid=816
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502b30, msglen =37, client=0x2a98503020
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 623/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 51 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 624/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 625/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 625/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 625/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 11 times
>Jun 11 14:10:15 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 625/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 2 times
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 625/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 625/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 625/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98503020)
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 625/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 626/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 626/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:2018a
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 626/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 626/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 626/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 626/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 626/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: distribute command: XID = 817
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 626/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98503020, msg=0x2a98502b30, len=37, csid=(nil), xid=817
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98503130, msglen =37, client=0x2a98503020
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 2 times
>Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98503020
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 627/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 qarshd[20203]: That's enough
>Jun 11 14:10:15 taft-01 clvmd[7681]: Read on local socket 5, len = 0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: EOF on local socket: inprogress=0
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Waiting for child thread
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Subthread finished
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: Joined child thread
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98503020, msg=(nil), len=0, csid=(nil), xid=817
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: process_work_item: free fd 5
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 9 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 628/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 629/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 629/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 629/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 40 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 629/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 629/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 630/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 630/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 630/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 14 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 630/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 630/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 631/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 631/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 631/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 11 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 631/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 631/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 632/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 632/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 632/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 18 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 632/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 632/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 633/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 633/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 633/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 45 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 633/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 633/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 634/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 634/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 634/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 57 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 634/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 634/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 635/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 635/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 635/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 62 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 635/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 635/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 636/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 636/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 636/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 45 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 636/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 636/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 637/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 637/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 637/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 12 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 637/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 637/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 638/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 638/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 638/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 638/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 13 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 638/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 638/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 639/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 639/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 639/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 10 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 639/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 639/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 640/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 640/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 640/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 9 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 640/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 640/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 641/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 641/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 641/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 57 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 641/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 641/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 642/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 642/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 642/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 38 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 642/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 642/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 643/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 643/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 643/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 8 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 643/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 643/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 644/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 644/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 644/90GcsfRZ
>Jun 11 14:10:15 taft-01 last message repeated 24 times
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Client finishing recovery: 644/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Resync work completed by 2: 644/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 645/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Received recovery work from 2: 645/90GcsfRZ
>Jun 11 14:10:15 taft-01 kernel: dm-cmirror: Someone is already recovering region 645/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 10 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 645/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 2: 645/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 646/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 646/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 3 times
>Jun 11 14:10:16 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:10:16 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 646/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 72 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 3: 646/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 647/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 647/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 17 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 3: 647/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 648/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 648/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 17 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 3: 648/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 649/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 649/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 22 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 649/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 650/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 650/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 41 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 650/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 651/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 651/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 28 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 651/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 652/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 652/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 31 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 652/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 653/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 653/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 39 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 653/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 654/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 654/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 60 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 654/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 655/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 655/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 68 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 655/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 656/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 656/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 32 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 656/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 657/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 657/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 28 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 657/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 658/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 658/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 79 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 4: 658/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 659/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 659/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Received recovery work from 2: 659/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 659/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 43 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 659/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 2: 659/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 660/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Received recovery work from 2: 660/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 660/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 14 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 660/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 2: 660/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 661/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Received recovery work from 2: 661/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 661/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 14 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 661/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 2: 661/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 662/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Received recovery work from 2: 662/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 662/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 34 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 662/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 2: 662/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 663/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Received recovery work from 2: 663/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 663/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 36 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 663/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 2: 663/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 664/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Received recovery work from 2: 664/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 664/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 8 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 664/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 2: 664/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 665/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Received recovery work from 2: 665/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 665/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 23 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Client finishing recovery: 665/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 2: 665/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 665/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 666/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 666/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 17 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 3: 666/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 667/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 667/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 22 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 3: 667/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 668/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 668/90GcsfRZ
>Jun 11 14:10:16 taft-01 last message repeated 26 times
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Resync work completed by 3: 668/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 669/90GcsfRZ
>Jun 11 14:10:16 taft-01 kernel: dm-cmirror: Someone is already recovering region 669/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 20 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 3: 669/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 670/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 670/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 50 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 3: 670/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 671/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 671/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 22 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 3: 671/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 672/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 672/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 46 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 1: 672/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 673/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 673/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 33 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 1: 673/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 1: 674/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 674/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 50 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 1: 674/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 675/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 675/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 20 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 3: 675/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 676/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 676/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 29 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 3: 676/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 677/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 677/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 64 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 3: 677/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 678/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 678/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 678/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 30 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 678/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 678/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 679/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 679/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 679/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 32 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 679/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 679/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 680/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 680/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 680/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 23 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 680/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 680/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 681/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 681/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 681/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 28 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 681/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 681/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 681/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 682/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 682/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 682/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 14 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 682/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 682/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 683/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 683/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 683/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 16 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 683/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 683/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 684/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 684/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 684/90GcsfRZ
>Jun 11 14:10:17 taft-01 last message repeated 56 times
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 684/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 684/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 685/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 685/90GcsfRZ
>Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 685/90GcsfRZ
>Jun 11 
14:10:17 taft-01 last message repeated 19 times >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 685/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 685/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 686/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 686/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 686/90GcsfRZ >Jun 11 14:10:17 taft-01 last message repeated 15 times >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 686/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 686/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 687/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 687/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 687/90GcsfRZ >Jun 11 14:10:17 taft-01 last message repeated 18 times >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Client finishing recovery: 687/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Resync work completed by 2: 687/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 688/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Received recovery work from 2: 688/90GcsfRZ >Jun 11 14:10:17 taft-01 kernel: dm-cmirror: Someone is already recovering region 688/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 224 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 688/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 688/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 689/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 689/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 689/90GcsfRZ >Jun 11 
14:10:18 taft-01 last message repeated 91 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 689/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 689/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 690/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 690/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 690/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 45 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 690/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 690/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 691/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 691/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 691/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 23 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 691/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 691/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 692/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 692/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 692/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 81 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 692/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 692/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 693/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 693/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 693/90GcsfRZ >Jun 11 
14:10:18 taft-01 last message repeated 232 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 693/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 693/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 694/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 694/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 694/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 39 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 694/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 694/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 695/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 695/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 695/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 43 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 695/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 695/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 696/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 696/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 696/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 54 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 696/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 696/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 697/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 697/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 697/90GcsfRZ >Jun 11 
14:10:18 taft-01 last message repeated 22 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 697/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 697/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 698/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 698/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 698/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 17 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 698/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 698/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 699/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 699/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 699/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 18 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 699/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 699/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 700/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 700/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 700/90GcsfRZ >Jun 11 14:10:18 taft-01 last message repeated 15 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 700/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 700/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 701/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 701/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 701/90GcsfRZ >Jun 11 
14:10:18 taft-01 last message repeated 18 times >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Client finishing recovery: 701/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Resync work completed by 2: 701/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 702/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Received recovery work from 2: 702/90GcsfRZ >Jun 11 14:10:18 taft-01 kernel: dm-cmirror: Someone is already recovering region 702/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 13 times >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 702/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 702/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 703/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 703/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 703/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 10 times >Jun 11 14:10:19 taft-01 qarshd[19968]: Nothing to do >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 703/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 703/90GcsfRZ >Jun 11 14:10:19 taft-01 qarshd[19969]: Nothing to do >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 703/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 11 times >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 703/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 703/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 704/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 704/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 704/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 28 times >Jun 
11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 704/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 704/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 705/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 705/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 705/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 68 times >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 705/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 705/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 706/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 706/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 706/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 103 times >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 706/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 706/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 707/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 707/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 707/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 314 times >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 707/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 707/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 708/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 708/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 708/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 20 times >Jun 
11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 708/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 708/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 709/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 709/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 709/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 19 times >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 709/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 709/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 710/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 710/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 710/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 27 times >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 710/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 710/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 711/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 711/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 711/90GcsfRZ >Jun 11 14:10:19 taft-01 last message repeated 36 times >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Client finishing recovery: 711/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Resync work completed by 2: 711/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 712/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Received recovery work from 2: 712/90GcsfRZ >Jun 11 14:10:19 taft-01 kernel: dm-cmirror: Someone is already recovering region 712/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 49 times >Jun 11 
14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 712/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 712/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 713/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 713/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 713/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 11 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 713/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 713/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 714/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 714/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 714/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 9 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 714/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 714/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 715/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 715/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 715/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 715/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 715/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 716/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 716/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 716/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 19 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing 
recovery: 716/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 716/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 717/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 717/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 717/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 20 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 717/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 717/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 718/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 718/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 718/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 50 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 718/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 718/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 719/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 719/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 719/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 88 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 719/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 719/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 720/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 720/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 720/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 128 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing 
recovery: 720/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 720/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 720/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 721/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 721/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 721/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 28 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 721/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 721/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 722/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 722/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 722/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 4 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 722/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 722/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 723/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 723/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 723/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 10 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 723/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 723/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 724/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 724/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 724/90GcsfRZ >Jun 11 14:10:20 taft-01 
last message repeated 24 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 724/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 724/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 725/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 725/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 725/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 10 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 725/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 725/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 726/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 726/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 726/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 111 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 726/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 726/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 727/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 727/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 727/90GcsfRZ >Jun 11 14:10:20 taft-01 last message repeated 101 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 727/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 727/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 728/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 728/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 728/90GcsfRZ >Jun 11 14:10:20 taft-01 
last message repeated 41 times >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Client finishing recovery: 728/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Resync work completed by 2: 728/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 729/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Received recovery work from 2: 729/90GcsfRZ >Jun 11 14:10:20 taft-01 kernel: dm-cmirror: Someone is already recovering region 729/90GcsfRZ >Jun 11 14:10:21 taft-01 last message repeated 58 times >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 729/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Resync work completed by 2: 729/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 730/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Received recovery work from 2: 730/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Someone is already recovering region 730/90GcsfRZ >Jun 11 14:10:21 taft-01 last message repeated 85 times >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 730/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Resync work completed by 2: 730/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 731/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Received recovery work from 2: 731/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Someone is already recovering region 731/90GcsfRZ >Jun 11 14:10:21 taft-01 last message repeated 124 times >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 731/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Resync work completed by 2: 731/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 732/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Received recovery work from 2: 732/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Someone is already recovering region 732/90GcsfRZ >Jun 11 14:10:21 taft-01 
last message repeated 389 times >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 732/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Resync work completed by 2: 732/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 733/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Received recovery work from 2: 733/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Someone is already recovering region 733/90GcsfRZ >Jun 11 14:10:21 taft-01 last message repeated 32 times >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 733/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Resync work completed by 2: 733/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 734/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Received recovery work from 2: 734/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Someone is already recovering region 734/90GcsfRZ >Jun 11 14:10:21 taft-01 last message repeated 40 times >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 734/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Resync work completed by 2: 734/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 735/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Received recovery work from 2: 735/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Someone is already recovering region 735/90GcsfRZ >Jun 11 14:10:21 taft-01 last message repeated 11 times >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 735/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Resync work completed by 2: 735/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 736/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Received recovery work from 2: 736/90GcsfRZ >Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Someone is already recovering region 736/90GcsfRZ >Jun 11 14:10:21 taft-01 
last message repeated 9 times
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 736/90GcsfRZ
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Resync work completed by 2: 736/90GcsfRZ
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 737/90GcsfRZ
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Received recovery work from 2: 737/90GcsfRZ
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Someone is already recovering region 737/90GcsfRZ
>Jun 11 14:10:21 taft-01 last message repeated 5 times
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Client finishing recovery: 737/90GcsfRZ
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Resync work completed by 2: 737/90GcsfRZ
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 738/90GcsfRZ
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Received recovery work from 2: 738/90GcsfRZ
>Jun 11 14:10:21 taft-01 kernel: dm-cmirror: Someone is already recovering region 738/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 22 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 738/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 738/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 739/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 739/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 739/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 9 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 739/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 739/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 740/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 740/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 740/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 2 times
>Jun 11 14:10:22 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:10:22 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 740/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 7 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 740/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 740/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 741/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 741/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 741/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 18 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 741/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 741/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 742/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 742/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 742/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 40 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 742/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 742/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 743/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 743/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 743/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 35 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 743/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 743/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 744/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 744/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 744/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 15 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 744/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 744/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 745/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 745/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 745/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 4 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 745/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 745/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 746/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 746/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 746/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 12 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 746/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 746/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 747/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 747/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 747/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 14 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 747/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 747/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 748/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 748/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 748/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 11 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 748/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 748/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 749/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 749/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 749/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 9 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 749/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 749/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 750/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 750/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 750/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 4 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 750/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 750/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 751/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 751/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 751/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 21 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 751/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 751/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 752/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 752/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 752/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 20 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 752/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 752/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 753/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Received recovery work from 2: 753/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 753/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 20 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Client finishing recovery: 753/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 2: 753/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 754/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 754/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 65 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 754/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 755/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 755/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 32 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 755/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 756/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 756/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 23 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 756/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 757/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 757/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 47 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 757/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 758/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 758/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 26 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 758/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 759/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 759/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 55 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 759/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 760/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 760/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 30 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 760/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 761/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 761/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 20 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 761/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 762/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 762/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 13 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 762/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 763/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 763/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 4 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 763/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 764/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 764/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 26 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 764/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 765/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 765/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 18 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 765/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 766/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 766/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 18 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 766/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 767/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 767/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 20 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 767/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 768/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 768/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 27 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 768/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 768/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 769/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 769/90GcsfRZ
>Jun 11 14:10:22 taft-01 last message repeated 20 times
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Resync work completed by 3: 769/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 770/90GcsfRZ
>Jun 11 14:10:22 taft-01 kernel: dm-cmirror: Someone is already recovering region 770/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 22 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 4: 770/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 771/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 771/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 16 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 4: 771/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 772/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 772/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 12 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 4: 772/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 773/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 773/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 10 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 773/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 774/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 774/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 10 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 774/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 775/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 775/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 22 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 775/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 776/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 776/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 20 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 776/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 777/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 777/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 32 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 777/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 778/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 778/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 38 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 778/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 779/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 779/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 40 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 779/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 779/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 779/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 780/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 780/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 44 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 780/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 781/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 781/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 26 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 781/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 4: 782/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 782/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 17 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 4: 782/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 782/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 783/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 783/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 9 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 783/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 784/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 784/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 5 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 784/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 785/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 785/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 785/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 786/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 786/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 15 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 786/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 3: 787/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 787/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 10 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 3: 787/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 787/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 788/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 788/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 788/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 22 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 788/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 788/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 789/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 789/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 789/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 7 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 789/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 789/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 790/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 790/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 790/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 13 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 790/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 790/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 791/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 791/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 791/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 14 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 791/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 791/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 792/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 792/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 792/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 23 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 792/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 792/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 793/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 793/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 793/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 16 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 793/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 793/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 794/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 794/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 794/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 15 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 794/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 794/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 795/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 795/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 795/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 26 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 795/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 795/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 796/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 796/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 796/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 25 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 796/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 796/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 797/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 797/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 797/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 44 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 797/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 797/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 798/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 798/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 798/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 87 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 798/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 798/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 799/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 799/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 799/90GcsfRZ
>Jun 11 14:10:23 taft-01 last message repeated 92 times
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Client finishing recovery: 799/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Resync work completed by 2: 799/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 800/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Received recovery work from 2: 800/90GcsfRZ
>Jun 11 14:10:23 taft-01 kernel: dm-cmirror: Someone is already recovering region 800/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 48 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 800/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 800/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 801/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 801/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 801/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 26 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 801/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 801/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 802/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 802/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 802/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 8 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 802/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 802/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 803/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 803/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 803/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 20 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 803/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 803/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 804/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 804/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 804/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 29 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 804/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 804/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 805/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 805/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 805/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 60 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 805/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 805/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 806/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 806/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 806/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 105 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 806/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 806/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 807/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 807/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 807/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 59 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 807/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 807/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 808/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 808/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 808/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 48 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 808/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 808/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 809/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 809/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 809/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 79 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 809/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 809/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 810/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 810/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 810/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 88 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 810/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 810/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 810/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 811/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 811/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 811/90GcsfRZ
>Jun 11 14:10:24 taft-01 last message repeated 176 times
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Client finishing recovery: 811/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Resync work completed by 2: 811/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 812/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Received recovery work from 2: 812/90GcsfRZ
>Jun 11 14:10:24 taft-01 kernel: dm-cmirror: Someone is already recovering region 812/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 39 times
>Jun 11 14:10:25 taft-01 qarshd[19968]: Nothing to do
>Jun 11 14:10:25 taft-01 qarshd[19969]: Nothing to do
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 812/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 10 times
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 812/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Resync work completed by 2: 812/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 813/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Received recovery work from 2: 813/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 813/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 50 times
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 813/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Resync work completed by 2: 813/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 814/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Received recovery work from 2: 814/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 814/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 58 times
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 814/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Resync work completed by 2: 814/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Received recovery work from 2: 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 113 times
>Jun 11 14:10:25 taft-01 qarshd[20208]: Talking to peer 10.15.80.47:51773
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 qarshd[20208]: Running cmdline: lvs -o copy_percent --noheadings helter_skelter/syncd_secondary_core_2legs_1
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 6 times
>Jun 11 14:10:25 taft-01 clvmd[7681]: Got new connection on fd 5
>Jun 11 14:10:25 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 2 times
>Jun 11 14:10:25 taft-01 clvmd[7681]: creating pipe, [10, 11]
>Jun 11 14:10:25 taft-01 clvmd[7681]: Creating pre&post thread
>Jun 11 14:10:25 taft-01 clvmd[7681]: Created pre&post thread, state = 0
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 2 times
>Jun 11 14:10:25 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0
>Jun 11 14:10:25 taft-01 clvmd[7681]: Sub thread ready for work.
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98502dc0)
>Jun 11 14:10:25 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: sync_lock: returning lkid 100b2
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: Writing status 0 down pipe 11
>Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting to do post command - state = 0
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:10:25 taft-01 clvmd[7681]: distribute command: XID = 818
>Jun 11 14:10:25 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=818
>Jun 11 14:10:25 taft-01 clvmd[7681]: process_work_item: local
>Jun 11 14:10:25 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: Got 1 replies, expecting: 1
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: LVM thread waiting for work
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: Got post command condition...
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting for next pre command
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0
>Jun 11 14:10:25 taft-01 clvmd[7681]: Send local reply
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 5 times
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Resync work completed by 2: 815/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 816/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Received recovery work from 2: 816/90GcsfRZ
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ
>Jun 11 14:10:25 taft-01 last message repeated 8 times
>Jun 11 14:10:25 taft-01 clvmd[7681]: Read on local socket 5, len = 37
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ
>Jun 11 14:10:25 taft-01 clvmd[7681]: Got pre command condition...
>Jun 11 14:10:25 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0) >Jun 11 14:10:25 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:100b2 >Jun 11 14:10:25 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:10:25 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:10:25 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:10:25 taft-01 clvmd[7681]: distribute command: XID = 819 >Jun 11 14:10:25 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=819 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Got post command 
condition... >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Send local reply >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 2 times >Jun 11 14:10:25 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 qarshd[20208]: That's enough >Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Got pre command condition... >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:10:25 taft-01 qarshd[20211]: Talking to peer 10.15.80.47:51774 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: ret == 0, errno = 9. 
removing client >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=819 >Jun 11 14:10:25 taft-01 qarshd[20211]: Running cmdline: lvs -o copy_percent --noheadings helter_skelter/syncd_secondary_core_2legs_2 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 8 times >Jun 11 14:10:25 taft-01 clvmd[7681]: Got new connection on fd 5 >Jun 11 14:10:25 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: creating pipe, [10, 11] >Jun 11 14:10:25 taft-01 clvmd[7681]: Creating pre&post thread >Jun 11 14:10:25 taft-01 clvmd[7681]: Created pre&post thread, state = 0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 2 times >Jun 11 14:10:25 taft-01 clvmd[7681]: in sub thread: client = 0x2a98502dc0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Sub thread ready for work. 
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 1 (client=0x2a98502dc0) >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: sync_lock: 'V_helter_skelter' mode:3 flags=0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: sync_lock: returning lkid 2037f >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 2 times >Jun 11 14:10:25 taft-01 clvmd[7681]: distribute command: XID = 820 >Jun 11 14:10:25 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. 
client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=820 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:10:25 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:10:25 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:10:25 taft-01 clvmd[7681]: Got post command condition... >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Send local reply >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 59 times >Jun 11 14:10:25 taft-01 clvmd[7681]: Read on local socket 5, len = 37 >Jun 11 14:10:25 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:10:25 taft-01 clvmd[7681]: doing PRE command LOCK_VG 'V_helter_skelter' at 6 (client=0x2a98502dc0) >Jun 11 14:10:25 taft-01 clvmd[7681]: sync_unlock: 'V_helter_skelter' lkid:2037f >Jun 11 14:10:25 taft-01 clvmd[7681]: Writing status 0 down pipe 11 >Jun 11 14:10:25 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:10:25 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:10:25 taft-01 clvmd[7681]: distribute command: XID = 821 >Jun 11 14:10:25 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=0x2a98503020, len=37, csid=(nil), xid=821 >Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting to do post command - state = 0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: process_work_item: local >Jun 11 14:10:25 taft-01 clvmd[7681]: process_local_command: LOCK_VG (0x33) msg=0x2a98502850, msglen =37, client=0x2a98502dc0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 3 times >Jun 11 14:10:25 taft-01 clvmd[7681]: Dropping metadata for VG helter_skelter >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Reply from node taft-01: 0 bytes >Jun 11 14:10:25 taft-01 clvmd[7681]: Got 1 replies, expecting: 1 >Jun 11 14:10:25 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:10:25 taft-01 clvmd[7681]: Got post command condition... 
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting for next pre command >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 4 times >Jun 11 14:10:25 taft-01 clvmd[7681]: read on PIPE 10: 4 bytes: status: 0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: background routine status was 0, sock_client=0x2a98502dc0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Send local reply >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Read on local socket 5, len = 0 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 5 times >Jun 11 14:10:25 taft-01 clvmd[7681]: EOF on local socket: inprogress=0 >Jun 11 14:10:25 taft-01 qarshd[20211]: That's enough >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Waiting for child thread >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Got pre command condition... 
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 6 times >Jun 11 14:10:25 taft-01 clvmd[7681]: Subthread finished >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: Joined child thread >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: ret == 0, errno = 9. removing client >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: add_to_lvmqueue: cmd=0x2a985057e0. client=0x2a98502dc0, msg=(nil), len=0, csid=(nil), xid=821 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: process_work_item: free fd 5 >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 clvmd[7681]: LVM thread waiting for work >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 816/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 144 times >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 816/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Resync work completed by 2: 816/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 817/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Received recovery work from 2: 817/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 817/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 82 times >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 817/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Resync work completed by 2: 817/90GcsfRZ 
>Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 818/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Received recovery work from 2: 818/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 818/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 28 times >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 818/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Resync work completed by 2: 818/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 819/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Received recovery work from 2: 819/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 819/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 24 times >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 819/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Resync work completed by 2: 819/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 820/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Received recovery work from 2: 820/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 820/90GcsfRZ >Jun 11 14:10:25 taft-01 last message repeated 11 times >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Client finishing recovery: 820/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Resync work completed by 2: 820/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 821/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Received recovery work from 2: 821/90GcsfRZ >Jun 11 14:10:25 taft-01 kernel: dm-cmirror: Someone is already recovering region 821/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 163 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 821/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 821/90GcsfRZ 
>Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 822/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Received recovery work from 2: 822/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 822/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 117 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 822/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 822/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 823/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Received recovery work from 2: 823/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 823/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 24 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 823/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 823/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 824/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Received recovery work from 2: 824/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 824/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 10 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 824/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 824/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 825/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Received recovery work from 2: 825/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 825/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 10 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 825/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 825/90GcsfRZ 
>Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 826/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Received recovery work from 2: 826/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 826/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 75 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 826/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 826/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 827/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Received recovery work from 2: 827/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 827/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 94 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 827/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 827/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 828/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Received recovery work from 2: 828/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 828/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 95 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 828/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 828/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 828/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 829/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Received recovery work from 2: 829/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 829/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 101 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 
829/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 829/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Assigning recovery work to 2: 830/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Received recovery work from 2: 830/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Someone is already recovering region 830/90GcsfRZ >Jun 11 14:10:26 taft-01 last message repeated 54 times >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Client finishing recovery: 830/90GcsfRZ >Jun 11 14:10:26 taft-01 kernel: dm-cmirror: Resync work completed by 2: 830/90GcsfRZ >Jun 11 14:13:50 taft-01 syslogd 1.4.1: restart. >Jun 11 14:13:50 taft-01 syslog: syslogd startup succeeded >Jun 11 14:13:50 taft-01 kernel: klogd 1.4.1, log source = /proc/kmsg started. >Jun 11 14:13:50 taft-01 kernel: Bootdata ok (command line is ro root=/dev/VolGroup00/LogVol00 console=tty0 console=ttyS0,115200n8) >Jun 11 14:13:50 taft-01 kernel: Linux version 2.6.9-71.ELsmp (brewbuilder@hs20-bc2-3.build.redhat.com) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-9)) #1 SMP Tue May 27 16:42:27 EDT 2008 >Jun 11 14:13:50 taft-01 kernel: BIOS-provided physical RAM map: >Jun 11 14:13:50 taft-01 kernel: BIOS-e820: 0000000000000000 - 00000000000a0000 (usable) >Jun 11 14:13:50 taft-01 kernel: BIOS-e820: 0000000000100000 - 00000000dffc0000 (usable) >Jun 11 14:13:50 taft-01 kernel: BIOS-e820: 00000000dffc0000 - 00000000dffcfc00 (ACPI data) >Jun 11 14:13:50 taft-01 kernel: BIOS-e820: 00000000dffcfc00 - 00000000dffff000 (reserved) >Jun 11 14:13:50 taft-01 kernel: BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved) >Jun 11 14:13:50 taft-01 kernel: BIOS-e820: 00000000fec00000 - 00000000fec90000 (reserved) >Jun 11 14:13:50 taft-01 kernel: BIOS-e820: 00000000fed00000 - 00000000fed00400 (reserved) >Jun 11 14:13:50 taft-01 kernel: BIOS-e820: 00000000fee00000 - 00000000fee10000 (reserved) >Jun 11 14:13:50 taft-01 syslog: klogd startup succeeded