Description of problem:
[root@taft-02 ~]# lvcreate -m 1 -n corelog --corelog -L 5G taft
  Error locking on node taft-03: device-mapper: reload ioctl failed: Invalid argument
  Error locking on node taft-02: device-mapper: reload ioctl failed: Invalid argument
  Error locking on node taft-04: device-mapper: reload ioctl failed: Invalid argument
  Failed to activate new LV.

device-mapper: dm-log-clustered: Server error while processing request [DM_CLOG_CTR]: -22
device-mapper: table: 253:5: mirror: Error creating mirror dirty log
device-mapper: ioctl: error adding target to table

Nov 15 10:18:22 taft-02 clogd[6456]: Received constructor request with bad data
Nov 15 10:18:22 taft-02 kernel: device-mapper: dm-log-clustered: Server error while processing request [DM_CLOG_2
Nov 15 10:18:22 taft-02 kernel: device-mapper: table: 253:5: mirror: Error creating mirror dirty log
Nov 15 10:18:22 taft-02 kernel: device-mapper: ioctl: error adding target to table
Nov 15 10:18:22 taft-02 [6568]: Monitoring mirror device taft-corelog for events

Version-Release number of selected component (if applicable):
2.6.18-53.el5
cmirror-1.1.0-7

How reproducible:
Every time
The second time I tried this (after a reboot), the command hung with a storm of console messages:

[root@taft-02 ~]# lvcreate -m 1 -n corelog --corelog -L 5G taft
[HANG]

[DM_CLOG_GET_RESYNC_WORK] to server: -3
device-mapper: dm-log-clustered: Unable to send cluster log request [DM_CLOG_GET_SYNC_COUNT] to server: -3
device-mapper: dm-log-clustered: Unable to send cluster log request [DM_CLOG_IS_REMOTE_RECOVERING] to server: -3
device-mapper: dm-log-clustered: Unable to send cluster log request [DM_CLOG_GET_RESYNC_WORK] to server: -3
device-mapper: dm-log-clustered: Unable to send cluster log request [DM_CLOG_GET_SYNC_COUNT] to server: -3
device-mapper: dm-log-clustered: Unable to send cluster log request [DM_CLOG_IS_REMOTE_RECOVERING] to server: -3

Nov 15 10:38:27 taft-02 kernel: device-mapper: dm-log-clustered: Unable to send cluster log request [DM_CLOG_GET3
For future reference, -3 means "No such process", meaning the server probably segfaulted in your test. That also means you should be able to grab a core file. Anyway, I believe I fixed this one while fixing 385001 (and others).
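As a quick sanity check of the two return codes seen above (a minimal sketch, assuming standard Linux errno numbering): the kernel reports errors as negated errno values, so -22 from the [DM_CLOG_CTR] request is -EINVAL and -3 from the later requests is -ESRCH.

```python
import errno
import os

# -22 from DM_CLOG_CTR: clogd rejected the constructor data as invalid
print(os.strerror(errno.EINVAL))  # Invalid argument

# -3 once clogd has died: no server process left to receive the request
print(os.strerror(errno.ESRCH))   # No such process
```

This is why the first failure surfaces as "reload ioctl failed: Invalid argument" while the later storm shows -3: after the crash there is simply no clogd process left to answer.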
Fix verified in cmirror-1.1.5-4.el5.
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHEA-2009-0158.html