Red Hat Bugzilla – Attachment 1451620 Details for Bug 1591498 – bcache io error messages during random commands
Description: verbose lvcreate attempt
Filename: lvcreate
MIME Type: text/plain
Creator: Corey Marthaler
Created: 2018-06-14 22:03:42 UTC
Size: 224.98 KB
>#lvmcmdline.c:2815 Parsing: lvcreate --type raid10 -m 1 -vvvv -n raid10_3 -L 100M test
>#lvmcmdline.c:1869 Recognised command lvcreate_raid_any (id 50 / enum 46).
>#config/config.c:1480 devices/global_filter not found in config: defaulting to global_filter = [ "a|.*/|" ]
>#libdm-config.c:1002 global/lvmetad_update_wait_time not found in config: defaulting to 10
>#daemon-client.c:33 /run/lvm/lvmetad.socket: Opening daemon socket to lvmetad for protocol lvmetad version 1.
>#daemon-client.c:52 Sending daemon lvmetad: hello
>#cache/lvmetad.c:143 Successfully connected to lvmetad on fd 3.
>#filters/filter-sysfs.c:327 Sysfs filter initialised.
>#filters/filter-internal.c:77 Internal filter initialised.
>#filters/filter-type.c:56 LVM type filter initialised.
>#filters/filter-usable.c:183 Usable device filter initialised.
>#filters/filter-mpath.c:291 mpath filter initialised.
>#filters/filter-partitioned.c:69 Partitioned filter initialised.
>#filters/filter-md.c:169 MD filter initialised.
>#filters/filter-composite.c:109 Composite filter initialised.
>#config/config.c:1480 devices/filter not found in config: defaulting to filter = [ "a|.*/|" ]
>#filters/filter-regex.c:216 Regex filter initialised.
>#filters/filter-usable.c:183 Usable device filter initialised.
>#filters/filter-composite.c:109 Composite filter initialised.
>#libdm-config.c:975 devices/cache not found in config: defaulting to /etc/lvm/cache/.cache
>#filters/filter-persistent.c:404 Persistent filter initialised.
>#filters/filter-composite.c:109 Composite filter initialised.
>#libdm-config.c:1074 metadata/record_lvs_history not found in config: defaulting to 0
>#lvmcmdline.c:2883 DEGRADED MODE. Incomplete RAID LVs will be processed.
>#lvmcmdline.c:2889 Processing command: lvcreate --type raid10 -m 1 -vvvv -n raid10_3 -L 100M test
>#lvmcmdline.c:2890 Command pid: 3187
>#lvmcmdline.c:2891 System ID:
>#lvmcmdline.c:2894 O_DIRECT will be used
>#locking/locking.c:129 File-based locking selected.
>#libdm-common.c:984 Preparing SELinux context for /run/lock/lvm to system_u:object_r:lvm_lock_t:s0.
>#libdm-common.c:987 Resetting SELinux context to default value.
>#cache/lvmetad.c:256 Sending lvmetad get_global_info
>#lvmcmdline.c:2987 WARNING: Not using lvmetad because a repair command was run.
>#daemon-client.c:179 Closing daemon socket (fd 3).
>#filters/filter-sysfs.c:327 Sysfs filter initialised.
>#filters/filter-internal.c:77 Internal filter initialised.
>#filters/filter-type.c:56 LVM type filter initialised.
>#filters/filter-usable.c:183 Usable device filter initialised.
>#filters/filter-mpath.c:291 mpath filter initialised.
>#filters/filter-partitioned.c:69 Partitioned filter initialised.
>#filters/filter-md.c:169 MD filter initialised.
>#filters/filter-composite.c:109 Composite filter initialised.
>#libdm-config.c:975 devices/cache not found in config: defaulting to /etc/lvm/cache/.cache
>#filters/filter-persistent.c:404 Persistent filter initialised.
>#activate/activate.c:535 Getting target version for raid
>#ioctl/libdm-iface.c:1857 dm version [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm versions [ opencount flush ] [16384] (*1)
>#activate/activate.c:572 Found raid target v1.13.2.
>#activate/activate.c:535 Getting target version for raid
>#ioctl/libdm-iface.c:1857 dm versions [ opencount flush ] [16384] (*1)
>#activate/activate.c:572 Found raid target v1.13.2.
>#libdm-config.c:1002 metadata/stripesize not found in config: defaulting to 64
>#toollib.c:1289 Using default stripesize 64.00 KiB.
>#libdm-config.c:1002 activation/mirror_region_size not found in config: defaulting to 2048
>#libdm-config.c:975 report/output_format not found in config: defaulting to basic
>#libdm-config.c:1074 log/report_command_log not found in config: defaulting to 0
>#toollib.c:2246 Processing each VG
>#cache/lvmcache.c:1453 Finding VG info
>#filters/filter-sysfs.c:327 Sysfs filter initialised.
>#filters/filter-internal.c:77 Internal filter initialised.
>#filters/filter-type.c:56 LVM type filter initialised.
>#filters/filter-usable.c:183 Usable device filter initialised.
>#filters/filter-mpath.c:291 mpath filter initialised.
>#filters/filter-partitioned.c:69 Partitioned filter initialised.
>#filters/filter-md.c:169 MD filter initialised.
>#filters/filter-composite.c:109 Composite filter initialised.
>#libdm-config.c:975 devices/cache not found in config: defaulting to /etc/lvm/cache/.cache
>#filters/filter-persistent.c:404 Persistent filter initialised.
>#label/label.c:686 Finding devices to scan
>#device/dev-cache.c:353 /dev/vda: Added to device cache (252:0)
>#device/dev-cache.c:349 /dev/disk/by-path/pci-0000:00:04.0: Aliased to /dev/vda in device cache (252:0)
>#device/dev-cache.c:349 /dev/disk/by-path/virtio-pci-0000:00:04.0: Aliased to /dev/vda in device cache (252:0)
>#device/dev-cache.c:353 /dev/vda1: Added to device cache (252:1)
>#device/dev-cache.c:349 /dev/disk/by-path/pci-0000:00:04.0-part1: Aliased to /dev/vda1 in device cache (252:1)
>#device/dev-cache.c:349 /dev/disk/by-path/virtio-pci-0000:00:04.0-part1: Aliased to /dev/vda1 in device cache (252:1)
>#device/dev-cache.c:349 /dev/disk/by-uuid/8bc77211-2053-487d-b991-423dbe3f8977: Aliased to /dev/vda1 in device cache (252:1)
>#device/dev-cache.c:353 /dev/vda2: Added to device cache (252:2)
>#device/dev-cache.c:349 /dev/disk/by-id/lvm-pv-uuid-tZWkpV-9qle-izId-POoz-cCaB-OQGu-b2M5bU: Aliased to /dev/vda2 in device cache (252:2)
>#device/dev-cache.c:349 /dev/disk/by-path/pci-0000:00:04.0-part2: Aliased to /dev/vda2 in device cache (252:2)
>#device/dev-cache.c:349 /dev/disk/by-path/virtio-pci-0000:00:04.0-part2: Aliased to /dev/vda2 in device cache (252:2)
>#device/dev-cache.c:353 /dev/sda: Added to device cache (8:0)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405f0678ceee8fe44d68ced2bb2e: Aliased to /dev/sda in device cache (8:0)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405f0678ceee8fe44d68ced2bb2e: Aliased to /dev/sda in device cache (8:0)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-001-path-001-lun-0: Aliased to /dev/sda in device cache (8:0)
>#device/dev-cache.c:353 /dev/sda1: Added to device cache (8:1)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405f0678ceee8fe44d68ced2bb2e-part1: Aliased to /dev/sda1 in device cache (8:1)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405f0678ceee8fe44d68ced2bb2e-part1: Aliased to /dev/sda1 in device cache (8:1)
>#device/dev-cache.c:349 /dev/disk/by-partuuid/5bc79665-9049-4320-b51e-6ef93a3cc05d: Aliased to /dev/sda1 in device cache (8:1)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-001-path-001-lun-0-part1: Aliased to /dev/sda1 in device cache (8:1)
>#device/dev-cache.c:353 /dev/sdb: Added to device cache (8:16)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405da0aff18428f497988e2eb266: Aliased to /dev/sdb in device cache (8:16)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405da0aff18428f497988e2eb266: Aliased to /dev/sdb in device cache (8:16)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-002-path-001-lun-0: Aliased to /dev/sdb in device cache (8:16)
>#device/dev-cache.c:353 /dev/sdb1: Added to device cache (8:17)
>#device/dev-cache.c:349 /dev/disk/by-id/lvm-pv-uuid-9UmA9b-kfg9-flFj-3jim-cn07-QIMa-5vuUkN: Aliased to /dev/sdb1 in device cache (8:17)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405da0aff18428f497988e2eb266-part1: Aliased to /dev/sdb1 in device cache (8:17)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405da0aff18428f497988e2eb266-part1: Aliased to /dev/sdb1 in device cache (8:17)
>#device/dev-cache.c:349 /dev/disk/by-partuuid/dc149f78-91b2-495a-8d64-42ad3ae7a252: Aliased to /dev/sdb1 in device cache (8:17)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-002-path-001-lun-0-part1: Aliased to /dev/sdb1 in device cache (8:17)
>#device/dev-cache.c:353 /dev/sdc: Added to device cache (8:32)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405387a78d944c14c488f013b76b: Aliased to /dev/sdc in device cache (8:32)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405387a78d944c14c488f013b76b: Aliased to /dev/sdc in device cache (8:32)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-003-path-001-lun-0: Aliased to /dev/sdc in device cache (8:32)
>#device/dev-cache.c:353 /dev/sdc1: Added to device cache (8:33)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405387a78d944c14c488f013b76b-part1: Aliased to /dev/sdc1 in device cache (8:33)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405387a78d944c14c488f013b76b-part1: Aliased to /dev/sdc1 in device cache (8:33)
>#device/dev-cache.c:349 /dev/disk/by-partuuid/764be2b1-424c-45c2-90e8-44d7126fe56f: Aliased to /dev/sdc1 in device cache (8:33)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-003-path-001-lun-0-part1: Aliased to /dev/sdc1 in device cache (8:33)
>#device/dev-cache.c:353 /dev/sdd: Added to device cache (8:48)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-3600140552b0442badb5452d9bd925bb8: Aliased to /dev/sdd in device cache (8:48)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x600140552b0442badb5452d9bd925bb8: Aliased to /dev/sdd in device cache (8:48)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-004-path-001-lun-0: Aliased to /dev/sdd in device cache (8:48)
>#device/dev-cache.c:353 /dev/sdd1: Added to device cache (8:49)
>#device/dev-cache.c:349 /dev/disk/by-id/lvm-pv-uuid-Fn1Qic-WelQ-QQBt-JsoK-6WCB-JyxL-QfngwF: Aliased to /dev/sdd1 in device cache (8:49)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-3600140552b0442badb5452d9bd925bb8-part1: Aliased to /dev/sdd1 in device cache (8:49)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x600140552b0442badb5452d9bd925bb8-part1: Aliased to /dev/sdd1 in device cache (8:49)
>#device/dev-cache.c:349 /dev/disk/by-partuuid/317d197e-00ce-4172-83d4-684706f88d20: Aliased to /dev/sdd1 in device cache (8:49)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-004-path-001-lun-0-part1: Aliased to /dev/sdd1 in device cache (8:49)
>#device/dev-cache.c:353 /dev/sde: Added to device cache (8:64)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-360014050bc3ef7e36594df7b32ab79bd: Aliased to /dev/sde in device cache (8:64)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x60014050bc3ef7e36594df7b32ab79bd: Aliased to /dev/sde in device cache (8:64)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-005-path-001-lun-0: Aliased to /dev/sde in device cache (8:64)
>#device/dev-cache.c:353 /dev/sde1: Added to device cache (8:65)
>#device/dev-cache.c:349 /dev/disk/by-id/lvm-pv-uuid-oeJ9d1-BOcd-sjvb-9eWl-8YC4-cM1g-9IOE9B: Aliased to /dev/sde1 in device cache (8:65)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-360014050bc3ef7e36594df7b32ab79bd-part1: Aliased to /dev/sde1 in device cache (8:65)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x60014050bc3ef7e36594df7b32ab79bd-part1: Aliased to /dev/sde1 in device cache (8:65)
>#device/dev-cache.c:349 /dev/disk/by-partuuid/799af81d-e019-433f-ab36-64b634d10078: Aliased to /dev/sde1 in device cache (8:65)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-005-path-001-lun-0-part1: Aliased to /dev/sde1 in device cache (8:65)
>#device/dev-cache.c:353 /dev/sdf: Added to device cache (8:80)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405f7aafcb915d345cc8eb8eb65c: Aliased to /dev/sdf in device cache (8:80)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405f7aafcb915d345cc8eb8eb65c: Aliased to /dev/sdf in device cache (8:80)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-006-path-001-lun-0: Aliased to /dev/sdf in device cache (8:80)
>#device/dev-cache.c:353 /dev/sdf1: Added to device cache (8:81)
>#device/dev-cache.c:349 /dev/disk/by-id/lvm-pv-uuid-TkzTqM-LbTN-3NcN-fKhY-nVzd-KTTQ-nUJt8E: Aliased to /dev/sdf1 in device cache (8:81)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405f7aafcb915d345cc8eb8eb65c-part1: Aliased to /dev/sdf1 in device cache (8:81)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405f7aafcb915d345cc8eb8eb65c-part1: Aliased to /dev/sdf1 in device cache (8:81)
>#device/dev-cache.c:349 /dev/disk/by-partuuid/f3f36f27-1785-4cd3-9649-fb7cde83cd47: Aliased to /dev/sdf1 in device cache (8:81)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-006-path-001-lun-0-part1: Aliased to /dev/sdf1 in device cache (8:81)
>#device/dev-cache.c:353 /dev/sdg: Added to device cache (8:96)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-360014056ad6094b53d64ea7a7534c20e: Aliased to /dev/sdg in device cache (8:96)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x60014056ad6094b53d64ea7a7534c20e: Aliased to /dev/sdg in device cache (8:96)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-007-path-001-lun-0: Aliased to /dev/sdg in device cache (8:96)
>#device/dev-cache.c:353 /dev/sdg1: Added to device cache (8:97)
>#device/dev-cache.c:349 /dev/disk/by-id/lvm-pv-uuid-bYq8ib-KTwh-A6gX-Obcf-lqVy-MFrw-UwVc2W: Aliased to /dev/sdg1 in device cache (8:97)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-360014056ad6094b53d64ea7a7534c20e-part1: Aliased to /dev/sdg1 in device cache (8:97)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x60014056ad6094b53d64ea7a7534c20e-part1: Aliased to /dev/sdg1 in device cache (8:97)
>#device/dev-cache.c:349 /dev/disk/by-partuuid/5cfb77c9-cd9e-43e9-a3d4-2239c1747090: Aliased to /dev/sdg1 in device cache (8:97)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-007-path-001-lun-0-part1: Aliased to /dev/sdg1 in device cache (8:97)
>#device/dev-cache.c:353 /dev/sdh: Added to device cache (8:112)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405f1e179731a1b43509c75e3b01: Aliased to /dev/sdh in device cache (8:112)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405f1e179731a1b43509c75e3b01: Aliased to /dev/sdh in device cache (8:112)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-008-path-001-lun-0: Aliased to /dev/sdh in device cache (8:112)
>#device/dev-cache.c:353 /dev/sdh1: Added to device cache (8:113)
>#device/dev-cache.c:349 /dev/disk/by-id/lvm-pv-uuid-38Zj4a-gqpY-VYq8-IUUb-OJ6L-eqtO-hr4hWn: Aliased to /dev/sdh1 in device cache (8:113)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-36001405f1e179731a1b43509c75e3b01-part1: Aliased to /dev/sdh1 in device cache (8:113)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x6001405f1e179731a1b43509c75e3b01-part1: Aliased to /dev/sdh1 in device cache (8:113)
>#device/dev-cache.c:349 /dev/disk/by-partuuid/f933ddc0-5a93-4372-aa25-b7968212e271: Aliased to /dev/sdh1 in device cache (8:113)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-008-path-001-lun-0-part1: Aliased to /dev/sdh1 in device cache (8:113)
>#device/dev-cache.c:353 /dev/sdi: Added to device cache (8:128)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-360014057556288ee2e24d42b1ed9c39b: Aliased to /dev/sdi in device cache (8:128)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x60014057556288ee2e24d42b1ed9c39b: Aliased to /dev/sdi in device cache (8:128)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-009-path-001-lun-0: Aliased to /dev/sdi in device cache (8:128)
>#device/dev-cache.c:353 /dev/sdi1: Added to device cache (8:129)
>#device/dev-cache.c:349 /dev/disk/by-id/lvm-pv-uuid-Ml7Ggg-bqK5-7nsx-v3md-7Mg4-0lFL-VTCtPO: Aliased to /dev/sdi1 in device cache (8:129)
>#device/dev-cache.c:349 /dev/disk/by-id/scsi-360014057556288ee2e24d42b1ed9c39b-part1: Aliased to /dev/sdi1 in device cache (8:129)
>#device/dev-cache.c:349 /dev/disk/by-id/wwn-0x60014057556288ee2e24d42b1ed9c39b-part1: Aliased to /dev/sdi1 in device cache (8:129)
>#device/dev-cache.c:349 /dev/disk/by-partuuid/9388c5c3-4d12-44aa-b955-9e7fe10a7934: Aliased to /dev/sdi1 in device cache (8:129)
>#device/dev-cache.c:349 /dev/disk/by-path/ip-10.15.104.25:3260-iscsi-iqn.2013-05.com.redhat.beaker.cluster-qe:cluster88567.disk-009-path-001-lun-0-part1: Aliased to /dev/sdi1 in device cache (8:129)
>#device/dev-cache.c:353 /dev/dm-0: Added to device cache (253:0)
>#device/dev-cache.c:349 /dev/mapper/rhel_host--087-pool00_tmeta: Aliased to /dev/dm-0 in device cache (preferred name) (253:0)
>#device/dev-cache.c:353 /dev/dm-1: Added to device cache (253:1)
>#device/dev-cache.c:349 /dev/mapper/rhel_host--087-pool00_tdata: Aliased to /dev/dm-1 in device cache (preferred name) (253:1)
>#device/dev-cache.c:353 /dev/dm-10: Added to device cache (253:10)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-name-test-raid1: Aliased to /dev/dm-10 in device cache (preferred name) (253:10)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-uuid-LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvJ3p4GeF3qSSvinboNG2moteyBObfeo2g: Aliased to /dev/disk/by-id/dm-name-test-raid1 in device cache (253:10)
>#device/dev-cache.c:349 /dev/mapper/test-raid1: Aliased to /dev/disk/by-id/dm-name-test-raid1 in device cache (preferred name) (253:10)
>#device/dev-cache.c:349 /dev/test/raid1: Aliased to /dev/mapper/test-raid1 in device cache (preferred name) (253:10)
>#device/dev-cache.c:353 /dev/dm-11: Added to device cache (253:11)
>#device/dev-cache.c:349 /dev/mapper/test-POOL_rmeta_0: Aliased to /dev/dm-11 in device cache (preferred name) (253:11)
>#device/dev-cache.c:353 /dev/dm-12: Added to device cache (253:12)
>#device/dev-cache.c:349 /dev/mapper/test-POOL_rimage_0: Aliased to /dev/dm-12 in device cache (preferred name) (253:12)
>#device/dev-cache.c:353 /dev/dm-13: Added to device cache (253:13)
>#device/dev-cache.c:349 /dev/mapper/test-POOL_rmeta_1: Aliased to /dev/dm-13 in device cache (preferred name) (253:13)
>#device/dev-cache.c:353 /dev/dm-14: Added to device cache (253:14)
>#device/dev-cache.c:349 /dev/mapper/test-POOL_rimage_1: Aliased to /dev/dm-14 in device cache (preferred name) (253:14)
>#device/dev-cache.c:353 /dev/dm-15: Added to device cache (253:15)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-name-test-POOL: Aliased to /dev/dm-15 in device cache (preferred name) (253:15)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-uuid-LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv2011hfWfHaTzv53dG1McnOkgp1iG1i3d: Aliased to /dev/disk/by-id/dm-name-test-POOL in device cache (253:15)
>#device/dev-cache.c:349 /dev/mapper/test-POOL: Aliased to /dev/disk/by-id/dm-name-test-POOL in device cache (preferred name) (253:15)
>#device/dev-cache.c:349 /dev/test/POOL: Aliased to /dev/mapper/test-POOL in device cache (preferred name) (253:15)
>#device/dev-cache.c:353 /dev/dm-16: Added to device cache (253:16)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_rmeta_0: Aliased to /dev/dm-16 in device cache (preferred name) (253:16)
>#device/dev-cache.c:353 /dev/dm-17: Added to device cache (253:17)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_rimage_0: Aliased to /dev/dm-17 in device cache (preferred name) (253:17)
>#device/dev-cache.c:353 /dev/dm-18: Added to device cache (253:18)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_rmeta_1: Aliased to /dev/dm-18 in device cache (preferred name) (253:18)
>#device/dev-cache.c:353 /dev/dm-19: Added to device cache (253:19)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_rimage_1: Aliased to /dev/dm-19 in device cache (preferred name) (253:19)
>#device/dev-cache.c:353 /dev/dm-2: Added to device cache (253:2)
>#device/dev-cache.c:349 /dev/mapper/rhel_host--087-pool00-tpool: Aliased to /dev/dm-2 in device cache (preferred name) (253:2)
>#device/dev-cache.c:353 /dev/dm-20: Added to device cache (253:20)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_rmeta_2: Aliased to /dev/dm-20 in device cache (preferred name) (253:20)
>#device/dev-cache.c:353 /dev/dm-21: Added to device cache (253:21)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_rimage_2: Aliased to /dev/dm-21 in device cache (preferred name) (253:21)
>#device/dev-cache.c:353 /dev/dm-22: Added to device cache (253:22)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_rmeta_3: Aliased to /dev/dm-22 in device cache (preferred name) (253:22)
>#device/dev-cache.c:353 /dev/dm-23: Added to device cache (253:23)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_rimage_3: Aliased to /dev/dm-23 in device cache (preferred name) (253:23)
>#device/dev-cache.c:353 /dev/dm-24: Added to device cache (253:24)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-name-test-raid10: Aliased to /dev/dm-24 in device cache (preferred name) (253:24)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-uuid-LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvbhvEwOP3o2zT2IqgS5dVJhfieDYr3eyW: Aliased to /dev/disk/by-id/dm-name-test-raid10 in device cache (253:24)
>#device/dev-cache.c:349 /dev/mapper/test-raid10: Aliased to /dev/disk/by-id/dm-name-test-raid10 in device cache (preferred name) (253:24)
>#device/dev-cache.c:349 /dev/test/raid10: Aliased to /dev/mapper/test-raid10 in device cache (preferred name) (253:24)
>#device/dev-cache.c:353 /dev/dm-25: Added to device cache (253:25)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_2_rmeta_0: Aliased to /dev/dm-25 in device cache (preferred name) (253:25)
>#device/dev-cache.c:353 /dev/dm-26: Added to device cache (253:26)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_2_rimage_0: Aliased to /dev/dm-26 in device cache (preferred name) (253:26)
>#device/dev-cache.c:353 /dev/dm-27: Added to device cache (253:27)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_2_rmeta_1: Aliased to /dev/dm-27 in device cache (preferred name) (253:27)
>#device/dev-cache.c:353 /dev/dm-28: Added to device cache (253:28)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_2_rimage_1: Aliased to /dev/dm-28 in device cache (preferred name) (253:28)
>#device/dev-cache.c:353 /dev/dm-29: Added to device cache (253:29)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_2_rmeta_2: Aliased to /dev/dm-29 in device cache (preferred name) (253:29)
>#device/dev-cache.c:353 /dev/dm-3: Added to device cache (253:3)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-name-rhel_host--087-root: Aliased to /dev/dm-3 in device cache (preferred name) (253:3)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-uuid-LVM-b3TswpE4nQBq04b48nZAy61gkbvnYngKmOcYaehefpLASIvAy0KKSw0L3Eo0mAS8: Aliased to /dev/disk/by-id/dm-name-rhel_host--087-root in device cache (253:3)
>#device/dev-cache.c:349 /dev/disk/by-uuid/19e564c2-e49a-4155-b703-58d6b45ed627: Aliased to /dev/disk/by-id/dm-name-rhel_host--087-root in device cache (253:3)
>#device/dev-cache.c:349 /dev/mapper/rhel_host--087-root: Aliased to /dev/disk/by-id/dm-name-rhel_host--087-root in device cache (preferred name) (253:3)
>#device/dev-cache.c:349 /dev/rhel_host-087/root: Aliased to /dev/mapper/rhel_host--087-root in device cache (preferred name) (253:3)
>#device/dev-cache.c:353 /dev/dm-30: Added to device cache (253:30)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_2_rimage_2: Aliased to /dev/dm-30 in device cache (preferred name) (253:30)
>#device/dev-cache.c:353 /dev/dm-31: Added to device cache (253:31)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_2_rmeta_3: Aliased to /dev/dm-31 in device cache (preferred name) (253:31)
>#device/dev-cache.c:353 /dev/dm-32: Added to device cache (253:32)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_2_rimage_3: Aliased to /dev/dm-32 in device cache (preferred name) (253:32)
>#device/dev-cache.c:353 /dev/dm-33: Added to device cache (253:33)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-name-test-raid10_2: Aliased to /dev/dm-33 in device cache (preferred name) (253:33)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-uuid-LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvBkkksvMLuk9dUBi9HVln9mGH6nRGx79x: Aliased to /dev/disk/by-id/dm-name-test-raid10_2 in device cache (253:33)
>#device/dev-cache.c:349 /dev/mapper/test-raid10_2: Aliased to /dev/disk/by-id/dm-name-test-raid10_2 in device cache (preferred name) (253:33)
>#device/dev-cache.c:349 /dev/test/raid10_2: Aliased to /dev/mapper/test-raid10_2 in device cache (preferred name) (253:33)
>#device/dev-cache.c:353 /dev/dm-4: Added to device cache (253:4)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-name-rhel_host--087-swap: Aliased to /dev/dm-4 in device cache (preferred name) (253:4)
>#device/dev-cache.c:349 /dev/disk/by-id/dm-uuid-LVM-b3TswpE4nQBq04b48nZAy61gkbvnYngKtGd9MILeFkU0D7rdWqFniuldOma06bwh: Aliased to /dev/disk/by-id/dm-name-rhel_host--087-swap in device cache (253:4)
>#device/dev-cache.c:349 /dev/disk/by-uuid/87eee5e5-328d-4a51-8dcb-47804482e02e: Aliased to /dev/disk/by-id/dm-name-rhel_host--087-swap in device cache (253:4)
>#device/dev-cache.c:349 /dev/mapper/rhel_host--087-swap: Aliased to /dev/disk/by-id/dm-name-rhel_host--087-swap in device cache (preferred name) (253:4)
>#device/dev-cache.c:349 /dev/rhel_host-087/swap: Aliased to /dev/mapper/rhel_host--087-swap in device cache (preferred name) (253:4)
>#device/dev-cache.c:353 /dev/dm-5: Added to device cache (253:5)
>#device/dev-cache.c:349 /dev/mapper/rhel_host--087-pool00: Aliased to /dev/dm-5 in device cache (preferred name) (253:5)
>#device/dev-cache.c:353 /dev/dm-6: Added to device cache (253:6)
>#device/dev-cache.c:349 /dev/mapper/test-raid1_rmeta_0: Aliased to /dev/dm-6 in device cache (preferred name) (253:6)
>#device/dev-cache.c:353 /dev/dm-7: Added to device cache (253:7)
>#device/dev-cache.c:349 /dev/mapper/test-raid1_rimage_0: Aliased to /dev/dm-7 in device cache (preferred name) (253:7)
>#device/dev-cache.c:353 /dev/dm-8: Added to device cache (253:8)
>#device/dev-cache.c:349 /dev/mapper/test-raid1_rmeta_1: Aliased to /dev/dm-8 in device cache (preferred name) (253:8)
>#device/dev-cache.c:353 /dev/dm-9: Added to device cache (253:9)
>#device/dev-cache.c:349 /dev/mapper/test-raid1_rimage_1: Aliased to /dev/dm-9 in device cache (preferred name) (253:9)
>#device/dev-io.c:601 Opened /dev/sda RO O_DIRECT
>#device/dev-io.c:359 /dev/sda: size is 52428800 sectors
>#device/dev-io.c:650 Closed /dev/sda
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sda
>#filters/filter-md.c:99 filter md deferred /dev/sda
>#filters/filter-persistent.c:326 filter cache deferred /dev/sda
>#device/dev-io.c:601 Opened /dev/vda RO O_DIRECT
>#device/dev-io.c:359 /dev/vda: size is 16777216 sectors
>#device/dev-io.c:650 Closed /dev/vda
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/vda
>#filters/filter-md.c:99 filter md deferred /dev/vda
>#filters/filter-persistent.c:326 filter cache deferred /dev/vda
>#ioctl/libdm-iface.c:1857 dm status (253:0) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:626 /dev/mapper/rhel_host--087-pool00_tmeta: Reserved uuid LVM-b3TswpE4nQBq04b48nZAy61gkbvnYngKKbexoEI9ZlP7yBJ57P2tYajQproZeF4i-tmeta on internal LV device rhel_host--087-pool00_tmeta not usable.
>#filters/filter-usable.c:135 /dev/mapper/rhel_host--087-pool00_tmeta: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/rhel_host--087-pool00_tmeta
>#device/dev-io.c:601 Opened /dev/sda1 RO O_DIRECT
>#device/dev-io.c:359 /dev/sda1: size is 52428720 sectors
>#device/dev-io.c:650 Closed /dev/sda1
>#filters/filter-mpath.c:196 /dev/sda1: Device is a partition, using primary device sda for mpath component detection
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sda1
>#filters/filter-md.c:99 filter md deferred /dev/sda1
>#filters/filter-persistent.c:326 filter cache deferred /dev/sda1
>#device/dev-io.c:601 Opened /dev/vda1 RO O_DIRECT
>#device/dev-io.c:359 /dev/vda1: size is 2097152 sectors
>#device/dev-io.c:650 Closed /dev/vda1
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/vda1
>#filters/filter-md.c:99 filter md deferred /dev/vda1
>#filters/filter-persistent.c:326 filter cache deferred /dev/vda1
>#ioctl/libdm-iface.c:1857 dm status (253:1) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:626 /dev/mapper/rhel_host--087-pool00_tdata: Reserved uuid LVM-b3TswpE4nQBq04b48nZAy61gkbvnYngKbzfGPkjJzrkI6jETYGvJPapryomgut0M-tdata on internal LV device rhel_host--087-pool00_tdata not usable.
>#filters/filter-usable.c:135 /dev/mapper/rhel_host--087-pool00_tdata: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/rhel_host--087-pool00_tdata
>#device/dev-io.c:601 Opened /dev/vda2 RO O_DIRECT
>#device/dev-io.c:359 /dev/vda2: size is 14678016 sectors
>#device/dev-io.c:650 Closed /dev/vda2
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/vda2
>#filters/filter-md.c:99 filter md deferred /dev/vda2
>#filters/filter-persistent.c:326 filter cache deferred /dev/vda2
>#ioctl/libdm-iface.c:1857 dm status (253:2) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:626 /dev/mapper/rhel_host--087-pool00-tpool: Reserved uuid LVM-b3TswpE4nQBq04b48nZAy61gkbvnYngKwsW64UIq4ndjsGYAD40JiO9zDsm0rENr-tpool on internal LV device rhel_host--087-pool00-tpool not usable.
>#filters/filter-usable.c:135 /dev/mapper/rhel_host--087-pool00-tpool: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/rhel_host--087-pool00-tpool
>#ioctl/libdm-iface.c:1857 dm status (253:3) [ noopencount noflush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm table (253:3) [ noopencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm status (253:2) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/rhel_host-087/root RO O_DIRECT
>#device/dev-io.c:359 /dev/rhel_host-087/root: size is 10035200 sectors
>#device/dev-io.c:650 Closed /dev/rhel_host-087/root
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/rhel_host-087/root
>#filters/filter-md.c:99 filter md deferred /dev/rhel_host-087/root
>#filters/filter-persistent.c:326 filter cache deferred /dev/rhel_host-087/root
>#ioctl/libdm-iface.c:1857 dm status (253:4) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/rhel_host-087/swap RO O_DIRECT
>#device/dev-io.c:359 /dev/rhel_host-087/swap: size is 1679360 sectors
>#device/dev-io.c:650 Closed /dev/rhel_host-087/swap
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/rhel_host-087/swap
>#filters/filter-md.c:99 filter md deferred /dev/rhel_host-087/swap
>#filters/filter-persistent.c:326 filter cache deferred /dev/rhel_host-087/swap
>#ioctl/libdm-iface.c:1857 dm status (253:5) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:626 /dev/mapper/rhel_host--087-pool00: Reserved uuid LVM-b3TswpE4nQBq04b48nZAy61gkbvnYngKwsW64UIq4ndjsGYAD40JiO9zDsm0rENr-pool on internal LV device rhel_host--087-pool00 not usable.
>#filters/filter-usable.c:135 /dev/mapper/rhel_host--087-pool00: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/rhel_host--087-pool00
>#ioctl/libdm-iface.c:1857 dm status (253:6) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid1_rmeta_0: Reserved internal LV device test/raid1_rmeta_0 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid1_rmeta_0: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid1_rmeta_0
>#ioctl/libdm-iface.c:1857 dm status (253:7) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid1_rimage_0: Reserved internal LV device test/raid1_rimage_0 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid1_rimage_0: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid1_rimage_0
>#ioctl/libdm-iface.c:1857 dm status (253:8) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid1_rmeta_1: Reserved internal LV device test/raid1_rmeta_1 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid1_rmeta_1: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid1_rmeta_1
>#ioctl/libdm-iface.c:1857 dm status (253:9) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid1_rimage_1: Reserved internal LV device test/raid1_rimage_1 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid1_rimage_1: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid1_rimage_1
>#ioctl/libdm-iface.c:1857 dm status (253:10) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/test/raid1 RO O_DIRECT
>#device/dev-io.c:359 /dev/test/raid1: size is 204800 sectors
>#device/dev-io.c:650 Closed /dev/test/raid1
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/test/raid1
>#filters/filter-md.c:99 filter md deferred /dev/test/raid1
>#filters/filter-persistent.c:326 filter cache deferred /dev/test/raid1
>#ioctl/libdm-iface.c:1857 dm status (253:11) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-POOL_rmeta_0: Reserved internal LV device test/POOL_rmeta_0 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-POOL_rmeta_0: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-POOL_rmeta_0
>#ioctl/libdm-iface.c:1857 dm status (253:12) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-POOL_rimage_0: Reserved internal LV device test/POOL_rimage_0 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-POOL_rimage_0: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-POOL_rimage_0
>#ioctl/libdm-iface.c:1857 dm status (253:13) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-POOL_rmeta_1: Reserved internal LV device test/POOL_rmeta_1 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-POOL_rmeta_1: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-POOL_rmeta_1
>#ioctl/libdm-iface.c:1857 dm status (253:14) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-POOL_rimage_1: Reserved internal LV device test/POOL_rimage_1 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-POOL_rimage_1: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-POOL_rimage_1
>#ioctl/libdm-iface.c:1857 dm status (253:15) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/test/POOL RO O_DIRECT
>#device/dev-io.c:359 /dev/test/POOL: size is 2097152 sectors
>#device/dev-io.c:650 Closed /dev/test/POOL
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/test/POOL
>#filters/filter-md.c:99 filter md deferred /dev/test/POOL
>#filters/filter-persistent.c:326 filter cache deferred /dev/test/POOL
>#device/dev-io.c:601 Opened /dev/sdb RO O_DIRECT
>#device/dev-io.c:359 /dev/sdb: size is 52428800 sectors
>#device/dev-io.c:650 Closed /dev/sdb
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdb
>#filters/filter-md.c:99 filter md deferred /dev/sdb
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdb
>#ioctl/libdm-iface.c:1857 dm status (253:16) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_rmeta_0: Reserved internal LV device test/raid10_rmeta_0 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_rmeta_0: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_rmeta_0
>#device/dev-io.c:601 Opened /dev/sdb1 RO O_DIRECT
>#device/dev-io.c:359 /dev/sdb1: size is 52428720 sectors
>#device/dev-io.c:650 Closed /dev/sdb1
>#filters/filter-mpath.c:196 /dev/sdb1: Device is a partition, using primary device sdb for mpath component detection
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdb1
>#filters/filter-md.c:99 filter md deferred /dev/sdb1
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdb1
>#ioctl/libdm-iface.c:1857 dm status (253:17) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_rimage_0: Reserved internal LV device test/raid10_rimage_0 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_rimage_0: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_rimage_0
>#ioctl/libdm-iface.c:1857 dm status (253:18) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_rmeta_1: Reserved internal LV device test/raid10_rmeta_1 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_rmeta_1: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_rmeta_1
>#ioctl/libdm-iface.c:1857 dm status (253:19) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_rimage_1: Reserved internal LV device test/raid10_rimage_1 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_rimage_1: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_rimage_1
>#ioctl/libdm-iface.c:1857 dm status (253:20) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_rmeta_2: Reserved internal LV device test/raid10_rmeta_2 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_rmeta_2: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_rmeta_2
>#ioctl/libdm-iface.c:1857 dm status (253:21) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_rimage_2: Reserved internal LV device test/raid10_rimage_2 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_rimage_2: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_rimage_2
>#ioctl/libdm-iface.c:1857 dm status (253:22) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_rmeta_3: Reserved internal LV device test/raid10_rmeta_3 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_rmeta_3: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_rmeta_3
>#ioctl/libdm-iface.c:1857 dm status (253:23) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_rimage_3: Reserved internal LV device test/raid10_rimage_3 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_rimage_3: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_rimage_3
>#ioctl/libdm-iface.c:1857 dm status (253:24) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/test/raid10 RO O_DIRECT
>#device/dev-io.c:359 /dev/test/raid10: size is 212992 sectors
>#device/dev-io.c:650 Closed /dev/test/raid10
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/test/raid10
>#filters/filter-md.c:99 filter md deferred /dev/test/raid10
>#filters/filter-persistent.c:326 filter cache deferred /dev/test/raid10
>#ioctl/libdm-iface.c:1857 dm status (253:25) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_2_rmeta_0: Reserved internal LV device test/raid10_2_rmeta_0 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_2_rmeta_0: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_2_rmeta_0
>#ioctl/libdm-iface.c:1857 dm status (253:26) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_2_rimage_0: Reserved internal LV device test/raid10_2_rimage_0 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_2_rimage_0: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_2_rimage_0
>#ioctl/libdm-iface.c:1857 dm status (253:27) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_2_rmeta_1: Reserved internal LV device test/raid10_2_rmeta_1 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_2_rmeta_1: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_2_rmeta_1
>#ioctl/libdm-iface.c:1857 dm status (253:28) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_2_rimage_1: Reserved internal LV device test/raid10_2_rimage_1 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_2_rimage_1: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_2_rimage_1
>#ioctl/libdm-iface.c:1857 dm status (253:29) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_2_rmeta_2: Reserved internal LV device test/raid10_2_rmeta_2 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_2_rmeta_2: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_2_rmeta_2
>#ioctl/libdm-iface.c:1857 dm status (253:30) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_2_rimage_2: Reserved internal LV device test/raid10_2_rimage_2 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_2_rimage_2: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_2_rimage_2
>#ioctl/libdm-iface.c:1857 dm status (253:31) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_2_rmeta_3: Reserved internal LV device test/raid10_2_rmeta_3 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_2_rmeta_3: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_2_rmeta_3
>#device/dev-io.c:601 Opened /dev/sdc RO O_DIRECT
>#device/dev-io.c:359 /dev/sdc: size is 52428800 sectors
>#device/dev-io.c:650 Closed /dev/sdc
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdc
>#filters/filter-md.c:99 filter md deferred /dev/sdc
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdc
>#ioctl/libdm-iface.c:1857 dm status (253:32) [ noopencount noflush ] [16384] (*1)
>#activate/dev_manager.c:637 /dev/mapper/test-raid10_2_rimage_3: Reserved internal LV device test/raid10_2_rimage_3 not usable.
>#filters/filter-usable.c:135 /dev/mapper/test-raid10_2_rimage_3: Skipping unusable device.
>#filters/filter-persistent.c:335 filter caching bad /dev/mapper/test-raid10_2_rimage_3
>#device/dev-io.c:601 Opened /dev/sdc1 RO O_DIRECT
>#device/dev-io.c:359 /dev/sdc1: size is 52428720 sectors
>#device/dev-io.c:650 Closed /dev/sdc1
>#filters/filter-mpath.c:196 /dev/sdc1: Device is a partition, using primary device sdc for mpath component detection
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdc1
>#filters/filter-md.c:99 filter md deferred /dev/sdc1
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdc1
>#ioctl/libdm-iface.c:1857 dm status (253:33) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/test/raid10_2 RO O_DIRECT
>#device/dev-io.c:359 /dev/test/raid10_2: size is 212992 sectors
>#device/dev-io.c:650 Closed /dev/test/raid10_2
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/test/raid10_2
>#filters/filter-md.c:99 filter md deferred /dev/test/raid10_2
>#filters/filter-persistent.c:326 filter cache deferred /dev/test/raid10_2
>#device/dev-io.c:601 Opened /dev/sdd RO O_DIRECT
>#device/dev-io.c:359 /dev/sdd: size is 52428800 sectors
>#device/dev-io.c:650 Closed /dev/sdd
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdd
>#filters/filter-md.c:99 filter md deferred /dev/sdd
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdd
>#device/dev-io.c:601 Opened /dev/sdd1 RO O_DIRECT
>#device/dev-io.c:359 /dev/sdd1: size is 52428720 sectors
>#device/dev-io.c:650 Closed /dev/sdd1
>#filters/filter-mpath.c:196 /dev/sdd1: Device is a partition, using primary device sdd for mpath component detection
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdd1
>#filters/filter-md.c:99 filter md deferred /dev/sdd1
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdd1
>#device/dev-io.c:601 Opened /dev/sde RO O_DIRECT
>#device/dev-io.c:359 /dev/sde: size is 52428800 sectors
>#device/dev-io.c:650 Closed /dev/sde
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sde
>#filters/filter-md.c:99 filter md deferred /dev/sde
>#filters/filter-persistent.c:326 filter cache deferred /dev/sde
>#device/dev-io.c:601 Opened /dev/sde1 RO O_DIRECT
>#device/dev-io.c:359 /dev/sde1: size is 52428720 sectors
>#device/dev-io.c:650 Closed /dev/sde1
>#filters/filter-mpath.c:196 /dev/sde1: Device is a partition, using primary device sde for mpath component detection
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sde1
>#filters/filter-md.c:99 filter md deferred /dev/sde1
>#filters/filter-persistent.c:326 filter cache deferred /dev/sde1
>#device/dev-io.c:601 Opened /dev/sdf RO O_DIRECT
>#device/dev-io.c:359 /dev/sdf: size is 52428800 sectors
>#device/dev-io.c:650 Closed /dev/sdf
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdf
>#filters/filter-md.c:99 filter md deferred /dev/sdf
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdf
>#device/dev-io.c:601 Opened /dev/sdf1 RO O_DIRECT
>#device/dev-io.c:359 /dev/sdf1: size is 52428720 sectors
>#device/dev-io.c:650 Closed /dev/sdf1
>#filters/filter-mpath.c:196 /dev/sdf1: Device is a partition, using primary device sdf for mpath component detection
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdf1
>#filters/filter-md.c:99 filter md deferred /dev/sdf1
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdf1
>#device/dev-io.c:601 Opened /dev/sdg RO O_DIRECT
>#device/dev-io.c:359 /dev/sdg: size is 52428800 sectors
>#device/dev-io.c:650 Closed /dev/sdg
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdg
>#filters/filter-md.c:99 filter md deferred /dev/sdg
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdg
>#device/dev-io.c:601 Opened /dev/sdg1 RO O_DIRECT
>#device/dev-io.c:359 /dev/sdg1: size is 52428720 sectors
>#device/dev-io.c:650 Closed /dev/sdg1
>#filters/filter-mpath.c:196 /dev/sdg1: Device is a partition, using primary device sdg for mpath component detection
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdg1
>#filters/filter-md.c:99 filter md deferred /dev/sdg1
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdg1
>#device/dev-io.c:601 Opened /dev/sdh RO O_DIRECT
>#device/dev-io.c:359 /dev/sdh: size is 52428800 sectors
>#device/dev-io.c:650 Closed /dev/sdh
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdh
>#filters/filter-md.c:99 filter md deferred /dev/sdh
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdh
>#device/dev-io.c:601 Opened /dev/sdh1 RO O_DIRECT
>#device/dev-io.c:359 /dev/sdh1: size is 52428720 sectors
>#device/dev-io.c:650 Closed /dev/sdh1
>#filters/filter-mpath.c:196 /dev/sdh1: Device is a partition, using primary device sdh for mpath component detection
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdh1
>#filters/filter-md.c:99 filter md deferred /dev/sdh1
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdh1
>#device/dev-io.c:601 Opened /dev/sdi RO O_DIRECT
>#device/dev-io.c:359 /dev/sdi: size is 52428800 sectors
>#device/dev-io.c:650 Closed /dev/sdi
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdi
>#filters/filter-md.c:99 filter md deferred /dev/sdi
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdi
>#device/dev-io.c:601 Opened /dev/sdi1 RO O_DIRECT
>#device/dev-io.c:359 /dev/sdi1: size is 52428720 sectors
>#device/dev-io.c:650 Closed /dev/sdi1
>#filters/filter-mpath.c:196 /dev/sdi1: Device is a partition, using primary device sdi for mpath component detection
>#filters/filter-partitioned.c:30 filter partitioned deferred /dev/sdi1
>#filters/filter-md.c:99 filter md deferred /dev/sdi1
>#filters/filter-persistent.c:326 filter cache deferred /dev/sdi1
>#label/label.c:722 Found 27 devices to scan
>#label/label.c:528 Scanning 27 devices for VG info
>#label/label.c:566 Scanning submitted 27 reads
>#label/label.c:581 Processing data from device /dev/sda fd 4 block 0x56102e3bc9a0
>#label/label.c:364 Scan filtering /dev/sda
>#device/dev-io.c:336 /dev/sda: using cached size 52428800 sectors
>#filters/filter-partitioned.c:37 /dev/sda: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/sda
>#label/label.c:376 /dev/sda: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/vda fd 5 block 0x56102e3bc9f0
>#label/label.c:364 Scan filtering /dev/vda
>#device/dev-io.c:336 /dev/vda: using cached size 16777216 sectors
>#filters/filter-partitioned.c:37 /dev/vda: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/vda
>#label/label.c:376 /dev/vda: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/sda1 fd 6 block 0x56102e3bca40
>#label/label.c:364 Scan filtering /dev/sda1
>#device/dev-io.c:336 /dev/sda1: using cached size 52428720 sectors
>#filters/filter-mpath.c:196 /dev/sda1: Device is a partition, using primary device sda for mpath component detection
>#device/dev-io.c:336 /dev/sda1: using cached size 52428720 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/sda1
>#label/label.c:400 /dev/sda1: No lvm label detected
>#label/label.c:405 <backtrace>
>#label/label.c:581 Processing data from device /dev/vda1 fd 7 block 0x56102e3bca90
>#label/label.c:364 Scan filtering /dev/vda1
>#device/dev-io.c:336 /dev/vda1: using cached size 2097152 sectors
>#device/dev-io.c:336 /dev/vda1: using cached size 2097152 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/vda1
>#label/label.c:400 /dev/vda1: No lvm label detected
>#label/label.c:405 <backtrace>
>#label/label.c:581 Processing data from device /dev/vda2 fd 8 block 0x56102e3bcae0
>#label/label.c:364 Scan filtering /dev/vda2
>#device/dev-io.c:336 /dev/vda2: using cached size 14678016 sectors
>#device/dev-io.c:336 /dev/vda2: using cached size 14678016 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/vda2
>#label/label.c:310 /dev/vda2: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/vda2: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/vda2: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/vda2 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/vda2 at 32768 size 2737 (+0)
>#format_text/format-text.c:1313 Found metadata summary on /dev/vda2 at 32768 size 2737 for VG rhel_host-087
>#cache/lvmcache.c:750 lvmcache has no info for vgname "rhel_host-087" with VGID b3TswpE4nQBq04b48nZAy61gkbvnYngK.
>#cache/lvmcache.c:750 lvmcache has no info for vgname "rhel_host-087".
>#cache/lvmcache.c:2074 lvmcache /dev/vda2: now in VG rhel_host-087 with 1 mda(s).
>#cache/lvmcache.c:1900 lvmcache /dev/vda2: VG rhel_host-087: set VGID to b3TswpE4nQBq04b48nZAy61gkbvnYngK.
>#cache/lvmcache.c:2220 lvmcache /dev/vda2: VG rhel_host-087: set seqno to 7
>#cache/lvmcache.c:2237 lvmcache /dev/vda2: VG rhel_host-087: set mda_checksum to 9392b056 mda_size to 2737
>#cache/lvmcache.c:2111 lvmcache /dev/vda2: VG rhel_host-087: set creation host to host-087.virt.lab.msp.redhat.com.
>#label/label.c:581 Processing data from device /dev/rhel_host-087/root fd 9 block 0x56102e3bcb30
>#label/label.c:364 Scan filtering /dev/rhel_host-087/root
>#ioctl/libdm-iface.c:1857 dm status (253:3) [ noopencount noflush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm table (253:3) [ noopencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm status (253:2) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:336 /dev/rhel_host-087/root: using cached size 10035200 sectors
>#device/dev-io.c:336 /dev/rhel_host-087/root: using cached size 10035200 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/rhel_host-087/root
>#label/label.c:400 /dev/rhel_host-087/root: No lvm label detected
>#label/label.c:405 <backtrace>
>#label/label.c:581 Processing data from device /dev/rhel_host-087/swap fd 10 block 0x56102e3bcb80
>#label/label.c:364 Scan filtering /dev/rhel_host-087/swap
>#ioctl/libdm-iface.c:1857 dm status (253:4) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:336 /dev/rhel_host-087/swap: using cached size 1679360 sectors
>#device/dev-io.c:336 /dev/rhel_host-087/swap: using cached size 1679360 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/rhel_host-087/swap
>#label/label.c:400 /dev/rhel_host-087/swap: No lvm label detected
>#label/label.c:405 <backtrace>
>#label/label.c:581 Processing data from device /dev/test/raid1 fd 11 block 0x56102e3bcbd0
>#label/label.c:364 Scan filtering /dev/test/raid1
>#ioctl/libdm-iface.c:1857 dm status (253:10) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:336 /dev/test/raid1: using cached size 204800 sectors
>#device/dev-io.c:336 /dev/test/raid1: using cached size 204800 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/test/raid1
>#label/label.c:400 /dev/test/raid1: No lvm label detected
>#label/label.c:405 <backtrace>
>#label/label.c:581 Processing data from device /dev/test/POOL fd 12 block 0x56102e3bcc20
>#label/label.c:364 Scan filtering /dev/test/POOL
>#ioctl/libdm-iface.c:1857 dm status (253:15) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:336 /dev/test/POOL: using cached size 2097152 sectors
>#device/dev-io.c:336 /dev/test/POOL: using cached size 2097152 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/test/POOL
>#label/label.c:400 /dev/test/POOL: No lvm label detected
>#label/label.c:405 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdb fd 13 block 0x56102e3bcc70
>#label/label.c:364 Scan filtering /dev/sdb
>#device/dev-io.c:336 /dev/sdb: using cached size 52428800 sectors
>#filters/filter-partitioned.c:37 /dev/sdb: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/sdb
>#label/label.c:376 /dev/sdb: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdb1 fd 14 block 0x56102e3bccc0
>#label/label.c:364 Scan filtering /dev/sdb1
>#device/dev-io.c:336 /dev/sdb1: using cached size 52428720 sectors
>#filters/filter-mpath.c:196 /dev/sdb1: Device is a partition, using primary device sdb for mpath component detection
>#device/dev-io.c:336 /dev/sdb1: using cached size 52428720 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/sdb1
>#label/label.c:310 /dev/sdb1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdb1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdb1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdb1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdb1 at 2577408 size 10978 (+0)
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdb1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:750 lvmcache has no info for vgname "test" with VGID prawKhZOlbTjc9me9XsaTB6SzDQx51Jv.
>#cache/lvmcache.c:750 lvmcache has no info for vgname "test".
>#cache/lvmcache.c:2074 lvmcache /dev/sdb1: now in VG test with 1 mda(s).
>#cache/lvmcache.c:1900 lvmcache /dev/sdb1: VG test: set VGID to prawKhZOlbTjc9me9XsaTB6SzDQx51Jv.
>#cache/lvmcache.c:2220 lvmcache /dev/sdb1: VG test: set seqno to 325
>#cache/lvmcache.c:2237 lvmcache /dev/sdb1: VG test: set mda_checksum to 768b2936 mda_size to 10978
>#cache/lvmcache.c:2111 lvmcache /dev/sdb1: VG test: set creation host to host-087.virt.lab.msp.redhat.com.
>#label/label.c:581 Processing data from device /dev/test/raid10 fd 15 block 0x56102e3bcd10
>#label/label.c:364 Scan filtering /dev/test/raid10
>#ioctl/libdm-iface.c:1857 dm status (253:24) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:336 /dev/test/raid10: using cached size 212992 sectors
>#device/dev-io.c:336 /dev/test/raid10: using cached size 212992 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/test/raid10
>#label/label.c:400 /dev/test/raid10: No lvm label detected
>#label/label.c:405 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdc fd 16 block 0x56102e3bcd60
>#label/label.c:364 Scan filtering /dev/sdc
>#device/dev-io.c:336 /dev/sdc: using cached size 52428800 sectors
>#filters/filter-partitioned.c:37 /dev/sdc: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/sdc
>#label/label.c:376 /dev/sdc: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdc1 fd 17 block 0x56102e3bcdb0
>#label/label.c:364 Scan filtering /dev/sdc1
>#device/dev-io.c:336 /dev/sdc1: using cached size 52428720 sectors
>#filters/filter-mpath.c:196 /dev/sdc1: Device is a partition, using primary device sdc for mpath component detection
>#device/dev-io.c:336 /dev/sdc1: using cached size 52428720 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/sdc1
>#label/label.c:400 /dev/sdc1: No lvm label detected
>#label/label.c:405 <backtrace>
>#label/label.c:581 Processing data from device /dev/test/raid10_2 fd 18 block 0x56102e3bce00
>#label/label.c:364 Scan filtering /dev/test/raid10_2
>#ioctl/libdm-iface.c:1857 dm status (253:33) [ noopencount noflush ] [16384] (*1)
>#device/dev-io.c:336 /dev/test/raid10_2: using cached size 212992 sectors
>#device/dev-io.c:336 /dev/test/raid10_2: using cached size 212992 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/test/raid10_2
>#label/label.c:400 /dev/test/raid10_2: No lvm label detected
>#label/label.c:405 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdd fd 19 block 0x56102e3bce50
>#label/label.c:364 Scan filtering /dev/sdd
>#device/dev-io.c:336 /dev/sdd: using cached size 52428800 sectors
>#filters/filter-partitioned.c:37 /dev/sdd: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/sdd
>#label/label.c:376 /dev/sdd: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdd1 fd 20 block 0x56102e3bcea0
>#label/label.c:364 Scan filtering /dev/sdd1
>#device/dev-io.c:336 /dev/sdd1: using cached size 52428720 sectors
>#filters/filter-mpath.c:196 /dev/sdd1: Device is a partition, using primary device sdd for mpath component detection
>#device/dev-io.c:336 /dev/sdd1: using cached size 52428720 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/sdd1
>#label/label.c:310 /dev/sdd1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdd1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdd1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdd1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdd1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdd1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdd1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdd1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sde fd 21 block 0x56102e3bcef0
>#label/label.c:364 Scan filtering /dev/sde
>#device/dev-io.c:336 /dev/sde: using cached size 52428800 sectors
>#filters/filter-partitioned.c:37 /dev/sde: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/sde
>#label/label.c:376 /dev/sde: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/sde1 fd 22 block 0x56102e3bcf40
>#label/label.c:364 Scan filtering /dev/sde1
>#device/dev-io.c:336 /dev/sde1: using cached size 52428720 sectors
>#filters/filter-mpath.c:196 /dev/sde1: Device is a partition, using primary device sde for mpath component detection
>#device/dev-io.c:336 /dev/sde1: using cached size 52428720 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/sde1
>#label/label.c:310 /dev/sde1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sde1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sde1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sde1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sde1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sde1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sde1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sde1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sdf fd 23 block 0x56102e3bcf90
>#label/label.c:364 Scan filtering /dev/sdf
>#device/dev-io.c:336 /dev/sdf: using cached size 52428800 sectors
>#filters/filter-partitioned.c:37 /dev/sdf: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/sdf
>#label/label.c:376 /dev/sdf: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdf1 fd 24 block 0x56102e3bcfe0
>#label/label.c:364 Scan filtering /dev/sdf1
>#device/dev-io.c:336 /dev/sdf1: using cached size 52428720 sectors
>#filters/filter-mpath.c:196 /dev/sdf1: Device is a partition, using primary device sdf for mpath component detection
>#device/dev-io.c:336 /dev/sdf1: using cached size 52428720 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/sdf1
>#label/label.c:310 /dev/sdf1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdf1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdf1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdf1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdf1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdf1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdf1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdf1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sdg fd 25 block 0x56102e3bd030
>#label/label.c:364 Scan filtering /dev/sdg
>#device/dev-io.c:336 /dev/sdg: using cached size 52428800 sectors
>#filters/filter-partitioned.c:37 /dev/sdg: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/sdg
>#label/label.c:376 /dev/sdg: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdg1 fd 26 block 0x56102e3bd080
>#label/label.c:364 Scan filtering /dev/sdg1
>#device/dev-io.c:336 /dev/sdg1: using cached size 52428720 sectors
>#filters/filter-mpath.c:196 /dev/sdg1: Device is a partition, using primary device sdg for mpath component detection
>#device/dev-io.c:336 /dev/sdg1: using cached size 52428720 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/sdg1
>#label/label.c:310 /dev/sdg1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdg1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdg1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdg1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdg1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdg1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdg1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdg1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sdh fd 27 block 0x56102e3bd0d0
>#label/label.c:364 Scan filtering /dev/sdh
>#device/dev-io.c:336 /dev/sdh: using cached size 52428800 sectors
>#filters/filter-partitioned.c:37 /dev/sdh: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/sdh
>#label/label.c:376 /dev/sdh: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdh1 fd 28 block 0x56102e3bd120
>#label/label.c:364 Scan filtering /dev/sdh1
>#device/dev-io.c:336 /dev/sdh1: using cached size 52428720 sectors
>#filters/filter-mpath.c:196 /dev/sdh1: Device is a partition, using primary device sdh for mpath component detection
>#device/dev-io.c:336 /dev/sdh1: using cached size 52428720 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/sdh1
>#label/label.c:310 /dev/sdh1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdh1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdh1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdh1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdh1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdh1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdh1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdh1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sdi fd 29 block 0x56102e3bd170
>#label/label.c:364 Scan filtering /dev/sdi
>#device/dev-io.c:336 /dev/sdi: using cached size 52428800 sectors
>#filters/filter-partitioned.c:37 /dev/sdi: Skipping: Partition table signature found
>#filters/filter-persistent.c:335 filter caching bad /dev/sdi
>#label/label.c:376 /dev/sdi: Not processing filtered
>#label/label.c:379 <backtrace>
>#label/label.c:581 Processing data from device /dev/sdi1 fd 30 block 0x56102e3bd1c0
>#label/label.c:364 Scan filtering /dev/sdi1
>#device/dev-io.c:336 /dev/sdi1: using cached size 52428720 sectors
>#filters/filter-mpath.c:196 /dev/sdi1: Device is a partition, using primary device sdi for mpath component detection
>#device/dev-io.c:336 /dev/sdi1: using cached size 52428720 sectors
>#filters/filter-persistent.c:335 filter caching good /dev/sdi1
>#label/label.c:310 /dev/sdi1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdi1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdi1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdi1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdi1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdi1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdi1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdi1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:616 Scanned devices: open errors 0 read errors 0 process errors 0
>#cache/lvmcache.c:1560 Found VG info for 2 VGs
>#toollib.c:2294 Obtaining the complete list of VGs to process
>#toollib.c:2008 Processing VG test prawKh-ZOlb-Tjc9-me9X-saTB-6SzD-Qx51Jv
>#locking/locking.c:353 Dropping cache for test.
>#misc/lvm-flock.c:202 Locking /run/lock/lvm/V_test WB
>#libdm-common.c:984 Preparing SELinux context for /run/lock/lvm/V_test to system_u:object_r:lvm_lock_t:s0.
>#misc/lvm-flock.c:100 _do_flock /run/lock/lvm/V_test:aux WB
>#misc/lvm-flock.c:100 _do_flock /run/lock/lvm/V_test WB
>#misc/lvm-flock.c:47 _undo_flock /run/lock/lvm/V_test:aux
>#libdm-common.c:987 Resetting SELinux context to default value.
>#metadata/metadata.c:3785 Reading VG test prawKh-ZOlb-Tjc9-me9X-saTB-6SzD-Qx51Jv
>#metadata/metadata.c:3874 Rescanning devices for test
>#cache/lvmcache.c:750 lvmcache has no info for vgname "test" with VGID prawKhZOlbTjc9me9XsaTB6SzDQx51Jv.
>#label/label.c:528 Scanning 7 devices for VG info
>#label/label.c:566 Scanning submitted 7 reads
>#label/label.c:581 Processing data from device /dev/sdb1 fd 4 block 0x56102e3bca40
>#label/label.c:310 /dev/sdb1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdb1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdb1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdb1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdb1 at 2577408 size 10978 (+0)
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdb1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:750 lvmcache has no info for vgname "test" with VGID prawKhZOlbTjc9me9XsaTB6SzDQx51Jv.
>#cache/lvmcache.c:750 lvmcache has no info for vgname "test".
>#cache/lvmcache.c:2074 lvmcache /dev/sdb1: now in VG test with 1 mda(s).
>#cache/lvmcache.c:1900 lvmcache /dev/sdb1: VG test: set VGID to prawKhZOlbTjc9me9XsaTB6SzDQx51Jv.
>#cache/lvmcache.c:2220 lvmcache /dev/sdb1: VG test: set seqno to 325
>#cache/lvmcache.c:2237 lvmcache /dev/sdb1: VG test: set mda_checksum to 768b2936 mda_size to 10978
>#cache/lvmcache.c:2111 lvmcache /dev/sdb1: VG test: set creation host to host-087.virt.lab.msp.redhat.com.
>#label/label.c:581 Processing data from device /dev/sdd1 fd 6 block 0x56102e3bca90
>#label/label.c:310 /dev/sdd1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdd1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdd1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdd1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdd1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdd1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdd1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdd1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sde1 fd 7 block 0x56102e3bcb30
>#label/label.c:310 /dev/sde1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sde1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sde1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sde1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sde1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sde1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sde1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sde1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sdf1 fd 9 block 0x56102e3bcb80
>#label/label.c:310 /dev/sdf1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdf1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdf1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdf1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdf1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdf1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdf1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdf1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sdg1 fd 10 block 0x56102e3bcbd0
>#label/label.c:310 /dev/sdg1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdg1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdg1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdg1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdg1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdg1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdg1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdg1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sdh1 fd 11 block 0x56102e3bcc20
>#label/label.c:310 /dev/sdh1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdh1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdh1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdh1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdh1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdh1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdh1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdh1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:581 Processing data from device /dev/sdi1 fd 12 block 0x56102e3bcc70
>#label/label.c:310 /dev/sdi1: lvm2 label detected at sector 1
>#cache/lvmcache.c:2074 lvmcache /dev/sdi1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mda(s).
>#format_text/text_label.c:423 /dev/sdi1: PV header extension version 2 found
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdi1 at 4096
>#format_text/import.c:58 Reading metadata summary from /dev/sdi1 at 2577408 size 10978 (+0)
>#format_text/import.c:77 Skipped parsing metadata on /dev/sdi1
>#format_text/format-text.c:1313 Found metadata summary on /dev/sdi1 at 2577408 size 10978 for VG test
>#cache/lvmcache.c:2074 lvmcache /dev/sdi1: now in VG test (prawKhZOlbTjc9me9XsaTB6SzDQx51Jv) with 1 mda(s).
>#label/label.c:616 Scanned devices: open errors 0 read errors 0 process errors 0
>#metadata/metadata.c:3960 Reading VG test from /dev/sdb1
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdb1 at 4096
>#format_text/import.c:154 Reading metadata from /dev/sdb1 at 2577408 size 10978 (+0)
>#metadata/vg.c:74 Allocated VG test at 0x56102e3f9d70.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[0] on LV test/POOL_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[0] on LV test/POOL_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[1] on LV test/POOL_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[1] on LV test/POOL_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[0] on LV test/raid1_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[0] on LV test/raid1_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[1] on LV test/raid1_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[1] on LV test/raid1_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[0] on LV test/raid10_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[0] on LV test/raid10_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[1] on LV test/raid10_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[1] on LV test/raid10_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[2] on LV test/raid10_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_2.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[2] on LV test/raid10_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_2.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[3] on LV test/raid10_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[3] on LV test/raid10_rimage_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_3.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[0] on LV test/raid10_2_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[0] on LV test/raid10_2_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[1] on LV test/raid10_2_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[1] on LV test/raid10_2_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[2] on LV test/raid10_2_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[2] on LV test/raid10_2_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[3] on LV test/raid10_2_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[3] on LV test/raid10_2_rimage_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_3.
>#format_text/format-text.c:578 Found metadata on /dev/sdb1 at 2577408 size 10978 for VG test
>#metadata/metadata.c:3960 Reading VG test from /dev/sdd1
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdd1 at 4096
>#format_text/import.c:154 Reading metadata from /dev/sdd1 at 2577408 size 10978 (+0)
>#format_text/import.c:173 Skipped parsing metadata on /dev/sdd1
>#format_text/format-text.c:578 Found metadata on /dev/sdd1 at 2577408 size 10978 for VG test
>#metadata/metadata.c:3960 Reading VG test from /dev/sde1
>#format_text/format-text.c:331 Reading mda header sector from /dev/sde1 at 4096
>#format_text/import.c:154 Reading metadata from /dev/sde1 at 2577408 size 10978 (+0)
>#format_text/import.c:173 Skipped parsing metadata on /dev/sde1
>#format_text/format-text.c:578 Found metadata on /dev/sde1 at 2577408 size 10978 for VG test
>#metadata/metadata.c:3960 Reading VG test from /dev/sdf1
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdf1 at 4096
>#format_text/import.c:154 Reading metadata from /dev/sdf1 at 2577408 size 10978 (+0)
>#format_text/import.c:173 Skipped parsing metadata on /dev/sdf1
>#format_text/format-text.c:578 Found metadata on /dev/sdf1 at 2577408 size 10978 for VG test
>#metadata/metadata.c:3960 Reading VG test from /dev/sdg1
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdg1 at 4096
>#format_text/import.c:154 Reading metadata from /dev/sdg1 at 2577408 size 10978 (+0)
>#format_text/import.c:173 Skipped parsing metadata on /dev/sdg1
>#format_text/format-text.c:578 Found metadata on /dev/sdg1 at 2577408 size 10978 for VG test
>#metadata/metadata.c:3960 Reading VG test from /dev/sdh1
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdh1 at 4096
>#format_text/import.c:154 Reading metadata from /dev/sdh1 at 2577408 size 10978 (+0)
>#format_text/import.c:173 Skipped parsing metadata on /dev/sdh1
>#format_text/format-text.c:578 Found metadata on /dev/sdh1 at 2577408 size 10978 for VG test
>#metadata/metadata.c:3960 Reading VG test from /dev/sdi1
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdi1 at 4096
>#format_text/import.c:154 Reading metadata from /dev/sdi1 at 2577408 size 10978 (+0)
>#format_text/import.c:173 Skipped parsing metadata on /dev/sdi1
>#format_text/format-text.c:578 Found metadata on /dev/sdi1 at 2577408 size 10978 for VG test
>#libdm-config.c:1002 metadata/lvs_history_retention_time not found in config: defaulting to 0
>#device/dev-io.c:336 /dev/sdg1: using cached size 52428720 sectors
>#device/dev-io.c:336 /dev/sdd1: using cached size 52428720 sectors
>#device/dev-io.c:336 /dev/sdf1: using cached size 52428720 sectors
>#device/dev-io.c:336 /dev/sdb1: using cached size 52428720 sectors
>#device/dev-io.c:336 /dev/sdh1: using cached size 52428720 sectors
>#device/dev-io.c:336 /dev/sdi1: using cached size 52428720 sectors
>#device/dev-io.c:336 /dev/sde1: using cached size 52428720 sectors
>#metadata/pv_manip.c:417 /dev/sdg1 0: 0 2: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 1: 2 1: POOL_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 2: 3 256: POOL_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 3: 259 1: raid1_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 4: 260 25: raid1_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 5: 285 1: raid10_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 6: 286 13: raid10_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 7: 299 1: raid10_2_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 8: 300 13: raid10_2_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 9: 313 6084: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 0: 0 2: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 1: 2 1: POOL_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 2: 3 256: POOL_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 3: 259 1: raid1_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 4: 260 25: raid1_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 5: 285 1: raid10_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 6: 286 13: raid10_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 7: 299 1: raid10_2_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 8: 300 13: raid10_2_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 9: 313 6084: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 0: 0 1: raid10_rmeta_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 1: 1 13: raid10_rimage_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 2: 14 1: raid10_2_rmeta_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 3: 15 13: raid10_2_rimage_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 4: 28 6369: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 0: 0 1: raid10_rmeta_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 1: 1 13: raid10_rimage_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 2: 14 1: raid10_2_rmeta_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 3: 15 13: raid10_2_rimage_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 4: 28 6369: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdh1 0: 0 6397: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdi1 0: 0 6397: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sde1 0: 0 6397: NULL(0:0)
>#metadata/vg.c:74 Allocated VG test at 0x56102e401d90.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[0] on LV test/POOL_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[0] on LV test/POOL_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[1] on LV test/POOL_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[1] on LV test/POOL_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[0] on LV test/raid1_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[0] on LV test/raid1_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[1] on LV test/raid1_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[1] on LV test/raid1_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[0] on LV test/raid10_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[0] on LV test/raid10_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[1] on LV test/raid10_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[1] on LV test/raid10_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[2] on LV test/raid10_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_2.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[2] on LV test/raid10_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_2.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[3] on LV test/raid10_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[3] on LV test/raid10_rimage_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_3.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[0] on LV test/raid10_2_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[0] on LV test/raid10_2_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[1] on LV test/raid10_2_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[1] on LV test/raid10_2_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[2] on LV test/raid10_2_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[2] on LV test/raid10_2_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[3] on LV test/raid10_2_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[3] on LV test/raid10_2_rimage_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_3.
>#toollib.c:2034 Running command for VG test prawKh-ZOlb-Tjc9-me9X-saTB-6SzD-Qx51Jv
>#libdm-config.c:1074 allocation/raid_stripe_all_devices not found in config: defaulting to 0
>#metadata/lv_manip.c:904 Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB(26 extents).
>#format_text/archiver.c:142 Archiving volume group "test" metadata (seqno 325).
>#metadata/lv_manip.c:5812 Creating logical volume raid10_3
>#metadata/lv_manip.c:4144 Adding segment of type raid10 to LV raid10_3.
>#metadata/lv_manip.c:3445 Adjusted allocation request to 28 logical extents. Existing size 0. New size 28.
>#metadata/lv_manip.c:3448 Mirror log of 1 extents of size 8192 sectors needed for region size 2.00 MiB.
>#metadata/pv_map.c:54 Allowing allocation on /dev/sdg1 start PE 0 length 2
>#metadata/pv_map.c:54 Allowing allocation on /dev/sdg1 start PE 313 length 6084
>#metadata/pv_map.c:54 Allowing allocation on /dev/sdd1 start PE 0 length 2
>#metadata/pv_map.c:54 Allowing allocation on /dev/sdd1 start PE 313 length 6084
>#metadata/pv_map.c:54 Allowing allocation on /dev/sdf1 start PE 28 length 6369
>#metadata/pv_map.c:54 Allowing allocation on /dev/sdb1 start PE 28 length 6369
>#metadata/pv_map.c:54 Allowing allocation on /dev/sdh1 start PE 0 length 6397
>#metadata/pv_map.c:54 Allowing allocation on /dev/sdi1 start PE 0 length 6397
>#metadata/pv_map.c:54 Allowing allocation on /dev/sde1 start PE 0 length 6397
>#metadata/lv_manip.c:3191 Trying allocation using contiguous policy.
>#metadata/lv_manip.c:2793 Areas to be sorted and filled sequentially.
>#metadata/lv_manip.c:2708 Still need 56 total extents from 44101 remaining (0 positional slots):
>#metadata/lv_manip.c:2711 4 (4 data/0 parity) parallel areas of 13 extents each
>#metadata/lv_manip.c:2715 4 metadata areas of 1 extents each
>#metadata/lv_manip.c:2370 Considering allocation area 0 as /dev/sdg1 start PE 313 length 14 leaving 6070.
>#metadata/lv_manip.c:2370 Considering allocation area 1 as /dev/sdd1 start PE 313 length 14 leaving 6070.
>#metadata/lv_manip.c:2370 Considering allocation area 2 as /dev/sdf1 start PE 28 length 14 leaving 6355.
>#metadata/lv_manip.c:2370 Considering allocation area 3 as /dev/sdb1 start PE 28 length 14 leaving 6355.
>#metadata/lv_manip.c:2370 Considering allocation area 4 as /dev/sdh1 start PE 0 length 14 leaving 6383.
>#metadata/lv_manip.c:2370 Considering allocation area 5 as /dev/sdi1 start PE 0 length 14 leaving 6383.
>#metadata/lv_manip.c:2370 Considering allocation area 6 as /dev/sde1 start PE 0 length 14 leaving 6383.
>#metadata/lv_manip.c:2933 Sorting 7 areas
>#metadata/lv_manip.c:1935 Allocating parallel metadata area 0 on /dev/sdg1 start PE 313 length 1.
>#metadata/lv_manip.c:1950 Allocating parallel area 0 on /dev/sdg1 start PE 314 length 13.
>#metadata/lv_manip.c:1935 Allocating parallel metadata area 1 on /dev/sdd1 start PE 313 length 1.
>#metadata/lv_manip.c:1950 Allocating parallel area 1 on /dev/sdd1 start PE 314 length 13.
>#metadata/lv_manip.c:1935 Allocating parallel metadata area 2 on /dev/sdf1 start PE 28 length 1.
>#metadata/lv_manip.c:1950 Allocating parallel area 2 on /dev/sdf1 start PE 29 length 13.
>#metadata/lv_manip.c:1935 Allocating parallel metadata area 3 on /dev/sdb1 start PE 28 length 1.
>#metadata/lv_manip.c:1950 Allocating parallel area 3 on /dev/sdb1 start PE 29 length 13.
>#metadata/lv_manip.c:5812 Creating logical volume raid10_3_rimage_0
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[0] on LV test/raid10_3_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_0.
>#metadata/lv_manip.c:5812 Creating logical volume raid10_3_rmeta_0
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[0] on LV test/raid10_3_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_0.
>#metadata/lv_manip.c:5812 Creating logical volume raid10_3_rimage_1
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[1] on LV test/raid10_3_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_1.
>#metadata/lv_manip.c:5812 Creating logical volume raid10_3_rmeta_1
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[1] on LV test/raid10_3_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_1.
>#metadata/lv_manip.c:5812 Creating logical volume raid10_3_rimage_2
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[2] on LV test/raid10_3_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_2.
>#metadata/lv_manip.c:5812 Creating logical volume raid10_3_rmeta_2
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[2] on LV test/raid10_3_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_2.
>#metadata/lv_manip.c:5812 Creating logical volume raid10_3_rimage_3
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[3] on LV test/raid10_3_rimage_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_3.
>#metadata/lv_manip.c:5812 Creating logical volume raid10_3_rmeta_3
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[3] on LV test/raid10_3_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_3.
>#metadata/lv_manip.c:5963 LV raid10_3_rmeta_0 in VG test is now visible.
>#metadata/lv_manip.c:5963 LV raid10_3_rmeta_1 in VG test is now visible.
>#metadata/lv_manip.c:5963 LV raid10_3_rmeta_2 in VG test is now visible.
>#metadata/lv_manip.c:5963 LV raid10_3_rmeta_3 in VG test is now visible.
>#metadata/pv_manip.c:417 /dev/sdg1 0: 0 2: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 1: 2 1: POOL_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 2: 3 256: POOL_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 3: 259 1: raid1_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 4: 260 25: raid1_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 5: 285 1: raid10_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 6: 286 13: raid10_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 7: 299 1: raid10_2_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 8: 300 13: raid10_2_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 9: 313 1: raid10_3_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 10: 314 13: raid10_3_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 11: 327 6070: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 0: 0 2: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 1: 2 1: POOL_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 2: 3 256: POOL_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 3: 259 1: raid1_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 4: 260 25: raid1_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 5: 285 1: raid10_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 6: 286 13: raid10_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 7: 299 1: raid10_2_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 8: 300 13: raid10_2_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 9: 313 1: raid10_3_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 10: 314 13: raid10_3_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 11: 327 6070: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 0: 0 1: raid10_rmeta_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 1: 1 13: raid10_rimage_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 2: 14 1: raid10_2_rmeta_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 3: 15 13: raid10_2_rimage_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 4: 28 1: raid10_3_rmeta_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 5: 29 13: raid10_3_rimage_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 6: 42 6355: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 0: 0 1: raid10_rmeta_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 1: 1 13: raid10_rimage_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 2: 14 1: raid10_2_rmeta_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 3: 15 13: raid10_2_rimage_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 4: 28 1: raid10_3_rmeta_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 5: 29 13: raid10_3_rimage_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 6: 42 6355: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdh1 0: 0 6397: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdi1 0: 0 6397: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sde1 0: 0 6397: NULL(0:0)
>#locking/locking.c:353 Dropping cache for test.
>#mm/memlock.c:587 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdb1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdb1 at 2588672 len 14031 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdd1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdd1 at 2588672 len 14031 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sde1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sde1 at 2588672 len 14031 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdf1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdf1 at 2588672 len 14031 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdg1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdg1 at 2588672 len 14031 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdh1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdh1 at 2588672 len 14031 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdi1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdi1 at 2588672 len 14031 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdb1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (326) to /dev/sdb1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdd1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (326) to /dev/sdd1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sde1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (326) to /dev/sde1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdf1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (326) to /dev/sdf1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdg1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (326) to /dev/sdg1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdh1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (326) to /dev/sdh1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdi1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (326) to /dev/sdi1 header at 4096
>#metadata/vg.c:74 Allocated VG test at 0x56102e419dc0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rmeta_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rimage_3.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[0] on LV test/POOL_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[0] on LV test/POOL_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[1] on LV test/POOL_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[1] on LV test/POOL_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[0] on LV test/raid1_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[0] on LV test/raid1_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[1] on LV test/raid1_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[1] on LV test/raid1_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[0] on LV test/raid10_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[0] on LV test/raid10_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[1] on LV test/raid10_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[1] on LV test/raid10_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[2] on LV test/raid10_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_2.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[2] on LV test/raid10_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_2.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[3] on LV test/raid10_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[3] on LV test/raid10_rimage_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_3.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[0] on LV test/raid10_2_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[0] on LV test/raid10_2_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[1] on LV test/raid10_2_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[1] on LV test/raid10_2_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[2] on LV test/raid10_2_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[2] on LV test/raid10_2_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[3] on LV test/raid10_2_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[3] on LV test/raid10_2_rimage_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_3.
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[0] on LV test/raid10_3_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[0] on LV test/raid10_3_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[1] on LV test/raid10_3_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[1] on LV test/raid10_3_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[2] on LV test/raid10_3_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[2] on LV test/raid10_3_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[3] on LV test/raid10_3_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/raid10_3:0[3] on LV test/raid10_3_rimage_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_3.
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdb1 at 4096
>#format_text/format-text.c:790 Committing test metadata (326) to /dev/sdb1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdd1 at 4096
>#format_text/format-text.c:790 Committing test metadata (326) to /dev/sdd1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sde1 at 4096
>#format_text/format-text.c:790 Committing test metadata (326) to /dev/sde1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdf1 at 4096
>#format_text/format-text.c:790 Committing test metadata (326) to /dev/sdf1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdg1 at 4096
>#format_text/format-text.c:790 Committing test metadata (326) to /dev/sdg1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdh1 at 4096
>#format_text/format-text.c:790 Committing test metadata (326) to /dev/sdh1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdi1 at 4096
>#format_text/format-text.c:790 Committing test metadata (326) to /dev/sdi1 header at 4096
>#locking/locking.c:353 Dropping cache for test.
>#metadata/vg.c:89 Freeing VG test at 0x56102e401d90.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ noopencount flush ] [16384] (*1)
>#activate/dev_manager.c:760 Skipping checks for old devices without LVM- dm uuid prefix (kernel vsn 3 >= 3).
>#activate/activate.c:1591 test/raid10_3_rmeta_0 is not active
>#locking/file_locking.c:100 Locking LV prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 (R)
>#activate/activate.c:479 activation/volume_list configuration setting not defined: Checking only host tags for test/raid10_3_rmeta_0.
>#activate/activate.c:2815 Activating test/raid10_3_rmeta_0.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ noopencount flush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/sdg1 RO O_DIRECT
>#device/dev-io.c:390 /dev/sdg1: read_ahead is 8192 sectors
>#device/dev-io.c:650 Closed /dev/sdg1
>#mm/memlock.c:619 Entering prioritized section (activating).
>#mm/memlock.c:482 Raised task priority 0 -> -18.
>#activate/dev_manager.c:3224 Creating ACTIVATE tree for test/raid10_3_rmeta_0.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rmeta_0 to dtree
>#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 in deptree.
>#activate/activate.c:523 Getting driver version
>#ioctl/libdm-iface.c:1857 dm version [ opencount flush ] [16384] (*1)
>#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 in deptree.
>#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rmeta_0
>#activate/activate.c:535 Getting target version for linear
>#ioctl/libdm-iface.c:1857 dm versions [ opencount flush ] [16384] (*1)
>#activate/activate.c:572 Found linear target v1.3.0.
>#activate/activate.c:535 Getting target version for striped
>#ioctl/libdm-iface.c:1857 dm versions [ opencount flush ] [16384] (*1)
>#activate/activate.c:572 Found striped target v1.6.0.
>#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rmeta_0 is 8192
>#libdm-deptree.c:1944 Creating test-raid10_3_rmeta_0
>#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rmeta_0 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ noopencount flush ] [16384] (*1)
>#libdm-deptree.c:2696 Loading table for test-raid10_3_rmeta_0 (253:34).
>#libdm-deptree.c:2641 Adding target to (253:34): 0 8192 linear 8:97 2580480
>#ioctl/libdm-iface.c:1857 dm table (253:34) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm reload (253:34) [ noopencount flush ] [16384] (*1)
>#libdm-deptree.c:2746 Table size changed from 0 to 8192 for test-raid10_3_rmeta_0 (253:34).
>#libdm-deptree.c:1302 Resuming test-raid10_3_rmeta_0 (253:34).
>#libdm-common.c:2433 Udev cookie 0xd4d702a (semid 688129) created
>#libdm-common.c:2453 Udev cookie 0xd4d702a (semid 688129) incremented to 1
>#libdm-common.c:2325 Udev cookie 0xd4d702a (semid 688129) incremented to 2
>#libdm-common.c:2575 Udev cookie 0xd4d702a (semid 688129) assigned to RESUME task(5) with flags DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK (0x2c)
>#ioctl/libdm-iface.c:1857 dm resume (253:34) [ noopencount flush ] [16384] (*1)
>#libdm-common.c:1484 test-raid10_3_rmeta_0: Stacking NODE_ADD (253,34) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_0: Stacking NODE_READ_AHEAD 8192 (flags=1)
>#activate/dev_manager.c:3224 Creating CLEAN tree for test/raid10_3_rmeta_0.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:34) [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow [ opencount flush ] [16384] (*1)
>#mm/memlock.c:631 Leaving section (activated).
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ noopencount flush ] [16384] (*1)
>#activate/activate.c:1591 test/raid10_3_rmeta_1 is not active
>#locking/file_locking.c:100 Locking LV prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF (R)
>#activate/activate.c:479 activation/volume_list configuration setting not defined: Checking only host tags for test/raid10_3_rmeta_1.
>#activate/activate.c:2815 Activating test/raid10_3_rmeta_1.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ noopencount flush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/sdd1 RO O_DIRECT
>#device/dev-io.c:390 /dev/sdd1: read_ahead is 8192 sectors
>#device/dev-io.c:650 Closed /dev/sdd1
>#mm/memlock.c:619 Entering prioritized section (activating).
>#activate/dev_manager.c:3224 Creating ACTIVATE tree for test/raid10_3_rmeta_1.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rmeta_1 to dtree
>#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF in deptree.
>#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF in deptree.
>#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rmeta_1
>#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rmeta_1 is 8192
>#libdm-deptree.c:1944 Creating test-raid10_3_rmeta_1
>#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rmeta_1 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ noopencount flush ] [16384] (*1)
>#libdm-deptree.c:2696 Loading table for test-raid10_3_rmeta_1 (253:35).
>#libdm-deptree.c:2641 Adding target to (253:35): 0 8192 linear 8:49 2580480
>#ioctl/libdm-iface.c:1857 dm table (253:35) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm reload (253:35) [ noopencount flush ] [16384] (*1)
>#libdm-deptree.c:2746 Table size changed from 0 to 8192 for test-raid10_3_rmeta_1 (253:35).
>#libdm-deptree.c:1302 Resuming test-raid10_3_rmeta_1 (253:35).
>#libdm-common.c:2325 Udev cookie 0xd4d702a (semid 688129) incremented to 3
>#libdm-common.c:2575 Udev cookie 0xd4d702a (semid 688129) assigned to RESUME task(5) with flags DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK (0x2c)
>#ioctl/libdm-iface.c:1857 dm resume (253:35) [ noopencount flush ] [16384] (*1)
>#libdm-common.c:1484 test-raid10_3_rmeta_1: Stacking NODE_ADD (253,35) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_1: Stacking NODE_READ_AHEAD 8192 (flags=1)
>#activate/dev_manager.c:3224 Creating CLEAN tree for test/raid10_3_rmeta_1.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:35) [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow [ opencount flush ] [16384] (*1)
>#mm/memlock.c:631 Leaving section (activated).
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ noopencount flush ] [16384] (*1)
>#activate/activate.c:1591 test/raid10_3_rmeta_2 is not active
>#locking/file_locking.c:100 Locking LV prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu (R)
>#activate/activate.c:479 activation/volume_list configuration setting not defined: Checking only host tags for test/raid10_3_rmeta_2.
>#activate/activate.c:2815 Activating test/raid10_3_rmeta_2.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ noopencount flush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/sdf1 RO O_DIRECT
>#device/dev-io.c:390 /dev/sdf1: read_ahead is 8192 sectors
>#device/dev-io.c:650 Closed /dev/sdf1
>#mm/memlock.c:619 Entering prioritized section (activating).
>#activate/dev_manager.c:3224 Creating ACTIVATE tree for test/raid10_3_rmeta_2.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rmeta_2 to dtree
>#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu in deptree.
>#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu in deptree.
>#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rmeta_2
>#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rmeta_2 is 8192
>#libdm-deptree.c:1944 Creating test-raid10_3_rmeta_2
>#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rmeta_2 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ noopencount flush ] [16384] (*1)
>#libdm-deptree.c:2696 Loading table for test-raid10_3_rmeta_2 (253:36).
>#libdm-deptree.c:2641 Adding target to (253:36): 0 8192 linear 8:81 245760
>#ioctl/libdm-iface.c:1857 dm table (253:36) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm reload (253:36) [ noopencount flush ] [16384] (*1)
>#libdm-deptree.c:2746 Table size changed from 0 to 8192 for test-raid10_3_rmeta_2 (253:36).
>#libdm-deptree.c:1302 Resuming test-raid10_3_rmeta_2 (253:36).
>#libdm-common.c:2325 Udev cookie 0xd4d702a (semid 688129) incremented to 4
>#libdm-common.c:2575 Udev cookie 0xd4d702a (semid 688129) assigned to RESUME task(5) with flags DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK (0x2c)
>#ioctl/libdm-iface.c:1857 dm resume (253:36) [ noopencount flush ] [16384] (*1)
>#libdm-common.c:1484 test-raid10_3_rmeta_2: Stacking NODE_ADD (253,36) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_2: Stacking NODE_READ_AHEAD 8192 (flags=1)
>#activate/dev_manager.c:3224 Creating CLEAN tree for test/raid10_3_rmeta_2.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:36) [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow [ opencount flush ] [16384] (*1)
>#mm/memlock.c:631 Leaving section (activated).
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ noopencount flush ] [16384] (*1)
>#activate/activate.c:1591 test/raid10_3_rmeta_3 is not active
>#locking/file_locking.c:100 Locking LV prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk (R)
>#activate/activate.c:479 activation/volume_list configuration setting not defined: Checking only host tags for test/raid10_3_rmeta_3.
>#activate/activate.c:2815 Activating test/raid10_3_rmeta_3.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ noopencount flush ] [16384] (*1)
>#device/dev-io.c:601 Opened /dev/sdb1 RO O_DIRECT
>#device/dev-io.c:390 /dev/sdb1: read_ahead is 8192 sectors
>#device/dev-io.c:650 Closed /dev/sdb1
>#mm/memlock.c:619 Entering prioritized section (activating).
>#activate/dev_manager.c:3224 Creating ACTIVATE tree for test/raid10_3_rmeta_3.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rmeta_3 to dtree
>#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk in deptree.
>#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk in deptree.
>#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rmeta_3
>#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rmeta_3 is 8192
>#libdm-deptree.c:1944 Creating test-raid10_3_rmeta_3
>#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rmeta_3 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ noopencount flush ] [16384] (*1)
>#libdm-deptree.c:2696 Loading table for test-raid10_3_rmeta_3 (253:37).
>#libdm-deptree.c:2641 Adding target to (253:37): 0 8192 linear 8:17 245760
>#ioctl/libdm-iface.c:1857 dm table (253:37) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm reload (253:37) [ noopencount flush ] [16384] (*1)
>#libdm-deptree.c:2746 Table size changed from 0 to 8192 for test-raid10_3_rmeta_3 (253:37).
>#libdm-deptree.c:1302 Resuming test-raid10_3_rmeta_3 (253:37).
>#libdm-common.c:2325 Udev cookie 0xd4d702a (semid 688129) incremented to 5
>#libdm-common.c:2575 Udev cookie 0xd4d702a (semid 688129) assigned to RESUME task(5) with flags DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK (0x2c)
>#ioctl/libdm-iface.c:1857 dm resume (253:37) [ noopencount flush ] [16384] (*1)
>#libdm-common.c:1484 test-raid10_3_rmeta_3: Stacking NODE_ADD (253,37) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_3: Stacking NODE_READ_AHEAD 8192 (flags=1)
>#activate/dev_manager.c:3224 Creating CLEAN tree for test/raid10_3_rmeta_3.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:37) [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow [ opencount flush ] [16384] (*1)
>#mm/memlock.c:631 Leaving section (activated).
>#metadata/lv_manip.c:4073 Clearing metadata area of test/raid10_3_rmeta_0.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ noopencount flush ] [16384] (*1)
>#activate/activate.c:1591 test/raid10_3_rmeta_0 is active locally
>#mm/memlock.c:587 Unlock: Memlock counters: prioritized:1 locked:0 critical:0 daemon:0 suspended:0
>#mm/memlock.c:495 Restoring original task priority 0.
>#activate/fs.c:491 Syncing device names
>#libdm-common.c:2360 Udev cookie 0xd4d702a (semid 688129) decremented to 4
>#libdm-common.c:2646 Udev cookie 0xd4d702a (semid 688129) waiting for zero
>#libdm-common.c:2375 Udev cookie 0xd4d702a (semid 688129) destroyed
>#libdm-common.c:1484 test-raid10_3_rmeta_0: Skipping NODE_ADD (253,34) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_0: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rmeta_0 (253:34): read ahead is 8192
>#libdm-common.c:1373 test-raid10_3_rmeta_0: retaining kernel read ahead of 8192 (requested 8192)
>#libdm-common.c:1484 test-raid10_3_rmeta_1: Skipping NODE_ADD (253,35) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_1: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rmeta_1 (253:35): read ahead is 8192
>#libdm-common.c:1373 test-raid10_3_rmeta_1: retaining kernel read ahead of 8192 (requested 8192)
>#libdm-common.c:1484 test-raid10_3_rmeta_2: Skipping NODE_ADD (253,36) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_2: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rmeta_2 (253:36): read ahead is 8192
>#libdm-common.c:1373 test-raid10_3_rmeta_2: retaining kernel read ahead of 8192 (requested 8192)
>#libdm-common.c:1484 test-raid10_3_rmeta_3: Skipping NODE_ADD (253,37) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_3: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rmeta_3 (253:37): read ahead is 8192
>#libdm-common.c:1373 test-raid10_3_rmeta_3: retaining kernel read ahead of 8192 (requested 8192)
>#device/dev-cache.c:353 /dev/test/raid10_3_rmeta_0: Added to device cache (253:34)
>#metadata/lv_manip.c:7207 Initializing 512 B of logical volume "test/raid10_3_rmeta_0" with value 0.
>#metadata/lv_manip.c:4073 Clearing metadata area of test/raid10_3_rmeta_1.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ noopencount flush ] [16384] (*1)
>#activate/activate.c:1591 test/raid10_3_rmeta_1 is active locally
>#mm/memlock.c:587 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
>#activate/fs.c:491 Syncing device names
>#device/dev-cache.c:353 /dev/test/raid10_3_rmeta_1: Added to device cache (253:35)
>#metadata/lv_manip.c:7207 Initializing 512 B of logical volume "test/raid10_3_rmeta_1" with value 0.
>#metadata/lv_manip.c:4073 Clearing metadata area of test/raid10_3_rmeta_2.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ noopencount flush ] [16384] (*1)
>#activate/activate.c:1591 test/raid10_3_rmeta_2 is active locally
>#mm/memlock.c:587 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
>#activate/fs.c:491 Syncing device names
>#device/dev-cache.c:353 /dev/test/raid10_3_rmeta_2: Added to device cache (253:36)
>#metadata/lv_manip.c:7207 Initializing 512 B of logical volume "test/raid10_3_rmeta_2" with value 0.
>#metadata/lv_manip.c:4073 Clearing metadata area of test/raid10_3_rmeta_3.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ noopencount flush ] [16384] (*1)
>#activate/activate.c:1591 test/raid10_3_rmeta_3 is active locally
>#mm/memlock.c:587 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
>#activate/fs.c:491 Syncing device names
>#device/dev-cache.c:353 /dev/test/raid10_3_rmeta_3: Added to device cache (253:37)
>#metadata/lv_manip.c:7207 Initializing 512 B of logical volume "test/raid10_3_rmeta_3" with value 0.
>#locking/file_locking.c:95 Locking LV prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 (NL)
>#activate/activate.c:2645 Deactivating test/raid10_3_rmeta_0.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ noopencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ opencount flush ] [16384] (*1)
>#mm/memlock.c:619 Entering prioritized section (deactivating).
>#mm/memlock.c:482 Raised task priority 0 -> -18.
>#activate/dev_manager.c:3224 Creating DEACTIVATE tree for test/raid10_3_rmeta_0.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:34) [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm info (253:34) [ opencount flush ] [16384] (*1)
>#libdm-deptree.c:993 Removing test-raid10_3_rmeta_0 (253:34)
>#libdm-common.c:2433 Udev cookie 0xd4d8eef (semid 720897) created
>#libdm-common.c:2453 Udev cookie 0xd4d8eef (semid 720897) incremented to 1
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 2
>#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to REMOVE task(2) with flags DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK (0x2c)
>#ioctl/libdm-iface.c:1857 dm remove (253:34) [ noopencount flush retryremove ] [16384] (*1)
>#libdm-common.c:1487 test-raid10_3_rmeta_0: Stacking NODE_DEL [trust_udev]
>#mm/memlock.c:631 Leaving section (deactivated).
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ noopencount flush ] [16384] (*1)
>#locking/file_locking.c:95 Locking LV prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF (NL)
>#activate/activate.c:2645 Deactivating test/raid10_3_rmeta_1.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ noopencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ opencount flush ] [16384] (*1)
>#mm/memlock.c:619 Entering prioritized section (deactivating).
>#activate/dev_manager.c:3224 Creating DEACTIVATE tree for test/raid10_3_rmeta_1.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:35) [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm info (253:35) [ opencount flush ] [16384] (*1)
>#libdm-deptree.c:993 Removing test-raid10_3_rmeta_1 (253:35)
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 3
>#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to REMOVE task(2) with flags DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK (0x2c)
>#ioctl/libdm-iface.c:1857 dm remove (253:35) [ noopencount flush retryremove ] [16384] (*1)
>#libdm-common.c:1487 test-raid10_3_rmeta_1: Stacking NODE_DEL [trust_udev]
>#mm/memlock.c:631 Leaving section (deactivated).
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ noopencount flush ] [16384] (*1)
>#locking/file_locking.c:95 Locking LV prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu (NL)
>#activate/activate.c:2645 Deactivating test/raid10_3_rmeta_2.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ noopencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ opencount flush ] [16384] (*1)
>#mm/memlock.c:619 Entering prioritized section (deactivating).
>#activate/dev_manager.c:3224 Creating DEACTIVATE tree for test/raid10_3_rmeta_2.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:36) [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm info (253:36) [ opencount flush ] [16384] (*1)
>#libdm-deptree.c:993 Removing test-raid10_3_rmeta_2 (253:36)
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 3
>#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to REMOVE task(2) with flags DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK (0x2c)
>#ioctl/libdm-iface.c:1857 dm remove (253:36) [ noopencount flush retryremove ] [16384] (*1)
>#libdm-common.c:1487 test-raid10_3_rmeta_2: Stacking NODE_DEL [trust_udev]
>#mm/memlock.c:631 Leaving section (deactivated).
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ noopencount flush ] [16384] (*1)
>#locking/file_locking.c:95 Locking LV prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk (NL)
>#activate/activate.c:2645 Deactivating test/raid10_3_rmeta_3.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ noopencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ opencount flush ] [16384] (*1)
>#mm/memlock.c:619 Entering prioritized section (deactivating).
>#activate/dev_manager.c:3224 Creating DEACTIVATE tree for test/raid10_3_rmeta_3.
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:37) [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm info (253:37) [ opencount flush ] [16384] (*1)
>#libdm-deptree.c:993 Removing test-raid10_3_rmeta_3 (253:37)
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 2
>#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to REMOVE task(2) with flags DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK (0x2c)
>#ioctl/libdm-iface.c:1857 dm remove (253:37) [ noopencount flush retryremove ] [16384] (*1)
>#libdm-common.c:1487 test-raid10_3_rmeta_3: Stacking NODE_DEL [trust_udev]
>#mm/memlock.c:631 Leaving section (deactivated).
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ noopencount flush ] [16384] (*1)
>#metadata/lv_manip.c:5973 LV raid10_3_rmeta_0 in VG test is now hidden.
>#metadata/lv_manip.c:5973 LV raid10_3_rmeta_1 in VG test is now hidden.
>#metadata/lv_manip.c:5973 LV raid10_3_rmeta_2 in VG test is now hidden.
>#metadata/lv_manip.c:5973 LV raid10_3_rmeta_3 in VG test is now hidden.
>#mm/memlock.c:587 Unlock: Memlock counters: prioritized:1 locked:0 critical:0 daemon:0 suspended:0
>#mm/memlock.c:495 Restoring original task priority 0.
>#metadata/pv_manip.c:417 /dev/sdg1 0: 0 2: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 1: 2 1: POOL_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 2: 3 256: POOL_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 3: 259 1: raid1_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 4: 260 25: raid1_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 5: 285 1: raid10_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 6: 286 13: raid10_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 7: 299 1: raid10_2_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 8: 300 13: raid10_2_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 9: 313 1: raid10_3_rmeta_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 10: 314 13: raid10_3_rimage_0(0:0)
>#metadata/pv_manip.c:417 /dev/sdg1 11: 327 6070: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 0: 0 2: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 1: 2 1: POOL_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 2: 3 256: POOL_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 3: 259 1: raid1_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 4: 260 25: raid1_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 5: 285 1: raid10_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 6: 286 13: raid10_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 7: 299 1: raid10_2_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 8: 300 13: raid10_2_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 9: 313 1: raid10_3_rmeta_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 10: 314 13: raid10_3_rimage_1(0:0)
>#metadata/pv_manip.c:417 /dev/sdd1 11: 327 6070: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 0: 0 1: raid10_rmeta_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 1: 1 13: raid10_rimage_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 2: 14 1: raid10_2_rmeta_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 3: 15 13: raid10_2_rimage_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 4: 28 1: raid10_3_rmeta_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 5: 29 13: raid10_3_rimage_2(0:0)
>#metadata/pv_manip.c:417 /dev/sdf1 6: 42 6355: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 0: 0 1: raid10_rmeta_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 1: 1 13: raid10_rimage_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 2: 14 1: raid10_2_rmeta_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 3: 15 13: raid10_2_rimage_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 4: 28 1: raid10_3_rmeta_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 5: 29 13: raid10_3_rimage_3(0:0)
>#metadata/pv_manip.c:417 /dev/sdb1 6: 42 6355: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdh1 0: 0 6397: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sdi1 0: 0 6397: NULL(0:0)
>#metadata/pv_manip.c:417 /dev/sde1 0: 0 6397: NULL(0:0)
>#locking/locking.c:353 Dropping cache for test.
>#mm/memlock.c:587 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdb1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdb1 at 2603008 len 13987 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdd1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdd1 at 2603008 len 13987 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sde1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sde1 at 2603008 len 13987 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdf1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdf1 at 2603008 len 13987 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdg1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdg1 at 2603008 len 13987 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdh1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdh1 at 2603008 len 13987 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdi1 at 4096
>#format_text/format-text.c:678 Writing metadata for VG test to /dev/sdi1 at 2603008 len 13987 (wrap 0)
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdb1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (327) to /dev/sdb1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdd1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (327) to /dev/sdd1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sde1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (327) to /dev/sde1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdf1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (327) to /dev/sdf1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdg1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (327) to /dev/sdg1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdh1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (327) to /dev/sdh1 header at 4096
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdi1 at 4096
>#format_text/format-text.c:790 Pre-Committing test metadata (327) to /dev/sdi1 header at 4096
>#metadata/vg.c:74 Allocated VG test at 0x56102e421de0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/POOL_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid1_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rimage_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_rmeta_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rimage_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_2_rmeta_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rimage_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rmeta_0.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rimage_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rmeta_1.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rimage_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rmeta_2.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rimage_3.
>#format_text/import_vsn1.c:591 Importing logical volume test/raid10_3_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[0] on LV test/POOL_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[0] on LV test/POOL_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[1] on LV test/POOL_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/POOL:0[1] on LV test/POOL_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/POOL:0 as an user of test/POOL_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[0] on LV test/raid1_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[0] on LV test/raid1_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[1] on LV test/raid1_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid1:0[1] on LV test/raid1_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid1:0 as an user of test/raid1_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[0] on LV test/raid10_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[0] on LV test/raid10_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[1] on LV test/raid10_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[1] on LV test/raid10_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[2] on LV test/raid10_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_2.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[2] on LV test/raid10_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_2.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[3] on LV test/raid10_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rmeta_3.
>#metadata/lv_manip.c:1191 Stack test/raid10:0[3] on LV test/raid10_rimage_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10:0 as an user of test/raid10_rimage_3.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[0] on LV test/raid10_2_rmeta_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[0] on LV test/raid10_2_rimage_0:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_0.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[1] on LV test/raid10_2_rmeta_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[1] on LV test/raid10_2_rimage_1:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_1.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[2] on LV test/raid10_2_rmeta_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[2] on LV test/raid10_2_rimage_2:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_2.
>#metadata/lv_manip.c:1191 Stack test/raid10_2:0[3] on LV test/raid10_2_rmeta_3:0.
>#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rmeta_3. >#metadata/lv_manip.c:1191 Stack test/raid10_2:0[3] on LV test/raid10_2_rimage_3:0. >#metadata/lv_manip.c:747 Adding test/raid10_2:0 as an user of test/raid10_2_rimage_3. >#metadata/lv_manip.c:1191 Stack test/raid10_3:0[0] on LV test/raid10_3_rmeta_0:0. >#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_0. >#metadata/lv_manip.c:1191 Stack test/raid10_3:0[0] on LV test/raid10_3_rimage_0:0. >#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_0. >#metadata/lv_manip.c:1191 Stack test/raid10_3:0[1] on LV test/raid10_3_rmeta_1:0. >#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_1. >#metadata/lv_manip.c:1191 Stack test/raid10_3:0[1] on LV test/raid10_3_rimage_1:0. >#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_1. >#metadata/lv_manip.c:1191 Stack test/raid10_3:0[2] on LV test/raid10_3_rmeta_2:0. >#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_2. >#metadata/lv_manip.c:1191 Stack test/raid10_3:0[2] on LV test/raid10_3_rimage_2:0. >#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_2. >#metadata/lv_manip.c:1191 Stack test/raid10_3:0[3] on LV test/raid10_3_rmeta_3:0. >#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rmeta_3. >#metadata/lv_manip.c:1191 Stack test/raid10_3:0[3] on LV test/raid10_3_rimage_3:0. >#metadata/lv_manip.c:747 Adding test/raid10_3:0 as an user of test/raid10_3_rimage_3. 
>#format_text/format-text.c:331 Reading mda header sector from /dev/sdb1 at 4096 >#format_text/format-text.c:790 Committing test metadata (327) to /dev/sdb1 header at 4096 >#format_text/format-text.c:331 Reading mda header sector from /dev/sdd1 at 4096 >#format_text/format-text.c:790 Committing test metadata (327) to /dev/sdd1 header at 4096 >#format_text/format-text.c:331 Reading mda header sector from /dev/sde1 at 4096 >#format_text/format-text.c:790 Committing test metadata (327) to /dev/sde1 header at 4096 >#format_text/format-text.c:331 Reading mda header sector from /dev/sdf1 at 4096 >#format_text/format-text.c:790 Committing test metadata (327) to /dev/sdf1 header at 4096 >#format_text/format-text.c:331 Reading mda header sector from /dev/sdg1 at 4096 >#format_text/format-text.c:790 Committing test metadata (327) to /dev/sdg1 header at 4096 >#format_text/format-text.c:331 Reading mda header sector from /dev/sdh1 at 4096 >#format_text/format-text.c:790 Committing test metadata (327) to /dev/sdh1 header at 4096 >#format_text/format-text.c:331 Reading mda header sector from /dev/sdi1 at 4096 >#format_text/format-text.c:790 Committing test metadata (327) to /dev/sdi1 header at 4096 >#locking/locking.c:353 Dropping cache for test. >#metadata/vg.c:89 Freeing VG test at 0x56102e419dc0. >#mm/memlock.c:587 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0 >#format_text/archiver.c:576 Creating volume group backup "/etc/lvm/backup/test" (seqno 327). >#format_text/format-text.c:999 Writing test metadata to /etc/lvm/backup/.lvm_host-087.virt.lab.msp.redhat.com_3187_504330449 >#format_text/format-text.c:1018 Renaming /etc/lvm/backup/.lvm_host-087.virt.lab.msp.redhat.com_3187_504330449 to /etc/lvm/backup/test.tmp >#format_text/format-text.c:1043 Committing test metadata (327) >#format_text/format-text.c:1044 Renaming /etc/lvm/backup/test.tmp to /etc/lvm/backup/test >#metadata/lv.c:1518 Activating logical volume test/raid10_3 exclusively. 
>#activate/dev_manager.c:778 Getting device info for test-raid10_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ noopencount flush ] [16384] (*1) >#activate/activate.c:1591 test/raid10_3 is not active >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV [ noopencount flush ] [16384] (*1) >#activate/activate.c:1591 test/raid10_3_rimage_0 is not active >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO [ noopencount flush ] [16384] (*1) >#activate/activate.c:1591 test/raid10_3_rimage_1 is not active >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU [ noopencount flush ] [16384] (*1) >#activate/activate.c:1591 test/raid10_3_rimage_2 is not active >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t [ noopencount flush ] [16384] (*1) >#activate/activate.c:1591 test/raid10_3_rimage_3 is not active >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6]. 
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ noopencount flush ] [16384] (*1) >#activate/activate.c:1591 test/raid10_3_rmeta_0 is not active >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ noopencount flush ] [16384] (*1) >#activate/activate.c:1591 test/raid10_3_rmeta_1 is not active >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ noopencount flush ] [16384] (*1) >#activate/activate.c:1591 test/raid10_3_rmeta_2 is not active >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ noopencount flush ] [16384] (*1) >#activate/activate.c:1591 test/raid10_3_rmeta_3 is not active >#locking/file_locking.c:114 Locking LV prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay (EX) >#activate/activate.c:479 activation/volume_list configuration setting not defined: Checking only host tags for test/raid10_3. >#activate/activate.c:2815 Activating test/raid10_3 exclusively noscan. >#activate/dev_manager.c:778 Getting device info for test-raid10_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ noopencount flush ] [16384] (*1) >#mm/memlock.c:619 Entering prioritized section (activating). >#mm/memlock.c:482 Raised task priority 0 -> -18. 
>#activate/dev_manager.c:3224 Creating ACTIVATE tree for test/raid10_3. >#activate/dev_manager.c:778 Getting device info for test-raid10_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay-real]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay-real [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay-cow]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay-cow [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_0-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV-real]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV-real [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_0-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV-cow]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV-cow [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6]. 
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_1-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO-real]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO-real [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_1-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO-cow]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO-cow [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF]. 
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_2-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU-real]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU-real [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_2-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU-cow]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU-cow [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu]. 
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_3-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t-real]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t-real [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_3-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t-cow]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t-cow [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk]. 
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow]. >#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow [ opencount flush ] [16384] (*1) >#activate/dev_manager.c:2868 Adding new LV test/raid10_3 to dtree >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay in deptree. >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay in deptree. >#activate/dev_manager.c:2790 Checking kernel supports raid10 segment type for test/raid10_3 >#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rimage_0 to dtree >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV in deptree. >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV in deptree. >#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rimage_0 >#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rimage_0 is 8192 >#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rmeta_0 to dtree >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 in deptree. >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 in deptree. 
>#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rmeta_0 >#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rmeta_0 is 8192 >#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rimage_1 to dtree >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO in deptree. >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO in deptree. >#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rimage_1 >#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rimage_1 is 8192 >#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rmeta_1 to dtree >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF in deptree. >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF in deptree. >#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rmeta_1 >#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rmeta_1 is 8192 >#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rimage_2 to dtree >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU in deptree. >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU in deptree. >#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rimage_2 >#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rimage_2 is 8192 >#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rmeta_2 to dtree >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu in deptree. >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu in deptree. 
>#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rmeta_2 >#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rmeta_2 is 8192 >#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rimage_3 to dtree >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t in deptree. >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t in deptree. >#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rimage_3 >#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rimage_3 is 8192 >#activate/dev_manager.c:2868 Adding new LV test/raid10_3_rmeta_3 to dtree >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk in deptree. >#libdm-deptree.c:604 Not matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk in deptree. >#activate/dev_manager.c:2790 Checking kernel supports striped segment type for test/raid10_3_rmeta_3 >#metadata/metadata.c:2133 Calculated readahead of LV raid10_3_rmeta_3 is 8192 >#libdm-deptree.c:572 Matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 in deptree. >#libdm-deptree.c:572 Matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV in deptree. >#libdm-deptree.c:572 Matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF in deptree. >#libdm-deptree.c:572 Matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO in deptree. >#libdm-deptree.c:572 Matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu in deptree. >#libdm-deptree.c:572 Matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU in deptree. >#libdm-deptree.c:572 Matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk in deptree. 
>#libdm-deptree.c:572 Matched uuid LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t in deptree. >#libdm-deptree.c:1944 Creating test-raid10_3_rmeta_0 >#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rmeta_0 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2696 Loading table for test-raid10_3_rmeta_0 (253:34). >#libdm-deptree.c:2641 Adding target to (253:34): 0 8192 linear 8:97 2580480 >#ioctl/libdm-iface.c:1857 dm table (253:34) [ opencount flush ] [16384] (*1) >#ioctl/libdm-iface.c:1857 dm reload (253:34) [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2746 Table size changed from 0 to 8192 for test-raid10_3_rmeta_0 (253:34). >#libdm-deptree.c:1302 Resuming test-raid10_3_rmeta_0 (253:34). >#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 2 >#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0 (0x12e) >#ioctl/libdm-iface.c:1857 dm resume (253:34) [ noopencount flush ] [16384] (*1) >#libdm-common.c:1487 test-raid10_3_rmeta_0: Unstacking NODE_DEL [trust_udev] >#libdm-common.c:1484 test-raid10_3_rmeta_0: Stacking NODE_ADD (253,34) 0:6 0660 [trust_udev] >#libdm-common.c:1494 test-raid10_3_rmeta_0: Stacking NODE_READ_AHEAD 8192 (flags=1) >#libdm-deptree.c:1944 Creating test-raid10_3_rimage_0 >#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rimage_0 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2696 Loading table for test-raid10_3_rimage_0 (253:35). 
>#libdm-deptree.c:2641 Adding target to (253:35): 0 106496 linear 8:97 2588672 >#ioctl/libdm-iface.c:1857 dm table (253:35) [ opencount flush ] [16384] (*1) >#ioctl/libdm-iface.c:1857 dm reload (253:35) [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2746 Table size changed from 0 to 106496 for test-raid10_3_rimage_0 (253:35). >#libdm-deptree.c:1302 Resuming test-raid10_3_rimage_0 (253:35). >#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 3 >#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0 (0x12e) >#ioctl/libdm-iface.c:1857 dm resume (253:35) [ noopencount flush ] [16384] (*1) >#libdm-common.c:1484 test-raid10_3_rimage_0: Stacking NODE_ADD (253,35) 0:6 0660 [trust_udev] >#libdm-common.c:1494 test-raid10_3_rimage_0: Stacking NODE_READ_AHEAD 8192 (flags=1) >#libdm-deptree.c:1944 Creating test-raid10_3_rmeta_1 >#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rmeta_1 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2696 Loading table for test-raid10_3_rmeta_1 (253:36). >#libdm-deptree.c:2641 Adding target to (253:36): 0 8192 linear 8:49 2580480 >#ioctl/libdm-iface.c:1857 dm table (253:36) [ opencount flush ] [16384] (*1) >#ioctl/libdm-iface.c:1857 dm reload (253:36) [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2746 Table size changed from 0 to 8192 for test-raid10_3_rmeta_1 (253:36). >#libdm-deptree.c:1302 Resuming test-raid10_3_rmeta_1 (253:36). 
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 4 >#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0 (0x12e) >#ioctl/libdm-iface.c:1857 dm resume (253:36) [ noopencount flush ] [16384] (*1) >#libdm-common.c:1487 test-raid10_3_rmeta_1: Unstacking NODE_DEL [trust_udev] >#libdm-common.c:1484 test-raid10_3_rmeta_1: Stacking NODE_ADD (253,36) 0:6 0660 [trust_udev] >#libdm-common.c:1494 test-raid10_3_rmeta_1: Stacking NODE_READ_AHEAD 8192 (flags=1) >#libdm-deptree.c:1944 Creating test-raid10_3_rimage_1 >#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rimage_1 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2696 Loading table for test-raid10_3_rimage_1 (253:37). >#libdm-deptree.c:2641 Adding target to (253:37): 0 106496 linear 8:49 2588672 >#ioctl/libdm-iface.c:1857 dm table (253:37) [ opencount flush ] [16384] (*1) >#ioctl/libdm-iface.c:1857 dm reload (253:37) [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2746 Table size changed from 0 to 106496 for test-raid10_3_rimage_1 (253:37). >#libdm-deptree.c:1302 Resuming test-raid10_3_rimage_1 (253:37). 
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 5 >#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0 (0x12e) >#ioctl/libdm-iface.c:1857 dm resume (253:37) [ noopencount flush ] [16384] (*1) >#libdm-common.c:1484 test-raid10_3_rimage_1: Stacking NODE_ADD (253,37) 0:6 0660 [trust_udev] >#libdm-common.c:1494 test-raid10_3_rimage_1: Stacking NODE_READ_AHEAD 8192 (flags=1) >#libdm-deptree.c:1944 Creating test-raid10_3_rmeta_2 >#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rmeta_2 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2696 Loading table for test-raid10_3_rmeta_2 (253:38). >#libdm-deptree.c:2641 Adding target to (253:38): 0 8192 linear 8:81 245760 >#ioctl/libdm-iface.c:1857 dm table (253:38) [ opencount flush ] [16384] (*1) >#ioctl/libdm-iface.c:1857 dm reload (253:38) [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2746 Table size changed from 0 to 8192 for test-raid10_3_rmeta_2 (253:38). >#libdm-deptree.c:1302 Resuming test-raid10_3_rmeta_2 (253:38). 
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 6 >#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0 (0x12e) >#ioctl/libdm-iface.c:1857 dm resume (253:38) [ noopencount flush ] [16384] (*1) >#libdm-common.c:1487 test-raid10_3_rmeta_2: Unstacking NODE_DEL [trust_udev] >#libdm-common.c:1484 test-raid10_3_rmeta_2: Stacking NODE_ADD (253,38) 0:6 0660 [trust_udev] >#libdm-common.c:1494 test-raid10_3_rmeta_2: Stacking NODE_READ_AHEAD 8192 (flags=1) >#libdm-deptree.c:1944 Creating test-raid10_3_rimage_2 >#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rimage_2 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2696 Loading table for test-raid10_3_rimage_2 (253:39). >#libdm-deptree.c:2641 Adding target to (253:39): 0 106496 linear 8:81 253952 >#ioctl/libdm-iface.c:1857 dm table (253:39) [ opencount flush ] [16384] (*1) >#ioctl/libdm-iface.c:1857 dm reload (253:39) [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2746 Table size changed from 0 to 106496 for test-raid10_3_rimage_2 (253:39). >#libdm-deptree.c:1302 Resuming test-raid10_3_rimage_2 (253:39). 
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 7 >#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0 (0x12e) >#ioctl/libdm-iface.c:1857 dm resume (253:39) [ noopencount flush ] [16384] (*1) >#libdm-common.c:1484 test-raid10_3_rimage_2: Stacking NODE_ADD (253,39) 0:6 0660 [trust_udev] >#libdm-common.c:1494 test-raid10_3_rimage_2: Stacking NODE_READ_AHEAD 8192 (flags=1) >#libdm-deptree.c:1944 Creating test-raid10_3_rmeta_3 >#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rmeta_3 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2696 Loading table for test-raid10_3_rmeta_3 (253:40). >#libdm-deptree.c:2641 Adding target to (253:40): 0 8192 linear 8:17 245760 >#ioctl/libdm-iface.c:1857 dm table (253:40) [ opencount flush ] [16384] (*1) >#ioctl/libdm-iface.c:1857 dm reload (253:40) [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2746 Table size changed from 0 to 8192 for test-raid10_3_rmeta_3 (253:40). >#libdm-deptree.c:1302 Resuming test-raid10_3_rmeta_3 (253:40). 
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 7 >#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0 (0x12e) >#ioctl/libdm-iface.c:1857 dm resume (253:40) [ noopencount flush ] [16384] (*1) >#libdm-common.c:1487 test-raid10_3_rmeta_3: Unstacking NODE_DEL [trust_udev] >#libdm-common.c:1484 test-raid10_3_rmeta_3: Stacking NODE_ADD (253,40) 0:6 0660 [trust_udev] >#libdm-common.c:1494 test-raid10_3_rmeta_3: Stacking NODE_READ_AHEAD 8192 (flags=1) >#libdm-deptree.c:1944 Creating test-raid10_3_rimage_3 >#ioctl/libdm-iface.c:1857 dm create test-raid10_3_rimage_3 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2696 Loading table for test-raid10_3_rimage_3 (253:41). >#libdm-deptree.c:2641 Adding target to (253:41): 0 106496 linear 8:17 253952 >#ioctl/libdm-iface.c:1857 dm table (253:41) [ opencount flush ] [16384] (*1) >#ioctl/libdm-iface.c:1857 dm reload (253:41) [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2746 Table size changed from 0 to 106496 for test-raid10_3_rimage_3 (253:41). >#libdm-deptree.c:1302 Resuming test-raid10_3_rimage_3 (253:41). 
>#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 6 >#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0 (0x12e) >#ioctl/libdm-iface.c:1857 dm resume (253:41) [ noopencount flush ] [16384] (*1) >#libdm-common.c:1484 test-raid10_3_rimage_3: Stacking NODE_ADD (253,41) 0:6 0660 [trust_udev] >#libdm-common.c:1494 test-raid10_3_rimage_3: Stacking NODE_READ_AHEAD 8192 (flags=1) >#libdm-deptree.c:1944 Creating test-raid10_3 >#ioctl/libdm-iface.c:1857 dm create test-raid10_3 LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2696 Loading table for test-raid10_3 (253:42). >#libdm-deptree.c:2251 Getting target version for raid >#ioctl/libdm-iface.c:1857 dm versions [ opencount flush ] [16384] (*1) >#libdm-deptree.c:2270 Found raid target v1.13.2. >#libdm-deptree.c:2641 Adding target to (253:42): 0 212992 raid raid10 5 128 region_size 4096 raid10_copies 2 4 253:34 253:35 253:36 253:37 253:38 253:39 253:40 253:41 >#ioctl/libdm-iface.c:1857 dm table (253:42) [ opencount flush ] [16384] (*1) >#ioctl/libdm-iface.c:1857 dm reload (253:42) [ noopencount flush ] [16384] (*1) >#libdm-deptree.c:2746 Table size changed from 0 to 212992 for test-raid10_3 (253:42). >#libdm-deptree.c:1302 Resuming test-raid10_3 (253:42). >#libdm-common.c:2325 Udev cookie 0xd4d8eef (semid 720897) incremented to 4 >#libdm-common.c:2575 Udev cookie 0xd4d8eef (semid 720897) assigned to RESUME task(5) with flags DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0 (0x120) >#ioctl/libdm-iface.c:1857 dm resume (253:42) [ noopencount flush ] [16384] (*1) >#libdm-common.c:1484 test-raid10_3: Stacking NODE_ADD (253,42) 0:6 0660 [trust_udev] >#libdm-common.c:1494 test-raid10_3: Stacking NODE_READ_AHEAD 1024 (flags=1) >#activate/dev_manager.c:3224 Creating CLEAN tree for test/raid10_3. 
>#activate/dev_manager.c:778 Getting device info for test-raid10_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:42) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:41) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:40) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:39) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:38) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:37) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:36) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:35) [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm deps (253:34) [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_0-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_0-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvAS6EqcQkEWqMjHC1lnKtE3SYFLu8qWVV-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6 [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_0-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvKVkcqGT3cntEvsuow1M11LiCr6jFmmK6-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_1-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_1-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvE7kwwNZ0EDloqz3NuWScsuUKfcQneDTO-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_1-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvzzmJf1nTJ5liX1rl282YTuFTeC4ktobF-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_2-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_2-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvepQJ8fc2umurKA79ZHlfbxMU373lRCmU-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_2-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvK39res7tVWpN0dIGQR7xGW14b2rCRneu-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_3-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rimage_3-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51Jv4w8CBxu4iZFK3w0VCOogyy4zNrYg0T0t-cow [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-real [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-real [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3_rmeta_3-cow [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvMr7Gl0gd7wPW3cW4Vu5vOg9cqqzKErSk-cow [ opencount flush ] [16384] (*1)
>#mm/memlock.c:631 Leaving section (activated).
>#libdm-config.c:975 dmeventd/executable not found in config: defaulting to /usr/sbin/dmeventd
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ opencount flush ] [16384] (*1)
>#libdevmapper-event.c:760 test-raid10_3: device not registered.
>#activate/activate.c:2025 Monitoring test/raid10_3 with libdevmapper-event-lvm2raid.so.
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ opencount flush ] [16384] (*1)
>#activate/activate.c:1837 Monitored LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay for events
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ opencount flush ] [16384] (*1)
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ opencount flush ] [16384] (*1)
>#activate/dev_manager.c:778 Getting device info for test-raid10_3 [LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay].
>#ioctl/libdm-iface.c:1857 dm info LVM-prawKhZOlbTjc9me9XsaTB6SzDQx51JvTLHUyyuymKEFwuYuaW5gQINyMyqGaoay [ noopencount flush ] [16384] (*1)
>#activate/activate.c:1591 test/raid10_3 is active locally
>#mm/memlock.c:587 Unlock: Memlock counters: prioritized:1 locked:0 critical:0 daemon:0 suspended:0
>#mm/memlock.c:495 Restoring original task priority 0.
>#activate/fs.c:491 Syncing device names
>#libdm-common.c:2360 Udev cookie 0xd4d8eef (semid 720897) decremented to 1
>#libdm-common.c:2646 Udev cookie 0xd4d8eef (semid 720897) waiting for zero
>#libdm-common.c:2375 Udev cookie 0xd4d8eef (semid 720897) destroyed
>#libdm-common.c:1484 test-raid10_3_rmeta_0: Skipping NODE_ADD (253,34) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_0: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rmeta_0 (253:34): read ahead is 256
>#libdm-common.c:1298 test-raid10_3_rmeta_0 (253:34): Setting read ahead to 8192
>#libdm-common.c:1484 test-raid10_3_rimage_0: Skipping NODE_ADD (253,35) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rimage_0: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rimage_0 (253:35): read ahead is 256
>#libdm-common.c:1298 test-raid10_3_rimage_0 (253:35): Setting read ahead to 8192
>#libdm-common.c:1484 test-raid10_3_rmeta_1: Skipping NODE_ADD (253,36) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_1: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rmeta_1 (253:36): read ahead is 256
>#libdm-common.c:1298 test-raid10_3_rmeta_1 (253:36): Setting read ahead to 8192
>#libdm-common.c:1484 test-raid10_3_rimage_1: Skipping NODE_ADD (253,37) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rimage_1: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rimage_1 (253:37): read ahead is 256
>#libdm-common.c:1298 test-raid10_3_rimage_1 (253:37): Setting read ahead to 8192
>#libdm-common.c:1484 test-raid10_3_rmeta_2: Skipping NODE_ADD (253,38) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_2: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rmeta_2 (253:38): read ahead is 8192
>#libdm-common.c:1373 test-raid10_3_rmeta_2: retaining kernel read ahead of 8192 (requested 8192)
>#libdm-common.c:1484 test-raid10_3_rimage_2: Skipping NODE_ADD (253,39) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rimage_2: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rimage_2 (253:39): read ahead is 8192
>#libdm-common.c:1373 test-raid10_3_rimage_2: retaining kernel read ahead of 8192 (requested 8192)
>#libdm-common.c:1484 test-raid10_3_rmeta_3: Skipping NODE_ADD (253,40) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rmeta_3: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rmeta_3 (253:40): read ahead is 8192
>#libdm-common.c:1373 test-raid10_3_rmeta_3: retaining kernel read ahead of 8192 (requested 8192)
>#libdm-common.c:1484 test-raid10_3_rimage_3: Skipping NODE_ADD (253,41) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3_rimage_3: Processing NODE_READ_AHEAD 8192 (flags=1)
>#libdm-common.c:1248 test-raid10_3_rimage_3 (253:41): read ahead is 8192
>#libdm-common.c:1373 test-raid10_3_rimage_3: retaining kernel read ahead of 8192 (requested 8192)
>#libdm-common.c:1484 test-raid10_3: Skipping NODE_ADD (253,42) 0:6 0660 [trust_udev]
>#libdm-common.c:1494 test-raid10_3: Processing NODE_READ_AHEAD 1024 (flags=1)
>#libdm-common.c:1248 test-raid10_3 (253:42): read ahead is 256
>#libdm-common.c:1298 test-raid10_3 (253:42): Setting read ahead to 1024
>#device/dev-cache.c:353 /dev/test/raid10_3: Added to device cache (253:42)
>#metadata/lv_manip.c:7192 Wiping known signatures on logical volume "test/raid10_3"
>#metadata/lv_manip.c:7207 Initializing 4.00 KiB of logical volume "test/raid10_3" with value 0.
>#device/bcache.c:649 bcache io error -5 fd 3
>#label/label.c:1074 dev_write_zeros /dev/test/raid10_3 at 0 bcache flush failed invalidate fd 3
>#metadata/lv_manip.c:7211 <backtrace>
>#metadata/lv_manip.c:8094 Logical volume "raid10_3" created.
>#mm/memlock.c:587 Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
>#activate/fs.c:491 Syncing device names
>#locking/locking.c:353 Dropping cache for test.
>#misc/lvm-flock.c:70 Unlocking /run/lock/lvm/V_test
>#misc/lvm-flock.c:47 _undo_flock /run/lock/lvm/V_test
>#metadata/vg.c:89 Freeing VG test at 0x56102e421de0.
>#metadata/vg.c:89 Freeing VG test at 0x56102e3f9d70.
>#cache/lvmcache.c:2528 Dropping VG info
>#cache/lvmcache.c:750 lvmcache has no info for vgname "#orphans_lvm1" with VGID #orphans_lvm1.
>#cache/lvmcache.c:750 lvmcache has no info for vgname "#orphans_lvm1".
>#cache/lvmcache.c:2076 lvmcache: Initialised VG #orphans_lvm1.
>#cache/lvmcache.c:750 lvmcache has no info for vgname "#orphans_pool" with VGID #orphans_pool.
>#cache/lvmcache.c:750 lvmcache has no info for vgname "#orphans_pool".
>#cache/lvmcache.c:2076 lvmcache: Initialised VG #orphans_pool.
>#cache/lvmcache.c:750 lvmcache has no info for vgname "#orphans_lvm2" with VGID #orphans_lvm2.
>#cache/lvmcache.c:750 lvmcache has no info for vgname "#orphans_lvm2".
>#cache/lvmcache.c:2076 lvmcache: Initialised VG #orphans_lvm2.
>#device/bcache.c:41 io_submit failed: Inappropriate ioctl for device
>#device/bcache.c:649 bcache io error -5 fd 3
>#lvmcmdline.c:3036 Completed: lvcreate --type raid10 -m 1 -vvvv -n raid10_3 -L 100M test