Bug 1269837 - [hosted-engine-setup] Deployment over iSCSI fails due to broken symlink under /rhev/data-center/
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-hosted-engine-ha
Classification: oVirt
Component: General
Version: 1.2.6
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ovirt-3.5.5
Target Release: 1.2.7.1
Assignee: Martin Sivák
QA Contact: Elad
URL:
Whiteboard: integration
Depends On: 1159240
Blocks: 1263111
 
Reported: 2015-10-08 10:32 UTC by Elad
Modified: 2015-10-26 13:44 UTC
14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-26 13:44:20 UTC
oVirt Team: ---
Embargoed:
gklein: ovirt-3.5.z?
gklein: blocker?
sherold: planning_ack+
sbonazzo: devel_ack+
ebenahar: testing_ack?


Attachments
/var/log/ from host (1.06 MB, application/x-gzip)
2015-10-08 10:32 UTC, Elad
no flags


Links
oVirt gerrit 47141 (ovirt-hosted-engine-ha-1.2) MERGED: get rid of obsoleted getVolumePath calls (last updated 2020-04-20 23:57:37 UTC)

Description Elad 2015-10-08 10:32:29 UTC
Created attachment 1080966 [details]
/var/log/ from host

Description of problem:
Deployment using iSCSI fails with:

2015-10-08 11:44:35 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:932 execute-output: ('/bin/systemctl', 'start', 'sanlock.service') stdout:


2015-10-08 11:44:35 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:937 execute-output: ('/bin/systemctl', 'start', 'sanlock.service') stderr:


2015-10-08 11:44:36 DEBUG otopi.context context._executeMethod:152 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 142, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/sanlock/lockspace.py", line 167, in _misc
    lockspace + '.metadata': md_size,
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 350, in create
    service_size=size)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 322, in create_volume
    stat = os.stat(path)
OSError: [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/b3164ebe-4f1b-478c-9225-ebaa5715a061/images/308540ac-ddc3-41bf-843e-1a0338b5850d/1025d6fe-434f-4583-9a1c-641983fd5f0e'
2015-10-08 11:44:36 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Misc configuration': [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/b3164ebe-4f1b-478c-9225-ebaa5715a061/images/308540ac-ddc3-41bf-843e-1a0338b5850d/1025d6fe-434f-4583-9a1c-641983fd5f0e'
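The os.stat() call in create_volume follows symlinks, so a dangling link under /rhev/data-center/ surfaces as exactly this ENOENT even though the link itself exists. A minimal sketch of how to tell a broken symlink from a genuinely missing path (the classify helper is hypothetical, not part of the hosted-engine code):

```python
import errno
import os
import tempfile

def classify(path):
    """Return 'ok', 'broken-symlink', or 'missing' for a filesystem path."""
    try:
        os.stat(path)  # follows symlinks, like the failing os.stat(path) above
        return "ok"
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
    # stat() hit ENOENT; lstat() tells us whether the link itself exists
    try:
        os.lstat(path)
        return "broken-symlink"
    except OSError:
        return "missing"

# demo: a dangling symlink, analogous to the one under images/
with tempfile.TemporaryDirectory() as d:
    link = os.path.join(d, "volume")
    os.symlink(os.path.join(d, "no-such-lv"), link)
    print(classify(link))                     # broken-symlink
    print(classify(os.path.join(d, "gone")))  # missing
```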


Version-Release number of selected component (if applicable):

ovirt-host-deploy-1.3.2-1.el7ev.noarch
libgovirt-0.3.1-3.el7.x86_64
ovirt-hosted-engine-ha-1.2.7-1.el7ev.noarch
ovirt-hosted-engine-setup-1.2.6-1.el7ev.noarch
vdsm-4.16.27-1.el7ev.x86_64
vdsm-cli-4.16.27-1.el7ev.noarch
vdsm-jsonrpc-4.16.27-1.el7ev.noarch
vdsm-xmlrpc-4.16.27-1.el7ev.noarch
vdsm-yajsonrpc-4.16.27-1.el7ev.noarch
vdsm-python-zombiereaper-4.16.27-1.el7ev.noarch
vdsm-python-4.16.27-1.el7ev.noarch


How reproducible:
Always

Steps to Reproduce:
1. Deploy hosted-engine using iSCSI


Actual results:
Deployment fails

Expected results:
Deployment should work

Additional info:

[root@green-vdsc ~]# tree /rhev/data-center/mnt/blockSD/
/rhev/data-center/mnt/blockSD/
|-- 7b857280-a282-4a90-8622-d4e8c66eb01d
|   |-- dom_md
|   |   |-- ids -> /dev/7b857280-a282-4a90-8622-d4e8c66eb01d/ids
|   |   |-- inbox -> /dev/7b857280-a282-4a90-8622-d4e8c66eb01d/inbox
|   |   |-- leases -> /dev/7b857280-a282-4a90-8622-d4e8c66eb01d/leases
|   |   |-- master -> /dev/7b857280-a282-4a90-8622-d4e8c66eb01d/master
|   |   |-- metadata -> /dev/7b857280-a282-4a90-8622-d4e8c66eb01d/metadata
|   |   `-- outbox -> /dev/7b857280-a282-4a90-8622-d4e8c66eb01d/outbox
|   |-- ha_agent
|   |-- images
|   |   |-- 305855e8-a6b9-4d5e-b640-c1e46af282b9
|   |   |   `-- ce165420-d9c0-4503-b2f5-eb588d404c25 -> /dev/7b857280-a282-4a90-8622-d4e8c66eb01d/ce165420-d9c0-4503-b2f5-eb588d404c25
|   |   `-- f3725309-2d3e-42de-a52f-6b82f69b4b04
|   |       `-- 84e0f8f6-032b-4146-8a24-cfd4f65d6908 -> /dev/7b857280-a282-4a90-8622-d4e8c66eb01d/84e0f8f6-032b-4146-8a24-cfd4f65d6908
|   `-- master
|-- b3164ebe-4f1b-478c-9225-ebaa5715a061
|   |-- dom_md
|   |   |-- ids -> /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/ids
|   |   |-- inbox -> /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/inbox
|   |   |-- leases -> /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/leases
|   |   |-- master -> /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/master
|   |   |-- metadata -> /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/metadata
|   |   `-- outbox -> /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/outbox
|   |-- ha_agent
|   |-- images
|   |   `-- 308540ac-ddc3-41bf-843e-1a0338b5850d
|   |       `-- 1025d6fe-434f-4583-9a1c-641983fd5f0e -> /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/1025d6fe-434f-4583-9a1c-641983fd5f0e
|   `-- master
`-- e3234d5b-7ff8-46c7-9084-78c630e984aa
    |-- dom_md
    |   |-- ids -> /dev/e3234d5b-7ff8-46c7-9084-78c630e984aa/ids
    |   |-- inbox -> /dev/e3234d5b-7ff8-46c7-9084-78c630e984aa/inbox
    |   |-- leases -> /dev/e3234d5b-7ff8-46c7-9084-78c630e984aa/leases
    |   |-- master -> /dev/e3234d5b-7ff8-46c7-9084-78c630e984aa/master
    |   |-- metadata -> /dev/e3234d5b-7ff8-46c7-9084-78c630e984aa/metadata
    |   `-- outbox -> /dev/e3234d5b-7ff8-46c7-9084-78c630e984aa/outbox
    |-- ha_agent
    |-- images
    |   |-- 571ee82e-ce90-4cbc-8f7b-5e54b269defc
    |   |   `-- fca51dae-3bcd-4b8d-aaee-0d67fc1e5828 -> /dev/e3234d5b-7ff8-46c7-9084-78c630e984aa/fca51dae-3bcd-4b8d-aaee-0d67fc1e5828
    |   `-- b7aba301-d621-4e2d-b87c-4b8c455a967b
    |       `-- 474cf951-67ca-4fb2-bdff-e3599d1bac40 -> /dev/e3234d5b-7ff8-46c7-9084-78c630e984aa/474cf951-67ca-4fb2-bdff-e3599d1bac40
    `-- master

20 directories, 23 files




Comment 1 Simone Tiraboschi 2015-10-08 10:40:23 UTC
|   |-- images
|   |   `-- 308540ac-ddc3-41bf-843e-1a0338b5850d
|   |       `-- 1025d6fe-434f-4583-9a1c-641983fd5f0e -> /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/1025d6fe-434f-4583-9a1c-641983fd5f0e

[root@green-vdsc ~]# ls -l /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/1025d6fe-434f-4583-9a1c-641983fd5f0e
ls: cannot access /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/1025d6fe-434f-4583-9a1c-641983fd5f0e: No such file or directory

[root@green-vdsc ~]# tree /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061
/dev/b3164ebe-4f1b-478c-9225-ebaa5715a061
├── ids -> ../dm-25
├── inbox -> ../dm-26
├── leases -> ../dm-24
├── master -> ../dm-27
├── metadata -> ../dm-10
└── outbox -> ../dm-11

0 directories, 6 files


/rhev/data-center/mnt/blockSD/b3164ebe-4f1b-478c-9225-ebaa5715a061/images/308540ac-ddc3-41bf-843e-1a0338b5850d/1025d6fe-434f-4583-9a1c-641983fd5f0e is a broken symlink

Comment 2 Simone Tiraboschi 2015-10-08 11:09:28 UTC
And the corresponding LV is not available:

  --- Logical volume ---
  LV Path                /dev/b3164ebe-4f1b-478c-9225-ebaa5715a061/1025d6fe-434f-4583-9a1c-641983fd5f0e
  LV Name                1025d6fe-434f-4583-9a1c-641983fd5f0e
  VG Name                b3164ebe-4f1b-478c-9225-ebaa5715a061
  LV UUID                lTiAUl-TKKQ-w2Ou-Y0mW-94zT-hvk0-328Hns
  LV Write Access        read/write
  LV Creation host, time green-vdsc.qa.lab.tlv.redhat.com, 2015-10-08 11:44:35 +0300
  LV Status              NOT available
  LV Size                128,00 MiB
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

Comment 3 Simone Tiraboschi 2015-10-08 11:12:04 UTC
It seams that the createVolume was failing:

ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::INFO::2015-10-08 11:44:35,820::volume::415::Storage.Volume::(create) Creating volume 1025d6fe-434f-4583-9a1c-641983fd5f0e
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::DEBUG::2015-10-08 11:44:35,820::task::752::Storage.TaskManager.Task::(_save) Task=`ce478c85-18db-4a9c-aff2-1e4fd8d48f8c`::_save: orig /rhev/data-center/4eda58e3-c188-462d-9bee-9c90a187a155/mastersd/master/tasks/ce478c85-18db-4a9c-aff2-1e4fd8d48f8c temp /rhev/data-center/4eda58e3-c188-462d-9bee-9c90a187a155/mastersd/master/tasks/ce478c85-18db-4a9c-aff2-1e4fd8d48f8c.temp
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::WARNING::2015-10-08 11:44:35,825::outOfProcess::252::Storage.oop::(rename) renaming a directory is not an atomic operation
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::WARNING::2015-10-08 11:44:35,825::outOfProcess::252::Storage.oop::(rename) renaming a directory is not an atomic operation
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::DEBUG::2015-10-08 11:44:35,831::task::752::Storage.TaskManager.Task::(_save) Task=`ce478c85-18db-4a9c-aff2-1e4fd8d48f8c`::_save: orig /rhev/data-center/4eda58e3-c188-462d-9bee-9c90a187a155/mastersd/master/tasks/ce478c85-18db-4a9c-aff2-1e4fd8d48f8c temp /rhev/data-center/4eda58e3-c188-462d-9bee-9c90a187a155/mastersd/master/tasks/ce478c85-18db-4a9c-aff2-1e4fd8d48f8c.temp
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::WARNING::2015-10-08 11:44:35,836::outOfProcess::252::Storage.oop::(rename) renaming a directory is not an atomic operation
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::WARNING::2015-10-08 11:44:35,836::outOfProcess::252::Storage.oop::(rename) renaming a directory is not an atomic operation
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::DEBUG::2015-10-08 11:44:35,842::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/3514f0c5a516007ff|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --contiguous n --size 1m --addtag OVIRT_VOL_INITIALIZING --name 1025d6fe-434f-4583-9a1c-641983fd5f0e b3164ebe-4f1b-478c-9225-ebaa5715a061 (cwd None)
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::DEBUG::2015-10-08 11:44:35,987::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  /dev/mapper/3514f0c5a51600002: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/3514f0c5a51600002: read failed after 0 of 4096 at 48318316544: Input/output error\n  /dev/mapper/3514f0c5a51600002: read failed after 0 of 4096 at 48318373888: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/3514f0c5a51600002 was disabled\n  /dev/mapper/3514f0c5a51600001: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/3514f0c5a51600001: read failed after 0 of 4096 at 32212189184: Input/output error\n  /dev/mapper/3514f0c5a51600001: read failed after 0 of 4096 at 32212246528: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/3514f0c5a51600001 was disabled\n  /dev/mapper/3514f0c5a5160006f: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/3514f0c5a5160006f: read failed after 0 of 4096 at 37580898304: Input/output error\n  /dev/mapper/3514f0c5a5160006f: read failed after 0 of 4096 at 37580955648: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/3514f0c5a5160006f was disabled\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-metadata: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-metadata: read failed after 0 of 4096 at 536805376: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-metadata: read failed after 0 of 4096 at 536862720: Input/output error\n  WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-metadata was disabled\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-outbox: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-outbox: read failed after 0 of 4096 at 134152192: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-outbox: read failed after 0 of 4096 at 134209536: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-outbox was disabled\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-leases: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-leases: read failed after 0 of 4096 at 2147418112: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-leases: read failed after 0 of 4096 at 2147475456: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-leases was disabled\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-ids: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-ids: read failed after 0 of 4096 at 134152192: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-ids: read failed after 0 of 4096 at 134209536: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-ids was disabled\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-inbox: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-inbox: read failed after 0 of 4096 at 134152192: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-inbox: read failed after 0 of 4096 at 134209536: Input/output error\n  WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-inbox was disabled\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-master: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-master: read failed after 0 of 4096 at 1073676288: Input/output error\n  /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-master: read failed after 0 of 4096 at 1073733632: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7b857280--a282--4a90--8622--d4e8c66eb01d-master was disabled\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-metadata: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-metadata: read failed after 0 of 4096 at 536805376: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-metadata: read failed after 0 of 4096 at 536862720: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-metadata was disabled\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-metadata: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-metadata: read failed after 0 of 4096 at 536805376: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-metadata: read failed after 0 of 4096 at 536862720: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-metadata was disabled\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-outbox: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-outbox: read failed after 0 of 4096 at 134152192: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-outbox: read failed after 0 of 4096 at 134209536: Input/output error\n  WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-outbox was disabled\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-outbox: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-outbox: read failed after 0 of 4096 at 134152192: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-outbox: read failed after 0 of 4096 at 134209536: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-outbox was disabled\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-leases: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-leases: read failed after 0 of 4096 at 2147418112: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-leases: read failed after 0 of 4096 at 2147475456: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-leases was disabled\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-leases: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-leases: read failed after 0 of 4096 at 2147418112: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-leases: read failed after 0 of 4096 at 2147475456: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-leases was disabled\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-ids: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-ids: read failed after 0 of 4096 at 134152192: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-ids: read failed after 0 of 4096 at 134209536: Input/output error\n  WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-ids was disabled\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-ids: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-ids: read failed after 0 of 4096 at 134152192: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-ids: read failed after 0 of 4096 at 134209536: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-ids was disabled\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-inbox: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-inbox: read failed after 0 of 4096 at 134152192: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-inbox: read failed after 0 of 4096 at 134209536: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-inbox was disabled\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-inbox: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-inbox: read failed after 0 of 4096 at 134152192: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-inbox: read failed after 0 of 4096 at 134209536: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-inbox was disabled\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-master: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-master: read failed after 0 of 4096 at 1073676288: Input/output error\n  /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-master: read failed after 0 of 4096 at 1073733632: Input/output error\n  WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/e3234d5b--7ff8--46c7--9084--78c630e984aa-master was disabled\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-master: read failed after 0 of 4096 at 0: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-master: read failed after 0 of 4096 at 1073676288: Input/output error\n  /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-master: read failed after 0 of 4096 at 1073733632: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/792e8ae1--7692--41b1--82be--42c9945b223e-master was disabled\n  WARNING: This metadata update is NOT backed up\n'; <rc> = 0
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::DEBUG::2015-10-08 11:44:35,988::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::DEBUG::2015-10-08 11:44:35,988::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::DEBUG::2015-10-08 11:44:35,988::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::DEBUG::2015-10-08 11:44:35,988::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::WARNING::2015-10-08 11:44:35,989::utils::129::root::(rmFile) File: /rhev/data-center/4eda58e3-c188-462d-9bee-9c90a187a155/b3164ebe-4f1b-478c-9225-ebaa5715a061/images/308540ac-ddc3-41bf-843e-1a0338b5850d/1025d6fe-434f-4583-9a1c-641983fd5f0e already removed

Comment 4 Simone Tiraboschi 2015-10-08 11:15:02 UTC
And we have some kernel I/O errors at the corresponding time:

Oct  8 11:44:35 green-vdsc kernel: blk_update_request: 872 callbacks suppressed
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-0, sector 0
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-0, sector 94371712
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-0, sector 94371824
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-1, sector 0
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-1, sector 62914432
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-1, sector 62914544
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-2, sector 0
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-2, sector 73400192
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-2, sector 73400304
Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-0, sector 264192
Oct  8 11:44:35 green-vdsc kernel: EXT4-fs (dm-27): mounting ext3 file system using the ext4 subsystem
Oct  8 11:44:35 green-vdsc kernel: EXT4-fs (dm-27): mounted filesystem with ordered data mode. Opts: (null)
Oct  8 11:44:35 green-vdsc systemd: Started Shared Storage Lease Manager.
Oct  8 11:44:35 green-vdsc journal: vdsm root WARNING File: /rhev/data-center/4eda58e3-c188-462d-9bee-9c90a187a155/b3164ebe-4f1b-478c-9225-ebaa5715a061/images/308540ac-ddc3-41bf-843e-1a0338b5850d/1025d6fe-434f-4583-9a1c-641983fd5f0e already removed
Oct  8 11:44:36 green-vdsc multipathd: dm-28: remove map (uevent)
Oct  8 11:44:36 green-vdsc multipathd: dm-28: devmap not registered, can't remove
Oct  8 11:44:36 green-vdsc multipathd: dm-28: remove map (uevent)

Comment 5 Simone Tiraboschi 2015-10-08 11:20:40 UTC
(In reply to Simone Tiraboschi from comment #4)
> And we have some kernel I/O error at the correspondent time:
> 
> Oct  8 11:44:35 green-vdsc kernel: blk_update_request: 872 callbacks
> suppressed
> Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-0, sector 0
> Oct  8 11:44:35 green-vdsc kernel: end_request: I/O error, dev dm-0, sector
> 94371712

In your opinion, is this something we can somehow prevent, or is it just a faulty host?

Comment 6 Nir Soffer 2015-10-08 11:54:19 UTC
(In reply to Simone Tiraboschi from comment #3)
> It seams that the createVolume was failing:

What do you mean by "It seams"? Did it get a non-zero status code? How did you handle it?

I don't know the flow of commands you are using here. I suggest you add a
way in hosted engine setup to log all vdsm commands and results. If you show
me such log we can continue to diagnose this issue.

Comment 7 Simone Tiraboschi 2015-10-08 13:13:16 UTC
(In reply to Nir Soffer from comment #6)
> (In reply to Simone Tiraboschi from comment #3)
> > It seams that the createVolume was failing:
> 
> What do you mean by "It seams"? 

VDSM reported success: result {'uuid': '1025d6fe-434f-4583-9a1c-641983fd5f0e'}. But we can see kernel I/O errors during that operation, and if I check, the LV status is 'NOT available', so the device node is missing and the symlink ends up broken.

Thread-49::DEBUG::2015-10-08 11:44:35,686::task::595::Storage.TaskManager.Task::(_updateState) Task=`ce478c85-18db-4a9c-aff2-1e4fd8d48f8c`::moving from state init -> state preparing
Thread-49::INFO::2015-10-08 11:44:35,686::logUtils::44::dispatcher::(wrapper) Run and protect: createVolume(sdUUID='b3164ebe-4f1b-478c-9225-ebaa5715a061', spUUID='4eda58e3-c188-462d-9bee-9c90a187a155', imgUUID='308540ac-ddc3-41bf-843e-1a0338b5850d', size='1048576', volFormat=5, preallocate=1, diskType=2, volUUID='1025d6fe-434f-4583-9a1c-641983fd5f0e', desc='hosted-engine.lockspace', srcImgUUID='00000000-0000-0000-0000-000000000000', srcVolUUID='00000000-0000-0000-0000-000000000000')
....
ce478c85-18db-4a9c-aff2-1e4fd8d48f8c::DEBUG::2015-10-08 11:44:36,391::task::916::Storage.TaskManager.Task::(_runJobs) Task=`ce478c85-18db-4a9c-aff2-1e4fd8d48f8c`::Task.run: exit - success: result {'uuid': '1025d6fe-434f-4583-9a1c-641983fd5f0e'}



> did get a non-zero status code?

No, it didn't.

> how did you
> handle it?

        logger.debug("getVolumePath: '%s'", response)
        if response["status"]["code"] != 0:
            raise RuntimeError(response["status"]["message"])


But VDSM returned a zero status code.

We have the whole VDSM log here and the host is still accessible.
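Since VDSM returned status 0 while the LV was left inactive, checking the status code alone is not enough. A sketch of a stricter client-side check (the "path" key and the function name are assumptions for illustration; only the status-checking lines quoted above come from the actual code):

```python
import os

def volume_path_from_response(response):
    """Validate a getVolumePath-style response dict (assumed shape).

    A zero status code alone does not prove the volume is usable (see the
    kernel I/O errors above), so also verify the returned path resolves.
    """
    if response["status"]["code"] != 0:
        raise RuntimeError(response["status"]["message"])
    path = response["path"]          # assumed key, for illustration only
    if not os.path.exists(path):     # follows symlinks; fails if LV inactive
        if os.path.islink(path):
            raise RuntimeError("broken symlink: %s -> %s"
                               % (path, os.readlink(path)))
        raise RuntimeError("missing path: %s" % path)
    return path
```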

Comment 8 Simone Tiraboschi 2015-10-08 14:19:22 UTC
Found it with a great contribution from Nir, thanks.

The issue is related to this patch:

https://gerrit.ovirt.org/#/c/45305/3

which tries to fill the volume with zeros before starting to use it.
But we have to prepare the image to be able to write to it, and this patch doesn't do that.
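The LV ended up with status 'NOT available' (comment 2), which is why the /dev node, and hence the symlink, was missing. As a diagnostic sketch only (the proper fix is to prepare the image through VDSM rather than call lvm directly; the helper name and injectable callables are hypothetical), activating the LV and re-checking the device node could look like:

```python
import os
import subprocess

def ensure_lv_available(vg_name, lv_name, path_exists=None, activate=None):
    """Activate an LV whose /dev node is missing, then re-check it.

    path_exists/activate are injectable for testing; by default they use
    os.path.exists and `lvchange -ay VG/LV`.
    """
    if path_exists is None:
        path_exists = os.path.exists
    if activate is None:
        activate = lambda spec: subprocess.check_call(["lvchange", "-ay", spec])
    dev = "/dev/%s/%s" % (vg_name, lv_name)
    if not path_exists(dev):
        activate("%s/%s" % (vg_name, lv_name))  # LV Status was 'NOT available'
        if not path_exists(dev):
            raise RuntimeError("LV still unavailable: %s" % dev)
    return dev
```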

Comment 9 Martin Sivák 2015-10-08 15:45:29 UTC
We already have a fix for that on the master branch, and we have had it for 11 months :/ Somehow it just kept working until now. I posted the backport.

Comment 10 Sandro Bonazzola 2015-10-13 07:45:23 UTC
3.6.0 bug already on qa: https://bugzilla.redhat.com/show_bug.cgi?id=1159240

Comment 11 Elad 2015-10-18 10:03:49 UTC
Deployment over iSCSI succeeds with vt17.3. Used the following:
ovirt-hosted-engine-ha-1.2.7.2-1.el7ev.noarch
ovirt-hosted-engine-setup-1.2.6.1-1.el7ev.noarch

Comment 12 Sandro Bonazzola 2015-10-26 13:44:20 UTC
oVirt 3.5.5 has been released including fixes for this issue.

