Bug 1348552 - 3.6 HE appliance on iscsi migration to 4.0 fails with error while converting raw: Could not create file: Permission denied
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: General
Version: 2.0.0.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.0.2
Assignee: Sandro Bonazzola
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-06-21 12:26 UTC by Jiri Belka
Modified: 2019-04-28 13:33 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-07 12:16:19 UTC
oVirt Team: Integration
Embargoed:
ylavi: ovirt-4.0.z?
gklein: blocker?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?



Description Jiri Belka 2016-06-21 12:26:55 UTC
Description of problem:

The setup instructs qemu-img to write to a symlink that points nowhere, i.e. the link is broken.

2016-06-21 12:11:15 DEBUG otopi.plugins.gr_he_common.vm.boot_disk plugin.execute:926 execute-output: ('/bin/sudo', '-u', 'vdsm', '-g', 'kvm', '/bin/qemu-img', 'convert', '-O', 'raw', '/var/tmp/tmpDRhDGJ', u'/rhev/data-center/mnt/blockSD/30e5f2cd-064d-415e-bb81-c44d87bd1ac7/images/5dab8a01-ef60-489c-a15f-9fd6d282ed69/4134770e-7938-407c-a41b-a0f90666d2d6') stderr:
qemu-img: /rhev/data-center/mnt/blockSD/30e5f2cd-064d-415e-bb81-c44d87bd1ac7/images/5dab8a01-ef60-489c-a15f-9fd6d282ed69/4134770e-7938-407c-a41b-a0f90666d2d6: error while converting raw: Could not create file: Permission denied

2016-06-21 12:11:15 DEBUG otopi.plugins.gr_he_common.vm.boot_disk boot_disk._uploadVolume:160 error uploading the image: Command '/bin/sudo' failed to execute
2016-06-21 12:11:15 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/boot_disk.py", line 690, in _misc
    ohostedcons.Upgrade.BACKUP_FILE
  File "/usr/lib/python2.7/site-packages/otopi/transaction.py", line 156, in __exit__
    self.commit()
  File "/usr/lib/python2.7/site-packages/otopi/transaction.py", line 148, in commit
    element.commit()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/boot_disk.py", line 234, in commit
    raise RuntimeError(message)
RuntimeError: Command '/bin/sudo' failed to execute
2016-06-21 12:11:15 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Misc configuration': Command '/bin/sudo' failed to execute
2016-06-21 12:11:15 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT DUMP - BEGIN
2016-06-21 12:11:15 DEBUG otopi.context context.dumpEnvironment:770 ENV BASE/error=bool:'True'
2016-06-21 12:11:15 DEBUG otopi.context context.dumpEnvironment:770 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError("Command '/bin/sudo' failed to execute",), <traceback object at 0x3e0eab8>)]'

# ls -l /rhev/data-center/mnt/blockSD/30e5f2cd-064d-415e-bb81-c44d87bd1ac7/images/5dab8a01-ef60-489c-a15f-9fd6d282ed69/4134770e-7938-407c-a41b-a0f90666d2d6 
lrwxrwxrwx. 1 vdsm kvm 78 Jun 21 12:09 /rhev/data-center/mnt/blockSD/30e5f2cd-064d-415e-bb81-c44d87bd1ac7/images/5dab8a01-ef60-489c-a15f-9fd6d282ed69/4134770e-7938-407c-a41b-a0f90666d2d6 -> /dev/30e5f2cd-064d-415e-bb81-c44d87bd1ac7/4134770e-7938-407c-a41b-a0f90666d2d6

[root@dell-r210ii-04 ~]# ls -l /dev/30e5f2cd-064d-415e-bb81-c44d87bd1ac7/4134770e-7938-407c-a41b-a0f90666d2d6
ls: cannot access /dev/30e5f2cd-064d-415e-bb81-c44d87bd1ac7/4134770e-7938-407c-a41b-a0f90666d2d6: No such file or directory
[root@dell-r210ii-04 ~]# ls -l /dev/30e5f2cd-064d-415e-bb81-c44d87bd1ac7/
total 0
lrwxrwxrwx. 1 root root 8 Jun 21 10:20 3074b2ce-7eac-4c2c-8fdb-c216b9f9d633 -> ../dm-14
lrwxrwxrwx. 1 root root 8 Jun 20 15:19 317b7b30-cbcc-4af1-8dbe-726be2e6cf37 -> ../dm-15
lrwxrwxrwx. 1 root root 8 Jun 20 18:10 ab367581-d0c4-4b1d-bd3f-fe759ba2f05d -> ../dm-13
lrwxrwxrwx. 1 root root 8 Jun 20 15:19 ids -> ../dm-10
lrwxrwxrwx. 1 root root 8 Jun 20 15:19 inbox -> ../dm-11
lrwxrwxrwx. 1 root root 7 Jun 21 12:10 leases -> ../dm-9
lrwxrwxrwx. 1 root root 8 Jun 20 15:19 master -> ../dm-12
lrwxrwxrwx. 1 root root 7 Jun 21 12:09 metadata -> ../dm-7
lrwxrwxrwx. 1 root root 7 Jun 20 15:19 outbox -> ../dm-8
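
The listings above show the classic dangling-symlink pattern: the image path is a symlink, but the device node it resolves to is missing. A minimal sketch of the check the setup could perform before invoking qemu-img (the paths here are stand-ins, not the real /rhev tree):

```shell
#!/bin/sh
# Hypothetical pre-flight check: qemu-img fails with "Could not create
# file" when asked to write through a symlink whose target is missing.
IMG="/tmp/demo-volume"                 # stand-in for the /rhev/... path
ln -sf /tmp/does-not-exist "$IMG"      # simulate the broken link

# -L: the path is a symlink; ! -e: its resolved target does not exist
if [ -L "$IMG" ] && [ ! -e "$IMG" ]; then
    echo "dangling symlink: $IMG -> $(readlink "$IMG")"
fi
rm -f "$IMG"                           # clean up the demo link
```

With such a guard the failure would surface as a clear "dangling symlink" message instead of qemu-img's misleading "Permission denied".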


# /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/1IET_000c0002|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' 2>&1 | tail -n +3
  087723d3-20a7-4c84-942a-f97aa4e40096 30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-------  50.00g                                                    
  3074b2ce-7eac-4c2c-8fdb-c216b9f9d633 30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-a----- 128.00m                                                    
  317b7b30-cbcc-4af1-8dbe-726be2e6cf37 30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-a-----   1.00g                                                    
  4134770e-7938-407c-a41b-a0f90666d2d6 30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-------  50.00g                                                    
  ab367581-d0c4-4b1d-bd3f-fe759ba2f05d 30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-a----- 128.00m                                                    
  da265c49-3f85-4387-94fd-ee8d04c9857f 30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi------- 128.00m                                                    
  dd382718-e4a2-4b87-8119-856369302b51 30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi------- 128.00m                                                    
  ids                                  30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-ao---- 128.00m                                                    
  inbox                                30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-a----- 128.00m                                                    
  leases                               30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-a-----   2.00g                                                    
  master                               30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-a-----   1.00g                                                    
  metadata                             30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-a----- 512.00m                                                    
  outbox                               30e5f2cd-064d-415e-bb81-c44d87bd1ac7 -wi-a----- 128.00m

4134770e-7938-407c-a41b-a0f90666d2d6 was newly created.
087723d3-20a7-4c84-942a-f97aa4e40096 is the old disk of the HE VM.
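
Note that in the lvs output above both 50 GB volumes show attr '-wi-------' while the working volumes show '-wi-a-----': the fifth attribute character is the activation flag, so the convert-target LV was never activated, which would explain the missing /dev node behind the symlink. A hedged sketch of such a check (the attr strings are copied from the output above; the lvchange hint is an assumption about the fix, not a confirmed one):

```shell
#!/bin/sh
# In `lvs` output the 5th character of the attr column is 'a' when the
# LV is active and has a /dev node. '-wi-------' therefore means the LV
# is inactive, matching the missing /dev/<vg>/<lv> entry seen above.
attr="-wi-------"                        # attr of 4134770e-... (target)
state=$(printf '%s' "$attr" | cut -c5)   # 5th flag = activation state
if [ "$state" != "a" ]; then
    echo "LV inactive; try: lvchange -ay <vg>/<lv>"   # hypothetical fix
fi
```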

I'm not sure what is going on; I was working with "dirty" iSCSI storage, since a leftover LV from a previous hosted-engine --upgrade-appliance failure was still present.

I got this issue twice.

Version-Release number of selected component (if applicable):
ovirt-hosted-engine-setup-2.0.0.2-1.el7ev.noarch
Red Hat Enterprise Linux Server release 7.2 (Maipo)

How reproducible:
100%

Steps to Reproduce:
1. 3.6 HE appliance on EL7, working fine (even engine, VMs)
2. add 4.0 repo on EL7
3. hosted-engine --upgrade-appliance

Actual results:
The upgrade fails because qemu-img is told to write to a file (the symlink's target) that does not exist.

Expected results:
The upgrade should complete successfully.

Additional info:

Comment 2 Yaniv Kaul 2016-06-30 03:30:40 UTC
Jiri, 

Is that reproducible with a clean normal iSCSI environment?
I'm asking due to the comment:
"Not sure what's going on, just I was messing with "dirty" iscsi storage as there was forgotten lv after another hosted-engine --upgrade-appliance failure."

Comment 3 Jiri Belka 2016-07-07 09:02:44 UTC
(In reply to Yaniv Kaul from comment #2)
> Jiri, 
> 
> Is that reproducible with a clean normal iSCSI environment?
> I'm asking due to the comment:
> "Not sure what's going on, just I was messing with "dirty" iscsi storage as
> there was forgotten lv after another hosted-engine --upgrade-appliance
> failure."

I created a new, clean LV on the iSCSI target (i.e. zeroed it with dd if=/dev/zero...), later ran hosted-engine --deploy, and the storage was still always detected as not clean. The official EL 6.8 documentation does not mention zeroing or any other cleaning of the storage backends that provide the iSCSI LUNs.

Anyway, I could not reproduce this issue anymore and it seems to be some magic side-effect of BZ1346341.

If it appears again after BZ1346341 is solved, I'll file a new report.

Comment 4 Yaniv Kaul 2016-07-07 12:16:19 UTC
Thanks; closing for the time being as not reproducible. If it reproduces later, please re-open, of course.

