Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1795206

Summary: VDSM 'ascii' codec exception is thrown upon attempt to start VM with Cyrillic letters in name; continues after restart
Product: [oVirt] vdsm
Reporter: Polina <pagranat>
Component: General
Assignee: Milan Zamazal <mzamazal>
Status: CLOSED CURRENTRELEASE
QA Contact: Polina <pagranat>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.40.1
CC: akrejcir, bugs, mzamazal, rbarry
Target Milestone: ovirt-4.4.1
Flags: rbarry: ovirt-4.4?
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: rhv-4.4.0-36
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-07-08 08:26:54 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1713288
Attachments:
  vdsm and engine logs (flags: none)
  migration failure (flags: none)

Description Polina 2020-01-27 12:50:26 UTC
Created attachment 1655654 [details]
vdsm and engine logs

Description of problem: a VDSM 'ascii' codec exception is thrown upon an attempt to start a VM with Cyrillic letters in the VM's name. The exception persists after restarting vdsm and the engine. After deleting the VM, it still appears on all three hosts in the output of 'virsh -r list --all'.

Version-Release number of selected component (if applicable):
http://bob-dr.lab.eng.brq.redhat.com/builds/4.4/rhv-4.4.0-17
vdsm-4.40.1-1.el8ev.x86_64

How reproducible: 100%

Steps to Reproduce:
1. Create a VM based on a template from glance (os8.1) with a name containing Cyrillic letters, like 'мом_тест_1'.
2. Try to start the VM.

Actual results: the engine tries to start the VM on each of the three hosts in the setup and fails. VDSM throws VDSErrorException: Failed to DestroyVDS, error = General Exception: ("'ascii' codec can't decode byte 0xd0 in position 29: ordinal not in range(128)",). Removing the VM and restarting the VDSMs and the engine doesn't fix the situation: the exception is still thrown because the VM is still found on all three hosts in

[root@puma42 ~]# virsh -r list --all
 Id   Name         State
-----------------------------
 -    мом_тест_1   shut off
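
The decode failure in that error message can be reproduced in isolation. A minimal Python sketch, assuming only that the VM name is handled as UTF-8 bytes somewhere on the path (the name is taken from this report; the byte position differs because the real decode happens inside a longer string):

```python
# The VM name from this report contains Cyrillic letters, which lie
# outside the ASCII range; its UTF-8 encoding starts with byte 0xd0.
name = "мом_тест_1"
raw = name.encode("utf-8")

try:
    raw.decode("ascii")  # the kind of decode that fails inside VDSM
except UnicodeDecodeError as err:
    # "'ascii' codec can't decode byte 0xd0 in position 0: ..."
    print(err)

# Decoding with the encoding the bytes were actually written in works:
assert raw.decode("utf-8") == name
```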



Expected results:


Additional info:

Comment 1 Sandro Bonazzola 2020-03-13 10:19:17 UTC
This bug is in modified state and targeting to 4.4.1. Can we retarget to 4.4.0 and move to QE?

Comment 2 Polina 2020-03-29 19:34:33 UTC
Created attachment 1674579 [details]
migration failure

The bug is only partially fixed: starting the VM now works, but such a VM cannot be migrated. Migration fails with
2020-03-29 22:24:18,165+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-11612) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed  (VM: мом_тест_1, Source: host_mixed_1, Destination: host_mixed_2).

in vdsm.log:

2020-03-29 22:31:11,442+0300 ERROR (jsonrpc/7) [virt.vm] (vmId='fc9129fa-bb3f-4d7d-a6b3-4636e12dd7cc') Operation failed (vm:4806)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 4783, in setCpuTunePeriod
    self._dom.setSchedulerParameters({'vcpu_period': int(period)})
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2555, in setSchedulerParameters
    if ret == -1: raise libvirtError ('virDomainSetSchedulerParameters() failed', dom=self)
libvirt.libvirtError: Requested operation is not valid: cgroup CPU controller is not mounted
2020-03-29 22:31:11,442+0300 INFO  (jsonrpc/7) [api.virt] FINISH setCpuTunePeriod return={'status': {'code': 62, 'message': 'Requested operation is not valid: cgroup CPU controller is not mounted'}} from=::1,34242, vmId=fc9129fa-bb3f-4d7d-a6b3-4636e12dd7cc (api:54)
2020-03-29 22:31:11,442+0300 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.setCpuTunePeriod failed (error 62) in 0.00 seconds (__init__:312)

Logs attached. Please let me know if the BZ must be marked as FailedQA or a new bug must be opened.

Comment 3 Ryan Barry 2020-03-29 20:10:27 UTC
FailedQA. There's no point in starting a VM we can't migrate.

Comment 4 Milan Zamazal 2020-04-06 18:03:26 UTC
We may be facing multiple bugs here.

I can't reproduce the error above; Polina, what are your libvirt, QEMU, systemd, kernel versions?

But migration with non-ASCII characters still fails for me, due to an XML parse error in our migration libvirt hook. Such a failure is, by the way, very silent in the Vdsm logs. I'll look into it.
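
For illustration, libvirt hooks receive the full domain XML on stdin, so a hook that feeds the raw bytes to the XML parser avoids locale-dependent decoding of the non-ASCII name. A hypothetical sketch, not the actual hook code:

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for the domain XML a migration hook would
# receive on stdin; only the non-ASCII <name> element matters here.
xml_bytes = "<domain type='kvm'><name>мом_тест_1</name></domain>".encode("utf-8")

# Parsing from bytes lets the XML parser apply the XML default
# encoding (UTF-8) rather than decoding with the process locale first.
root = ET.fromstring(xml_bytes)
assert root.findtext("name") == "мом_тест_1"
```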

Comment 5 Polina 2020-04-07 10:53:39 UTC
About reproducing: you can ping me on IRC; I can reproduce it on compute-ge-6.scl.lab.tlv.redhat.com.
About versions:

uname -r
4.18.0-193.el8.x86_64

uname -a
Linux compute-ge-6.scl.lab.tlv.redhat.com 4.18.0-193.el8.x86_64 #1 SMP Fri Mar 27 14:35:58 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

rpm -qa |grep qemu
libvirt-daemon-driver-qemu-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
qemu-img-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64
qemu-kvm-block-rbd-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64
qemu-kvm-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64
qemu-kvm-block-gluster-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64
qemu-kvm-core-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64
ipxe-roms-qemu-20181214-5.git133f4c47.el8.noarch
qemu-kvm-common-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64
qemu-kvm-block-curl-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64
qemu-kvm-block-ssh-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64
qemu-kvm-block-iscsi-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64

rpm -qa |grep libvirt
libvirt-daemon-driver-network-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-qemu-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-gluster-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-kvm-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-admin-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-libs-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-config-network-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-client-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-rbd-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-interface-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-iscsi-direct-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-scsi-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-core-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-nodedev-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-disk-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-logical-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-config-nwfilter-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-bash-completion-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-mpath-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
python3-libvirt-6.0.0-1.module+el8.2.0+5453+31b2b136.x86_64
libvirt-daemon-driver-nwfilter-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-secret-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-daemon-driver-storage-iscsi-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64
libvirt-lock-sanlock-6.0.0-15.module+el8.2.0+6106+b6345808.x86_64

Comment 6 Milan Zamazal 2020-04-07 19:05:54 UTC
Looking at Polina's setup, I can confirm the only problem is processing the accented characters in the libvirt hook. When I disable the hook, the VM migrates. The cgroup error is unrelated.

Comment 7 Polina 2020-06-02 13:35:07 UTC
I've verified that the bug is fixed in rhv-4.4.1-1.
Milan, I think it must be moved to ON_QA, since it is fixed, right?

Comment 8 Milan Zamazal 2020-06-02 14:24:13 UTC
I think so, it has already been fixed some time ago. Let's try...

Comment 9 Sandro Bonazzola 2020-07-08 08:26:54 UTC
This bugzilla is included in oVirt 4.4.1 release, published on July 8th 2020.

Since the problem described in this bug report should be resolved in oVirt 4.4.1 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.