Describe the problem and the expected results

If Cyrillic characters are used in a virtual machine name, the VM either cannot be created or cannot be started:

1. A new VM cannot be created if Russian characters are used in the VM Name field. It fails at the disk creation stage, where the disk alias contains the Russian letters:

2015-08-31 04:44:03,802 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-23) [58150bb1] Correlation ID: 613a0c7c, Job ID: 2567f4bf-0647-40ec-ba54-e5d1494187b1, Call Stack: null, Custom Event ID: -1, Message: Add-Disk operation of РусВирт_Disk1 was initiated on VM РусВирт by admin@internal.
2015-08-31 04:44:03,803 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (ajp-/127.0.0.1:8702-23) [58150bb1] BaseAsyncTask::startPollingTask: Starting to poll task 7608c008-8766-4631-a00e-1de3ac331a6c.
2015-08-31 04:44:13,111 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler_Worker-9) [d627b7d] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2015-08-31 04:44:13,116 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-9) [d627b7d] Failed in HSMGetAllTasksStatusesVDS method
2015-08-31 04:44:13,116 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-9) [d627b7d] SPMAsyncTask:ollTask: Polling task 7608c008-8766-4631-a00e-1de3ac331a6c (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2015-08-31 04:44:13,137 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-9) [d627b7d] BaseAsyncTask::logEndTaskFailure: Task 7608c008-8766-4631-a00e-1de3ac331a6c (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Error creating a new volume, code = 205,
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Error creating a new volume, code = 205

2. A VM that contains Russian characters in the VM Name field cannot be started. The creation process itself completes successfully if English characters are used in the Disk Alias field (whether the VM is created from scratch or based on a template):

2015-08-31 04:39:08,683 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-46) [26cbfe44] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM рус_лин is down with error. Exit message: 'ascii' codec can't decode byte 0xd1 in position 67: ordinal not in range(128)

Describe the environment in which you encountered the problem described above

1. Default RHEL 7.1 installation (as hypervisor)
2. RHEV-M virtual appliance (as self-hosted engine)

Describe when this happened. Did it occur once, or does it recur regularly?

It happens regularly:
- A VM cannot be created if Russian characters are used in the Disk Alias field.
- A VM cannot be started if Russian characters are used in the VM Name field.

Justify the urgency of the request

Russian character support in RHEV is part of the Red Hat UI localization process into Russian, driven by the Globalization Operations Team and managed by Michelle Kim.
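For illustration only (not taken from the RHEV/VDSM code, and assuming the Python 2 semantics vdsm used at the time), the class of error reported above can be reproduced by letting UTF-8 encoded Cyrillic bytes hit an ASCII-only decode path:

# -*- coding: utf-8 -*-
# Illustration only: a UTF-8 byte string with Cyrillic characters cannot be
# decoded with the default 'ascii' codec, producing the same class of error
# as "'ascii' codec can't decode byte 0xd1 ..." seen in the engine log.
name = u"рус_лин".encode("utf-8")  # bytes, as received over the wire

try:
    name.decode("ascii")
except UnicodeDecodeError as exc:
    print(exc)  # 'ascii' codec can't decode byte 0xd1 in position 0: ...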
The disk's alias and description are saved in the volume's metadata, which, unfortunately, does not handle non-ASCII characters properly. Bug 1249130 already addresses this; closing as a duplicate.

*** This bug has been marked as a duplicate of bug 1249130 ***
Actually, bug 1249130 is an oVirt bug. I'm reopening, and we'll use this BZ to track the fix for the relevant RHEV release(s).
Allon, this bug has both rhevm-3.5.z and rhevm-3.6.0 flags, which means it's a clone candidate and wasn't fixed/cloned to 3.5. Can you clarify the status or the fix flags?
(In reply to Eyal Edri from comment #3)
> Allon, this bug has both rhevm-3.5.z and rhevm-3.6.0 flags, which means
> it's a clone candidate and wasn't fixed/cloned to 3.5.
>
> Can you clarify the status or the fix flags?

Idan/Tal?
(In reply to Allon Mureinik from comment #4)
> (In reply to Eyal Edri from comment #3)
> > Allon, this bug has both rhevm-3.5.z and rhevm-3.6.0 flags, which means
> > it's a clone candidate and wasn't fixed/cloned to 3.5.
> >
> > Can you clarify the status or the fix flags?
> Idan/Tal?

Talked to Idan. This bug should be solved by the same patch as oVirt's bug 1249130 (which we don't clone), but it was left open because the scenario is slightly different and should be verified independently. Also, we can't mark RHEV bugs as duplicates of oVirt bugs (see comment 2). Now that this one has all the acks, it can be properly cloned to a z-stream bug.
Yaniv, Aharon - I reset the component to the right one, which removed your acks. Please reinstate them.
Created a VM with Cyrillic characters in its name; the VM was created but failed to start, with the following messages:

2015-11-01 15:38:18,277 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM Опишите is down with error. Exit message: 'ascii' codec can't decode byte 0xd0 in position 111: ordinal not in range(128).
2015-11-01 15:38:18,287 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (ForkJoinPool-1-worker-11) [] Rerun VM '22ee032b-7371-40e8-a1b8-098a8a0ede63'. Called from VDS 'aqua-vds5'
2015-11-01 15:38:24,809 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM Опишите is down with error. Exit message: 'ascii' codec can't decode byte 0xd0 in position 111: ordinal not in range(128).
2015-11-01 15:38:24,831 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (DefaultQuartzScheduler_Worker-20) [] Rerun VM '22ee032b-7371-40e8-a1b8-098a8a0ede63'. Called from VDS 'aqua-vds4'
2015-11-01 15:38:25,005 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-7) [] Correlation ID: 532ee162, Job ID: be231f59-5f78-4e93-90d0-9295351fc101, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM Опишите (User: admin@internal).

Tested version: rhevm-3.6.0.2-0.1.el6.noarch
Created attachment 1088367 [details] engine.log 3.6.0.2
Tal, Idan, please take a look.
Hi,

1. I created a VM named ожидаемые with a disk named simple_alias. The creation went OK, but the VM didn't start; it failed with the error mentioned above.
2. I created a VM named vm1 with a disk named ожидаемые. The creation went OK, and the VM started and moved to Up status as expected.

IMHO, there's no storage issue here.
Moving to virt based on Idan's comment
(In reply to Natalie Gavrielov from comment #10)
> Created attachment 1088367 [details]
> engine.log 3.6.0.2

Since the message is reported from vdsm, and possibly from underlying components, please include logs from vdsm, libvirt and qemu as well.

Thanks!
I identified two bugs regarding this problem:
- VDSM crashes in the logging code (I'll fix this one).
- libvirt fails to create a domain with non-ASCII characters, see https://bugzilla.redhat.com/show_bug.cgi?id=1062943
(In reply to Milan Zamazal from comment #15)
> I identified two bugs regarding this problem:
> - VDSM crashes in the logging code (I'll fix this one).
> - libvirt fails to create a domain with non-ASCII characters, see
> https://bugzilla.redhat.com/show_bug.cgi?id=1062943

Since libvirt does not support such names, the correct fix would be to block such VM names on the engine side, not to fix vdsm. When libvirt starts supporting such names, we can change the engine and vdsm to support non-ASCII VM names.

The proposed change, converting *all* log messages to unicode, is a vdsm performance regression.
(In reply to Natalie Gavrielov from comment #9)
> Created a VM with Cyrillic characters in its name; the VM was created but
> failed to start, with the following messages:

Natalie, please attach vdsm.log covering the timeframe when you attempted to start a VM with a non-ASCII name. This is always needed, even when you open an engine bug.
(In reply to Milan Zamazal from comment #15)
> I identified two bugs regarding this problem:
> - VDSM crashes in the logging code (I'll fix this one).

This is probably bug 1281940.

> - libvirt fails to create a domain with non-ASCII characters, see
> https://bugzilla.redhat.com/show_bug.cgi?id=1062943

This can be solved by encoding the VM name using the ASCII charset when creating the domain XML; hopefully we are not using the VM name to identify the VM, but the domain UUID. Or, by using the VM UUID instead of the name if the name cannot be encoded as ASCII.

I guess users will be much happier if a VM runs but its name is displayed in some places (e.g. in the guest) as '???_???' instead of "рус_лин", compared to the current situation, where the VM will not run at all.
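A minimal sketch of the idea above, under Python-2-era vdsm assumptions; the helper name and signature are hypothetical, not vdsm's actual API:

def libvirt_safe_name(vm_name, vm_id):
    # Hypothetical helper: keep the name when it is pure ASCII, otherwise
    # fall back either to a lossy ASCII replacement (a Cyrillic name is
    # rendered as question marks) or to a UUID-based name.
    try:
        vm_name.encode("ascii")
        return vm_name
    except UnicodeEncodeError:
        # Lossy option:
        # return vm_name.encode("ascii", "replace").decode("ascii")
        # Unambiguous option: fall back to the VM UUID
        return "vm-%s" % vm_id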
The attached patch, http://gerrit.ovirt.org/48570, implements the solution proposed in comment 18. It works, but a fix on the engine side may be better. The advantage of a vdsm-side fix is that it works with older engines; users tend to upgrade the engine more slowly than vdsm on the host side.
On the engine side we should mostly be using the UUID. We need to check whether the REST API can return unicode names; if yes, then I'm for using ASCII-only names for vdsm purposes (rather than plain UUIDs - in most cases this makes it easier to identify the VM via vdsClient or virsh): a simple transliteration to ASCII only, cutting off trailing characters and using a digit suffix for colliding names.
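A rough sketch of that transliteration scheme, for illustration only: a real transliteration would need a character mapping table, whereas here non-ASCII characters are simply dropped, and the length limit and the set of already-taken names are assumptions.

def ascii_name(name, taken, max_len=64):
    # Reduce the (unicode) name to ASCII, truncate to the name-length
    # limit, and add a numeric suffix on collisions.
    # 'taken' is the set of names already in use (an assumption here).
    base = name.encode("ascii", "ignore").decode("ascii") or "vm"
    base = base[:max_len]
    candidate, n = base, 1
    while candidate in taken:
        suffix = "_%d" % n
        candidate = base[:max_len - len(suffix)] + suffix
        n += 1
    return candidate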
Created attachment 1094882 [details]
VDSM log demonstrating the unicode bug

I'm attaching an excerpt from my vdsm.log demonstrating the crash in the logging code due to a string/unicode mismatch when using non-ASCII characters.
(In reply to Milan Zamazal from comment #22)
> Created attachment 1094882 [details]
> VDSM log demonstrating the unicode bug
>
> I'm attaching an excerpt from my vdsm.log demonstrating the crash in the
> logging code due to a string/unicode mismatch when using non-ASCII
> characters.

Can you find the logging call that triggered this failure?

This error happens when you mix u'ascii' and 'utf8' strings. The u'ascii' value comes from vmId, because the builtin json library returns all strings as unicode. We need to know what the utf-8 encoded string was.

I discussed this issue with Dan, and we agreed to go with the all-unicode way, since we want to make it easy to run on both Python 2 and 3. To make this work, we want all strings in vdsm to use unicode. Any value we get from the outside world - e.g. XML contents from XML libraries or from libvirt, data read from files, output from various commands - must be decoded to unicode before using it in the application. Then all logging calls can use unicode freely, without any encoding issues.

I suspect the logging call that prints the domain XML, which is a utf-8 encoded string.
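A minimal sketch of this decode-at-the-boundary rule, assuming Python-2-era vdsm where the VM id arrives as unicode (from json) and the domain XML as UTF-8 encoded bytes:

# -*- coding: utf-8 -*-
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("vm")

vm_id = u"22ee032b-7371-40e8-a1b8-098a8a0ede63"        # unicode, from json
domain_xml = u"<name>рус_лин</name>".encode("utf-8")   # bytes, from the outside world

# Formatting raw UTF-8 bytes together with unicode triggers an implicit
# ascii decode on Python 2 and crashes the logging call; decoding external
# data to unicode first keeps the later format operations safe.
log.info(u"creating domain: %s", domain_xml.decode("utf-8"))
log.info(u"vm id: %s", vm_id)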
Milan, let's continue the discussion of the logging breakage in bug 1281940 (or another one) and leave this bug for getting VMs with non-ASCII names to run on the vdsm side.
(In reply to Nir Soffer from comment #23)
> I suspect the logging call that prints the domain XML, which is a utf-8
> encoded string.

Exactly.
We discussed how to handle non-ASCII VM names; here is the summary:

- Conversion on the VDSM side is not very safe; we should avoid using different names in different places.
- But we could use an ASCII name internally everywhere, while displaying the original (possibly non-ASCII) name to the user in the web interface where possible.
- A new database column could be added to the engine database, containing the (possibly non-ASCII) name as given by the user.
- The meaning of the original VM name database column and its values remains unchanged. The engine interface just ensures it contains only permitted characters (note that even some ASCII characters, such as spaces, are invalid).
- No changes are needed in VDSM in that case.

Some ideas about transforming user-given VM names into "safe" forms:

- It would be nice if the safe name had some relation to the original name, to be identifiable in VDSM logs, for example.
- Maybe something like UTF-7 could be used; it avoids some problems and can be decoded back to the original name anywhere.
- But beware that there is a limit on VM name length and UTF-7 is not very concise (although it's still better than some alternatives serving the same purpose).
- Transformed names must be mutually unique, so techniques like truncation or simplified character substitution may require adding unique suffixes or so.
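A small illustration of the UTF-7 idea (not a committed design): the encoded form is pure ASCII and fully reversible, but noticeably longer than the original, which matters given the VM name length limit.

# -*- coding: utf-8 -*-
# UTF-7 round trip for a Cyrillic VM name: ASCII-only and reversible,
# at the cost of a longer string.
original = u"рус_лин"

safe = original.encode("utf-7").decode("ascii")
assert safe.encode("ascii").decode("utf-7") == original

print(safe)                      # ASCII-only encoded form
print(len(original), len(safe))  # the encoded form is noticeably longer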
(In reply to Milan Zamazal from comment #26)

It seems like an over-specification to me. We currently have a field for the VM name that can contain non-ASCII characters. It fails because libvirt doesn't support it, so we need to modify the name we pass to libvirt.

1. I don't see a reason to store the name we pass to libvirt in the engine - what is it good for?
2. Is anyone interested in knowing the name of the VM in libvirt? From the engine's point of view, the name in libvirt/vdsm is not interesting.

If in 4.0 we can break backward compatibility, maybe we can just pass the ID of the VM as its name, or encode it the way disks are encoded in storage.
Sending the VM id instead of the name, or "vm-<uuid>", should be enough on the engine side. However, doing this on the vdsm side has the benefit of supporting old engines; users seem to be more cautious about updating the engine.
(In reply to Nir Soffer from comment #28) I agree that it is better to do that in VDSM.
It's fine to pass an ASCII name to libvirt as long as we can be sure that we always do so and that there is no danger that the original name appears somewhere the libvirt name is expected. I personally can't tell, but some people here are a bit skeptical about working with two different names for the same object.

As for the UUID, is it user-friendly to use it? If not, do we have a better option?
Let's wait for a resolution of bug 1285720.
We need some resolution by GA - either fix it in the underlying platform or block it in the UI - and somehow handle existing VMs (not sure how, though).
Bug 1282846 should provide a solution that is transparent to RHEV: no changes are needed, and internationalized VMs should start working once that bug is fixed. Keeping this open to depend on the right libvirt once it's available.
It seems the problem has been fixed in https://bugzilla.redhat.com/1282846 and https://bugzilla.redhat.com/1281940. I verified that with current libvirt master and Vdsm master it's possible to start a VM with non-ASCII characters in its name.
Waiting for the downstream backport of the libvirt fix: https://bugzilla.redhat.com/1308494
will be addressed by https://bugzilla.redhat.com/show_bug.cgi?id=1292096#c10
New libvirt available downstream, Vdsm dependency in 3.6 updated.
Works for me. Bug #1323140 has been verified; I tested the same workflow as in bug #1323140 on these components:

Engine:
ovirt-engine-setup-base-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-websocket-proxy-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-restapi-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-cli-4.0.0.0-0.3.20160208.gitded440f.el7.centos.noarch
ovirt-engine-lib-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.1.0-0.0.master.20160215110938.git8ebdaba.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-webadmin-portal-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-userportal-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-backend-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
ovirt-engine-wildfly-10.0.0-1.el7.x86_64
ovirt-engine-tools-backup-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-tools-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-setup-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-dbscripts-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
ovirt-engine-sdk-python-3.6.5.1-0.1.20160330.gitbec59a2.el7.centos.noarch
ovirt-engine-extensions-api-impl-4.0.0-0.0.master.20160404161620.git4ffd5a4.el7.centos.noarch
Linux 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 31 16:04:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.2.1511 (Core)

Host:
qemu-kvm-rhev-2.3.0-31.el7_2.11.x86_64
libvirt-client-1.2.17-13.el7_2.4.x86_64
vdsm-4.17.999-817.git03d82f6.el7.centos.noarch
mom-0.5.2-0.0.master.20160226114234.git95535d1.el7.noarch
sanlock-3.2.4-2.el7_2.x86_64
Linux 3.10.0-327.18.2.el7.x86_64 #1 SMP Fri Apr 8 05:09:53 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.2 (Maipo)

PASS.
Works for me on these components:

NGN 4.0 Host:
ovirt-vmconsole-host-1.0.2-0.0.master.20160517094103.git06df50a.el7.noarch
vdsm-4.17.999-1155.gitcf216a0.el7.centos.x86_64
ovirt-setup-lib-1.0.2-0.0.master.20160502125738.gitf05af9e.el7.centos.noarch
ovirt-release40-4.0.0-0.3.beta1.noarch
ovirt-vmconsole-1.0.2-0.0.master.20160517094103.git06df50a.el7.noarch
libvirt-client-1.2.17-13.el7_2.4.x86_64
ovirt-engine-sdk-python-3.6.5.1-0.1.20160507.git5fb7e0e.el7.centos.noarch
ovirt-host-deploy-1.5.0-0.1.alpha1.el7.centos.noarch
ovirt-hosted-engine-setup-2.0.0-0.1.beta1.el7.centos.noarch
ovirt-release-host-node-4.0.0-0.3.beta1.el7.noarch
ovirt-engine-appliance-4.0-20160528.1.el7.centos.noarch
sanlock-3.2.4-2.el7_2.x86_64
ovirt-hosted-engine-ha-2.0.0-0.1.beta1.el7.centos.noarch
ovirt-node-ng-image-update-placeholder-4.0.0-0.3.beta1.el7.noarch
CentOS Linux release 7.2.1511 (Core)
Linux 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Linux version 3.10.0-327.18.2.el7.x86_64 (builder.centos.org) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Thu May 12 11:03:55 UTC 2016

Engine:
rhevm-guest-agent-common-1.0.12-1.el7ev.noarch
rhevm-setup-plugins-4.0.0-0.3.alpha.el7ev.noarch
rhevm-branding-rhev-4.0.0-0.0.master.20160219183625.el7ev.noarch
rhevm-doc-4.0.0-2.el7ev.noarch
rhevm-4.0.0-0.7.master.el7ev.noarch
rhevm-dependencies-4.0.0-0.1.alpha.git9ae0cc3.el7ev.noarch
Linux 3.10.0-327.22.1.el7.x86_64 #1 SMP Mon May 16 13:31:48 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
Linux version 3.10.0-327.22.1.el7.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Mon May 16 13:31:48 EDT 2016
Red Hat Enterprise Linux Server release 7.2 (Maipo)

The VM was created and ran, as shown in the attached movie.
Created attachment 1163278 [details] 111-2016-05-31_18.03.12.mkv
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-1743.html