Bug 1561964 - hosted-engine VM created with node zero misses the console device
Summary: hosted-engine VM created with node zero misses the console device
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: General
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.3.0
Assignee: Simone Tiraboschi
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks: 1590943 1628836
 
Reported: 2018-03-29 09:22 UTC by Yihui Zhao
Modified: 2019-02-13 07:45 UTC (History)
16 users

Fixed In Version:
Clone Of:
: 1590943 1628836 (view as bug list)
Environment:
Last Closed: 2019-02-13 07:45:06 UTC
oVirt Team: Integration
Embargoed:
rule-engine: ovirt-4.3+
ylavi: exception+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1420115 0 unspecified CLOSED Console button do not work for hosted engine 2021-02-22 00:41:40 UTC

Internal Links: 1420115

Description Yihui Zhao 2018-03-29 09:22:39 UTC
Description of problem: 
1. After finishing the ansible deployment, logging in to the HE VM with "hosted-engine --console" failed.
2. [root@ibm-x3650m5-05 ~]# hosted-engine --console
The engine VM is running on this host
Connected to domain HostedEngine
Escape character is ^]
error: internal error: cannot find character device <null>
3. # virsh dumpxml HostedEngine

<domain type='kvm' id='2'>
  <name>HostedEngine</name>
  <uuid>0aea11dc-ec1f-409e-a258-f122c4ea0034</uuid>
  <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ns0:qos/>
    <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ovirt-vm:clusterVersion>4.2</ovirt-vm:clusterVersion>
    <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
    <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
    <ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize>
    <ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
    <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
    <ovirt-vm:startTime type="float">1522310880.24</ovirt-vm:startTime>
    <ovirt-vm:device mac_address="52:54:00:5e:8e:c7">
        <ovirt-vm:network>ovirtmgmt</ovirt-vm:network>
        <ovirt-vm:specParams/>
        <ovirt-vm:vm_custom/>
    </ovirt-vm:device>
    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>684367c3-7c33-4240-8be5-6ec17b8f1c90</ovirt-vm:domainID>
        <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>
        <ovirt-vm:imageID>642e3b2d-9747-4790-b37e-6f682fb483cd</ovirt-vm:imageID>
        <ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID>
        <ovirt-vm:shared>exclusive</ovirt-vm:shared>
        <ovirt-vm:volumeID>ae04328a-9309-414e-a377-bf9ef6202576</ovirt-vm:volumeID>
        <ovirt-vm:specParams/>
        <ovirt-vm:vm_custom/>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>684367c3-7c33-4240-8be5-6ec17b8f1c90</ovirt-vm:domainID>
                <ovirt-vm:imageID>642e3b2d-9747-4790-b37e-6f682fb483cd</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/10.66.148.11:_home_yzhao_nfs1/684367c3-7c33-4240-8be5-6ec17b8f1c90/images/642e3b2d-9747-4790-b37e-6f682fb483cd/ae04328a-9309-414e-a377-bf9ef6202576.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/10.66.148.11:_home_yzhao_nfs1/684367c3-7c33-4240-8be5-6ec17b8f1c90/images/642e3b2d-9747-4790-b37e-6f682fb483cd/ae04328a-9309-414e-a377-bf9ef6202576</ovirt-vm:path>
                <ovirt-vm:volumeID>ae04328a-9309-414e-a377-bf9ef6202576</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>
</ovirt-vm:vm>
  </metadata>
  <maxMemory slots='16' unit='KiB'>66961408</maxMemory>
  <memory unit='KiB'>16740352</memory>
  <currentMemory unit='KiB'>16740352</currentMemory>
  <vcpu placement='static' current='4'>64</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>RHEV Hypervisor</entry>
      <entry name='version'>7.5-2.0.el7</entry>
      <entry name='serial'>D12431EE-E269-11E7-B860-0894EF59DBF4</entry>
      <entry name='uuid'>0aea11dc-ec1f-409e-a258-f122c4ea0034</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.5.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Broadwell</model>
    <topology sockets='16' cores='4' threads='1'/>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='f16c'/>
    <feature policy='require' name='rdrand'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='require' name='xsaveopt'/>
    <feature policy='require' name='abm'/>
    <numa>
      <cell id='0' cpus='0-3' memory='16740352' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver error_policy='report'/>
      <source startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ua-f07200dc-8b0e-4370-87a3-993619c0a7bb'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/var/run/vdsm/storage/684367c3-7c33-4240-8be5-6ec17b8f1c90/642e3b2d-9747-4790-b37e-6f682fb483cd/ae04328a-9309-414e-a377-bf9ef6202576'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>642e3b2d-9747-4790-b37e-6f682fb483cd</serial>
      <alias name='ua-642e3b2d-9747-4790-b37e-6f682fb483cd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='ua-e90d2383-5ae5-4a05-a361-ecde950bedbd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='ua-f0bd44db-e991-4b33-be92-6a4530b3fd2c'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <lease>
      <lockspace>684367c3-7c33-4240-8be5-6ec17b8f1c90</lockspace>
      <key>ae04328a-9309-414e-a377-bf9ef6202576</key>
      <target path='/rhev/data-center/mnt/10.66.148.11:_home_yzhao_nfs1/684367c3-7c33-4240-8be5-6ec17b8f1c90/images/642e3b2d-9747-4790-b37e-6f682fb483cd/ae04328a-9309-414e-a377-bf9ef6202576.lease'/>
    </lease>
    <interface type='bridge'>
      <mac address='52:54:00:5e:8e:c7'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <alias name='ua-ebcd8dbe-717f-40e4-b61e-952f5b088fce'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/0aea11dc-ec1f-409e-a258-f122c4ea0034.ovirt-guest-agent.0'/>
      <target type='virtio' name='ovirt-guest-agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/0aea11dc-ec1f-409e-a258-f122c4ea0034.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/0aea11dc-ec1f-409e-a258-f122c4ea0034.org.ovirt.hosted-engine-setup.0'/>
      <target type='virtio' name='org.ovirt.hosted-engine-setup.0' state='disconnected'/>
      <alias name='channel3'/>
      <address type='virtio-serial' controller='0' bus='0' port='4'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='spice' port='5900' tlsPort='5901' autoport='yes' listen='10.73.73.103' passwdValidTo='2018-03-29T08:46:49' connected='keep'>
      <listen type='network' address='10.73.73.103' network='vdsm-ovirtmgmt'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <sound model='ich6'>
      <alias name='ua-da593391-9564-4540-83cc-7baba8ba8696'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1' primary='yes'/>
      <alias name='ua-2321e186-5264-4a6d-b8ee-0cd4897dd20c'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <stats period='5'/>
      <alias name='ua-2cdf255b-ed44-4975-b625-3054218240db'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='ua-630baa25-9a01-42a1-a4b4-33deeb1e20aa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </rng>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c134,c785</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c134,c785</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>



4. In the domain XML there is no console device configured; a working VM would include, for example:
"""
<console type='pty' tty='/dev/pts/1'>
  <source path='/dev/pts/1'/>
  <target type='virtio' port='0'/>
  <alias name='console0'/>
</console>
"""
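For illustration only, here is a minimal Python sketch (a hypothetical helper with trimmed-down XML snippets, not part of any oVirt tooling) of the check at issue in this bug: whether a dumped domain XML contains a <console> device at all.

```python
import xml.etree.ElementTree as ET

def has_console_device(domain_xml: str) -> bool:
    """Return True if the libvirt domain XML defines a <console> device."""
    return ET.fromstring(domain_xml).find("./devices/console") is not None

# Trimmed-down stand-ins for the two cases seen in this bug
# (minimal illustrative XML, not full dumps):
broken = ("<domain type='kvm'><devices>"
          "<emulator>/usr/libexec/qemu-kvm</emulator>"
          "</devices></domain>")
healthy = ("<domain type='kvm'><devices>"
           "<console type='pty' tty='/dev/pts/1'>"
           "<target type='virtio' port='0'/>"
           "</console></devices></domain>")

print(has_console_device(broken))   # False
print(has_console_device(healthy))  # True
```

On the affected deployments the dumped HostedEngine XML corresponds to the "broken" case, which is why `virsh console` (and the `hosted-engine --console` wrapper around it) fails with "cannot find character device".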
Version-Release number of selected component (if applicable): 
rhvh-4.2.2.0-0.20180322.0+1
cockpit-ovirt-dashboard-0.11.19-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.14-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.7-1.el7ev.noarch
rhvm-appliance-4.2-20180322.0.el7.noarch


How reproducible: 
100%

Steps to Reproduce: 
1. Deploy HE with NFS storage via cockpit
2. Log in to the HE VM with "hosted-engine --console"
 
Actual results:  
As in the description: logging in to the HE VM with "hosted-engine --console" fails.

Expected results: 
"hosted-engine --console" logs in to the HE VM successfully.

Additional info:

Comment 1 Ido Rosenzwig 2018-04-01 09:25:32 UTC
works for me on ovirt-hosted-engine-setup-2.2.16

[root@host1 ~]# hosted-engine --console
The engine VM is running on this host
Connected to domain HostedEngine
Escape character is ^]

CentOS Linux 7 (Core)
Kernel 3.10.0-693.21.1.el7.x86_64 on an x86_64

she-engine login: root
Password: 
Last login: Sun Apr  1 05:08:37 from 192.168.122.1
[root@she-engine ~]# 

on virsh xmldump I have:

<console type='pty' tty='/dev/pts/1'>
  <source path='/dev/pts/1'/>
  <target type='virtio' port='0'/>
  <alias name='console0'/>
</console>

Comment 2 Yihui Zhao 2018-04-12 05:38:49 UTC
Also hit this issue on these versions:

rhvh-4.2.2.1-0.20180410.0+1
cockpit-bridge-160-3.el7.x86_64
cockpit-160-3.el7.x86_64
cockpit-dashboard-160-3.el7.x86_64
cockpit-ws-160-3.el7.x86_64
cockpit-system-160-3.el7.noarch
cockpit-storaged-160-3.el7.noarch
cockpit-ovirt-dashboard-0.11.20-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.16-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.10-1.el7ev.noarch
rhvm-appliance-4.2-20180410.0.el7.noarch


[root@dell-per515-02 ~]# hosted-engine --console
The engine VM is running on this host
Connected to domain HostedEngine
Escape character is ^]
error: internal error: cannot find character device <null>

Comment 3 Sandro Bonazzola 2018-04-16 11:37:47 UTC
Ido please check on Yihui Zhao system why this happens while on your system it works.

Comment 5 Simone Tiraboschi 2018-04-16 13:56:20 UTC
I think it could be due to https://gerrit.ovirt.org/#/c/90288/

Comment 6 Sandro Bonazzola 2018-04-26 12:19:58 UTC
Moving to 4.2.4 as this was not acknowledged as a blocker for 4.2.3.

Comment 7 Simone Tiraboschi 2018-04-26 13:27:11 UTC
As pointed out by Francesco on https://bugzilla.redhat.com/show_bug.cgi?id=1420115#c34
the XML for libvirt generated by the engine is missing the console device.

From the ansible playbook we are creating a regular VM and we are requiring the serial console device:
https://github.com/oVirt/ovirt-hosted-engine-setup/blob/master/src/ansible/create_target_vm.yml#L109

At this point the VM is just a regular VM and the serial console works; but then we flag that VM as the hosted-engine VM by explicitly setting
origin=6 at DB level, and at that point the console device disappears.
https://github.com/oVirt/ovirt-hosted-engine-setup/blob/master/src/ansible/create_target_vm.yml#L160

The user can still try to manually edit the engine VM and set "Enable VirtIO serial console", but the checkbox seems to always be reset to False.

Manually setting origin=3 at DB level makes it work again, so it seems we have some explicit restriction in the engine for the hosted-engine VM regarding the console device.

This breaks:
- hosted-engine --console
- the serial console through vmconsole proxy

Comment 8 Sandro Bonazzola 2018-05-02 12:18:37 UTC
Michal, can you help with this?

Comment 9 Ido Rosenzwig 2018-05-07 11:42:23 UTC
cleaning NEEDINFO flag

Comment 10 Yaniv Kaul 2018-05-23 10:50:22 UTC
(In reply to Sandro Bonazzola from comment #8)
> Michal, can you help with this?

Michal, has anyone looked at this?

Comment 11 Michal Skrivanek 2018-05-24 12:48:07 UTC
(In reply to Simone Tiraboschi from comment #7)
> As pointed out by Francesco on
> https://bugzilla.redhat.com/show_bug.cgi?id=1420115#c34
> the XML for libvirt generated by the engine is missing the console device.
> 
> From the ansible playbook we are creating a regular VM and we are requiring
> the serial console device:
> https://github.com/oVirt/ovirt-hosted-engine-setup/blob/master/src/ansible/
> create_target_vm.yml#L109
> 
> At this point the VM is just a regular VM and the serial console works; but
> then we flag that VM as the hosted-engine VM explicitly setting 
> origin=6 at DB level and at that point the console device disappears.
> https://github.com/oVirt/ovirt-hosted-engine-setup/blob/master/src/ansible/
> create_target_vm.yml#L160

Right, this should create the proper serial console devices, the same as when you use the "Enable VirtIO serial console" checkbox.

> The user can still try to manually edit the engine VM and set "Enable VirtIO
> serial console", but the checkbox seems to always be reset to False.

This is most likely coming from the HE-specific code then. Previously the serial console used to be disabled for the HE VM (I assume), as it used its own bare console device not supported by the rest of oVirt.
These restrictions and differences are handled by the SLA team - reassigning.
 
> Manually setting origin=3 at DB level makes it work again, so it seems we
> have some explicit restriction in the engine for the hosted-engine VM
> regarding the console device.

likely

> This breaks:
> - hosted-engine --console

Simone, was it changed to use the vmconsole's way of accessing console via ssh? 

> - the serial console through vmconsole proxy

Comment 12 Simone Tiraboschi 2018-05-24 12:56:41 UTC
(In reply to Michal Skrivanek from comment #11)
> Simone, was it changed to use the vmconsole's way of accessing console via
> ssh? 

No, it's still directly wrapping 'virsh console'.
'hosted-engine --console' is basically a troubleshooting tool to be used when the engine VM is not working as expected.
Will the vmconsole work when the engine VM is not accessible over the network or something like that?

Comment 13 Michal Skrivanek 2018-05-25 08:13:24 UTC
(In reply to Simone Tiraboschi from comment #12)
> (In reply to Michal Skrivanek from comment #11)
> > Simone, was it changed to use the vmconsole's way of accessing console via
> > ssh? 
> 
> No it's still directly wrapping 'virsh console'.

I see. No, AFAIK virsh console requires a PTY, but the oVirt implementation binds it to a socket accessed through ssh from the vmconsole-proxy host.

> 'hosted-engine --console' is basically a troubleshooting tool to be used
> when the engine VM is not working as expected.
> Will the vmconsole work when the engine VM is not accessible over the
> network or something like that?

You'd have to hook in the middle, I guess. The vmconsole-proxy requires a running engine for authentication and then invokes the vmconsole-host on the target host. If you'd invoke that helper from within "hosted-engine --console" it could work.

Plus an engine change to allow enabling it; apparently it's blocked somewhere from before, when ovirt-vmconsole was disallowed for HE.
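As a rough illustration of the PTY-vs-socket distinction discussed above, a short Python sketch (hypothetical minimal XML, including the illustrative socket path) that tells a PTY-backed console, which plain `virsh console` can attach to, from a socket-backed one, which would need the vmconsole proxy path:

```python
import xml.etree.ElementTree as ET

def console_backend(domain_xml: str):
    """Return the 'type' of the first <console> device ('pty', 'unix', ...),
    or None when the domain defines no console at all."""
    console = ET.fromstring(domain_xml).find("./devices/console")
    return None if console is None else console.get("type")

# Hypothetical minimal examples; the unix socket path is illustrative only.
pty_vm = ("<domain><devices>"
          "<console type='pty' tty='/dev/pts/1'/>"
          "</devices></domain>")
socket_vm = ("<domain><devices>"
             "<console type='unix'>"
             "<source mode='bind' path='/var/run/ovirt-vmconsole/vm.sock'/>"
             "<target type='virtio' port='0'/>"
             "</console></devices></domain>")

print(console_backend(pty_vm))     # pty
print(console_backend(socket_vm))  # unix
```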

Comment 14 Sandro Bonazzola 2018-06-14 07:11:05 UTC
Tracking 4.3 side here: 4.2.5 is tracked in bug #1590943

Comment 15 Nikolai Sednev 2018-07-26 09:31:07 UTC
Moving back to ASSIGNED per https://bugzilla.redhat.com/show_bug.cgi?id=1590943#c10.

Comment 16 Yihui Zhao 2018-08-03 03:33:48 UTC
Update:

Tested with rhvh-4.2.5.1-0.20180801.0+1; hit the issue there as well.

packages:
rhvm-appliance-4.2-20180801.0.el7.noarch

ovirt-vmconsole-host-1.0.5-4.el7ev.noarch
ovirt-node-ng-nodectl-4.2.0-0.20170814.0.el7.noarch
ovirt-hosted-engine-setup-2.2.25-1.el7ev.noarch
cockpit-machines-ovirt-169-1.el7.noarch
ovirt-imageio-common-1.4.2-0.el7ev.noarch
ovirt-imageio-daemon-1.4.2-0.el7ev.noarch
cockpit-ovirt-dashboard-0.11.31-1.el7ev.noarch
ovirt-host-deploy-1.7.4-1.el7ev.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
python-ovirt-engine-sdk4-4.2.7-1.el7ev.x86_64
ovirt-vmconsole-1.0.5-4.el7ev.noarch
ovirt-hosted-engine-ha-2.2.16-1.el7ev.noarch
ovirt-host-4.2.3-1.el7ev.x86_64
ovirt-setup-lib-1.1.4-1.el7ev.noarch
ovirt-provider-ovn-driver-1.2.13-1.el7ev.noarch
ovirt-host-dependencies-4.2.3-1.el7ev.x86_64


Result:

[root@localhost ~]# hosted-engine --console
The engine VM is running on this host
Connected to domain HostedEngine
Escape character is ^]
error: internal error: cannot find character device <null>

Comment 17 Simone Tiraboschi 2018-08-03 08:03:11 UTC
Yes, hit it here as well, once in 4 attempts.

I fear we still have a kind of race condition between the definition of the console device and the update of the OVF_STORE on disk.

Comment 18 Nikolai Sednev 2018-08-26 06:25:42 UTC
Per https://bugzilla.redhat.com/show_bug.cgi?id=1590943#c21, moving to VERIFIED.

Comment 19 Sandro Bonazzola 2018-11-02 14:36:11 UTC
This bugzilla is included in oVirt 4.2.7 release, published on November 2nd 2018.

Since the problem described in this bug report should be
resolved in oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

Comment 20 Sandro Bonazzola 2018-11-02 14:59:23 UTC
Closed by mistake, moving back to qa -> verified

Comment 21 Sandro Bonazzola 2019-02-13 07:45:06 UTC
This bugzilla is included in oVirt 4.3.0 release, published on February 4th 2019.

Since the problem described in this bug report should be
resolved in oVirt 4.3.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

