Bug 1160995 - Should disallow using negative values for 'max_*' in libvirtd.conf
Summary: Should disallow using negative values for 'max_*' in libvirtd.conf
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-11-06 05:37 UTC by Yang Yang
Modified: 2015-11-19 05:55 UTC
CC List: 8 users

Fixed In Version: libvirt-1.2.13-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 05:55:13 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:2202 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2015-11-19 08:17:58 UTC

Description Yang Yang 2014-11-06 05:37:25 UTC
Description of problem:
Setting a negative value for 'max_*' (e.g. max_clients, max_queued_clients, max_anonymous_clients, max_workers) gets translated into a large positive value. Libvirt should disallow negative values for 'max_*'.

Version-Release number of selected component (if applicable):
libvirt-1.2.8-6.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Set the 'max_*' options as follows:
# grep max /etc/libvirt/libvirtd.conf
 max_clients= -40
 max_queued_clients = 0
 max_anonymous_clients = -1
 max_workers = -40

2. Start 50 LXC containers:
 # for i in {1..50}; do virsh -c lxc:/// start lxc-test-$i & done
 
3. Check the libvirtd debug log:
# grep virNetServerCheckLimits /var/log/libvirtd.log
2014-11-06 05:20:36.470+0000: 16574: debug : virNetServerCheckLimits:1078 : Considering re-enabling services: nclients=5 nclients_max=18446744073709551576 nclients_unauth=4 nclients_unauth_max=18446744073709551615
2014-11-06 05:20:36.470+0000: 16574: debug : virNetServerCheckLimits:1083 : Re-enabling services

Actual results:
Setting 'max_clients' to '-40' translates into 18446744073709551576.
Setting 'max_anonymous_clients' to '-1' translates into 18446744073709551615.
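
These huge numbers are simply the two's-complement bit pattern of the negative value reinterpreted as an unsigned 64-bit integer. A minimal C sketch of the conversion (illustrative only, not libvirt code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    long long max_clients = -40;            /* value read from libvirtd.conf */
    long long max_anonymous_clients = -1;

    /* Assigning to an unsigned 64-bit limit wraps modulo 2^64. */
    uint64_t nclients_max = (uint64_t)max_clients;
    uint64_t nclients_unauth_max = (uint64_t)max_anonymous_clients;

    printf("%llu\n", (unsigned long long)nclients_max);        /* 18446744073709551576 */
    printf("%llu\n", (unsigned long long)nclients_unauth_max); /* 18446744073709551615 */
    return 0;
}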

Expected results:
Negative values for 'max_*' should be rejected.

Additional info:

Comment 1 Yang Yang 2014-11-06 05:40:17 UTC
Hi Michal,
As described in comment #8 in bug 992980, I opened this one to track the negative value issue.

Thanks
Yang

Comment 3 Michal Privoznik 2014-12-09 15:52:52 UTC
Patches proposed upstream:

https://www.redhat.com/archives/libvir-list/2014-December/msg00522.html

Comment 4 Michal Privoznik 2014-12-15 09:42:08 UTC
And I've just pushed the patches upstream:

commit ca4f9518b8d381f8f14e37dc4d0367d33d5f3737
Author:     Michal Privoznik <mprivozn>
AuthorDate: Tue Dec 9 16:22:09 2014 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Mon Dec 15 10:34:18 2014 +0100

    virconf: Introduce VIR_CONF_ULONG
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1160995
    
    In our config files users are expected to pass several integer values
    for different configuration knobs. However, majority of them expect a
    nonnegative number and only a few of them accept a negative number too
    (notably keepalive_interval in libvirtd.conf).
    Therefore, a new type to config value is introduced: VIR_CONF_ULONG
    that is set whenever an integer is positive or zero. With this
    approach knobs accepting VIR_CONF_LONG should accept VIR_CONF_ULONG
    too.
    
    Signed-off-by: Michal Privoznik <mprivozn>

commit f81a70218081b68754280a6b71bbf48c2c1f4d6f
Author:     Michal Privoznik <mprivozn>
AuthorDate: Tue Dec 9 14:53:28 2014 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Mon Dec 15 10:34:18 2014 +0100

    virConfType: switch to VIR_ENUM_{DECL,IMPL}
    
    There's no need to implement ToString() function like we do if we
    can use our shiny macros.
    
    Signed-off-by: Michal Privoznik <mprivozn>

commit 4523b7769dc2baf91d63cd3cb2bf164e76825d78
Author:     Michal Privoznik <mprivozn>
AuthorDate: Tue Dec 9 14:48:54 2014 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Mon Dec 15 10:34:18 2014 +0100

    virConfSetValue: Simplify condition
    
    There's no need for condition of the following form:
    
      if (str && STREQ(str, dst))
    
    since we have STREQ_NULLABLE macro that handles NULL cases.
    
    Signed-off-by: Michal Privoznik <mprivozn>

v1.2.11-9-gca4f951
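
To illustrate the approach of the first commit, here is a small standalone sketch (hypothetical code, not the actual libvirt parser): a non-negative integer literal is tagged with a separate unsigned type, so a knob that accepts only the unsigned type can reject negative input with a type error like the one seen during verification below.

#include <stdio.h>
#include <stdlib.h>

typedef enum { CONF_LLONG, CONF_ULLONG } confType;

typedef struct {
    confType type;
    long long l;            /* valid when type == CONF_LLONG  */
    unsigned long long ul;  /* valid when type == CONF_ULLONG */
} confValue;

/* Parse an integer literal and tag non-negative values as unsigned. */
static confValue parseInt(const char *str)
{
    confValue v;
    long long val = strtoll(str, NULL, 10);

    if (val >= 0) {
        v.type = CONF_ULLONG;
        v.ul = (unsigned long long)val;
    } else {
        v.type = CONF_LLONG;
        v.l = val;
    }
    return v;
}

int main(void)
{
    confValue v = parseInt("-40");

    /* A knob such as max_clients that accepts only the unsigned type can
     * now fail with "invalid type: got long; expected unsigned long". */
    if (v.type != CONF_ULLONG)
        fprintf(stderr, "max_clients: invalid type: got long; expected unsigned long\n");
    return 0;
}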

Comment 7 vivian zhang 2015-06-01 09:35:49 UTC
I can reproduce this bug with build
libvirt-1.2.8-6.el7.x86_64

Verifying it with build libvirt-1.2.15-2.el7.x86_64:

1. Set any of the following options to a negative value:
 
#/etc/libvirt/libvirtd.conf

listen_tcp = -1
listen_tls = -40
mdns_adv = -1
tls_no_sanity_certificate = -1


max_clients = -1
max_queued_clients = -1
max_anonymous_clients = -1
max_workers = -1
min_workers = -1
prio_workers = -1
max_client_requests = -5

audit_level = -2
audit_logging = -1
log_level = -1
keepalive_count = -5
keepalive_required = -1

#/etc/libvirt/qemu.conf
remote_websocket_port_min = -5702
remote_websocket_port_max = -65535

remote_display_port_min = -5900
remote_display_port_max = -65535

migration_port_min = -49152
migration_port_max = -49215

max_processes = -5
max_files = -5

max_queued = -3

keepalive_count = -5



2. Restarting the libvirtd service fails:
# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service
Job for libvirtd.service failed. See 'systemctl status libvirtd.service' and 'journalctl -xn' for details.
[root@server libvirt]# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; disabled)
   Active: failed (Result: start-limit) since Mon 2015-06-01 12:56:44 HKT; 2s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
  Process: 29089 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=1/FAILURE)
 Main PID: 29089 (code=exited, status=1/FAILURE)

Jun 01 12:56:44 server.englab.nay.redhat.com systemd[1]: Failed to start Virtualization daemon.
Jun 01 12:56:44 server.englab.nay.redhat.com systemd[1]: Unit libvirtd.service entered failed state.
Jun 01 12:56:44 server.englab.nay.redhat.com systemd[1]: libvirtd.service holdoff time over, scheduling restart.
Jun 01 12:56:44 server.englab.nay.redhat.com systemd[1]: Stopping Virtualization daemon...
Jun 01 12:56:44 server.englab.nay.redhat.com systemd[1]: Starting Virtualization daemon...
Jun 01 12:56:44 server.englab.nay.redhat.com systemd[1]: libvirtd.service start request repeated too quickly, refusing to start.
Jun 01 12:56:44 server.englab.nay.redhat.com systemd[1]: Failed to start Virtualization daemon.
Jun 01 12:56:44 server.englab.nay.redhat.com systemd[1]: Unit libvirtd.service entered failed state.

# cat /var/log/messages shows the error messages below:

...
Jun  1 12:56:44 server libvirtd: 2015-06-01 04:56:44.775+0000: 29089: error : main:1234 : Can't load config file: unsupported configuration: remoteReadConfigFile: /etc/libvirt/libvirtd.conf: max_clients: invalid type: got long; expected unsigned long: /etc/libvirt/libvirtd.conf
Jun  1 12:56:44 server systemd: libvirtd.service: main process exited, code=exited, status=1/FAILURE
Jun  1 12:56:44 server systemd: Failed to start Virtualization daemon.
Jun  1 12:56:44 server systemd: Unit libvirtd.service entered failed state.
Jun  1 12:56:44 server systemd: libvirtd.service holdoff time over, scheduling restart.
Jun  1 12:56:44 server systemd: Stopping Virtualization daemon...
Jun  1 12:56:44 server systemd: Starting Virtualization daemon...
Jun  1 12:56:44 server systemd: libvirtd.service start request repeated too quickly, refusing to start.
Jun  1 12:56:44 server systemd: Failed to start Virtualization daemon.
Jun  1 12:56:44 server systemd: Unit libvirtd.service entered failed state.


3. Setting any of them to a positive value or zero allows the libvirtd service to start successfully.


4. When max_requests and keepalive_interval are set to a negative value, such as -1, -5, or -20, the libvirtd service starts successfully.

Checking the patch, these two parameters were not modified; is this the expected result?

#git show ca4f9518b8d381f8f14e37dc4d0367d33d5f3737
-    GET_CONF_INT(conf, filename, min_workers);
-    GET_CONF_INT(conf, filename, max_workers);
-    GET_CONF_INT(conf, filename, max_clients);
-    GET_CONF_INT(conf, filename, max_queued_clients);
-    GET_CONF_INT(conf, filename, max_anonymous_clients);
+    GET_CONF_UINT(conf, filename, min_workers);
+    GET_CONF_UINT(conf, filename, max_workers);
+    GET_CONF_UINT(conf, filename, max_clients);
+    GET_CONF_UINT(conf, filename, max_queued_clients);
+    GET_CONF_UINT(conf, filename, max_anonymous_clients);
 
-    GET_CONF_INT(conf, filename, prio_workers);
+    GET_CONF_UINT(conf, filename, prio_workers);
 
     GET_CONF_INT(conf, filename, max_requests);  <---- notice this line
-    GET_CONF_INT(conf, filename, max_client_requests);
+    GET_CONF_UINT(conf, filename, max_client_requests);



     GET_CONF_INT(conf, filename, keepalive_interval);  <---- notice this line
-    GET_CONF_INT(conf, filename, keepalive_count);
-    GET_CONF_INT(conf, filename, keepalive_required);
+    GET_CONF_UINT(conf, filename, keepalive_count);
+    GET_CONF_UINT(conf, filename, keepalive_required);


5. # grep lock_ /etc/libvirt/qemu.conf
lock_manager = "lockd"


6. Set any of the following parameters to a negative value in
#/etc/libvirt/qemu-lockd.conf

auto_disk_leases = -2
require_lease_for_disks = -10   

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service
[root@server libvirt]# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; disabled)
   Active: inactive (dead) since Mon 2015-06-01 17:32:44 HKT; 897ms ago
     Docs: man:libvirtd(8)
           http://libvirt.org
  Process: 6959 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 6959 (code=exited, status=0/SUCCESS)

Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.164+0000: 7035: debug : virFileCl... 13
Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.164+0000: 7035: debug : virFileCl... 14
Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.164+0000: 7035: debug : virFileCl... 15
Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.164+0000: 7035: debug : virFileCl... 16
Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.164+0000: 7035: debug : virFileCl... 18
Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.164+0000: 7035: debug : virFileCl... 20
Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.166+0000: 7036: debug : virFileCl...d 3
Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.166+0000: 7036: debug : virFileCl...d 5
Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.166+0000: 7036: debug : virFileCl...d 6
Jun 01 17:32:44 server.englab.nay.redhat.com libvirtd[6959]: 2015-06-01 09:32:44.166+0000: 7036: debug : virFileCl...d 7
Hint: Some lines were ellipsized, use -l to show in full.

7. Set any of the following values in
#/etc/libvirt/virtlockd.conf

log_level = -3
max_clients = -1024


Starting the virtlockd service fails:
systemctl status virtlockd.service 
virtlockd.service - Virtual machine lock manager
   Loaded: loaded (/usr/lib/systemd/system/virtlockd.service; static)
   Active: failed (Result: exit-code) since Mon 2015-06-01 17:27:15 HKT; 5s ago
     Docs: man:virtlockd(8)
           http://libvirt.org
  Process: 6487 ExecStart=/usr/sbin/virtlockd $VIRTLOCKD_ARGS (code=exited, status=1/FAILURE)
 Main PID: 6487 (code=exited, status=1/FAILURE)

Jun 01 17:27:14 server.englab.nay.redhat.com systemd[1]: Started Virtual machine lock manager.
Jun 01 17:27:15 server.englab.nay.redhat.com virtlockd[6487]: 2015-06-01 09:27:15.053+0000: 6487: info : libvirt v...om)
Jun 01 17:27:15 server.englab.nay.redhat.com virtlockd[6487]: 2015-06-01 09:27:15.053+0000: 6487: error : main:123...onf
Jun 01 17:27:15 server.englab.nay.redhat.com systemd[1]: virtlockd.service: main process exited, code=exited, sta...LURE
Jun 01 17:27:15 server.englab.nay.redhat.com systemd[1]: Unit virtlockd.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.



Please confirm whether the results of step 4 are expected, and whether the above steps are enough to verify this bug.

thanks

Comment 8 Michal Privoznik 2015-06-01 14:43:45 UTC
(In reply to vivian zhang from comment #7)

> 4. when configure max_requests and keepalive_interval to negative value,
> such as -1, -5, -20, libvirtd service start success
> 
> check patch, these two parameters missing to modify, is this expected result?
> 
> #git show ca4f9518b8d381f8f14e37dc4d0367d33d5f3737

>  
>      GET_CONF_INT(conf, filename, max_requests);  <---- notice this line

>      GET_CONF_INT(conf, filename, keepalive_interval);  <---- notice this
> line

This is correct. As the comments in libvirtd.conf say, max_requests is not used anywhere and therefore not enforced. keepalive_interval can be a negative value, in which case keepalive is disabled.

> please help me confirm whether step 4 result are expected? and are the above
> steps enough to verify this bug?

Yes, everything seems to be working.

> 
> thanks

You are welcome.
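
A minimal sketch of the distinction (assumed semantics based on comment 8, not libvirt source): a negative keepalive_interval is a legitimate setting meaning "keepalive disabled", so that knob has to stay signed, while the max_* limits have no meaningful negative value and are read as unsigned.

#include <stdio.h>

/* interval stays signed: a negative value means "keepalive disabled". */
static void applyKeepalive(int interval, unsigned int count)
{
    if (interval < 0) {
        printf("keepalive disabled\n");   /* e.g. keepalive_interval = -1 */
        return;
    }
    printf("probe every %d s, drop client after %u missed probes\n",
           interval, count);
}

int main(void)
{
    applyKeepalive(-1, 5);  /* disabled */
    applyKeepalive(5, 5);   /* normal keepalive */
    return 0;
}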

Comment 9 vivian zhang 2015-06-02 01:27:14 UTC
According to comment #8, setting this bug to VERIFIED.

Comment 11 errata-xmlrpc 2015-11-19 05:55:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

