Because nova-manage imports rpc, it will by default try to import kombu, since that is the upstream default. A quick fix would be to set qpid as the default, but nova-dist.conf may gain other significant settings in future, so the most general solution is to have the nova-* commands read nova-dist.conf too. It's a pity /etc/nova/nova.conf can't simply include nova-dist.conf
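The "read nova-dist.conf too" approach can be sketched as follows. This is an illustrative mock-up of the config-file search, not the exact oslo/nova cfg.py code: the helper name, search directories, and signatures are assumptions; the key point is that the dist file is prepended so files read later (the admin's nova.conf) override it.

```python
import os


def find_config_files(project, prog, extension='.conf'):
    """Sketch: build the default config-file list with the packaged
    dist defaults first, so later files override earlier ones.
    Directory list and helper are illustrative, not the real cfg.py."""
    cfg_dirs = ['~/.%s' % project, '~', '/etc/%s' % project, '/etc']

    def search_dirs(dirs, basename, suffix=''):
        # Return the first existing file named <basename><suffix><extension>.
        for d in dirs:
            path = os.path.join(os.path.expanduser(d),
                                '%s%s%s' % (basename, suffix, extension))
            if os.path.exists(path):
                return path
        return None

    config_files = []
    # Dist defaults first => lowest precedence (later files win).
    config_files.append(search_dirs(['/usr/share/%s/' % project],
                                    project, '-dist'))
    config_files.append(search_dirs(cfg_dirs, project))
    config_files.append(search_dirs(cfg_dirs, prog))
    return [f for f in config_files if f is not None]
```

With this ordering, /usr/share/nova/nova-dist.conf supplies the packaging defaults (e.g. rpc_backend=impl_qpid) and /etc/nova/nova.conf only needs to contain local overrides.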
Created attachment 695652 [details] Auto select dist config at correct precedence
If nova-manage --help works, then this is OK.
Release notes for snapshot2: if not using packstack, work around this issue by uncommenting the rpc_backend and sql_connection settings in /etc/nova/nova.conf after the packages are installed
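One way to apply that workaround from a shell (a sketch, assuming the two settings are present but commented out with a leading `#` in /etc/nova/nova.conf; check the file first and adjust as needed):

```shell
# Uncomment rpc_backend and sql_connection if they are commented out.
sudo sed -i -e 's/^#\s*\(rpc_backend\s*=\)/\1/' \
            -e 's/^#\s*\(sql_connection\s*=\)/\1/' /etc/nova/nova.conf

# Verify both settings are now active:
grep -E '^(rpc_backend|sql_connection)' /etc/nova/nova.conf
```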
VERIFIED. I tested this by manual configuration.

=> Version info:
#-----------------------------------------------------------#
[tuser1@interceptor ~(keystone_user1)]$ rpm -q python-nova openstack-nova ; arch; cat /etc/redhat-release
python-nova-2012.2.3-1.el6ost.noarch
openstack-nova-2012.2.3-1.el6ost.noarch
x86_64
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[tuser1@interceptor ~(keystone_user1)]$
#-----------------------------------------------------------#

=> Verification info:
#-----------------------------------------------------------#
[tuser1@interceptor ~(keystone_user1)]$ sudo grep rpc_backend /etc/nova/nova.conf
rpc_backend=nova.openstack.common.rpc.impl_qpid
[tuser1@interceptor ~(keystone_user1)]$
#-----------------------------------------------------------#
[tuser1@interceptor ~(keystone_user1)]$ nova-manage --help
Usage: nova-manage [options]

Options:
  -h, --help            show this help message and exit
  --bandwidth_poll_interval=BANDWIDTH_POLL_INTERVAL, --bandwith_poll_interval=BANDWIDTH_POLL_INTERVAL
                        interval to pull bandwidth usage info
  --default_floating_pool=DEFAULT_FLOATING_POOL
                        Default pool for floating ips
  --ca_file=CA_FILE     Filename of root CA
  --sql_connection_debug=SQL_CONNECTION_DEBUG
                        Verbosity of SQL debugging information. 0=None, 100=Everything
  --fixed_range=FIXED_RANGE
                        Fixed IP address block
  --compute_topic=COMPUTE_TOPIC
                        the topic compute nodes listen on
  --glance_port=GLANCE_PORT
                        default glance port
  . . . .
#-----------------------------------------------------------#

Additional Info:
===============
=> Ensure the fix specified in comment #1 is in the python-nova pkg:
#-----------------------------------------------------------#
[tuser1@interceptor ~(keystone_user1)]$ sudo grep config_files.append -A1 /usr/lib/python2.6/site-packages/nova/openstack/common/cfg.py
    config_files.append(_search_dirs(['/usr/share/%s/' % project], project, '-dist%s' % extension))
    config_files.append(_search_dirs(cfg_dirs, project, extension))
    config_files.append(_search_dirs(cfg_dirs, prog, extension))
[tuser1@interceptor ~(keystone_user1)]$
#-----------------------------------------------------------#

=> Contents of nova-dist.conf:
#-----------------------------------------------------------#
[tuser1@interceptor ~(keystone_user1)]$ sudo cat /usr/share/nova/nova-dist.conf
[DEFAULT]
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp
volumes_dir = /etc/nova/volumes
dhcpbridge = /usr/bin/nova-dhcpbridge
dhcpbridge_flagfile = /usr/share/nova/nova-dist.conf
force_dhcp_release = True
injected_network_template = /usr/share/nova/interfaces.template
libvirt_nonblocking = True
libvirt_inject_partition = -1
network_manager = nova.network.manager.FlatDHCPManager
iscsi_helper = tgtadm
sql_connection = mysql://nova:nova@localhost/nova
sql_max_retries = -1
compute_driver = libvirt.LibvirtDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
rpc_backend = nova.openstack.common.rpc.impl_qpid
rootwrap_config = /etc/nova/rootwrap.conf
resume_guests_state_on_host_boot=True

[keystone_authtoken]
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
[tuser1@interceptor ~(keystone_user1)]$
#-----------------------------------------------------------#
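The reason the dist file must come first in cfg.py's list is standard INI-stacking behavior: when several config files are parsed in order, a later file overrides keys set by an earlier one. A small self-contained demonstration (file names here are temporary stand-ins for /usr/share/nova/nova-dist.conf and /etc/nova/nova.conf):

```python
import configparser
import tempfile

# Stand-in for nova-dist.conf: packaged defaults.
dist = tempfile.NamedTemporaryFile('w', suffix='-dist.conf', delete=False)
dist.write('[DEFAULT]\n'
           'rpc_backend = nova.openstack.common.rpc.impl_qpid\n'
           'state_path = /var/lib/nova\n')
dist.close()

# Stand-in for /etc/nova/nova.conf: local override of one key.
local = tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False)
local.write('[DEFAULT]\n'
            'rpc_backend = nova.openstack.common.rpc.impl_kombu\n')
local.close()

cfg = configparser.ConfigParser()
cfg.read([dist.name, local.name])      # dist first, local last

print(cfg['DEFAULT']['rpc_backend'])   # the local value wins
print(cfg['DEFAULT']['state_path'])    # the dist default survives
```

Keys the admin never touches keep their packaged defaults, while anything set in the locally-read file takes precedence.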
*** Bug 903671 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-0593.html