Bug 1002889 - Failure to install openstack via RDO on Fedora 19
Status: CLOSED NOTABUG
Product: Fedora
Component: openstack-keystone
Version: 19
Hardware: x86_64 Linux
Severity: high
Assigned To: Alan Pevec
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-08-30 03:27 EDT by Boris Derzhavets
Modified: 2013-09-03 06:05 EDT (History)
Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-09-03 06:05:19 EDT

Attachments:
/var/log/keystone/keystone.log (356.44 KB, text/plain), 2013-08-30 06:11 EDT, Boris Derzhavets
/var/log/keystone/keystone_first_run.log (39.61 KB, text/plain), 2013-08-30 08:05 EDT, Boris Derzhavets

Description Boris Derzhavets 2013-08-30 03:27:20 EDT
Description of problem:

During the second run:
$sudo packstack --answer-file=/home/boris/packstack-answers-20130829-222817.txt



Adding Nagios server manifest entries...               [ DONE ]
Adding Nagios host manifest entries...                 [ DONE ]
Adding post install manifest entries...                [ DONE ]
Installing Dependencies...                             [ DONE ]
Copying Puppet modules and manifests...                [ DONE ]
Applying Puppet manifests...
Applying 192.168.1.54_prescript.pp
192.168.1.54_prescript.pp :                                          [ DONE ]
Applying 192.168.1.54_mysql.pp
Applying 192.168.1.54_qpid.pp
192.168.1.54_mysql.pp :                                              [ DONE ]
192.168.1.54_qpid.pp :                                               [ DONE ]
Applying 192.168.1.54_keystone.pp
Applying 192.168.1.54_glance.pp
Applying 192.168.1.54_cinder.pp
                                                                                            [ ERROR ]

ERROR : Error during puppet run : Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: Could not evaluate: Execution of '/usr/bin/keystone --endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 1: Unable to communicate with identity service: {"error": {"message": "An unexpected error prevented the server from fulfilling your request. (OperationalError) (1045, \"Access denied for user 'keystone_admin'@'Server19' (using password: YES)\") None None", "code": 500, "title": "Internal Server Error"}}. (HTTP 500)

OpenStack worked fine when installed two weeks ago per:
https://gist.github.com/tuxdna/6047147

The most recent "yum update" caused it to fail to manage instances. A fresh F19 + OpenStack install via RDO shows the old issues fixed, but a new problem appears.
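The "Access denied for user 'keystone_admin'" message (MySQL error 1045) means the credentials Keystone reads from its config do not match any grant in MySQL. A minimal sketch of where those credentials come from, using a hypothetical keystone.conf fragment (the connection-string values here are placeholders, not the reporter's actual configuration):

```shell
# Sketch: keystone-manage db_sync reads its DB credentials from the
# [sql] connection line in /etc/keystone/keystone.conf. The values
# below are hypothetical placeholders written to a sample file.
cat > /tmp/keystone.conf.sample <<'EOF'
[sql]
connection = mysql://keystone_admin:PW_PLACEHOLDER@127.0.0.1/keystone
EOF

# Extract the connection string the way one might while debugging:
conn=$(awk -F' = ' '/^connection/ {print $2}' /tmp/keystone.conf.sample)
echo "$conn"
```

If the user, password, or host in this string does not match a MySQL grant entry, every DB operation fails with error 1045 exactly as in the output above.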



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
$ sudo yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.rpm
$ sudo yum install -y openstack-packstack
$ packstack --allinone

Actual results:

Failure.

Expected results:

Dashboard completely functional.

Additional info:
Comment 1 Boris Derzhavets 2013-08-30 04:46:40 EDT
Steps to Reproduce:

$ sudo yum update
$ sudo yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.rpm
$ sudo yum install -y openstack-packstack
$ packstack --allinone
Comment 2 Boris Derzhavets 2013-08-30 06:11:05 EDT
Created attachment 792111 [details]
/var/log/keystone/keystone.log
Comment 3 Boris Derzhavets 2013-08-30 07:59:02 EDT
First run :-

$ sudo yum update
$ sudo yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.rpm
$ sudo yum install -y openstack-packstack
$ packstack --allinone
. . . . . . 

Applying Puppet manifests...
Applying 192.168.1.54_prescript.pp
192.168.1.54_prescript.pp :                                          [ DONE ]
Applying 192.168.1.54_mysql.pp
Applying 192.168.1.54_qpid.pp
192.168.1.54_mysql.pp :                                              [ DONE ]
192.168.1.54_qpid.pp :                                               [ DONE ]
Applying 192.168.1.54_keystone.pp
Applying 192.168.1.54_glance.pp
Applying 192.168.1.54_cinder.pp
                                                                                            [ ERROR ]

ERROR : Error during puppet run : Notice: /Stage[main]/Keystone/Exec[keystone-manage db_sync]/returns: sqlalchemy.exc.OperationalError: (OperationalError) (1045, "Access denied for user 'keystone_admin'@'ServerUbuntu13' (using password: YES)") None None
Please check log file /var/tmp/packstack/20130830-154956-OEvn3i/openstack-setup.log for more information
Comment 4 Boris Derzhavets 2013-08-30 08:05:48 EDT
Created attachment 792141 [details]
/var/log/keystone/keystone_first_run.log
Comment 5 Boris Derzhavets 2013-08-31 11:30:52 EDT
ServerUbuntu13 is just a confusing hostname for the F19 box.

[root@ServerUbuntu13 ~]# uname -a
Linux ServerUbuntu13 3.10.9-200.fc19.x86_64 #1 SMP Wed Aug 21 19:27:58 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

[root@ServerUbuntu13 ~]# keystone-manage db_sync
Traceback (most recent call last):
  File "/bin/keystone-manage", line 28, in <module>
    cli.main(argv=sys.argv, config_files=config_files)
  File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 175, in main
    CONF.command.cmd_class.main()
  File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 54, in main
    driver.db_sync()
  File "/usr/lib/python2.7/site-packages/keystone/identity/backends/sql.py", line 156, in db_sync
    migration.db_sync()
  File "/usr/lib/python2.7/site-packages/keystone/common/sql/migration.py", line 49, in db_sync
    current_version = db_version()
  File "/usr/lib/python2.7/site-packages/keystone/common/sql/migration.py", line 63, in db_version
    return db_version_control(0)
  File "/usr/lib/python2.7/site-packages/keystone/common/sql/migration.py", line 68, in db_version_control
    versioning_api.version_control(CONF.sql.connection, repo_path, version)
  File "<string>", line 2, in version_control
  File "/usr/lib/python2.7/site-packages/migrate/versioning/util/__init__.py", line 159, in with_engine
    return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/migrate/versioning/api.py", line 250, in version_control
    ControlledSchema.create(engine, repository, version)
  File "/usr/lib/python2.7/site-packages/migrate/versioning/schema.py", line 139, in create
    table = cls._create_table_version(engine, repository, version)
  File "/usr/lib/python2.7/site-packages/migrate/versioning/schema.py", line 180, in _create_table_version
    if not table.exists():
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/schema.py", line 599, in exists
    self.name, schema=self.schema)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1594, in run_callable
    with self.contextual_connect() as conn:
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1661, in contextual_connect
    self.pool.connect(),
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 272, in connect
    return _ConnectionFairy(self).checkout()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 425, in __init__
    rec = self._connection_record = pool._do_get()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 778, in _do_get
    con = self._create_connection()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 225, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 318, in __init__
    self.connection = self.__connect()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 368, in __connect
    connection = self.__pool._creator()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 80, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 283, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/lib64/python2.7/site-packages/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 187, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
sqlalchemy.exc.OperationalError: (OperationalError) (1045, "Access denied for user 'keystone_admin'@'ServerUbuntu13' (using password: YES)") None None
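MySQL grants are host-specific, which is why the hostname in the 1045 error matters: 'keystone_admin'@'localhost' and 'keystone_admin'@'ServerUbuntu13' are separate account entries. A hedged sketch of the statements one might issue to add the missing host grant (PW_PLACEHOLDER is an assumption; the real value lives in the packstack answer file):

```shell
# Sketch: generate GRANT statements covering both localhost and the
# box's own hostname, since MySQL treats each User/Host pair as a
# separate account entry. PW_PLACEHOLDER is hypothetical.
HOST=$(hostname)
SQL=$(cat <<EOF
GRANT ALL ON keystone.* TO 'keystone_admin'@'localhost' IDENTIFIED BY 'PW_PLACEHOLDER';
GRANT ALL ON keystone.* TO 'keystone_admin'@'${HOST}' IDENTIFIED BY 'PW_PLACEHOLDER';
FLUSH PRIVILEGES;
EOF
)
# Review the generated statements, then feed them to: mysql -u root -p
echo "$SQL"
```

This only illustrates the host-specific grant mechanism; the openstack-db reset suggested below is the simpler route when the DB state is stale.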
Comment 6 Boris Derzhavets 2013-08-31 16:50:16 EDT
The workaround suggested in
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=891700
also doesn't work for me.
Comment 7 Pádraig Brady 2013-08-31 20:57:34 EDT
It seems something is wrong with the user/host setup in the keystone DB.
Perhaps you could reset it manually with:

openstack-db --service keystone --drop
openstack-db --service keystone --init

Alternatively, you might run only the first command and let packstack do the init?
Comment 8 Boris Derzhavets 2013-09-03 05:28:20 EDT
On a new fresh install the issue seems to be gone.
Comment 9 Pádraig Brady 2013-09-03 06:05:19 EDT
OK. Given that it's no longer an issue, that it's probably not a keystone issue anyway, and that openstack-db can be used to reset such stale DB state, I'm closing this for now. Thanks.
