Description of problem:
I'm creating new tripleo-heat-templates to support NetApp deployments. The description for each parameter is a few sentences long and there are a number of parameters - the point is that I increased the size of each Heat template by a few hundred characters. When I run `instack-deploy-overcloud --tuskar`, I get this error:

InternalServerError: (DataError) (1406, "Data too long for column 'contents' at row 1")

Full program run here: http://paste.fedoraproject.org/223145/14319867/

I confirmed that removing the long descriptions entirely made the problem disappear and everything ran normally.

Version-Release number of selected component (if applicable):
Delorean nightly builds

How reproducible:

Steps to Reproduce:
1. Make a longer Heat template in /usr/share/openstack-tripleo-heat-templates
2. Run instack-deploy-overcloud --tuskar
3. It fails

Actual results:
I see the aforementioned error message. Full program run: http://paste.fedoraproject.org/223145/14319867/

Expected results:
Tuskar ingests the template and begins the Heat deployment

Additional info:
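A back-of-envelope sketch of why longer descriptions trip this error: MySQL's TEXT type (what SQLAlchemy's plain Text renders to) caps out at 2**16 - 1 bytes, while LONGTEXT allows 2**32 - 1. The byte counts below are hypothetical, just to illustrate the overflow; the limits themselves are MySQL's documented caps.

```python
# Back-of-envelope check of the MySQL column limits involved.
MYSQL_TEXT_MAX = 2**16 - 1        # 65,535 bytes: what a TEXT column holds
MYSQL_LONGTEXT_MAX = 2**32 - 1    # ~4 GiB: what a LONGTEXT column holds

# Hypothetical size of what ends up in one stored_file 'contents' value
# (templates plus a few hundred extra characters of descriptions each).
contents_bytes = 48_000 + 30_000

print(contents_bytes <= MYSQL_TEXT_MAX)      # False -> MySQL error 1406
print(contents_bytes <= MYSQL_LONGTEXT_MAX)  # True -> plenty of headroom
```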
Created attachment 1027737 [details]
brain dump for debugging

General notes I made whilst debugging (e.g. how I connected to MariaDB to drop the database, recreate it, run the migration, etc.), in case they are useful for anyone.
Ryan, thanks for reporting this. It is indeed a nasty one and I spent longer than anticipated on it today. The fix for this upstream is at [1] (well, imo anyway ;) ). The py27 tests are failing (as they were locally for me) but the fix works, as I tested it on a live environment. I haven't investigated the test failures further yet; it looks like a venv-specific thing, since they fail on a box with no actual MySQL backend.

It isn't the length of any given description per se (I tried triggering it with that first) but the overall length of 'contents' for the stored_file object. That column is created as the 'Text' type (see [2] and the initial migration at [3]).

I am away tomorrow until mid next week, so I will pick this up on my return unless someone else beats me to it (wrt getting it merged upstream and then you consuming it - not sure what your setup is there). I attached some notes which may help if you need to apply this to your setup in the meantime.

[1] https://review.openstack.org/184481
[2] https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py#L228
[3] https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/migrate_repo/versions/002_add_stored_file.py#L34
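For anyone who needs to carry this locally before consuming the merged fix, the usual SQLAlchemy pattern for a "LONGTEXT on MySQL, plain Text elsewhere" column type looks something like the sketch below. The `LongText` name and `tuskar.db.sqlalchemy.types` module match what the fix uses, but treat this as an illustrative sketch of the technique, not the exact upstream diff (which is at https://review.openstack.org/184481).

```python
from sqlalchemy import types
from sqlalchemy.dialects import mysql


class LongText(types.TypeDecorator):
    """A Text column that renders as LONGTEXT on MySQL.

    Plain sqlalchemy.Text maps to MySQL's TEXT type, which holds at most
    2**16 - 1 bytes and raises error 1406 past that; LONGTEXT raises the
    cap to 2**32 - 1 bytes. Other backends keep ordinary Text.
    """

    impl = types.Text
    cache_ok = True

    def load_dialect_impl(self, dialect):
        if dialect.name == 'mysql':
            return dialect.type_descriptor(mysql.LONGTEXT())
        return self.impl
```

The model column can then be declared as `contents = Column(LongText(), nullable=False)`, and a migration altering the existing column to this type completes the fix.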
Sorry, I missed the link earlier: the upstream fix is at https://review.openstack.org/184481
Looks like the right fix to me - I'll patch it into my environment on my next Cinder run and see if it works out. Thanks for working on this so quickly!
The fix here has merged and I confirmed on my undercloud (deployed from Friday, or at most last Thursday's poodle) that it is applied (so at some point we merged it downstream too):

[root@instack ~]# cat /usr/lib/python2.7/site-packages/tuskar/db/sqlalchemy/models.py | grep LongText
from tuskar.db.sqlalchemy.types import LongText
    contents = Column(LongText(), nullable=False)

Moving to ON_QA.