Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1783070

Summary: ip_address field in Octavia's vip table is varchar(36) when it should be varchar(64)
Product: Red Hat OpenStack
Reporter: Mark Jones <marjones>
Component: openstack-octavia
Assignee: Assaf Muller <amuller>
Status: CLOSED WONTFIX
QA Contact: Bruna Bonguardo <bbonguar>
Severity: high
Priority: unspecified
Version: 13.0 (Queens)
CC: cgoncalves, ihrachys, lpeer, majopela, scohen
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2019-12-18 16:05:16 UTC
Type: Bug
Bug Depends On: 1780132    
Bug Blocks:    

Description Mark Jones 2019-12-12 23:28:04 UTC
Description of problem:

When creating a load balancer with an IPv6 VIP address longer than 36 characters, the command fails with an exception:

$ openstack loadbalancer create --vip-subnet-id lbsubnet --name lb1
(pymysql.err.DataError) (1406, u"Data too long for column 'ip_address' at row 1") [SQL: u'UPDATE vip SET ip_address=%(ip_address)s, port_id=%(port_id)s, subnet_id=%(subnet_id)s, network_id=%(network_id)s WHERE vip.load_balancer_id = %(load_balancer_id_1)s'] [parameters: {'network_id': u'2cebd086-e10d-4093-858d-860b2596c05b', 'subnet_id': u'2714d677-b11d-4f31-a1d4-fd3a5f889ba5', 'ip_address': u'fd00:1234:1234:1234:f816:3eff:fe42:3b4d', 'port_id': u'e74ec200-cacb-48fe-b5da-30381a48c881', u'load_balancer_id_1': '96fcbe32-9dad-444a-b309-982d484cd625'}] (Background on this error at: http://sqlalche.me/e/9h9h) (HTTP 500) (Request-ID: req-85db8752-5a8f-499c-b64c-4ae3a8c0da4d)
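For reference, the failure is purely a length issue: a fully expanded IPv6 address is 39 characters, which does not fit in varchar(36). A quick check with Python's standard ipaddress module (not part of the original report) illustrates this for the VIP from the traceback above:

```python
import ipaddress

# VIP address from the traceback; none of its groups are zero,
# so it cannot be shortened with "::".
vip = ipaddress.ip_address("fd00:1234:1234:1234:f816:3eff:fe42:3b4d")

print(len(str(vip)))      # 39 characters -> exceeds varchar(36)
print(len(vip.exploded))  # the fully expanded form of any IPv6 address is 39
```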

Examining the vip table in the Octavia database shows that the ip_address column is varchar(36):

MariaDB [octavia]> describe vip;
+------------------+-------------+------+-----+---------+-------+
| Field            | Type        | Null | Key | Default | Extra |
+------------------+-------------+------+-----+---------+-------+
| load_balancer_id | varchar(36) | NO   | PRI | NULL    |       |
| ip_address       | varchar(36) | YES  |     | NULL    |       |
| port_id          | varchar(36) | YES  |     | NULL    |       |
| subnet_id        | varchar(36) | YES  |     | NULL    |       |
| network_id       | varchar(36) | YES  |     | NULL    |       |
| qos_policy_id    | varchar(36) | YES  |     | NULL    |       |
+------------------+-------------+------+-----+---------+-------+


There is an alembic migration designed to increase the ip_address field to 64 characters, and it does appear to run during octavia-db-manage on deployment:

...
INFO  [alembic.runtime.migration] Running upgrade 4a6ec0ab7284 -> 82b9402e71fd, Update vip address size
...

However, when capturing a SQL trace while running octavia-db-manage upgrade head manually against a new octavia database, I do not see the SQL that would increase the ip_address field size.
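For context, a column-widening migration of this kind would normally use Alembic's op.alter_column. The following is only an illustrative sketch of that standard pattern, reusing the revision IDs from the upgrade log above; it is not the contents of the actual upstream 82b9402e71fd migration file:

```python
"""Update vip address size (illustrative sketch, not the upstream file)."""
import sqlalchemy as sa
from alembic import op

# Revision identifiers, taken from the octavia-db-manage log above.
revision = "82b9402e71fd"
down_revision = "4a6ec0ab7284"


def upgrade():
    # Widen vip.ip_address from varchar(36) to varchar(64) so that
    # fully expanded IPv6 addresses (39 chars) and scoped forms fit.
    op.alter_column(
        "vip",
        "ip_address",
        existing_type=sa.String(36),
        type_=sa.String(64),
        existing_nullable=True,
    )
```

If this migration runs but no corresponding ALTER TABLE appears in the SQL trace, the column is left at varchar(36), matching the behavior described above.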


Version-Release number of selected component (if applicable):
osp13z9

How reproducible:
Every time


Steps to Reproduce:
1. Deploy a configuration with Octavia enabled in 13z9
2. Create an IPv6 /64 subnet whose prefix contains no zero groups that could be compressed with "::", e.g. fd00:1234:1234:1234::/64
3. Create a loadbalancer using that subnet
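As a sketch of step 2, Python's standard ipaddress module can confirm that a host address in such a subnet stays at the full 39-character length (the subnet is the one given in the steps; the host address is illustrative, taken from the error output above):

```python
import ipaddress

# Subnet from the reproduction steps; its prefix groups are all non-zero.
subnet = ipaddress.ip_network("fd00:1234:1234:1234::/64")

# A typical EUI-64-derived host address in that subnet has no zero
# groups at all, so it cannot be compressed below 39 characters.
host = ipaddress.ip_address("fd00:1234:1234:1234:f816:3eff:fe42:3b4d")

print(host in subnet)  # True
print(len(str(host)))  # 39 -> exceeds the varchar(36) column
```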

Actual results:

Exception occurs during loadbalancer creation

Expected results:

Creation of the load balancer succeeds with any valid IPv6 address.


Additional info:

Comment 2 Carlos Goncalves 2019-12-13 16:46:40 UTC
This issue was fixed by patch https://review.opendev.org/#/c/633572/ and is available in OSP 15+. However, the patch cannot be backported to OSP 13 because it requires changes to the database schema.

On a positive note, there's work underway to backport Octavia Train to OSP 13 which would fix the issue herein reported as well as others and bring in new features. Please see https://bugzilla.redhat.com/show_bug.cgi?id=1780132.

Comment 4 Carlos Goncalves 2019-12-17 14:41:16 UTC
Engineering quickly set up a lab environment (OpenStack Queens + Octavia via Devstack) to reproduce this issue. As expected, we were able to reproduce it. What follows are steps to reproduce the issue and to work around it in a way that should not conflict with a future OSP/Octavia upgrade. Please note that this procedure may require a Support Exception.

1. Deployed OpenStack Queens with Octavia Queens via Devstack.
2. Reproduced the bug the customer is seeing:

$ openstack loadbalancer create --vip-subnet-id ipv6-expanded-subnet --vip-address FD4C:992E:75E5:21FA:F1F2:F1F2:F1F2:F1F2 --name lb1-ipv6                                                                        
(pymysql.err.DataError) (1406, u"Data too long for column 'ip_address' at row 1") [SQL: u'INSERT INTO vip (load_balancer_id, ip_address, port_id, subnet_id, network_id, qos_policy_id) VALUES (%(load_balancer_id)s, %(ip_address)s, %(port_id)s, %(subnet_id)s, %(network_id)s, %(qos_policy_id)s)'] [parameters: {'qos_policy_id': None, 'subnet_id': u'7eb2ef23-b385-4460-bb4d-beee5d38edc2', 'ip_address': u'FD4C:992E:75E5:21FA:F1F2:F1F2:F1F2:F1F2', 'network_id': u'9cc4ac03-b5f2-473c-b50f-9d631a34ae6a', 'load_balancer_id': '6898e7bd-9289-4dfc-b39b-f97c6905d98f', 'port_id': None}] (Background on this error at: http://sqlalche.me/e/9h9h) (HTTP 500) (Request-ID: req-3b68355b-d430-4cb3-8a07-97b86ee931a2)


3. Confirmed the "ip_address" column in the "vip" table in the "octavia" database is varchar(36):

$ mysql -u root -ppassword octavia

MariaDB [octavia]> describe vip;
+------------------+-------------+------+-----+---------+-------+
| Field            | Type        | Null | Key | Default | Extra |
+------------------+-------------+------+-----+---------+-------+
| load_balancer_id | varchar(36) | NO   | PRI | NULL    |       |
| ip_address       | varchar(36) | YES  |     | NULL    |       |
| port_id          | varchar(36) | YES  |     | NULL    |       |
| subnet_id        | varchar(36) | YES  |     | NULL    |       |
| network_id       | varchar(36) | YES  |     | NULL    |       |
| qos_policy_id    | varchar(36) | YES  |     | NULL    |       |
+------------------+-------------+------+-----+---------+-------+
6 rows in set (0.01 sec)


4. Modified the "ip_address" column to varchar(64) in the "vip" table (similar to the alembic migration in the non-backportable patch):

MariaDB [octavia]> ALTER TABLE vip MODIFY COLUMN ip_address varchar(64);
Query OK, 0 rows affected (0.09 sec)
Records: 0  Duplicates: 0  Warnings: 0


5. Confirmed the column type was updated:

MariaDB [octavia]> describe vip;
+------------------+-------------+------+-----+---------+-------+
| Field            | Type        | Null | Key | Default | Extra |
+------------------+-------------+------+-----+---------+-------+
| load_balancer_id | varchar(36) | NO   | PRI | NULL    |       |
| ip_address       | varchar(64) | YES  |     | NULL    |       |
| port_id          | varchar(36) | YES  |     | NULL    |       |
| subnet_id        | varchar(36) | YES  |     | NULL    |       |
| network_id       | varchar(36) | YES  |     | NULL    |       |
| qos_policy_id    | varchar(36) | YES  |     | NULL    |       |
+------------------+-------------+------+-----+---------+-------+
6 rows in set (0.01 sec)


6. Successfully created a load balancer with the same IPv6 VIP address as earlier:

$ openstack loadbalancer create --vip-subnet-id ipv6-expanded-subnet --vip-address FD4C:992E:75E5:21FA:F1F2:F1F2:F1F2:F1F2 --name lb1-ipv6
+---------------------+-----------------------------------------+
| Field               | Value                                   |
+---------------------+-----------------------------------------+
| admin_state_up      | True                                    |
| created_at          | 2019-12-17T13:59:06                     |
| description         |                                         |
| flavor              |                                         |
| id                  | a4d012a5-3e8f-4b9f-a9bf-570def24e7ad    |
| listeners           |                                         |
| name                | lb1-ipv6                                |
| operating_status    | OFFLINE                                 |
| pools               |                                         |
| project_id          | 95d034c6d7c44f0a993310414e75ed5b        |
| provider            | octavia                                 |
| provisioning_status | PENDING_CREATE                          |
| updated_at          | None                                    |
| vip_address         | fd4c:992e:75e5:21fa:f1f2:f1f2:f1f2:f1f2 |
| vip_network_id      | 9cc4ac03-b5f2-473c-b50f-9d631a34ae6a    |
| vip_port_id         | 2061811c-fe69-4479-a37d-1df32fc3cd80    |
| vip_qos_policy_id   | None                                    |
| vip_subnet_id       | 7eb2ef23-b385-4460-bb4d-beee5d38edc2    |
+---------------------+-----------------------------------------+


7. Confirmed the load balancer is ACTIVE and ONLINE:

$ openstack loadbalancer show lb1-ipv6                                                                                                                                                                            
+---------------------+-----------------------------------------+
| Field               | Value                                   |
+---------------------+-----------------------------------------+
| admin_state_up      | True                                    |
| created_at          | 2019-12-17T13:59:06                     |
| description         |                                         |
| flavor              |                                         |
| id                  | a4d012a5-3e8f-4b9f-a9bf-570def24e7ad    |
| listeners           |                                         |
| name                | lb1-ipv6                                |
| operating_status    | ONLINE                                  |
| pools               |                                         |
| project_id          | 95d034c6d7c44f0a993310414e75ed5b        |
| provider            | octavia                                 |
| provisioning_status | ACTIVE                                  |
| updated_at          | 2019-12-17T13:59:53                     |
| vip_address         | fd4c:992e:75e5:21fa:f1f2:f1f2:f1f2:f1f2 |
| vip_network_id      | 9cc4ac03-b5f2-473c-b50f-9d631a34ae6a    |
| vip_port_id         | 2061811c-fe69-4479-a37d-1df32fc3cd80    |
| vip_qos_policy_id   | None                                    |
| vip_subnet_id       | 7eb2ef23-b385-4460-bb4d-beee5d38edc2    |
+---------------------+-----------------------------------------+

8. Confirmed the "vip" table contains a record with the expected VIP address of the load balancer:

$ mysql -u root -ppassword octavia -e 'select * from vip;'
+--------------------------------------+-----------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+---------------+
| load_balancer_id                     | ip_address                              | port_id                              | subnet_id                            | network_id                           | qos_policy_id |
+--------------------------------------+-----------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+---------------+
| a4d012a5-3e8f-4b9f-a9bf-570def24e7ad | fd4c:992e:75e5:21fa:f1f2:f1f2:f1f2:f1f2 | 2061811c-fe69-4479-a37d-1df32fc3cd80 | 7eb2ef23-b385-4460-bb4d-beee5d38edc2 | 9cc4ac03-b5f2-473c-b50f-9d631a34ae6a | NULL          |
+--------------------------------------+-----------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+---------------+


9. To ensure this modification to the "vip" table schema does not conflict with a future upgrade to a newer Octavia release, we updated the Octavia code to the Train release and ran a database upgrade:

$ octavia-db-manage upgrade head                                                                                                                                                                                   
INFO alembic.runtime.migration [-] Context impl MySQLImpl.
INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
INFO alembic.runtime.migration [-] Running upgrade 0aee2b450512 -> 10d38216ad34, add timestamps to amphora
INFO alembic.runtime.migration [-] Running upgrade 10d38216ad34 -> 034756a182a2, amphora add image id
INFO alembic.runtime.migration [-] Running upgrade 034756a182a2 -> ba35e0fb88e1, add backup field to member
INFO alembic.runtime.migration [-] Running upgrade ba35e0fb88e1 -> 0fd2c131923f, add timeout fields to listener
INFO alembic.runtime.migration [-] Running upgrade 0fd2c131923f -> 0f242cf02c74, Add provider column
INFO alembic.runtime.migration [-] Running upgrade 0f242cf02c74 -> ebbcc72b4e5e, Add Octavia owned VIP column to VIP table
INFO alembic.runtime.migration [-] Running upgrade ebbcc72b4e5e -> 76aacf2e176c, Extend some necessary fields for udp support
INFO alembic.runtime.migration [-] Running upgrade 76aacf2e176c -> 55874a4ceed6, add l7policy action redirect prefix
INFO alembic.runtime.migration [-] Running upgrade 55874a4ceed6 -> 80dba23a159f, tags_support
INFO alembic.runtime.migration [-] Running upgrade 80dba23a159f -> 4f65b4f91c39, amphora add flavor id
INFO alembic.runtime.migration [-] Running upgrade 4f65b4f91c39 -> b9c703669314, add flavor and flavor_profile table
INFO alembic.runtime.migration [-] Running upgrade b9c703669314 -> 211982b05afc, add_flavor_id_to_lb
INFO alembic.runtime.migration [-] Running upgrade 211982b05afc -> 11e4bb2bb8ef, Fix_IPv6_VIP
INFO alembic.runtime.migration [-] Running upgrade 11e4bb2bb8ef -> 2ad093f6353f, Add listener client_ca_tls_certificate_id column
INFO alembic.runtime.migration [-] Running upgrade 2ad093f6353f -> f21ae3f21adc, Add Client Auth options
INFO alembic.runtime.migration [-] Running upgrade f21ae3f21adc -> ffad172e98c1, Add certificate revoke revocation list field
INFO alembic.runtime.migration [-] Running upgrade ffad172e98c1 -> 1afc932f1ca2, Extend the l7rule type for support client certificate cases
INFO alembic.runtime.migration [-] Running upgrade 1afc932f1ca2 -> a1f689aecc1d, Extend pool for support backend re-encryption
INFO alembic.runtime.migration [-] Running upgrade a1f689aecc1d -> 74aae261694c, extend pool for backend CA and CRL
INFO alembic.runtime.migration [-] Running upgrade 74aae261694c -> a7f187cd221f, Add tls boolean type for backend re-encryption
INFO alembic.runtime.migration [-] Running upgrade a7f187cd221f -> 6742ca1b27c2, Add L7policy Redirect http code
INFO alembic.runtime.migration [-] Running upgrade 6742ca1b27c2 -> 7432f1d4ea83, add l7policy action redirect prefix
INFO alembic.runtime.migration [-] Running upgrade 7432f1d4ea83 -> 6ffc710674ef, Spares pool table
INFO alembic.runtime.migration [-] Running upgrade 6ffc710674ef -> 46d914b2a5e5, Seed the spares_pool table
INFO alembic.runtime.migration [-] Running upgrade 46d914b2a5e5 -> 392fb85b4419, add primary key to spares_pool
INFO 392fb85b4419_add_primary_key_to_spares_pool_py [-] The primary key in spares_pool already exists, continuing.
INFO alembic.runtime.migration [-] Running upgrade 392fb85b4419 -> a5762a99609a, add protocol in listener keys
INFO alembic.runtime.migration [-] Running upgrade a5762a99609a -> da371b422669, Add CIDRs for listeners
INFO alembic.runtime.migration [-] Running upgrade da371b422669 -> dcf88e59aae4, Add LB_ALGORITHM_SOURCE_IP_PORT
$

No database migration errors were observed. The output of the last command (step 9) shows that the alembic revision "11e4bb2bb8ef" ("Fix_IPv6_VIP") was executed.

Comment 5 Carlos Goncalves 2019-12-18 14:07:55 UTC
Mark, if there's no further action required from the Octavia team, may we close this BZ?