Bug 1219383

Summary: [MAC pool] limit range to 2^31 addresses via REST
Product: [oVirt] ovirt-engine
Reporter: GenadiC <gcheresh>
Component: General
Assignee: Martin Mucha <mmucha>
Status: CLOSED DEFERRED
QA Contact: Meni Yakove <myakove>
Severity: medium
Docs Contact:
Priority: low
Version: ---
CC: bazulay, bugs, danken, lsurette, mburman, myakove, rbalakri, srevivo, ykaul, ylavi
Target Milestone: ---
Flags: rule-engine: ovirt-4.2+
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ovirt 4.0.0 alpha1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-07-12 10:50:42 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1344284
Bug Blocks:

Description GenadiC 2015-05-07 08:17:07 UTC
Description of problem:
Creating a MAC pool with a range from 00:ff:ff:ff:ff:ff to 02:00:00:00:00:01 lets you add only one VNIC, with MAC address 00:ff:ff:ff:ff:ff.
Adding any further VNIC fails with the error message:
Not enough MAC addresses left in MAC Address Pool.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a new MAC pool with a range from 00:ff:ff:ff:ff:ff to 02:00:00:00:00:01
2. Add VNICs to a VM in the DC this MAC pool is attached to

Actual results:
Only one VNIC is attached (with MAC 00:ff:ff:ff:ff:ff).
Adding an additional VNIC fails.

Expected results:
1) Either adding all the VNICs should succeed, or
2) if the size of the MAC pool range is a problem, the user should not be allowed to create such a MAC pool.

Additional info:
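For reference, a minimal sketch (illustrative only, not engine code) of why this range is problematic: parsed as 48-bit integers, the requested range spans 2^40 + 3 addresses, far above 2^31, and all but three of them (the whole 01:xx:xx:xx:xx:xx block) are multicast.

class MacRangeSpan {
    // Parse "aa:bb:cc:dd:ee:ff" into a 48-bit value.
    static long macToLong(String mac) {
        long v = 0;
        for (String octet : mac.split(":")) {
            v = (v << 8) | Long.parseLong(octet, 16);
        }
        return v;
    }

    // A MAC is multicast when the I/G bit is set: the lowest bit of the
    // first octet, i.e. bit 40 of the 48-bit value.
    static boolean isMulticast(long mac) {
        return ((mac >> 40) & 1) == 1;
    }

    public static void main(String[] args) {
        long from = macToLong("00:ff:ff:ff:ff:ff");
        long to = macToLong("02:00:00:00:00:01");
        // Prints 1099511627779, i.e. 2^40 + 3; the limit is 2^31 = 2147483648.
        System.out.println("span=" + (to - from + 1));
        // Everything from 01:00:00:00:00:00 to 01:ff:ff:ff:ff:ff inside
        // the range is multicast, leaving only 3 usable unicast MACs.
        System.out.println(isMulticast(macToLong("01:00:00:00:00:00"))); // true
    }
}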

Comment 1 Barak 2015-05-11 08:43:43 UTC
Please check whether this happens in 3.5 as well

Comment 2 GenadiC 2015-05-11 12:04:36 UTC
Indeed, the same faulty behaviour occurs in 3.5 as well.

Comment 3 Martin Mucha 2015-05-19 07:51:46 UTC
Due to the data structures used, we are limited to 2^31-1 MACs in one range, and we have to deal with that somehow. A range cannot have 'gaps'.

One approach (not used in the end) was to break the user-given range into multiple smaller ones, clipping out the multicasts. That way, however, there would be some overhead on startup, and up to a few hundred ranges (if I count correctly) would be created. The user would be allowed to add an arbitrary range without error, but also without any error saying that the request requires too many MACs to fit into memory, so limiting the range size to some number would be required anyway.

The current approach is different. It takes the range start and moves it forward, if needed, to the first unicast address. The range end is moved, if needed, to the element preceding the first multicast address in the range, or to the (2^31-1)th element. So if the user creates a range bigger than 2^31-1, it will be trimmed. One unicast MAC, followed by 2^32 multicast MACs, followed by anything, will be reduced to a range containing exactly one MAC.

I think the second approach should be sufficient, if it is sufficient to have 'only' ~2 billion MACs in one range. If that is the case, the only missing piece is range size validation in the GUI and REST.
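
For illustration, a rough sketch of that trimming (my reading of it, not the actual engine code; MACs are treated as 48-bit longs, and the multicast flag is bit 40, the I/G bit of the first octet):

static long[] trimRange(long from, long to) {
    final long MULTICAST_BIT = 1L << 40;  // I/G bit of the first octet
    final long MAX_SIZE = (1L << 31) - 1; // limit of the data structure

    // Move the start forward to the first unicast address, if needed.
    // Unicast and multicast alternate in 2^40-sized first-octet blocks,
    // so the next unicast block begins at the next even block boundary.
    long start = from;
    if ((start & MULTICAST_BIT) != 0) {
        start = ((start >> 40) + 1) << 40;
    }
    if (start > to) {
        return null; // the requested range contains no unicast address
    }

    // Move the end back to the element preceding the first multicast
    // address after the start, i.e. the last address of start's block.
    long end = Math.min(to, start | (MULTICAST_BIT - 1));

    // Clamp the trimmed range to at most 2^31 - 1 elements.
    if (end - start + 1 > MAX_SIZE) {
        end = start + MAX_SIZE - 1;
    }
    return new long[] { start, end };
}

For the range reported here this yields [00:ff:ff:ff:ff:ff, 00:ff:ff:ff:ff:ff], a single MAC, which matches the observed behaviour.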

Comment 4 Dan Kenigsberg 2015-09-10 11:58:48 UTC
(In reply to Martin Mucha from comment #3)
> 
> I think the second approach should be sufficient, if it is sufficient to
> have 'only' ~2 billion MACs in one range. If that is the case, the only
> missing piece is range size validation in the GUI and REST.

Yes, we should limit the MAC pool size to 2^31.

Comment 5 Red Hat Bugzilla Rules Engine 2015-10-19 10:55:54 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug has not been modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 6 Meni Yakove 2016-07-02 10:19:42 UTC
Blocked via the UI, but I can still create the MAC pool via REST:

<mac_pool>
    <name>pool0</name>
    <ranges>
        <range>
            <from>00:ff:ff:ff:ff:ff</from>
            <to>02:00:00:00:00:01</to>
        </range>
    </ranges>
</mac_pool>
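
For the record, a sketch of how the pool above can be created via the API (engine host and credentials are placeholders, and the engine CA is assumed to be trusted by the JVM; the /ovirt-engine/api/macpools path is the collection the pool lands in):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

class CreateMacPool {
    public static void main(String[] args) throws Exception {
        // The XML body shown above, as a Java text block.
        String xml = """
            <mac_pool>
                <name>pool0</name>
                <ranges>
                    <range>
                        <from>00:ff:ff:ff:ff:ff</from>
                        <to>02:00:00:00:00:01</to>
                    </range>
                </ranges>
            </mac_pool>""";
        String auth = Base64.getEncoder()
                .encodeToString("admin@internal:password".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://engine.example.com/ovirt-engine/api/macpools"))
                .header("Content-Type", "application/xml")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(xml))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 201 when the pool is created
    }
}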

Comment 7 Red Hat Bugzilla Rules Engine 2016-07-02 10:19:50 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug has not been modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 8 Meni Yakove 2016-07-02 10:24:23 UTC
rhevm-4.0.2-0.2.rc1.el7ev.noarch

Comment 9 Martin Mucha 2016-07-07 17:57:27 UTC
(In reply to Meni Yakove from comment #6)
> Blocked via the UI, but I can still create the MAC pool via REST:
> 
> <mac_pool>
>     <name>pool0</name>
>     <ranges>
>         <range>
>             <from>00:ff:ff:ff:ff:ff</from>
>             <to>02:00:00:00:00:01</to>
>         </range>
>     </ranges>
> </mac_pool>

You're correct. We need to add validation of the business entities passed via REST. There is already a patch for that, which alters these entities by restricting the MACs of a range to be within the same OUI. These patches will be pushed soon ...
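
The constraint itself is simple; a minimal sketch of the check (assumed shape, not the actual patch):

// Both ends of a range must share one OUI, i.e. the same first three
// octets (the top 24 of the 48 bits). This also caps a range at 2^24
// addresses, far below the 2^31 - 1 limit, and it rejects the range
// reported here (OUI 00:ff:ff vs. 02:00:00).
static boolean isSingleOui(long from, long to) {
    return (from >> 24) == (to >> 24);
}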

Comment 10 Martin Mucha 2016-07-19 13:57:46 UTC
This bug is fixed by the 'single OUI' refactor. We just need to wait for that to be merged.

Comment 11 Meni Yakove 2017-07-04 06:07:17 UTC
Still able to create the pool via REST:

<mac_pool href="/ovirt-engine/api/macpools/bf46f179-4aa6-4f2c-938a-6ef4da579362" id="bf46f179-4aa6-4f2c-938a-6ef4da579362">
    <name>mac_pool_name_0</name>
    <allow_duplicates>false</allow_duplicates>
    <default_pool>false</default_pool>
    <ranges>
        <range>
            <from>00:ff:ff:ff:ff:ff</from>
            <to>02:00:00:00:00:01</to>
        </range>
    </ranges>
</mac_pool>

Comment 12 Martin Mucha 2017-07-04 06:47:21 UTC
Ah, my bad. The provided patch solves only the UI part. The BLL/REST part was (upon agreement) solved by implementing a constraint that a single range must not span multiple OUIs. Other issues were solved as well (modifying user input, showing how many MACs are present in a pool), etc.

All of it is done in this topic:
https://gerrit.ovirt.org/#/q/status:abandoned+project:ovirt-engine+branch:master+topic:doNotModifyRanges_allowSingleOuiPerRange

which was ignored and left to 'die' twice in a row, so I suspect there is no intent to merge it. Asking Dan to decide/reschedule.

Comment 13 Dan Kenigsberg 2017-07-12 10:50:42 UTC
It is very sad for me personally, but forcing a single OUI in a range is not of high priority.