Description of problem:

The current default MAC pool range, since bug 1451272, is:

00:1a:4a:16:01:00 - 00:1a:4a:16:04:ff

It changed several times in the past, and for a long time was random. It was always taken, though, from the IEEE-assigned range 001A4A of Qumranet (now Red Hat) [1].

IMO it should:

1. Default to a random range again - so that two (or a few more) setups using the same network segment are less likely to use the same range
2. Be in the locally-administered address range [2] - so that we have a larger address space to choose from
3. Be larger, say 65000 addresses - so that admins do not have to handle this manually, even for large setups, unless they have specific needs

If there is an objection to (2.), we can still have the others - of the 24 bits following the 001A4A prefix, choose a random value for the first 8 during engine-setup, and have a pool of size 16 bits.

[1] https://regauth.standards.ieee.org/standards-ra-web/pub/view.html
[2] https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local

Version-Release number of selected component (if applicable):
Current master

How reproducible:
Always

Steps to Reproduce:
1. engine-setup
2.
3.

Actual results:
The default MAC pool range is small and hard-coded.

Expected results:
The default MAC pool range is large and randomized.

Additional info:
http://lists.ovirt.org/pipermail/devel/2017-November/031898.html

In the past, the range was configurable by engine-setup, but we never documented this, and I do not think it was ever used. Restoring that might be useful too; not sure it's important, as the user can change it after setup from the UI.
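For illustration only, points 1+2+3 above (a random 65536-address sub-range of the locally-administered, unicast MAC space) could be sketched roughly as below. The function name and layout are hypothetical, not the actual engine-setup implementation:

```python
import random

def random_local_mac_range():
    """Pick a random 65536-address MAC sub-range in the
    locally-administered, unicast space (illustrative sketch)."""
    first = random.randrange(256)
    first = (first | 0x02) & 0xfe  # set U/L bit, clear multicast bit
    rest = [random.randrange(256) for _ in range(3)]
    prefix = ':'.join('%02x' % b for b in [first] + rest)
    # Fix the first 4 octets; let the last 2 span the whole pool.
    return prefix + ':00:00', prefix + ':ff:ff'

start, end = random_local_mac_range()
print(start, '-', end)
```

Each run yields a different 4-octet prefix, so independent setups on the same segment are unlikely to pick overlapping pools.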
Dan, can you please review? Proposed change looks reasonable to me.
*** Bug 1518627 has been marked as a duplicate of this bug. ***
+1 from me to change the logic, in particular for 1) and 3). For 2) I think it is important not to overturn too much complexity to the final user/admin.
(In reply to Gianluca Cecchi from comment #3)
> For 2) I think it is important not to overturn too much complexity to the
> final user/admin

Not sure what you mean here. IMO using the locally-administered address range will _lower_ the user/admin's complexity, risk and maintenance work. If you think otherwise, please explain why. Thanks.

To clarify: "locally-administered" here only means the addresses are not globally administered by the IEEE. If your network is not very big, "administering" them by generating a random sub-range is enough. And in any case, using the Qumranet/Red Hat range does not save the admin from administering them - the opposite is true: since that range is much smaller than the locally-administered range, choosing addresses at random will run into collisions much faster, and thus require more maintenance, not less.
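To make the distinction concrete: bit 1 of the first octet (the U/L bit) is what separates locally-administered addresses from globally-assigned ones. A minimal illustrative helper:

```python
def is_locally_administered(mac):
    """True if the U/L bit (bit 1 of the first octet) is set,
    i.e. the address is not globally assigned by the IEEE."""
    return int(mac.split(':')[0], 16) & 0x02 == 0x02

# The Qumranet/Red Hat prefix is globally assigned:
print(is_locally_administered('00:1a:4a:16:01:00'))  # False
# A typical locally-administered address:
print(is_locally_administered('02:00:00:00:00:01'))  # True
```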
I only meant that a default should be proposed to the user also in the new scenario, eventually with a tip pointing to your [2] link to help in customizing the value if necessary
Please just use option 2 (locally administered); all other "solutions" are bound to clash somehow, somewhere. This should at least be the default, maybe with an option to specify custom ranges.
(In reply to Sven Kieske from comment #6)
> Please just use option 2 (local administrated), all other "solutions" are
> bound to clash somehow somewhere.

Please be more specific. What exactly do you suggest? If all of 1+2+3, fine. Otherwise, I do not understand; please explain. Thanks.

> This should at least be the default, maybe with an option to specify custom
> ranges.

This bug is about the default. You can already use custom ranges as you wish.
Please make it really random. I had to create crazy DHCP settings to block those that keep the "default" or use a previously unknown MAC range in their engine, because we use the same LAN for many engines and we had MAC conflicts.

Our current workaround is:

- dhcpd.conf

...
class "user-ranges" {
  match if ( substring(hardware, 1, 4) = 00:1a:44:01 ) or
           ( substring(hardware, 1, 4) = 00:1a:44:02 ) or
           ( substring(hardware, 1, 4) = 00:1a:44:03 ) or
           ( substring(hardware, 1, 4) = 00:1a:44:04 ) or
           ( substring(hardware, 1, 4) = 00:1a:44:05 )
...

# user-ranges pool...
pool {
  failover peer "rhev.example.com";
  range 10.37.139.0 10.37.139.254;
  deny dynamic bootp clients;
  allow members of "user-ranges";
}

So every engine on the same LAN needs a predefined MAC range, otherwise its managed VMs won't get an IP via DHCP. We don't use static IPs.
(In reply to Jiri Belka from comment #8)
> please make it really random.

Not sure what you mean, exactly. We have a 48-bit address range, and need to pick a sub-range from it. Really Random (tm) would mean randomly picking two addresses in this range, using the lower one as the sub-range start and the higher one as its end. I do not think that's what you mean. If you mean something other than 1+2+3 of comment 0, please provide more details. Thanks.
I am OK with the original suggestions in the RFE, but it would be nice if someone could check why it changed in the past from random to a fixed default range (perhaps there is a good reason which we all forgot).

One point I think we need to document and warn users about: it is *not* recommended to create independent virtual environments in the same broadcast domain (LAN). There are many reasons beyond MAC collisions why LAN segmentation is preferred; I would suggest considering them before trying to work around specific problems, as such workarounds will usually not stand the test of time.
(In reply to Edward Haas from comment #10)
> I am ok with the original suggestions in the RFE, but if someone can check
> why it changed in the past from random to a default range it will be nice
> (perhaps there is a good reason which we all forgot).

I spent quite some time understanding this before opening the current bug. I don't remember all the details, but am certain it was an unintended mistake. If you want to try to understand this yourself, you can try something like this inside the engine git repo:

git log -u --follow packaging | less -j20 -I +/'mac[^ ]*pool[^ ]*range'
(In reply to Yedidyah Bar David from comment #9)
> (In reply to Jiri Belka from comment #8)
> > please make it really random.
>
> Not sure what you mean, exactly. We have a 48-bit address range, and need
> to pick a sub-range from it. Really Random (tm) means to randomly pick two
> addresses in this range, use the lower one as sub-range start and the higher
> one as sub-range end. I do not think that's what you mean. If you mean
> something other than 1+2+3 of comment 0, please provide more details. Thanks.

I meant '1.'. Our env originates in 3.1, and IIRC at that time every engine had the same MAC range, so the MACs of VMs from multiple engines on the same LAN would collide. Thus we invented user-defined MAC ranges. Later the engine got a randomized default MAC range, but we still use user-defined ones anyway.
(In reply to Jiri Belka from comment #13)
> I meant '1.',

"1.", but not "2." nor "3."? Please provide details. Saying you do not agree with "2." means you want to keep the 00:1a:4a prefix. Is that what you mean? Saying you do not agree with "3." means you want 1024 addresses in the default range. Is that what you mean?

> our env originates in 3.1 and iirc in that time every engine
> had same mac range and thus vms macs from multiple engines on same lan would
> collide. thus we invented user defined mac ranges. later engine got
> randomized default mac range but we still use user defined ones anyway.

Understood, but this bug won't fix existing setups, as it's only about the default. Once you have run setup, the default of later versions no longer applies to you.

If we fix the current bug as suggested in comment 0, multiple engine setups on the same broadcast domain should have a very low chance of colliding. If we only do "1.", or only "1." and "2.", we still have a low chance (although higher), but also a small range (which is not enough for the larger setups we know about). If we do "1." and "3.", we have a much higher chance of collisions, although still much lower than today.

All of 1+2+3 provides, IMO, the best compromise between:
- A low chance of collisions between setups
- A large-enough default range for even the largest setups we know about
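The trade-off above can be quantified with a back-of-the-envelope birthday-problem estimate. The slot counts below are my own simplifying assumptions (2^8 aligned 16-bit sub-ranges inside the 24-bit 00:1a:4a suffix space, versus roughly 2^30 aligned 16-bit sub-ranges in the ~2^46-address locally-administered unicast space):

```python
def collision_probability(setups, slots):
    """Probability that at least two of `setups` independently chosen
    sub-ranges land in the same one of `slots` aligned slots
    (classic birthday-problem calculation)."""
    p_none = 1.0
    for i in range(setups):
        p_none *= (slots - i) / slots
    return 1.0 - p_none

# 10 engines picking 16-bit pools inside the 00:1a:4a prefix ("1."+"3."):
p_oui = collision_probability(10, 2 ** 8)
# 10 engines picking 16-bit pools in the locally-administered space (1+2+3):
p_local = collision_probability(10, 2 ** 30)
print(p_oui, p_local)  # roughly 0.16 vs ~4e-8
```

Even with only ten engines on one LAN, staying inside the OUI prefix gives a noticeable collision chance, while the locally-administered space makes it negligible.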
OK, two installations of 4.2.7:

ovirt-engine-4.2.7.2-0.1.el7ev.noarch

engine=# select * from mac_pool_ranges ;
             mac_pool_id              |     from_mac      |      to_mac
--------------------------------------+-------------------+-------------------
 58ca604b-017d-0374-0220-00000000014e | 56:6f:96:fc:00:00 | 56:6f:96:fc:ff:ff
(1 row)

engine=# select * from mac_pool_ranges ;
             mac_pool_id              |     from_mac      |      to_mac
--------------------------------------+-------------------+-------------------
 58ca604b-017d-0374-0220-00000000014e | 56:6f:03:c6:00:00 | 56:6f:03:c6:ff:ff
(1 row)
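As a quick sanity check on the output above: the first octet 0x56 has the locally-administered bit set and the multicast bit clear, and each install got a different random 4-octet prefix with a 65536-address pool, matching the proposal in comment 0. A small illustrative check:

```python
# Verify the two observed prefixes are locally-administered unicast.
for prefix in ('56:6f:96:fc', '56:6f:03:c6'):
    first = int(prefix.split(':')[0], 16)
    assert first & 0x02, 'should be locally administered'
    assert not (first & 0x01), 'should be unicast'
print('both prefixes are locally-administered unicast')
```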
This bugzilla is included in oVirt 4.2.7 release, published on November 2nd 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.