Bug 1398334

Summary: MAC address provided by engine is already in use
Product: [oVirt] ovirt-engine
Reporter: Ilanit Stein <istein>
Component: BLL.Network
Assignee: Martin Mucha <mmucha>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Meni Yakove <myakove>
Severity: medium
Priority: low
Version: 4.0.5
CC: bugs, danken, istein, mmucha
Target Milestone: ovirt-4.2.0
Flags: rule-engine: ovirt-4.2+
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-01-31 12:19:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Attachments: engine.log

Description Ilanit Stein 2016-11-24 13:21:46 UTC
Description of problem:
In a one-time trial, CFME provisioned a VM from a template, adding a NIC (an obligatory field) as part of the provisioning. Adding the NIC failed with the error "MAC address is already in use".

Before this CFME operation, on the RHV side, I had changed a VM's MAC address to a custom MAC address that is not from the engine's MAC pool range (which is set to the default).
This might be related to the one-time failure described above.

On the following CFME VM provision trials, MAC address allocation was successful.
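For illustration only, here is a simplified sketch of how such a collision can arise. This is not the actual ovirt-engine code; the class, the MAC range, and the scenario are all made up. It models an allocator that tracks only the MACs it handed out itself, so a NIC that received a custom MAC without the pool being informed can later collide with a pool allocation:

```python
# Hypothetical, simplified MAC pool model -- NOT the real ovirt-engine
# implementation. It shows how an allocator that only tracks its own
# allocations can hand out a MAC that is already in use elsewhere.

def mac_to_int(mac: str) -> int:
    """Convert 'aa:bb:cc:dd:ee:ff' to an integer."""
    return int(mac.replace(":", ""), 16)

def int_to_mac(value: int) -> str:
    """Convert an integer back to colon-separated MAC notation."""
    raw = f"{value:012x}"
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))

class SimpleMacPool:
    def __init__(self, first: str, last: str):
        self.first = mac_to_int(first)
        self.last = mac_to_int(last)
        self.allocated = set()  # only MACs handed out by THIS pool

    def allocate(self) -> str:
        """Return the lowest MAC in range not yet handed out by the pool."""
        for value in range(self.first, self.last + 1):
            if value not in self.allocated:
                self.allocated.add(value)
                return int_to_mac(value)
        raise RuntimeError("MAC pool exhausted")

# Made-up range for the sketch:
pool = SimpleMacPool("00:1a:4a:16:01:51", "00:1a:4a:16:01:e1")

# Suppose a NIC was manually given a MAC inside the pool range, but
# the pool's bookkeeping was never updated to reflect that:
in_use_elsewhere = "00:1a:4a:16:01:51"

# The pool then hands out that very same address, which would surface
# as an "already in use" error when attaching it to a new NIC:
first_allocation = pool.allocate()
assert first_allocation == in_use_elsewhere
```

Whether this is the mechanism behind the failure reported here is unconfirmed; comment #2 below suggests it may be the same cause as an already-tracked duplicate bug.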

Version-Release number of selected component (if applicable):
RHV-4.0.5
CFME-5.7.0.11

Additional info:
There is also a related bug, 1396995: in the failure described above, the offending MAC address was not mentioned in any log.

Comment 1 Ilanit Stein 2016-11-24 13:22:26 UTC
Created attachment 1223859 [details]
engine.log

Comment 2 Meni Yakove 2016-11-28 08:38:13 UTC
Possibly duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1395462

Comment 3 Dan Kenigsberg 2016-11-30 09:20:15 UTC
Martin, do you see something suspicious in the logs?

Comment 4 Martin Mucha 2016-11-30 09:52:35 UTC
I wasn't able to find any hint in the logs related to a duplicate MAC. I only found these lines, which are unrelated to the reported problem:

2016-11-17 10:08:47,418 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase] (ServerService Thread Pool -- 53) [] Could not find enum value for option: 'AllowDuplicateMacAddresses'
2016-11-17 10:08:47,418 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase] (ServerService Thread Pool -- 53) [] Could not find enum value for option: 'MacPoolRanges'
2016-11-17 10:08:47,418 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase] (ServerService Thread Pool -- 53) [] Could not find enum value for option: 'MaxMacsCountInPool'

This probably means that some script, not updated since version 3.4, is trying to read config values that no longer exist. The reporter could check this if desired. As for the MAC duplication, we can only assume it has the same cause as the one solved in the likely duplicate bug (see comment #2).

Comment 5 Martin Mucha 2016-12-19 10:10:38 UTC
close->won't fix / duplicate / ?

Comment 6 Dan Kenigsberg 2016-12-19 10:43:59 UTC
Ilanit, do you see the warning

Could not find enum value for option: 'MaxMacsCountInPool'

on any live system of yours? If so, could you provide Martin with access to it, so he can check its DB?

Comment 8 Martin Mucha 2017-01-30 12:05:19 UTC
progress summary:
• I wasn't able to detect any sign of the error in the provided logs, nor any reason for it.
• I see traces of usage of deprecated engine options. I cannot prove or disprove this usage without access to the DB (I don't know how to use the provided engine access for this purpose), but in any case it is not related to this bug. I can help with it if I get DB access, but it won't help us here.

--> Please provide information on how to reproduce. The provided logs give me nothing to work with. Otherwise I'd assume that comment #2 is right (this bug is a duplicate of an already verified one) and close this one.

Comment 9 Dan Kenigsberg 2017-01-31 12:19:55 UTC
Ilanit, I am sorry for waiting so long before looking into this, but I suppose that the current state of the DB on your setup is no longer really relevant.

I'd appreciate it if you reopened this bug with fresh logs (and, hopefully, a fresh setup). Also, please do not hesitate to call mmucha so he can ask you for further information.