Bug 2196469
| Summary: | [Hyper-V][RHEL7] hv_set_ifconfig.sh implementation does not utilize IPV6NETMASK to set IPv6 netmask | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | shradhagupta <shradhagupta> |
| Component: | hyperv-daemons | Assignee: | Ani Sinha <anisinha> |
| Status: | CLOSED WONTFIX | QA Contact: | xuli <xuli> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 7.8 | CC: | andavis, anisinha, bdas, cavery, decui, litian, mlevitsk, ropang, sbroz, shradhagupta, vkuznets, xuli, xxiong, yacao, yuxisun |
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2023-05-24 11:02:40 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 2182677, 2182679 | ||
@shradhagupta you wrote:
> This is handled well in other distros like Ubuntu but looks like all RHEL versions are missing this handling
Can you please point me to the Ubuntu daemon code where this is handled well?
You can find Debian's implementation here: https://github.com/endlessm/linux/blob/master/debian/cloud-tools/hv_set_ifconfig#L159

There is a patch set here: https://lore.kernel.org/lkml/20230508095340.2ca1630f.olaf@aepfle.de/T/ It makes the daemon emit a NetworkManager keyfile instead of an ifconfig-style file. I strongly believe we should go this route: use NM keyfiles and then add our customisation on top of that. It should also fix this issue AFAICS.

Yes, this patch will solve the issue; however, it is not upstreamed yet, and that might take quite a while. Meanwhile, can we please target a fix for this issue by correcting the RHEL scripts, to unblock customers as soon as possible?

(In reply to shradhagupta from comment #12)
> Yes, this patch will solve the issue; however, it is not upstreamed yet, and
> that might take quite a while. Meanwhile, can we please target a fix for
> this issue by correcting the RHEL scripts, to unblock customers as soon as
> possible?

I am occupied with some other things at the moment and may not be able to get to this issue immediately. Meanwhile, can you please add me to the CC list of your patch set on the kernel mailing list, so that I can monitor progress and patch feedback?

*** Bug 2182674 has been marked as a duplicate of this bug. ***

Since RHEL 7 is currently in the Maintenance Support 2 Phase, support of the product in this phase is limited and new functionality is not planned for Maintenance Support 2. If there is an available workaround, we should also no longer be addressing this in RHEL 7. Based on discussion with our developers, we are closing this bug as WONTFIX. We have similar bugs from customers for RHEL 8 and RHEL 9 and will keep tracking the issue in the following two bugs:

Bug #2182677 [Hyper-V][RHEL-8] hyperv-daemons write incompatible IPv6 prefix (IPV6NETMASK) in connection configuration
Bug #2182679 [Hyper-V][RHEL-9] hyperv-daemons write incompatible IPv6 prefix (IPV6NETMASK) in connection configuration

Thank you so much.
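Debian's script converts the netmask Hyper-V hands over into a prefix length before writing it out. As a rough illustration of that conversion, here is a POSIX shell sketch; the function name is invented for this example, and it assumes the daemon may deliver IPV6NETMASK either as a plain prefix length or as a colon-hex mask:

```shell
#!/bin/sh
# Hypothetical helper, loosely modeled on Debian's hv_set_ifconfig logic:
# turn an IPv6 netmask into a prefix length.
ipv6_mask_to_prefix() {
    mask=$1
    case $mask in
        *:*) ;;                        # colon-hex mask: count bits below
        *)   echo "$mask"; return 0 ;; # already a plain prefix length
    esac
    prefix=0
    old_ifs=$IFS; IFS=:
    # Add the number of leading 1-bits contributed by each 16-bit group.
    for group in $mask; do
        case $group in
            ffff) prefix=$((prefix + 16)) ;;
            fffe) prefix=$((prefix + 15)) ;;
            fffc) prefix=$((prefix + 14)) ;;
            fff8) prefix=$((prefix + 13)) ;;
            fff0) prefix=$((prefix + 12)) ;;
            ffe0) prefix=$((prefix + 11)) ;;
            ffc0) prefix=$((prefix + 10)) ;;
            ff80) prefix=$((prefix + 9))  ;;
            ff00) prefix=$((prefix + 8))  ;;
            fe00) prefix=$((prefix + 7))  ;;
            fc00) prefix=$((prefix + 6))  ;;
            f800) prefix=$((prefix + 5))  ;;
            f000) prefix=$((prefix + 4))  ;;
            e000) prefix=$((prefix + 3))  ;;
            c000) prefix=$((prefix + 2))  ;;
            8000) prefix=$((prefix + 1))  ;;
            *) ;;  # zero group, or empty group produced by "::"
        esac
    done
    IFS=$old_ifs
    echo "$prefix"
}

ipv6_mask_to_prefix "ffff:ffff:ffff:ffff::"   # prints 64
```

The prefix length computed this way is what would then be appended to the address in the generated ifcfg file, instead of being dropped as the current RHEL script does.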
Xuemin

Hi Shradha,
It looks like your patch https://lore.kernel.org/lkml/20230508095340.2ca1630f.olaf@aepfle.de/T/ is not upstream yet. Could you please share its latest status? Or do you have an estimated timeline for getting it upstreamed? I'm wondering whether it could be merged in the RHEL 8.9/9.3 timeframe; if not, maybe we need to fix this issue by correcting the script in RHEL 8.9/9.3. Thank you so much.
Best Regards,
Xuemin

Hi Xuemin,
We have not received Reviewed-by tags from any maintainers. It would really help if someone from Red Hat also reviewed it, or provided a Tested-by tag based on the private testing we were going to do.

(In reply to shradhagupta from comment #17)
> Hi Xuemin, We have not received Reviewed-by tags from any maintainers. It
> would really help if someone from Red Hat also reviewed it, or provided a
> Tested-by tag based on the private testing we were going to do.

I will try to get this patch tested internally.
Description of problem:

In RHEL's implementation of the distro-specific network script shipped with hypervkvpd, hv_set_ifconfig.sh, the parameter Hyper-V uses to set the IPv6 netmask, IPV6NETMASK, is not consumed. This is handled in other distros such as Ubuntu, but all RHEL versions appear to be missing this handling.

Version-Release number of selected component (if applicable):

All RHEL versions seem to have the problem.

How reproducible:

Inject an IPv6 address into a Hyper-V RHEL VM using PowerShell commands.

Steps to Reproduce:

1. Create a RHEL VM (any version; we tried 7.8) on a Hyper-V setup.
2. Inject an IPv6 address statically into the VM using PowerShell tools. Sample script:

```powershell
Function Set-VMNetworkConfigurationDualIP {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory=$true, Position=0, ParameterSetName='Static', ValueFromPipeline=$true)]
        [Microsoft.HyperV.PowerShell.VMNetworkAdapter]$NetworkAdapter,

        [Parameter(Mandatory=$false, Position=0, ParameterSetName='Static')]
        [Switch]$IPV4V6
    )

    $VM = Get-WmiObject -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' |
        Where-Object { $_.ElementName -eq $NetworkAdapter.VMName }
    $VMSettings = $VM.GetRelated('Msvm_VirtualSystemSettingData') |
        Where-Object { $_.VirtualSystemType -eq 'Microsoft:Hyper-V:System:Realized' }
    $VMNetAdapters = $VMSettings.GetRelated('Msvm_SyntheticEthernetPortSettingData')

    $NetworkSettings = @()
    foreach ($NetAdapter in $VMNetAdapters) {
        if ($NetAdapter.Address -eq $NetworkAdapter.MacAddress) {
            $NetworkSettings = $NetworkSettings + $NetAdapter.GetRelated("Msvm_GuestNetworkAdapterConfiguration")
        }
    }

    if ($IPV4V6) {
        $NetworkSettings[0].IPAddresses = @("192.168.1.106", "1234:1234:1234:1234::119")
        $NetworkSettings[0].Subnets = @("255.255.255.0", "120")
        $NetworkSettings[0].DefaultGateways = @("192.168.1.1", "1234:1234:1234:1234::1")
        $NetworkSettings[0].DNSServers = @("192.168.1.8")
        $NetworkSettings[0].ProtocolIFType = 4098
        $NetworkSettings[0].DHCPEnabled = $false
    } else {
        $NetworkSettings[0].IPAddresses = @("192.168.1.109")
        $NetworkSettings[0].Subnets = @("255.255.255.0")
        $NetworkSettings[0].DefaultGateways = @("192.168.1.1")
        $NetworkSettings[0].DNSServers = @("192.168.1.10")
        $NetworkSettings[0].ProtocolIFType = 4096
        $NetworkSettings[0].DHCPEnabled = $false
    }

    $Service = Get-WmiObject -Class "Msvm_VirtualSystemManagementService" -Namespace "root\virtualization\v2"
    $setIP = $Service.SetGuestNetworkAdapterConfiguration($VM, $NetworkSettings[0].GetText(1))
    if ($setIP.ReturnValue -eq 4096) {
        $job = [WMI]$setIP.job
        while ($job.JobState -eq 3 -or $job.JobState -eq 4) {
            Start-Sleep 1
            $job = [WMI]$setIP.job
        }
        Write-Host "**jobstate = $($job.JobState)"
        if ($job.JobState -eq 7) {
            Write-Host "Success"
        } else {
            $job.GetError()
        }
    } elseif ($setIP.ReturnValue -eq 0) {
        Write-Host "Success"
    }
}
```

Actual results:

The IPv6 subnet mask is still set to 64 in the VM.

Expected results:

The IPv6 subnet mask should have been set to 120, as specified in the reproduction steps.

Additional info:
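To make the symptom concrete: a minimal, hypothetical sketch of the missing handling in the RHEL script might look like the following. The function name and argument layout are invented for illustration; the real hv_set_ifconfig.sh parses a config file handed over by the KVP daemon and writes an ifcfg-&lt;interface&gt; file.

```shell
#!/bin/sh
# Hypothetical sketch only: write_ipv6_ifcfg and its arguments are
# invented names, not the real script's interface.
write_ipv6_ifcfg() {
    addr=$1       # IPv6 address from Hyper-V, e.g. 1234:1234:1234:1234::119
    prefix=$2     # prefix length derived from IPV6NETMASK, e.g. 120
    cfg=$3        # path of the ifcfg file being generated

    # RHEL initscripts read the prefix from IPV6ADDR="address/prefix".
    # Writing the address without "/prefix" makes the network service
    # fall back to /64, which is the symptom reported in this bug.
    echo "IPV6INIT=yes" >> "$cfg"
    echo "IPV6ADDR=\"${addr}/${prefix}\"" >> "$cfg"
}

cfg=$(mktemp)
write_ipv6_ifcfg "1234:1234:1234:1234::119" "120" "$cfg"
cat "$cfg"
```

The point of the sketch is only that the prefix derived from IPV6NETMASK needs to reach the IPV6ADDR line; today the RHEL script drops it entirely.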