Bug 2182052 - keep-id generates an incorrect entry in /etc/resolv.conf when used with podman
Summary: keep-id generates an incorrect entry in /etc/resolv.conf when used with podman
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.7
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Paul Holzinger
QA Contact: Alex Jia
URL:
Whiteboard:
Depends On:
Blocks: 2182485 2182491
 
Reported: 2023-03-27 12:21 UTC by Carroline
Modified: 2023-11-14 16:38 UTC
CC List: 13 users

Fixed In Version: podman-4.4.1-12.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2182485 2182491
Environment:
Last Closed: 2023-11-14 15:29:00 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links:
* GitHub containers/podman pull 17963 (Merged): fix slirp4netns resolv.conf ip with a userns (last updated 2023-03-29 09:56:26 UTC)
* Red Hat Issue Tracker RHELPLAN-153166 (last updated 2023-03-27 12:41:07 UTC)

Description Carroline 2023-03-27 12:21:12 UTC
Description of problem:
podman generates an incorrect resolv.conf entry when run with the --userns keep-id option


Version-Release number of selected component (if applicable):
RHEL 8.7
Podman 4.2

How reproducible:
Always. When running a container with the --userns keep-id option and slirp4netns networking, the generated resolv.conf contains the nameserver 10.0.2.3, which is in the wrong network: it is slirp4netns's default DNS address rather than an address inside the CIDR passed to --net=slirp4netns.


Steps to Reproduce:
1. Log in as a rootless user (rootless podman uses slirp4netns networking):

 ~]# ssh joe@IP
The authenticity of host 'IP (IP)' can't be established.
ECDSA key fingerprint is SHA256:F5fUffQRxEqf1hyUhA5GPcV3NTtCRmqzlxxJLN+opLQ.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'IP' (ECDSA) to the list of known hosts.
joe@IP's password: 

2. Check the /etc/resolv.conf
[joe@rhel84 ~]$ cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 192.168.122.1

3. Run podman with the "--userns keep-id" option:

[joe@rhel84 ~]$ podman --cgroup-manager=cgroupfs run -it --rm --net=slirp4netns:allow_host_loopback=true,cidr=192.168.0.0/24 --add-host=localhost.containers.internal:192.168.0.2 --userns keep-id --entrypoint /bin/cat registry.access.redhat.com/ubi8:latest /etc/resolv.conf
nameserver 10.0.2.3
nameserver 192.168.122.1
[joe@rhel84 ~]$ 

4. Check the container's /etc/resolv.conf: it picks up the nameserver 10.0.2.3.


Actual results:

The container's resolv.conf contains the nameserver 10.0.2.3.

Expected results:

The container's resolv.conf should not contain the nameserver 10.0.2.3; the slirp4netns nameserver should come from the CIDR given to --net=slirp4netns (here 192.168.0.0/24).

Additional info:

- Without the "--userns keep-id" option, the 10.0.2.3 nameserver is not added.
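For context on why 10.0.2.3 is the wrong address here: slirp4netns places its built-in DNS forwarder at a fixed host offset (3) inside its subnet, so the default 10.0.2.0/24 network yields 10.0.2.3, and a custom CIDR should shift the nameserver into that network. A minimal sketch of the expected address computation, assuming only this offset-3 convention (the function name is illustrative, not podman's actual code):

```python
import ipaddress

def slirp4netns_dns(cidr: str) -> str:
    """Return the DNS address slirp4netns serves inside the given subnet.

    slirp4netns keeps its DNS forwarder at host offset 3, so the default
    10.0.2.0/24 yields 10.0.2.3, and a custom CIDR moves the nameserver
    into that network instead.
    """
    net = ipaddress.ip_network(cidr, strict=True)
    return str(net.network_address + 3)

print(slirp4netns_dns("10.0.2.0/24"))     # slirp4netns default network -> 10.0.2.3
print(slirp4netns_dns("192.168.0.0/24"))  # CIDR from the reproducer  -> 192.168.0.3
```

This matches the verified output in comment 15, where the fixed package writes nameserver 192.168.0.3 for cidr=192.168.0.0/24.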

Comment 2 Tom Sweeney 2023-03-27 19:22:47 UTC
@pholzing PTAL.  This is a potential ZeroDay fix for RHEL 8.8/9.2

Comment 3 Paul Holzinger 2023-03-28 13:05:12 UTC
I can confirm that this is still broken on main and I know what is causing it. I will open a PR later to fix it.

Comment 4 Paul Holzinger 2023-03-28 13:55:58 UTC
Upstream fix in https://github.com/containers/podman/pull/17963.
@tsweeney please let me know to what branches I should backport this fix.

Comment 5 Tom Sweeney 2023-03-28 20:01:30 UTC
Setting to Post and assigning to Jindrich for any further packaging or BZ needs.

Comment 7 Alex Jia 2023-04-04 02:35:47 UTC
I can reproduce this bug on podman-4.2.0-8.module+el8.7.0+17824+66a0202b.

[test@kvm-04-guest23 ~]$ podman --cgroup-manager=cgroupfs run -it --rm --net=slirp4netns:allow_host_loopback=true,cidr=192.168.0.0/24 --add-host=localhost.containers.internal:192.168.0.2 --userns keep-id --entrypoint /bin/cat registry.access.redhat.com/ubi8:latest /etc/resolv.conf
search lab.eng.rdu2.redhat.com
nameserver 10.0.2.3
nameserver 10.11.5.160
nameserver 10.2.70.215

And for podman-4.4.1-10.module+el8.8.0+18555+491facf3, I didn't see any difference in
/etc/resolv.conf with and without the --userns keep-id option; is this expected?

[test@kvm-02-guest08 ~]$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536

1. w/o keep-id
[test@kvm-02-guest08 ~]$ podman --cgroup-manager=cgroupfs run -it --rm --net=slirp4netns:cidr=192.168.0.0/24 --add-host=localhost.containers.internal:192.168.0.2 --entrypoint /bin/cat registry.access.redhat.com/ubi8:latest /etc/resolv.conf
search lab.eng.rdu2.redhat.com
nameserver 192.168.0.3
nameserver 10.11.5.160
nameserver 10.2.70.215

2. w/ keep-id
[test@kvm-02-guest08 ~]$ podman --cgroup-manager=cgroupfs run -it --rm --net=slirp4netns:allow_host_loopback=true,cidr=192.168.0.0/24 --add-host=localhost.containers.internal:192.168.0.2 --userns keep-id --entrypoint /bin/cat registry.access.redhat.com/ubi8:latest /etc/resolv.conf
search lab.eng.rdu2.redhat.com
nameserver 192.168.0.3
nameserver 10.11.5.160
nameserver 10.2.70.215
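The check performed in this comment can be sketched as a small, hypothetical helper (not part of podman's test suite) that parses resolv.conf output and flags any nameserver leaking from the slirp4netns default network when a different CIDR was requested:

```python
import ipaddress

def leaked_default_dns(resolv_text: str, cidr: str) -> list:
    """Return nameservers that fall inside slirp4netns's default
    10.0.2.0/24 network even though a different CIDR was requested
    via --net=slirp4netns (the symptom of this bug)."""
    default_net = ipaddress.ip_network("10.0.2.0/24")
    requested = ipaddress.ip_network(cidr)
    leaks = []
    for line in resolv_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "nameserver":
            addr = ipaddress.ip_address(parts[1])
            if addr in default_net and requested != default_net:
                leaks.append(parts[1])
    return leaks

# resolv.conf contents from the buggy (comment 7, 4.2.0) and fixed runs:
buggy = "search lab.eng.rdu2.redhat.com\nnameserver 10.0.2.3\nnameserver 10.11.5.160\n"
fixed = "search lab.eng.rdu2.redhat.com\nnameserver 192.168.0.3\nnameserver 10.11.5.160\n"
print(leaked_default_dns(buggy, "192.168.0.0/24"))  # ['10.0.2.3']
print(leaked_default_dns(fixed, "192.168.0.0/24"))  # []
</imports>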

Comment 9 Daniel Walsh 2023-04-04 18:04:03 UTC
Looks like this should be fixed in podman 4.4.

Comment 11 Alex Jia 2023-04-06 01:41:32 UTC
(In reply to Daniel Walsh from comment #9)
> Looks like this should be fixed in podman 4.4.

Thank you Daniel for your confirmation!

Comment 15 Alex Jia 2023-05-23 05:39:09 UTC
This bug has been verified on podman-4.4.1-18.module+el8.9.0+18893+0b9f3df9.

[test@kvm-01-guest06 ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.9 Beta (Ootpa)

[test@kvm-01-guest06 ~]$ rpm -q podman runc systemd kernel
podman-4.4.1-18.module+el8.9.0+18893+0b9f3df9.x86_64
runc-1.1.7-1.module+el8.9.0+18893+0b9f3df9.x86_64
systemd-239-75.el8.x86_64
kernel-4.18.0-492.el8.x86_64

[test@kvm-01-guest06 ~]$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536

[test@kvm-01-guest06 ~]$ cat /etc/resolv.conf
# Generated by NetworkManager
search lab.eng.rdu2.redhat.com
nameserver 10.11.5.160
nameserver 10.2.70.215

[test@kvm-01-guest06 ~]$ podman --cgroup-manager=cgroupfs run -it --rm --net=slirp4netns:allow_host_loopback=true,cidr=192.168.0.0/24 --add-host=localhost.containers.internal:192.168.0.2 --userns keep-id --entrypoint /bin/cat registry.access.redhat.com/ubi8:latest /etc/resolv.conf
search lab.eng.rdu2.redhat.com
nameserver 192.168.0.3
nameserver 10.11.5.160
nameserver 10.2.70.215

Comment 17 errata-xmlrpc 2023-11-14 15:29:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6939

