This bug has been migrated to another issue-tracking site. It has been closed here and may no longer be monitored.

If you would like to receive updates on this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September, as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user-management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED" with resolution "MIGRATED", and "MigratedToJIRA" will be set in "Keywords". The link to the successor Jira issue will be found under "Links", will have a little "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2082094 - Cgroup v1 initialization causes NullPointerException when cgroup path does not start with the mount root
Summary: Cgroup v1 initialization causes NullPointerException when cgroup path does not start with the mount root
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: java-17-openjdk
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Severin Gehwolf
QA Contact: OpenJDK QA
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-05 11:46 UTC by Lin Gao
Modified: 2023-09-12 22:39 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-12 22:39:22 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none

Links
System                 ID               Private  Priority  Status    Summary  Last Updated
Red Hat Issue Tracker  RHEL-3436        0        None      Migrated  None     2023-09-12 22:39:18 UTC
Red Hat Issue Tracker  RHELPLAN-121147  0        None      None      None     2022-05-05 12:16:43 UTC
openjdk bug system     JDK-8286212      0        None      None      None     2022-05-05 18:31:34 UTC

Description Lin Gao 2022-05-05 11:46:43 UTC
Description of problem:


When I ran the WildFly testsuite on JDK 17.0.2 within a podman container, I got an NPE for all tests; they all work fine when I run them on bare metal:


[ERROR] Failed to execute goal org.wildfly.plugins:wildfly-maven-plugin:2.0.1.Final:execute-commands (apply-elytron) on project wildfly-ts-integ-smoke: Failed to execute commands: Exception in thread "main"
 java.lang.NullPointerException
[ERROR]         at java.base/java.util.Objects.requireNonNull(Objects.java:208)
[ERROR]         at java.base/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:263)
[ERROR]         at java.base/java.nio.file.Path.of(Path.java:147)
[ERROR]         at java.base/java.nio.file.Paths.get(Paths.java:69)
[ERROR]         at java.base/jdk.internal.platform.CgroupUtil.lambda$readStringValue$1(CgroupUtil.java:67)
[ERROR]         at java.base/java.security.AccessController.doPrivileged(AccessController.java:569)
[ERROR]         at java.base/jdk.internal.platform.CgroupUtil.readStringValue(CgroupUtil.java:69)
[ERROR]         at java.base/jdk.internal.platform.CgroupSubsystemController.getStringValue(CgroupSubsystemController.java:65)
[ERROR]         at java.base/jdk.internal.platform.CgroupSubsystemController.getLongValue(CgroupSubsystemController.java:124)
[ERROR]         at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getLongValue(CgroupV1Subsystem.java:175)
[ERROR]         at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getHierarchical(CgroupV1Subsystem.java:149)
[ERROR]         at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.initSubSystem(CgroupV1Subsystem.java:84)
[ERROR]         at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getInstance(CgroupV1Subsystem.java:60)
[ERROR]         at java.base/jdk.internal.platform.CgroupSubsystemFactory.create(CgroupSubsystemFactory.java:116)
[ERROR]         at java.base/jdk.internal.platform.CgroupMetrics.getInstance(CgroupMetrics.java:167)
[ERROR]         at java.base/jdk.internal.platform.SystemMetrics.instance(SystemMetrics.java:29)
[ERROR]         at java.base/jdk.internal.platform.Metrics.systemMetrics(Metrics.java:58)
[ERROR]         at java.base/jdk.internal.platform.Container.metrics(Container.java:43)
[ERROR]         at jdk.management/com.sun.management.internal.OperatingSystemImpl.<init>(OperatingSystemImpl.java:182)
[ERROR]         at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl.getOperatingSystemMXBean(PlatformMBeanProviderImpl.java:280)
[ERROR]         at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl$3.nameToMBeanMap(PlatformMBeanProviderImpl.java:199)
[ERROR]         at java.management/java.lang.management.ManagementFactory.lambda$getPlatformMBeanServer$0(ManagementFactory.java:488)
[ERROR]         at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
[ERROR]         at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
[ERROR]         at java.base/java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1779)
[ERROR]         at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
[ERROR]         at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
[ERROR]         at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
[ERROR]         at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
[ERROR]         at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
[ERROR]         at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
[ERROR]         at java.management/java.lang.management.ManagementFactory.getPlatformMBeanServer(ManagementFactory.java:489)
[ERROR]         at org.jboss.modules.ModuleLoader$RealMBeanReg$1.run(ModuleLoader.java:1258)
[ERROR]         at org.jboss.modules.ModuleLoader$RealMBeanReg$1.run(ModuleLoader.java:1256)
[ERROR]         at java.base/java.security.AccessController.doPrivileged(AccessController.java:318)
[ERROR]         at org.jboss.modules.ModuleLoader$RealMBeanReg.<init>(ModuleLoader.java:1256)
[ERROR]         at org.jboss.modules.ModuleLoader$TempMBeanReg.installReal(ModuleLoader.java:1240)
[ERROR]         at org.jboss.modules.ModuleLoader.installMBeanServer(ModuleLoader.java:273)
[ERROR]         at org.jboss.modules.Main.main(Main.java:605)


Version-Release number of selected component (if applicable):

The environment in which I reproduced the problem:

Java version:
[jenkins@testjenkins ~]$ java -version
openjdk version "17.0.2" 2022-01-18
OpenJDK Runtime Environment 21.9 (build 17.0.2+8)
OpenJDK 64-Bit Server VM 21.9 (build 17.0.2+8, mixed mode, sharing)

RHEL 8.5:
[jenkins@testjenkins ~]$ uname -a
Linux testjenkins 4.18.0-348.el8.x86_64 #1 SMP Mon Oct 4 12:17:22 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
[jenkins@testjenkins ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.5 (Ootpa)

podman version:
[jenkins@testjenkins ~]$ podman --version
podman version 3.4.2


In this case the /proc/self/cgroup file in the container has the following line:

9:memory:/user.slice/user-1000.slice/session-3.scope

while the /proc/self/mountinfo file in the container has the following line:

941 931 0:36 /user.slice/user-1000.slice/session-50.scope /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,seclabel,memory
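
Note the mismatch: the cgroup path from /proc/self/cgroup ends in session-3.scope, while the mount root from mountinfo ends in session-50.scope, so the cgroup path does not start with the mount root. A minimal Java sketch of the consequence follows (an assumption for illustration: resolvePath() is a simplified stand-in for the JDK's cgroup v1 path adjustment, not the actual CgroupV1SubsystemController code):

import java.nio.file.Paths;

public class CgroupPathMismatch {

    // Hypothetical, simplified stand-in for the JDK's cgroup v1 path
    // adjustment: rebase the cgroup path from /proc/self/cgroup onto the
    // controller's mount point, using the mount root from /proc/self/mountinfo.
    static String resolvePath(String mountRoot, String mountPoint, String cgroupPath) {
        if (cgroupPath.startsWith(mountRoot)) {
            return mountPoint + cgroupPath.substring(mountRoot.length());
        }
        return null; // mismatch: no usable controller path is derived
    }

    public static void main(String[] args) {
        String mountRoot  = "/user.slice/user-1000.slice/session-50.scope"; // from mountinfo
        String mountPoint = "/sys/fs/cgroup/memory";                        // from mountinfo
        String cgroupPath = "/user.slice/user-1000.slice/session-3.scope";  // from /proc/self/cgroup

        String path = resolvePath(mountRoot, mountPoint, cgroupPath);
        System.out.println("controller path = " + path); // prints "controller path = null"

        // The JDK later does the equivalent of the following, and
        // Paths.get(null, ...) throws the NullPointerException seen in the stack trace above:
        Paths.get(path, "memory.limit_in_bytes");
    }
}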


For more detailed information, please refer to https://gist.github.com/gaol/4d96eace8290e6549635fdc0ea41d0b4

It is also related to https://bugs.openjdk.java.net/browse/JDK-8272124

Comment 1 Severin Gehwolf 2022-05-05 12:06:41 UTC
@lgao To get a better picture of the actual system where this is happening: is this RHEL 9 with a UBI8 OpenJDK container?

Comment 2 Lin Gao 2022-05-05 12:53:50 UTC
The container was built from:

FROM registry.access.redhat.com/ubi8/ubi:latest

the host is RHEL 8.5.

Comment 3 Severin Gehwolf 2022-05-05 13:04:26 UTC
(In reply to Lin Gao from comment #2)
> The container was built from:
> 
> FROM registry.access.redhat.com/ubi8/ubi:latest
> 
> the host is RHEL 8.5.

OK, but something must be special to get this kind of setup. I wonder what it is. How do you run the container? How do you run the tests within it? Is there something else involved?

Is it really just this?

on a RHEL 8.5 host:
$ podman run --rm -ti <foo-container>:<tag>
[root@ff7767c0dcb7 /]# mvn clean verify

Comment 4 Lin Gao 2022-05-05 13:26:56 UTC
Would https://gist.github.com/gaol/4d96eace8290e6549635fdc0ea41d0b4#file-podman_container_inspect-json answer it? :)

That is the metadata produced by podman inspect; it contains the 'CreateCommand' section showing how the container was started.

Comment 5 Severin Gehwolf 2022-05-05 16:44:17 UTC
Not quite, but it'll have to do. I was hoping to understand how these mismatches between mountinfo and /proc/self/cgroup can happen in practice. Either way, there shouldn't be an NPE. I'll bring this bug and a fix upstream.
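
The upstream report is tracked as JDK-8286212 (see the Links section above). As a minimal sketch of the defensive idea, assuming only that the controller path may end up null in the mismatch case (this is illustrative, not the actual upstream patch):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public final class SafeCgroupRead {

    // Read the first line of <controllerPath>/<param>, tolerating a null
    // controller path (the mismatch case) instead of throwing an NPE.
    static String readStringValue(String controllerPath, String param) throws IOException {
        if (controllerPath == null) {
            return null; // report "metric unavailable" rather than crash
        }
        Path file = Paths.get(controllerPath, param);
        try (Stream<String> lines = Files.lines(file)) {
            return lines.findFirst().orElse(null);
        }
    }

    public static void main(String[] args) throws IOException {
        // With a null path this now returns null instead of throwing:
        System.out.println(readStringValue(null, "memory.limit_in_bytes"));
    }
}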

Comment 6 Lin Gao 2022-05-06 05:29:53 UTC
Thanks. Unfortunately, I have no idea why these mismatches happen.

Maybe the following information can help:

* The full command used to start the podman container:

podman run             --name automaton-slave-eap-7.4.x-testsuite-23 --userns=keep-id -u 1000:1000             --add-host=olympus:10.88.0.1              --rm -v '/home/jenkins/current//jobs/eap-7.4.x-build/builds/28/archive:/parent_job/:ro'                --workdir /var/jenkins_home/workspace/eap-7.4.x-testsuite              -v /home/jenkins/current/workspace/eap-7.4.x-testsuite:/var/jenkins_home/workspace/eap-7.4.x-testsuite:rw -v /opt:/opt:ro              -v /home/jenkins/.ssh/:/var/jenkins_home/.ssh/:ro             -v /home/jenkins/.gitconfig:/var/jenkins_home/.gitconfig:ro             -v /home/jenkins/.netrc:/var/jenkins_home/.netrc:ro             -d localhost/automatons '/var/jenkins_home/workspace/eap-7.4.x-testsuite/hera/wait.sh'



* The podman process:

jenkins   189582       1  0 13:21 ?        00:00:00 /usr/bin/conmon --api-version 1 -c db5b6c6e60784e6c9b743bafb7f55abb86836b6f346d3d8ff5e739b9409f185f -u db5b6c6e60784e6c9b743bafb7f55abb86836b6f346d3d8ff5e739b9409f185f -r /usr/bin/runc -b /home/jenkins/.local/share/containers/storage/overlay-containers/db5b6c6e60784e6c9b743bafb7f55abb86836b6f346d3d8ff5e739b9409f185f/userdata -p /run/user/1000/containers/overlay-containers/db5b6c6e60784e6c9b743bafb7f55abb86836b6f346d3d8ff5e739b9409f185f/userdata/pidfile -n automaton-slave-eap-7.4.x-testsuite-23 --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -l k8s-file:/home/jenkins/.local/share/containers/storage/overlay-containers/db5b6c6e60784e6c9b743bafb7f55abb86836b6f346d3d8ff5e739b9409f185f/userdata/ctr.log --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/overlay-containers/db5b6c6e60784e6c9b743bafb7f55abb86836b6f346d3d8ff5e739b9409f185f/userdata/oci-log --conmon-pidfile /run/user/1000/containers/overlay-containers/db5b6c6e60784e6c9b743bafb7f55abb86836b6f346d3d8ff5e739b9409f185f/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/jenkins/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg db5b6c6e60784e6c9b743bafb7f55abb86836b6f346d3d8ff5e739b9409f185f



* The process inside the container to run the tests:

jenkins       96      43  0 05:21 pts/0    00:00:01 /opt/oracle/jdk-17.0.2/bin/java -Dmaven.wagon.http.ssl.insecure=true -Dhttps.protocols=TLSv1.2 -Xmx2048m -Xms1024m -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.maxPerRoute=3 -classpath /var/jenkins_home/workspace/eap-7.4.x-testsuite/workdir/.mvn/wrapper/maven-wrapper.jar -Dmaven.home=/var/jenkins_home/workspace/eap-7.4.x-testsuite -Dmaven.multiModuleProjectDirectory=/var/jenkins_home/workspace/eap-7.4.x-testsuite/workdir org.apache.maven.wrapper.MavenWrapperMain -Dmaven.user.home=/var/jenkins_home/workspace/eap-7.4.x-testsuite/workdir/tools install -DallTests -fae -fae -Delytron -DnoCompile -Dsurefire.forked.process.timeout=90000 -Dskip-download-sources -B -Dmaven.test.failure.ignore= -Dsurefire.rerunFailingTestsCount=0 -Dsurefire.memory.args=-Xmx1024m -s /opt/tools/settings.xml

Comment 7 RHEL Program Management 2023-09-12 22:38:13 UTC
Issue migration from Bugzilla to Jira is in progress at this time. This will be the last message copied from the Bugzilla bug into Jira.

Comment 8 RHEL Program Management 2023-09-12 22:39:22 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between the systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of two footprints next to it and will begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. with a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

