Bug 1752851 - Podman freezes when running MariaDB
Summary: Podman freezes when running MariaDB
Keywords:
Status: CLOSED DUPLICATE of bug 1753328
Alias: None
Product: Fedora
Classification: Fedora
Component: podman
Version: 31
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Giuseppe Scrivano
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-09-17 12:13 UTC by taaem
Modified: 2019-09-18 16:11 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-18 16:11:23 UTC
Type: Bug
Embargoed:


Attachments
Log of podman run with log level debug (10.15 KB, text/plain), 2019-09-17 12:13 UTC, taaem

Description taaem 2019-09-17 12:13:27 UTC
Created attachment 1615831
Log of podman run with log level debug

Description of problem:
The mariadb/server image does not run: when starting the container, the podman command freezes and becomes unresponsive.

Version-Release number of selected component (if applicable):
podman 1.5.1-dev

How reproducible:
Always

Steps to Reproduce:
1. podman run --log-level debug -e MARIADB_ROOT_PASSWORD=test mariadb/server
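(Aside: the attached debug log can be captured with a plain stderr redirect, since podman writes its debug output to stderr; the file name below is arbitrary:
    podman run --log-level debug -e MARIADB_ROOT_PASSWORD=test mariadb/server 2> podman-debug.log
)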

Actual results:
The podman command freezes at "Starting container ... with command [docker-entrypoint.sh mysqld]".

Expected results:
The container should be running and I should be able to use MariaDB.

Additional info:

Comment 1 taaem 2019-09-17 12:18:30 UTC
When running podman as root, everything works normally.
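For reference, the working root case was run roughly like this (via sudo; a root shell behaves the same):
    sudo podman run -e MARIADB_ROOT_PASSWORD=test mariadb/server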

Comment 2 Giuseppe Scrivano 2019-09-17 18:28:45 UTC
That looks like a known issue in fuse-overlayfs.

What version of fuse-overlayfs are you using?
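On Fedora the installed version can be checked with, for example:
    rpm -q fuse-overlayfs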

Comment 3 taaem 2019-09-17 18:58:48 UTC
The installed version is fuse-overlayfs-0.6.2-2.git67a4afe.fc31.x86_64.

Comment 4 Giuseppe Scrivano 2019-09-17 19:29:10 UTC
fuse-overlayfs seems to work fine; it works locally for me, so it must be something else.

When it hangs, what processes do you see lying around?  Is it just Podman?  Are you running with cgroups v2?
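A couple of quick, generic checks (not tied to a specific Podman version): cgroups v2 shows up as cgroup2fs on the unified mount point, and the leftover processes can be listed with ps:
    stat -fc %T /sys/fs/cgroup/    # prints cgroup2fs on cgroups v2, tmpfs on v1
    ps -ef | grep -E 'podman|conmon|crun|fuse-overlayfs|slirp4netns'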

Comment 5 taaem 2019-09-18 15:11:42 UTC
I think I'm running with cgroups v2, since that's the default on Fedora 31, right?
As for the processes, I have:
- podman
- /usr/bin/fuse-overlayfs -o lowerdir=/home/taaem/.local/share/containers/storage/overlay/l/FN254LIGWLT2R3XSYT7GJKGSSN:/home/taaem/.local/share/containers/storage/overlay/l/56PQSBQWUZQJF24HVMF4IS7NKJ:/home/taaem/.local/share/containers/storage/overlay/l/62SZ4G3L6TEDVGBJRUVJMF2GNW:/home/taaem/.local/share/containers/storage/overlay/l/KXYFS2ERDTXP7LX7JZ6ZS3BOPQ:/home/taaem/.local/share/containers/storage/overlay/l/W5CETEN2VDPPNK47GDW2EJV2WF:/home/taaem/.local/share/containers/storage/overlay/l/QSUZP4FS4GUPS4BJAHBCKMWASY:/home/taaem/.local/share/containers/storage/overlay/l/EYYPUXC4GAL4S6VXNRFGFEEE4D:/home/taaem/.local/share/containers/storage/overlay/l/OTO2SXCXH5Q6X2XD2THMJMD5W3:/home/taaem/.local/share/containers/storage/overlay/l/7CJR3U6J6VJBU7JI6ASXPCZU2I:/home/taaem/.local/share/containers/storage/overlay/l/OGGURZGKZQ5VRGFEHGJUGMNXJ3:/home/taaem/.local/share/containers/storage/overlay/l/GEYL7G4S7KR3C35ZLOIT3RFDJQ:/home/taaem/.local/share/containers/storage/overlay/l/42BJ4XQBKE4LY3FHZR5BTDMGM7:/home/taaem/.local/share/containers/storage/overlay/l/2F6FLSFABEMUCRJG4B6V7OG6X4:/home/taaem/.local/share/containers/storage/overlay/l/WYVPJF7EBGIM5R23FHLQOOHG52,upperdir=/home/taaem/.local/share/containers/storage/overlay/884a9c791c291e73a17577a62e395758c3625794ae01b1502e864f7739180244/diff,workdir=/home/taaem/.local/share/containers/storage/overlay/884a9c791c291e73a17577a62e395758c3625794ae01b1502e864f7739180244/work,context="system_u:object_r:container_file_t:s0:c101,c641" /home/taaem/.local/share/containers/storage/overlay/884a9c791c291e73a17577a62e395758c3625794ae01b1502e864f7739180244/merged
- /usr/libexec/podman/conmon --api-version 1 -s -c f826323970672e2e8374e610cedc866d80e2e1f2bc28da1e9a0e36a41349d35e -u f826323970672e2e8374e610cedc866d80e2e1f2bc28da1e9a0e36a41349d35e -r /usr/bin/crun -b /home/taaem/.local/share/containers/storage/overlay-containers/f826323970672e2e8374e610cedc866d80e2e1f2bc28da1e9a0e36a41349d35e/userdata -p /run/user/1000/overlay-containers/f826323970672e2e8374e610cedc866d80e2e1f2bc28da1e9a0e36a41349d35e/userdata/pidfile -l k8s-file:/home/taaem/.local/share/containers/storage/overlay-containers/f826323970672e2e8374e610cedc866d80e2e1f2bc28da1e9a0e36a41349d35e/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog -t --conmon-pidfile /run/user/1000/overlay-containers/f826323970672e2e8374e610cedc866d80e2e1f2bc28da1e9a0e36a41349d35e/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/taaem/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg f826323970672e2e8374e610cedc866d80e2e1f2bc28da1e9a0e36a41349d35e
- mysqld
- /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 -c -e 3 -r 4 4983 tap0
- /usr/bin/crun start f826323970672e2e8374e610cedc866d80e2e1f2bc28da1e9a0e36a41349d35e

So the mysqld process is already there, but it just doesn't do anything.
I upgraded to Fedora 31 from 30, so maybe some old configuration file in my home directory is messing things up?
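If it is a leftover per-user setup, the places worth inspecting would be the standard rootless locations (just a guess on my side; I'd look before deleting anything):
    ls ~/.config/containers/         # per-user container configuration, if present
    ls ~/.local/share/containers/    # rootless image and container storage (the paths seen in the fuse-overlayfs mount above)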

Comment 6 Giuseppe Scrivano 2019-09-18 15:50:32 UTC
Looks like a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1753328

Comment 7 taaem 2019-09-18 16:11:23 UTC
Yes, that's the same issue.

*** This bug has been marked as a duplicate of bug 1753328 ***

