Bug 711748 - dbus-daemon runs but connections to /var/run/dbus/system_bus_socket are refused
Status: CLOSED DUPLICATE of bug 709319
Product: Fedora
Classification: Fedora
Component: util-linux
Version: 15
Hardware: i686 Linux
Priority: unspecified
Severity: high
Assigned To: Karel Zak
QA Contact: Fedora Extras Quality Assurance
Reported: 2011-06-08 06:58 EDT by Javier Perez
Modified: 2012-04-13 10:06 EDT
CC: 12 users

Doc Type: Bug Fix
Last Closed: 2011-07-05 06:06:24 EDT


Attachments
Yum history Info of upgrade from FC14 to FC15 (112.43 KB, text/plain)
2011-06-09 06:33 EDT, Javier Perez
dmesg (28.34 KB, text/plain)
2011-06-10 04:27 EDT, Javier Perez
/var/log/messages file (215.75 KB, text/plain)
2011-06-10 04:28 EDT, Javier Perez

Description Javier Perez 2011-06-08 06:58:06 EDT
Description of problem:
systemd-analyze is not working. When I run the command it crashes; it looks like it cannot connect to the system bus.

Version-Release number of selected component (if applicable):
systemd 26-2.fc15.i686

How reproducible:
Always

Steps to Reproduce:
1. Run systemd-analyze
2. Watch the command crash and spew errors

  
Actual results:

[root@bloody ~]# systemd-analyze
Traceback (most recent call last):
  File "/usr/bin/systemd-analyze", line 78, in <module>
    bus = dbus.SystemBus()
  File "/usr/lib/python2.7/site-packages/dbus/_dbus.py", line 202, in __new__
    private=private)
  File "/usr/lib/python2.7/site-packages/dbus/_dbus.py", line 108, in __new__
    bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop)
  File "/usr/lib/python2.7/site-packages/dbus/bus.py", line 125, in __new__
    bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /var/run/dbus/system_bus_socket: Connection refused



Expected results:

System Analysis info

Additional info:
Comment 1 Michal Schmidt 2011-06-08 07:15:44 EDT
What's the output of:
systemctl status dbus.socket dbus.service
Comment 2 Javier Perez 2011-06-09 02:33:01 EDT
This is the output of systemctl status dbus.socket dbus.service

[root@bloody ~]# systemctl status dbus.socket dbus.service
dbus.socket - D-Bus System Message Bus Socket
          Loaded: loaded (/lib/systemd/system/dbus.socket)
          Active: active (running) since Wed, 08 Jun 2011 01:02:16 -0500; 24h ago
          CGroup: name=systemd:/system/dbus.socket

dbus.service - D-Bus System Message Bus
          Loaded: loaded (/lib/systemd/system/dbus.service)
          Active: active (running) since Wed, 08 Jun 2011 01:02:17 -0500; 24h ago
        Main PID: 585 (dbus-daemon)
          CGroup: name=systemd:/system/dbus.service
                  └─585 /bin/dbus-daemon --system --address=systemd: --nofork --systemd-activ...
Comment 3 Michal Schmidt 2011-06-09 03:33:57 EDT
The output looks normal.

Maybe something went wrong during the yum update from F14? (It was not mentioned in this bug report, but I found your message to the users mailing list:
http://lists.fedoraproject.org/pipermail/users/2011-June/399073.html)
Use "yum history" to find the ID of the upgrade transaction and "yum history info $ID" to check for any errors from the RPM scriptlets.

Try doing "yum distro-sync" to check and fix a possible mixup between F14 and F15 packages.

The access may be blocked by SELinux. Try "setenforce 0" to put SELinux into permissive mode temporarily.
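(For reference, a minimal sketch of the SELinux check suggested above; the fallback string is an assumption for non-SELinux systems, where getenforce does not exist.)

```shell
# Report the current SELinux mode. getenforce is absent on systems
# built without SELinux, so fall back to "unavailable" there.
selinux_mode=$(getenforce 2>/dev/null || echo unavailable)
echo "SELinux mode: $selinux_mode"
# Temporarily switching to permissive mode (as root) would then be:
#   setenforce 0
```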
Comment 4 Javier Perez 2011-06-09 06:33:37 EDT
Created attachment 503865 [details]
Yum history Info of upgrade from FC14 to FC15

Output of Yum History Info $id of the upgrade from FC14 to FC15
Comment 5 Javier Perez 2011-06-09 06:35:15 EDT
I reviewed Yum history and did not find any obvious error.
All I could find in the scriptlet output was:
  23 FATAL: Could not load /lib/modules/2.6.35.12-90.fc14.i686.PAE/modules.dep: No such file or directory
  24 restorecond: unrecognized service

I am attaching the file.


I do not think SELinux is an issue. I have a selinux=0 instruction on the kernel invocation:

title Fedora (2.6.38.7-30.fc15.i686.PAE)
root (hd0,0)
kernel /vmlinuz-2.6.38.7-30.fc15.i686.PAE ro root=UUID=8e65d7f5-bc96-4dd9-a681-7b84242d3a87 nf_conntrack.acct=1 LANG=en_US.UTF-8 KEYTABLE=us SYSFONT=latarcyrheb-sun16 selinux=0
initrd /initramfs-2.6.38.7-30.fc15.i686.PAE.img



Here is the result of yum distro-sync
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : liberation-fonts-common-1.07.0-1.fc15.noarch              1/28
  Installing : 1:cups-libs-1.4.6-15.fc15.i686                            2/28
  Updating   : poppler-0.16.6-1.fc15.i686                                3/28
  Updating   : poppler-glib-0.16.6-1.fc15.i686                           4/28
  Updating   : poppler-utils-0.16.6-1.fc15.i686                          5/28
  Installing : 1:cups-1.4.6-15.fc15.i686                                 6/28
  Updating   : sssd-client-1.5.8-1.fc15.i686                             7/28
  Updating   : liberation-serif-fonts-1.07.0-1.fc15.noarch               8/28
  Updating   : liberation-sans-fonts-1.07.0-1.fc15.noarch                9/28
  Updating   : liberation-mono-fonts-1.07.0-1.fc15.noarch               10/28
  Updating   : sssd-1.5.8-1.fc15.i686                                   11/28
  Installing : 1:cups-php-1.4.6-15.fc15.i686                            12/28
  Installing : 1:cups-lpd-1.4.6-15.fc15.i686                            13/28
  Updating   : poppler-qt-0.16.6-1.fc15.i686                            14/28
  Cleanup    : 1:cups-lpd-1.4.6-19.fc15.i686                            15/28
  Cleanup    : 1:cups-php-1.4.6-19.fc15.i686                            16/28
  Cleanup    : 1:cups-1.4.6-19.fc15.i686                                17/28
  Cleanup    : poppler-utils-0.16.5-1.fc15.i686                         18/28
  Cleanup    : poppler-glib-0.16.5-1.fc15.i686                          19/28
  Cleanup    : liberation-mono-fonts-1.06.0.20100721-5.fc15.noarch      20/28
  Cleanup    : sssd-1.5.7-1.fc15.i686                                   21/28
  Cleanup    : liberation-sans-fonts-1.06.0.20100721-5.fc15.noarch      22/28
  Cleanup    : liberation-serif-fonts-1.06.0.20100721-5.fc15.noarch     23/28
  Cleanup    : poppler-qt-0.16.5-1.fc15.i686                            24/28
  Cleanup    : poppler-0.16.5-1.fc15.i686                               25/28
  Cleanup    : liberation-fonts-common-1.06.0.20100721-5.fc15.noarch    26/28
  Cleanup    : sssd-client-1.5.7-1.fc15.i686                            27/28
  Cleanup    : 1:cups-libs-1.4.6-19.fc15.i686                           28/28
Unable to connect to dbus

Removed:
  cups.i686 1:1.4.6-19.fc15        cups-libs.i686 1:1.4.6-19.fc15    
  cups-lpd.i686 1:1.4.6-19.fc15        cups-php.i686 1:1.4.6-19.fc15

Installed:
  cups.i686 1:1.4.6-15.fc15        cups-libs.i686 1:1.4.6-15.fc15    
  cups-lpd.i686 1:1.4.6-15.fc15        cups-php.i686 1:1.4.6-15.fc15

Updated:
  liberation-fonts-common.noarch 0:1.07.0-1.fc15    
  liberation-mono-fonts.noarch 0:1.07.0-1.fc15    
  liberation-sans-fonts.noarch 0:1.07.0-1.fc15
  liberation-serif-fonts.noarch 0:1.07.0-1.fc15     
  poppler.i686 0:0.16.6-1.fc15                    
  poppler-glib.i686 0:0.16.6-1.fc15
  poppler-qt.i686 0:0.16.6-1.fc15                   
  poppler-utils.i686 0:0.16.6-1.fc15              
  sssd.i686 0:1.5.8-1.fc15
  sssd-client.i686 0:1.5.8-1.fc15

Complete!

==================================

I have even done some group updates to make sure I catch anything yum did not upgrade:

[root@bloody ~]# yum groupinstall 'Administration Tools' Base 'Hardware Support' Java Editors Milkymist  'Windows File Server' 'Web Server'  'Text-based Internet' '' 'Sound and Video' 'Server Configuration Tools' 'Network Servers' 'Mail Server' 'MySQL Database' Fonts

[root@bloody ~]# yum groupremove 'GNOME Desktop Environment'

[root@bloody ~]# yum groupinstall 'X Window System' 'Window Managers' 'KDE Software Compilation'

The error still remains with the same message.

BTW, I am running this server at init level 3. But even when I went to level 5 and ran the command from a CLI window, I got the same error.
Comment 6 Javier Perez 2011-06-09 06:39:25 EDT
BTW, after yum distro-sync I re-updated cups to the next version because I had bug 710689 to contend with. I am not sure if it is related, or whether it could shed light on this bug as well.
Comment 7 Michal Schmidt 2011-06-09 06:40:19 EDT
Does this command fail with the same error?:
python -c 'import dbus; print dbus.SystemBus()'

And this one?:
dbus-monitor --system
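(A quick probe independent of dbus-python, assuming the standard Fedora socket path from the traceback: if the socket file exists yet connections are refused, the daemon that created it is no longer reachable at that path.)

```shell
# Check whether the system bus socket file is even present. A present
# socket with "Connection refused" means nothing is accepting on it.
SOCK=/var/run/dbus/system_bus_socket
if [ -S "$SOCK" ]; then sock_state=present; else sock_state=absent; fi
echo "socket $SOCK is $sock_state"
```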
Comment 8 Javier Perez 2011-06-09 08:11:30 EDT
Yes, it fails and it seems to be the same error:

[root@bloody ~]# python -c 'import dbus; print dbus.SystemBus()'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/dbus/_dbus.py", line 202, in __new__
    private=private)
  File "/usr/lib/python2.7/site-packages/dbus/_dbus.py", line 108, in __new__
    bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop)
  File "/usr/lib/python2.7/site-packages/dbus/bus.py", line 125, in __new__
    bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /var/run/dbus/system_bus_socket: Connection refused

dbus-monitor fails to connect:

[root@bloody ~]# dbus-monitor --system
Failed to open connection to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: Connection refused
Comment 9 Michal Schmidt 2011-06-09 08:21:22 EDT
(In reply to comment #2)
> dbus.service - D-Bus System Message Bus
>           Loaded: loaded (/lib/systemd/system/dbus.service)
>           Active: active (running) since Wed, 08 Jun 2011 01:02:17 -0500; 24h
> ago
>         Main PID: 585 (dbus-daemon)
>           CGroup: name=systemd:/system/dbus.service
>                   └─585 /bin/dbus-daemon --system --address=systemd: --nofork
> --systemd-activ...

After checking this once more, I see there is something suspicious about it after all: there is no indication of the ExecStartPre commands having run.

For instance, on my system I have:

dbus.service - D-Bus System Message Bus
	  Loaded: loaded (/lib/systemd/system/dbus.service)
	  Active: active (running) since Thu, 09 Jun 2011 10:37:59 +0200; 3h 38min ago
	 Process: 933 ExecStartPre=/bin/rm -f /var/run/messagebus.pid (code=exited, status=0/SUCCESS)
	 Process: 930 ExecStartPre=/bin/dbus-uuidgen --ensure (code=exited, status=0/SUCCESS)
	Main PID: 954 (dbus-daemon)
	  CGroup: name=systemd:/system/dbus.service
[...]


Does this report anything?:
rpm -Va 'dbus*'
Comment 10 Javier Perez 2011-06-09 22:56:23 EDT
No, it does not report anything:

[root@bloody ~]# rpm -Va 'dbus*'
[root@bloody ~]#

On the other hand, if I do yum list 'dbus*' I get the following:

[root@bloody ~]# yum list 'dbus*'
Loaded plugins: fastestmirror, langpacks, presto, refresh-packagekit
Loading mirror speeds from cached hostfile
 * fedora: mirror.umoss.org
 * rpmfusion-free: mirrors.cat.pdx.edu
 * rpmfusion-free-updates: mirrors.cat.pdx.edu
 * rpmfusion-nonfree: mirrors.cat.pdx.edu
 * rpmfusion-nonfree-updates: mirrors.cat.pdx.edu
 * updates: mirror.ispbrasil.com.br
Installed Packages
dbus.i686                       1:1.4.6-4.fc15                           @fedora
dbus-glib.i686                  0.92-2.fc15                              @fedora
dbus-libs.i686                  1:1.4.6-4.fc15                           @fedora
dbus-python.i686                0.83.0-8.fc15                            @fedora
dbus-x11.i686                   1:1.4.6-4.fc15                           @fedora
dbusmenu-qt.i686                0.6.3-3.fc15                             @fedora
Available Packages
dbus-c++.i686                   0.5.0-0.12.20090203git13281b3.fc15       fedora
dbus-c++-devel.i686             0.5.0-0.12.20090203git13281b3.fc15       fedora
dbus-cxx.i686                   0.7.0-2.fc14.1                           fedora
dbus-cxx-devel.i686             0.7.0-2.fc14.1                           fedora
dbus-cxx-doc.noarch             0.7.0-2.fc14.1                           fedora
dbus-cxx-glibmm.i686            0.7.0-2.fc14.1                           fedora
dbus-cxx-glibmm-devel.i686      0.7.0-2.fc14.1                           fedora
dbus-cxx-tools.i686             0.7.0-2.fc14.1                           fedora
dbus-devel.i686                 1:1.4.6-4.fc15                           fedora
dbus-doc.noarch                 1:1.4.6-4.fc15                           fedora
dbus-glib-devel.i686            0.92-2.fc15                              fedora
dbus-java.noarch                2.7-6.fc15                               fedora
dbus-java-javadoc.noarch        2.7-6.fc15                               fedora
dbus-python-devel.i686          0.83.0-8.fc15                            fedora
dbus-qt3.i686                   0.9-5.fc15                               fedora
dbus-qt3-devel.i686             0.9-5.fc15                               fedora
dbus-sharp.i686                 0.63-17.fc15                             fedora
dbus-sharp-devel.i686           0.63-17.fc15                             fedora
dbusmenu-qt-devel.i686          0.6.3-3.fc15                             fedora
[root@bloody ~]#
Comment 11 Michal Schmidt 2011-06-10 02:04:33 EDT
Please boot with 'log_buf_len=1M systemd.log_level=debug systemd.log_target=kmsg' and attach the output of 'dmesg'.
Comment 12 Javier Perez 2011-06-10 04:27:25 EDT
Created attachment 504046 [details]
dmesg

dmesg file
Comment 13 Javier Perez 2011-06-10 04:28:19 EDT
Created attachment 504047 [details]
/var/log/messages file

/var/log/messages

I trimmed it to show the last reboot
Comment 14 Javier Perez 2011-06-10 04:29:59 EDT
Did the changes and rebooted the system.
I attached both dmesg and /var/log/messages

I did not see much difference in dmesg, but there was a lot more info in /var/log/messages. I trimmed the messages file to show everything from the last reboot only.
Comment 15 Michal Schmidt 2011-06-10 07:28:15 EDT
The dmesg attachment shows a boot of an old kernel. When asking for output of 'dmesg', I meant the command, not a (possibly stale) file called 'dmesg'.

However, the /var/log/messages you attached has the information I wanted to see, thanks.

The log shows no indication of any problem with the start of dbus.service. The ExecStartPre commands were run too.

Could you check what file descriptors the dbus-daemon process has open? Use
  lsof -p ...

I don't see systemd doing anything wrong here. Reassigning to dbus.

One thing I noticed in the log is that you are running quite a lot of services with only 248 MB RAM. I wonder if dbus-daemon could have got stuck because of that. What's your /proc/sys/vm/overcommit_memory setting?
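(A sketch of reading the setting Michal asks about; the meanings are from proc(5): 0 = heuristic overcommit, the kernel default; 1 = always overcommit; 2 = strict accounting against CommitLimit. The fallback is an assumption for systems without /proc.)

```shell
# Read the kernel's memory overcommit policy.
f=/proc/sys/vm/overcommit_memory
overcommit=$(cat "$f" 2>/dev/null || echo unavailable)
echo "vm.overcommit_memory = $overcommit"
```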
Comment 16 Javier Perez 2011-06-10 23:35:53 EDT
These are the files opened by dbus-daemon:

[root@bloody ~]# ps ax |grep dbus
  595 ?        Ss     0:00 /bin/dbus-daemon --system --address=systemd: --nofork --systemd-activation
19378 pts/0    S+     0:00 grep --color=auto dbus
[root@bloody ~]# lsof -p 595
COMMAND   PID USER   FD   TYPE     DEVICE SIZE/OFF  NODE NAME
dbus-daem 595 dbus  cwd    DIR        8,2     4096     2 /
dbus-daem 595 dbus  rtd    DIR        8,2     4096     2 /
dbus-daem 595 dbus  txt    REG        8,2   374792 25827 /bin/dbus-daemon
dbus-daem 595 dbus  mem    REG        8,2    19464  6660 /lib/libcap-ng.so.0.0.0
dbus-daem 595 dbus  mem    REG        8,2    42060  1017 /lib/librt-2.14.so
dbus-daem 595 dbus  mem    REG        8,2  1855448   201 /lib/libc-2.14.so
dbus-daem 595 dbus  mem    REG        8,2    54456 36212 /lib/libnss_files-2.14.so
dbus-daem 595 dbus  mem    REG        8,2   164592  5523 /lib/libexpat.so.1.5.2
dbus-daem 595 dbus  mem    REG        8,2   142604   191 /lib/ld-2.14.so
dbus-daem 595 dbus  mem    REG        8,2   122652  1039 /lib/libselinux.so.1
dbus-daem 595 dbus  mem    REG        8,2   131624   609 /lib/libpthread-2.14.so
dbus-daem 595 dbus  mem    REG        8,2    19884   809 /lib/libdl-2.14.so
dbus-daem 595 dbus  mem    REG        8,2    93252  1051 /lib/libaudit.so.1.0.0
dbus-daem 595 dbus    0r   CHR        1,3      0t0  4337 /dev/null
dbus-daem 595 dbus    1w   CHR        1,3      0t0  4337 /dev/null
dbus-daem 595 dbus    2w   CHR        1,3      0t0  4337 /dev/null
dbus-daem 595 dbus    3u  unix 0xcbbfa6c0      0t0 10427 /var/run/dbus/system_bus_socket
dbus-daem 595 dbus    4r  0000        0,9        0  4335 anon_inode
dbus-daem 595 dbus    5u  sock        0,6      0t0 11398 can't identify protocol
dbus-daem 595 dbus    6u  unix 0xcb931200      0t0 11399 socket
dbus-daem 595 dbus    7u  unix 0xcb931440      0t0 11400 socket
dbus-daem 595 dbus    8u  unix 0xcb8d5d40      0t0 11401 /var/run/dbus/system_bus_socket
dbus-daem 595 dbus    9u  unix 0xcb8d58c0      0t0 11402 /var/run/dbus/system_bus_socket
[root@bloody ~]#

Regarding overcommit:
[root@bloody ~]# cat /proc/sys/vm/overcommit_memory
0

It gives me a 0 (zero).
Comment 17 Javier Perez 2011-06-29 02:22:30 EDT
I just updated to systemd 26-5.fc15 and I still have the same error:

Running Transaction
  Updating   : systemd-units-26-5.fc15.i686               1
  Updating   : systemd-26-5.fc15.i686                     2
  Updating   : systemd-sysv-26-5.fc15.i686                3
  Updating   : ModemManager-0.4.997-1.fc15.i686           4
  Cleanup    : systemd-sysv-26-4.fc15.i686                5
  Cleanup    : systemd-26-4.fc15.i686                     6
  Cleanup    : systemd-units-26-4.fc15.i686               7
  Cleanup    : ModemManager-0.4-8.git20110427.fc15.i686   8
Unable to connect to dbus

Updated:
  ModemManager.i686 0:0.4.997-1.fc15     systemd.i686 0:26-5.fc15     systemd-sysv.i686 0:26-5.fc15     systemd-units.i686 0:26-5.fc15

Complete!
[root@bloody ~]# systemd-analyze
Traceback (most recent call last):
  File "/usr/bin/systemd-analyze", line 78, in <module>
    bus = dbus.SystemBus()
  File "/usr/lib/python2.7/site-packages/dbus/_dbus.py", line 202, in __new__
    private=private)
  File "/usr/lib/python2.7/site-packages/dbus/_dbus.py", line 108, in __new__
    bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop)
  File "/usr/lib/python2.7/site-packages/dbus/bus.py", line 125, in __new__
    bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /var/run/dbus/system_bus_socket: Connection refused
[root@bloody ~]# systemd-analyze -blame
Comment 18 Michal Schmidt 2011-07-02 21:02:49 EDT
Could you attach your /etc/fstab?
Comment 19 Michal Schmidt 2011-07-02 21:03:50 EDT
... and /proc/self/mountinfo too please
Comment 20 Javier Perez 2011-07-03 09:40:27 EDT
============= /etc/fstab  ========================
[root@bloody ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Jul 18 20:29:41 2009
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or vol_id(8) for more info
#
UUID=8a388a8e-8841-4176-946e-5306efd249b3 /usr/local              ext4    defaults        1 2
UUID=b104d523-22c1-483c-adc3-d3570971b730 /boot                   ext3    defaults        1 2
UUID=8e65d7f5-bc96-4dd9-a681-7b84242d3a87 /                       ext4    defaults        1 1
UUID=bee58a09-abd6-436d-ab4a-22297550f813 swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620        0 0
#devpts options modified by setup update to fix #515521 ugly way
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/usr/local/links/var    /var                    none    bind            0 0
/usr/local/links/home   /home                   none    bind            0 0

=============================================================
==================== /proc/self/mountinfo ===================
=============================================================

[root@bloody ~]# cat /proc/self/mountinfo
15 21 0:3 / /proc rw,relatime - proc /proc rw
16 21 0:15 / /sys rw,relatime - sysfs /sys rw
17 21 0:5 / /dev rw,nosuid,relatime - devtmpfs udev rw,size=113328k,nr_inodes=28332,mode=755
18 17 0:10 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
19 17 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw
20 21 0:17 / /run rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
21 1 8:2 / / rw,relatime - ext4 /dev/sda2 rw,barrier=1,data=ordered
22 16 0:18 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
23 22 0:19 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
24 22 0:20 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpuset
25 22 0:21 / /sys/fs/cgroup/ns rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,ns
26 22 0:22 / /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpu
27 22 0:23 / /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpuacct
28 22 0:24 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,memory
29 22 0:25 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,devices
30 22 0:26 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,freezer
31 22 0:27 / /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,net_cls
32 22 0:28 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,blkio
33 16 0:29 / /sys/kernel/debug rw,relatime - autofs systemd-1 rw,fd=28,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
34 17 0:30 / /dev/hugepages rw,relatime - autofs systemd-1 rw,fd=29,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
35 15 0:31 / /proc/sys/fs/binfmt_misc rw,relatime - autofs systemd-1 rw,fd=30,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
36 17 0:32 / /dev/mqueue rw,relatime - autofs systemd-1 rw,fd=31,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
37 16 0:33 / /sys/kernel/security rw,relatime - autofs systemd-1 rw,fd=33,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
38 21 0:34 / /media rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
40 21 8:1 / /boot rw,relatime - ext3 /dev/sda1 rw,errors=continue,user_xattr,acl,barrier=0,data=ordered
39 21 8:5 / /usr/local rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
41 21 8:5 /links/var /var rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
42 21 8:5 /links/home /home rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
43 41 0:17 / /var/run rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
44 41 0:17 /lock /var/lock rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
45 34 0:35 / /dev/hugepages rw,relatime - hugetlbfs hugetlbfs rw
46 36 0:12 / /dev/mqueue rw,relatime - mqueue mqueue rw
47 41 8:5 /links/var /var rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
48 42 8:5 /links/home /home rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
49 47 0:36 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw
50 15 0:37 / /proc/fs/nfsd rw,relatime - nfsd nfsd rw
51 33 0:7 / /sys/kernel/debug rw,relatime - debugfs debugfs rw
52 35 0:38 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc binfmt_misc rw
53 37 0:14 / /sys/kernel/security rw,relatime - securityfs securityfs rw
[root@bloody ~]#
Comment 21 Michal Schmidt 2011-07-03 19:05:41 EDT
(In reply to comment #20)
> /usr/local/links/var    /var                    none    bind            0 0

This must be the trigger of the bug.

> [root@bloody ~]# cat /proc/self/mountinfo
...
> 39 21 8:5 / /usr/local rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
> 41 21 8:5 /links/var /var rw,relatime - ext4 /dev/sda5
> rw,barrier=1,data=ordered
...
> 43 41 0:17 / /var/run rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
> 44 41 0:17 /lock /var/lock rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs
> rw,mode=755
...
> 47 41 8:5 /links/var /var rw,relatime - ext4 /dev/sda5
> rw,barrier=1,data=ordered

For some reason the /var bind mount is done twice. And its second instance hides the correct /var/run. I do not know what causes the second mount on your system.

I tried to reproduce it. The only way to do that was to run 'mount /var' manually after boot. This did result in the duplicate mount and broke the connections to dbus as expected. I can run 'mount /var' repeatedly and each time a new mount is created. This looks like a bug in util-linux.
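(A deterministic illustration of the diagnosis above, run against the mountinfo lines quoted from comment 20; field 5 of each line is the mount point, per proc(5). Two entries for the same target mean the later mount shadows the earlier one, including the /var/run tmpfs mounted under it. On a live system you would read /proc/self/mountinfo instead of this captured sample.)

```shell
# Count how many times /var itself appears as a mount point in a
# captured mountinfo sample (field 5 is the mount point).
mountinfo='41 21 8:5 /links/var /var rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
43 41 0:17 / /var/run rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
47 41 8:5 /links/var /var rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered'
count=$(printf '%s\n' "$mountinfo" | awk '$5 == "/var" {n++} END {print n+0}')
echo "/var mounted $count times"
```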
Comment 22 Javier Perez 2011-07-04 03:15:38 EDT
Hmmmm
Indeed I think I am mounting /var and /home twice. This is how I am partitioning and assigning my 750GB drive:

Number  Type   	Extent Size       Start    	End    	Used by   
1 	Linux 	    204800 blocks       63     409662 	/boot
2 	Linux 	  31457280 blocks   409663   63324222 	/
3 	Linux 	   2097152 blocks 63324223   67518526 	Virtual memory
4 	Extended 698811435 blocks 67521195 1465144064 	
5 	Linux 	 698811392 blocks 67521258 1465144041 	/usr/local

[root@bloody /]# sfdisk -l

Disk /dev/sda: 91201 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1   *      0+     25-     26-    204800   83  Linux
/dev/sda2         25+   3941-   3917-  31457280   83  Linux
/dev/sda3       3941+   4202-    262-   2097152   83  Linux
/dev/sda4       4203   91200   86998  698811435    5  Extended
/dev/sda5       4203+  91200-  86998- 698811392   83  Linux

I wanted to keep "/boot" and "/" (where all the non-changing stuff is stored) at a relatively small size and then leave the rest for "/var" (which holds mail and all other kinds of stuff) and "/home" (with all the user data and its ever-increasing stuff). I did not want to use LVM or any other fancy stuff, just in case I had to recover something from the disk one day. Also, I did not want to create too many partitions; I am just fine with one all-purpose, ever-growing partition for this system.

Therefore the first "/var" and the first "/home" are created by whatever routine creates the standard Linux filesystem, but then I rebind them to /usr/local on /dev/sda5 by using "bind" mounts in /etc/fstab.

I thought it would be a fine, although non-standard, solution, and so far I had not had any trouble from it. I guess something gets written between the creation of /var and the rebinding of /var, and the system bus does not like that.

Other than forgoing the rebinding of /var (and I would prefer to keep it as it is, because I have an ever-growing IMAP store for my e-mail in there), is there any other solution or technique I could apply to achieve the same end?
Comment 23 Michal Schmidt 2011-07-04 04:14:41 EDT
(In reply to comment #22)
> I relink them to /usr/local on /dev/sda5 by using the "bind" in the
> /etc/fstab.

That's fine. This should account for the first "... 8:5 /links/var /var ..." entry in mountinfo. It does not explain why there are two of them though.

> I thought it would be a fine although non-standard solution, I think and
> so far I had not had any trouble from it.

Yes. It may be unusual, but it should work.
 
> Other than foregoing the relinking of /var, (and I would prefer to keep it as
> it is because I have an ever increasing IMAP store for my e-mail in there) is
> there any other solution or technique I could apply to achieve the same end?

I'd rather not look for workarounds. We should find and fix the bug.

To summarize, there are two issues:
 1) 'mount /var' adding new bind mounts every time it is run. I'll leave this
    to Karel.
 2) We still have no idea what process spawns the extra mount on your system.

To debug 2) you can do a shell wrapper around /bin/mount to log its invocations:

-  mv /bin/mount /bin/mount.real
-  Save this little shell script as /bin/mount:

#!/bin/sh
# Log the mount arguments, the parent PID and its command name, and
# the caller's systemd cgroup (tagged MMM for easy grepping), then
# hand off to the real binary.
/usr/bin/logger -- "MMM 'mount $@' called by $PPID ($(cat /proc/$PPID/comm)) $(grep 'name=systemd' /proc/self/cgroup)"
exec /bin/mount.real "$@"

-  chmod +x /bin/mount

Then after reboot look for "MMM" in /var/log/messages.
Comment 24 Javier Perez 2011-07-04 05:28:53 EDT
Here is the output of the MMM messages

[root@bloody log]# grep 'MMM' /var/log/messages
Jul  4 04:11:22 bloody kernel: [    9.300423] logger[318]: MMM 'mount tmpfs /media -t tmpfs -o mode=755,nosuid,nodev,noexec' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [    9.304736] logger[320]: MMM 'mount / -o remount' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [    9.524358] logger[325]: MMM 'mount /dev/pts -o remount' called by 294 (systemd-remount)
Jul  4 04:11:22 bloody kernel: [    9.529115] logger[327]: MMM 'mount /dev/shm -o remount' called by 294 (systemd-remount)
Jul  4 04:11:22 bloody kernel: [    9.538829] logger[326]: MMM 'mount /sys -o remount' called by 294 (systemd-remount)
Jul  4 04:11:22 bloody kernel: [    9.549919] logger[328]: MMM 'mount /proc -o remount' called by 294 (systemd-remount)
Jul  4 04:11:22 bloody kernel: [   14.178022] logger[486]: MMM 'mount /boot' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   14.499694] logger[498]: MMM 'mount /usr/local' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   15.083252] logger[511]: MMM 'mount /var' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   15.098142] logger[512]: MMM 'mount /home' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   15.253920] logger[521]: MMM 'mount /run /var/run -t bind -o bind' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   15.255078] logger[522]: MMM 'mount /run/lock /var/lock -t bind -o bind' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   16.108785] logger[548]: MMM 'mount hugetlbfs /dev/hugepages -t hugetlbfs' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   16.187842] logger[553]: MMM 'mount mqueue /dev/mqueue -t mqueue' called by 1 (systemd)
Jul  4 04:11:31 bloody logger: MMM 'mount -a -t nonfs,nfs4,cifs,ncpfs,gfs' called by 867 (netfs)#0121:name=systemd:/system/netfs.service
Jul  4 04:11:32 bloody logger: MMM 'mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs' called by 982 (sh)#0121:name=systemd:/system/rpcidmapd.service
Jul  4 04:11:33 bloody logger: MMM 'mount -t nfsd nfsd /proc/fs/nfsd' called by 1014 (sh)#0121:name=systemd:/system/nfs.service
[root@bloody log]#


I see that "/var" is first mounted by systemd, but then "/run" is mounted as "/var/run", again by systemd.

Probably that is the source of the second "/var".
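(A generic sketch of checking for the shadowing situation seen in this thread: any mount point listed more than once in /proc/self/mountinfo, field 5 per proc(5), is being shadowed by a later mount. The empty-input fallback is an assumption for systems without /proc.)

```shell
# List mount points that appear more than once in mountinfo.
src=/proc/self/mountinfo
[ -r "$src" ] || src=/dev/null
dups=$(awk '{print $5}' "$src" | sort | uniq -d)
echo "duplicated mount points: ${dups:-none}"
```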
Comment 25 Javier Perez 2011-07-04 05:29:52 EDT
I think this might be useful to put the MMMs in context. Here is everything matching 'mount' in /var/log/messages:

[root@bloody log]# grep 'mount' /var/log/messages
Jul  4 04:11:22 bloody kernel: [    5.859764] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null)
Jul  4 04:11:22 bloody kernel: [    6.113730] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null)
Jul  4 04:11:22 bloody kernel: [    8.462808] systemd[1]: -.mount changed dead -> mounted
Jul  4 04:11:22 bloody kernel: [    8.470658] systemd[1]: Installed new job home.mount/start as 5
Jul  4 04:11:22 bloody kernel: [    8.471191] systemd[1]: Installed new job usr-local.mount/start as 8
Jul  4 04:11:22 bloody kernel: [    8.472295] systemd[1]: Installed new job var.mount/start as 17
Jul  4 04:11:22 bloody kernel: [    8.472580] systemd[1]: Installed new job boot.mount/start as 18
Jul  4 04:11:22 bloody kernel: [    8.477053] systemd[1]: Installed new job var-run.mount/start as 21
Jul  4 04:11:22 bloody kernel: [    8.479639] systemd[1]: Installed new job var-lock.mount/start as 30
Jul  4 04:11:22 bloody kernel: [    8.479911] systemd[1]: Installed new job media.mount/start as 31
Jul  4 04:11:22 bloody kernel: [    8.480187] systemd[1]: Installed new job remount-rootfs.service/start as 32
Jul  4 04:11:22 bloody kernel: [    8.480752] systemd[1]: Installed new job systemd-remount-api-vfs.service/start as 34
Jul  4 04:11:22 bloody kernel: [    8.482659] systemd[1]: Installed new job sys-kernel-security.automount/start as 40
Jul  4 04:11:22 bloody kernel: [    8.484407] systemd[1]: Installed new job dev-mqueue.automount/start as 50
Jul  4 04:11:22 bloody kernel: [    8.485623] systemd[1]: Installed new job proc-sys-fs-binfmt_misc.automount/start as 54
Jul  4 04:11:22 bloody kernel: [    8.486358] systemd[1]: Installed new job dev-hugepages.automount/start as 56
Jul  4 04:11:22 bloody kernel: [    8.486633] systemd[1]: Installed new job sys-kernel-debug.automount/start as 57
Jul  4 04:11:22 bloody kernel: [    8.575256] systemd[1]: sys-kernel-debug.automount changed dead -> waiting
Jul  4 04:11:22 bloody kernel: [    8.575615] systemd[1]: Job sys-kernel-debug.automount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [    8.581017] systemd[1]: dev-hugepages.automount changed dead -> waiting
Jul  4 04:11:22 bloody kernel: [    8.581405] systemd[1]: Job dev-hugepages.automount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [    8.600435] systemd[1]: proc-sys-fs-binfmt_misc.automount changed dead -> waiting
Jul  4 04:11:22 bloody kernel: [    8.600931] systemd[1]: Job proc-sys-fs-binfmt_misc.automount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [    8.630411] systemd[1]: dev-mqueue.automount changed dead -> waiting
Jul  4 04:11:22 bloody kernel: [    8.631155] systemd[1]: Job dev-mqueue.automount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [    8.641579] systemd[1]: sys-kernel-security.automount changed dead -> waiting
Jul  4 04:11:22 bloody kernel: [    8.641892] systemd[1]: Job sys-kernel-security.automount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [    8.671105] systemd[1]: About to execute: /bin/mount tmpfs /media -t tmpfs -o mode=755,nosuid,nodev,noexec
Jul  4 04:11:22 bloody kernel: [    8.691851] systemd[1]: Forked /bin/mount as 292
Jul  4 04:11:22 bloody kernel: [    8.692287] systemd[1]: media.mount changed dead -> mounting
Jul  4 04:11:22 bloody kernel: [    8.693186] systemd[1]: About to execute: /bin/mount / -o remount
Jul  4 04:11:22 bloody kernel: [    8.703270] systemd[1]: Forked /bin/mount as 293
Jul  4 04:11:22 bloody kernel: [    8.704700] systemd[1]: remount-rootfs.service changed dead -> start
Jul  4 04:11:22 bloody kernel: [    8.710778] systemd[1]: About to execute: /lib/systemd/systemd-remount-api-vfs
Jul  4 04:11:22 bloody kernel: [    8.723186] systemd[1]: Forked /lib/systemd/systemd-remount-api-vfs as 294
Jul  4 04:11:22 bloody kernel: [    8.724380] systemd[1]: systemd-remount-api-vfs.service changed dead -> start
Jul  4 04:11:22 bloody kernel: [    9.300423] logger[318]: MMM 'mount tmpfs /media -t tmpfs -o mode=755,nosuid,nodev,noexec' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [    9.304736] logger[320]: MMM 'mount / -o remount' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [    9.383771] systemd[1]: media.mount changed mounting -> mounting-done
Jul  4 04:11:22 bloody kernel: [    9.384085] systemd[1]: Job media.mount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [    9.403993] systemd[1]: Received SIGCHLD from PID 292 (mount.real).
Jul  4 04:11:22 bloody kernel: [    9.404518] systemd[1]: Got SIGCHLD for process 292 (mount.real)
Jul  4 04:11:22 bloody kernel: [    9.405420] systemd[1]: Child 292 belongs to media.mount
Jul  4 04:11:22 bloody kernel: [    9.405721] systemd[1]: media.mount mount process exited, code=exited status=0
Jul  4 04:11:22 bloody kernel: [    9.406898] systemd[1]: media.mount changed mounting-done -> mounted
Jul  4 04:11:22 bloody kernel: [    9.524358] logger[325]: MMM 'mount /dev/pts -o remount' called by 294 (systemd-remount)
Jul  4 04:11:22 bloody kernel: [    9.529115] logger[327]: MMM 'mount /dev/shm -o remount' called by 294 (systemd-remount)
Jul  4 04:11:22 bloody kernel: [    9.538829] logger[326]: MMM 'mount /sys -o remount' called by 294 (systemd-remount)
Jul  4 04:11:22 bloody kernel: [    9.549919] logger[328]: MMM 'mount /proc -o remount' called by 294 (systemd-remount)
Jul  4 04:11:22 bloody kernel: [    9.561521] EXT4-fs (sda2): re-mounted. Opts: (null)
Jul  4 04:11:22 bloody kernel: [    9.600096] systemd[1]: Received SIGCHLD from PID 293 (mount.real).
Jul  4 04:11:22 bloody kernel: [    9.600671] systemd[1]: Got SIGCHLD for process 293 (mount.real)
Jul  4 04:11:22 bloody kernel: [    9.606871] systemd[1]: Child 293 belongs to remount-rootfs.service
Jul  4 04:11:22 bloody kernel: [    9.607221] systemd[1]: remount-rootfs.service: main process exited, code=exited, status=0
Jul  4 04:11:22 bloody kernel: [    9.614878] systemd[1]: remount-rootfs.service changed start -> exited
Jul  4 04:11:22 bloody kernel: [    9.615340] systemd[1]: Job remount-rootfs.service/start finished, result=done
Jul  4 04:11:22 bloody kernel: [    9.653214] systemd[1]: remount-rootfs.service: cgroup is empty
Jul  4 04:11:22 bloody kernel: [    9.726956] systemd[1]: Received SIGCHLD from PID 294 (systemd-remount).
Jul  4 04:11:22 bloody kernel: [    9.727466] systemd[1]: Got SIGCHLD for process 294 (systemd-remount)
Jul  4 04:11:22 bloody kernel: [    9.728113] systemd[1]: Child 294 belongs to systemd-remount-api-vfs.service
Jul  4 04:11:22 bloody kernel: [    9.728452] systemd[1]: systemd-remount-api-vfs.service: main process exited, code=exited, status=0
Jul  4 04:11:22 bloody kernel: [    9.729963] systemd[1]: systemd-remount-api-vfs.service changed start -> exited
Jul  4 04:11:22 bloody kernel: [    9.730531] systemd[1]: Job systemd-remount-api-vfs.service/start finished, result=done
Jul  4 04:11:22 bloody kernel: [    9.750162] systemd[1]: systemd-remount-api-vfs.service: cgroup is empty
Jul  4 04:11:22 bloody kernel: [   14.091573] systemd[1]: About to execute: /bin/mount /boot
Jul  4 04:11:22 bloody kernel: [   14.103749] systemd[1]: Forked /bin/mount as 483
Jul  4 04:11:22 bloody kernel: [   14.104378] systemd[1]: boot.mount changed dead -> mounting
Jul  4 04:11:22 bloody kernel: [   14.178022] logger[486]: MMM 'mount /boot' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   14.333837] systemd[1]: About to execute: /bin/mount /usr/local
Jul  4 04:11:22 bloody kernel: [   14.350041] systemd[1]: Forked /bin/mount as 492
Jul  4 04:11:22 bloody kernel: [   14.350622] systemd[1]: usr-local.mount changed dead -> mounting
Jul  4 04:11:22 bloody kernel: [   14.423493] EXT3-fs (sda1): mounted filesystem with ordered data mode
Jul  4 04:11:22 bloody kernel: [   14.425048] systemd[1]: boot.mount changed mounting -> mounting-done
Jul  4 04:11:22 bloody kernel: [   14.425365] systemd[1]: Job boot.mount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [   14.430538] systemd[1]: Received SIGCHLD from PID 483 (mount.real).
Jul  4 04:11:22 bloody kernel: [   14.431627] systemd[1]: Got SIGCHLD for process 483 (mount.real)
Jul  4 04:11:22 bloody kernel: [   14.434576] systemd[1]: Child 483 belongs to boot.mount
Jul  4 04:11:22 bloody kernel: [   14.434889] systemd[1]: boot.mount mount process exited, code=exited status=0
Jul  4 04:11:22 bloody kernel: [   14.435230] systemd[1]: boot.mount changed mounting-done -> mounted
Jul  4 04:11:22 bloody kernel: [   14.499694] logger[498]: MMM 'mount /usr/local' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   14.971698] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)
Jul  4 04:11:22 bloody kernel: [   14.973502] systemd[1]: usr-local.mount changed mounting -> mounting-done
Jul  4 04:11:22 bloody kernel: [   14.973870] systemd[1]: Job usr-local.mount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [   14.985796] systemd[1]: About to execute: /bin/mount /var
Jul  4 04:11:22 bloody kernel: [   14.998027] systemd[1]: Forked /bin/mount as 505
Jul  4 04:11:22 bloody kernel: [   14.998398] systemd[1]: var.mount changed dead -> mounting
Jul  4 04:11:22 bloody kernel: [   14.999337] systemd[1]: About to execute: /bin/mount /home
Jul  4 04:11:22 bloody kernel: [   15.011656] systemd[1]: Forked /bin/mount as 506
Jul  4 04:11:22 bloody kernel: [   15.012309] systemd[1]: home.mount changed dead -> mounting
Jul  4 04:11:22 bloody kernel: [   15.013294] systemd[1]: Received SIGCHLD from PID 492 (mount.real).
Jul  4 04:11:22 bloody kernel: [   15.015809] systemd[1]: Got SIGCHLD for process 492 (mount.real)
Jul  4 04:11:22 bloody kernel: [   15.016743] systemd[1]: Child 492 belongs to usr-local.mount
Jul  4 04:11:22 bloody kernel: [   15.017110] systemd[1]: usr-local.mount mount process exited, code=exited status=0
Jul  4 04:11:22 bloody kernel: [   15.017606] systemd[1]: usr-local.mount changed mounting-done -> mounted
Jul  4 04:11:22 bloody kernel: [   15.083252] logger[511]: MMM 'mount /var' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   15.098142] logger[512]: MMM 'mount /home' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   15.109489] systemd[1]: var.mount changed mounting -> mounting-done
Jul  4 04:11:22 bloody kernel: [   15.109851] systemd[1]: Job var.mount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [   15.118648] systemd[1]: About to execute: /bin/mount /run /var/run -t bind -o bind
Jul  4 04:11:22 bloody kernel: [   15.128920] systemd[1]: Forked /bin/mount as 513
Jul  4 04:11:22 bloody kernel: [   15.129467] systemd[1]: var-run.mount changed dead -> mounting
Jul  4 04:11:22 bloody kernel: [   15.137178] systemd[1]: About to execute: /bin/mount /run/lock /var/lock -t bind -o bind
Jul  4 04:11:22 bloody kernel: [   15.148416] systemd[1]: Forked /bin/mount as 515
Jul  4 04:11:22 bloody kernel: [   15.149033] systemd[1]: var-lock.mount changed dead -> mounting
Jul  4 04:11:22 bloody kernel: [   15.155499] systemd[1]: home.mount changed mounting -> mounting-done
Jul  4 04:11:22 bloody kernel: [   15.155930] systemd[1]: Job home.mount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [   15.161874] systemd[1]: Received SIGCHLD from PID 505 (mount.real).
Jul  4 04:11:22 bloody kernel: [   15.162313] systemd[1]: Got SIGCHLD for process 505 (mount.real)
Jul  4 04:11:22 bloody kernel: [   15.163124] systemd[1]: Child 505 belongs to var.mount
Jul  4 04:11:22 bloody kernel: [   15.163432] systemd[1]: var.mount mount process exited, code=exited status=0
Jul  4 04:11:22 bloody kernel: [   15.171103] systemd[1]: var.mount changed mounting-done -> mounted
Jul  4 04:11:22 bloody kernel: [   15.172161] systemd[1]: Got SIGCHLD for process 506 (mount.real)
Jul  4 04:11:22 bloody kernel: [   15.173120] systemd[1]: Child 506 belongs to home.mount
Jul  4 04:11:22 bloody kernel: [   15.173419] systemd[1]: home.mount mount process exited, code=exited status=0
Jul  4 04:11:22 bloody kernel: [   15.178972] systemd[1]: home.mount changed mounting-done -> mounted
Jul  4 04:11:22 bloody kernel: [   15.253920] logger[521]: MMM 'mount /run /var/run -t bind -o bind' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   15.255078] logger[522]: MMM 'mount /run/lock /var/lock -t bind -o bind' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   15.273586] systemd[1]: var-run.mount changed mounting -> mounting-done
Jul  4 04:11:22 bloody kernel: [   15.273952] systemd[1]: Job var-run.mount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [   15.278706] systemd[1]: var-lock.mount changed mounting -> mounting-done
Jul  4 04:11:22 bloody kernel: [   15.279065] systemd[1]: Job var-lock.mount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [   15.283824] systemd[1]: Received SIGCHLD from PID 513 (mount.real).
Jul  4 04:11:22 bloody kernel: [   15.284280] systemd[1]: Got SIGCHLD for process 513 (mount.real)
Jul  4 04:11:22 bloody kernel: [   15.285116] systemd[1]: Child 513 belongs to var-run.mount
Jul  4 04:11:22 bloody kernel: [   15.285414] systemd[1]: var-run.mount mount process exited, code=exited status=0
Jul  4 04:11:22 bloody kernel: [   15.285887] systemd[1]: var-run.mount changed mounting-done -> mounted
Jul  4 04:11:22 bloody kernel: [   15.286409] systemd[1]: Got SIGCHLD for process 515 (mount.real)
Jul  4 04:11:22 bloody kernel: [   15.296514] systemd[1]: Child 515 belongs to var-lock.mount
Jul  4 04:11:22 bloody kernel: [   15.296859] systemd[1]: var-lock.mount mount process exited, code=exited status=0
Jul  4 04:11:22 bloody kernel: [   15.304104] systemd[1]: var-lock.mount changed mounting-done -> mounted
Jul  4 04:11:22 bloody kernel: [   16.062350] systemd[1]: Got direct mount request for cde71c00, triggered by 544 (lvm)
Jul  4 04:11:22 bloody kernel: [   16.062975] systemd[1]: Trying to enqueue job dev-hugepages.mount/start/replace
Jul  4 04:11:22 bloody kernel: [   16.063480] systemd[1]: Installed new job dev-hugepages.mount/start as 159
Jul  4 04:11:22 bloody kernel: [   16.063808] systemd[1]: Enqueued job dev-hugepages.mount/start as 159
Jul  4 04:11:22 bloody kernel: [   16.064083] systemd[1]: dev-hugepages.automount changed waiting -> running
Jul  4 04:11:22 bloody kernel: [   16.064810] systemd[1]: About to execute: /bin/mount hugetlbfs /dev/hugepages -t hugetlbfs
Jul  4 04:11:22 bloody kernel: [   16.076174] systemd[1]: Forked /bin/mount as 545
Jul  4 04:11:22 bloody kernel: [   16.076655] systemd[1]: dev-hugepages.mount changed dead -> mounting
Jul  4 04:11:22 bloody kernel: [   16.108785] logger[548]: MMM 'mount hugetlbfs /dev/hugepages -t hugetlbfs' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   16.119409] systemd[1]: dev-hugepages.mount changed mounting -> mounting-done
Jul  4 04:11:22 bloody kernel: [   16.119752] systemd[1]: Job dev-hugepages.mount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [   16.123963] systemd[1]: Received SIGCHLD from PID 545 (mount.real).
Jul  4 04:11:22 bloody kernel: [   16.124489] systemd[1]: Got SIGCHLD for process 545 (mount.real)
Jul  4 04:11:22 bloody kernel: [   16.125301] systemd[1]: Child 545 belongs to dev-hugepages.mount
Jul  4 04:11:22 bloody kernel: [   16.125628] systemd[1]: dev-hugepages.mount mount process exited, code=exited status=0
Jul  4 04:11:22 bloody kernel: [   16.128109] systemd[1]: dev-hugepages.mount changed mounting-done -> mounted
Jul  4 04:11:22 bloody kernel: [   16.129076] systemd[1]: Got direct mount request for cd403380, triggered by 544 (lvm)
Jul  4 04:11:22 bloody kernel: [   16.129630] systemd[1]: Trying to enqueue job dev-mqueue.mount/start/replace
Jul  4 04:11:22 bloody kernel: [   16.129993] systemd[1]: Installed new job dev-mqueue.mount/start as 162
Jul  4 04:11:22 bloody kernel: [   16.130326] systemd[1]: Enqueued job dev-mqueue.mount/start as 162
Jul  4 04:11:22 bloody kernel: [   16.130624] systemd[1]: dev-mqueue.automount changed waiting -> running
Jul  4 04:11:22 bloody kernel: [   16.131268] systemd[1]: About to execute: /bin/mount mqueue /dev/mqueue -t mqueue
Jul  4 04:11:22 bloody kernel: [   16.145778] systemd[1]: Forked /bin/mount as 550
Jul  4 04:11:22 bloody kernel: [   16.146280] systemd[1]: dev-mqueue.mount changed dead -> mounting
Jul  4 04:11:22 bloody kernel: [   16.187842] logger[553]: MMM 'mount mqueue /dev/mqueue -t mqueue' called by 1 (systemd)
Jul  4 04:11:22 bloody kernel: [   16.198383] systemd[1]: dev-mqueue.mount changed mounting -> mounting-done
Jul  4 04:11:22 bloody kernel: [   16.198722] systemd[1]: Job dev-mqueue.mount/start finished, result=done
Jul  4 04:11:22 bloody kernel: [   16.203044] systemd[1]: Received SIGCHLD from PID 550 (mount.real).
Jul  4 04:11:22 bloody kernel: [   16.203539] systemd[1]: Got SIGCHLD for process 550 (mount.real)
Jul  4 04:11:22 bloody kernel: [   16.204296] systemd[1]: Child 550 belongs to dev-mqueue.mount
Jul  4 04:11:22 bloody kernel: [   16.204634] systemd[1]: dev-mqueue.mount mount process exited, code=exited status=0
Jul  4 04:11:22 bloody kernel: [   16.209079] systemd[1]: dev-mqueue.mount changed mounting-done -> mounted
Jul  4 04:11:31 bloody logger: MMM 'mount -a -t nonfs,nfs4,cifs,ncpfs,gfs' called by 867 (netfs)#0121:name=systemd:/system/netfs.service
Jul  4 04:11:32 bloody logger: MMM 'mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs' called by 982 (sh)#0121:name=systemd:/system/rpcidmapd.service
Jul  4 04:11:32 bloody kernel: [   28.463120] systemd[1]: var-lib-nfs-rpc_pipefs.mount changed dead -> mounted
Jul  4 04:11:33 bloody logger: MMM 'mount -t nfsd nfsd /proc/fs/nfsd' called by 1014 (sh)#0121:name=systemd:/system/nfs.service
Jul  4 04:11:33 bloody kernel: [   29.221618] systemd[1]: proc-fs-nfsd.mount changed dead -> mounted
Jul  4 04:11:34 bloody rpc.mountd[1069]: Version 1.2.3 starting
[root@bloody log]#
Comment 26 Michal Schmidt 2011-07-04 06:35:04 EDT
(In reply to comment #23)
> /usr/bin/logger -- "MMM 'mount $@' called by $PPID ($(cat /proc/$PPID/comm))
> $(grep 'name=systemd' /proc/self/cgroup)"

This should have been on one line. It got line-wrapped by Bugzilla.
But it does not really matter, as we got the important information anyway.

(In reply to comment #24)
> I see that first "/var" is mounted by systemd, but then "/run" is mounted as
> "/var/run" again by systemd
> 
> Probably that is the source of the second "/var"

I'm getting the same sequence on my system, but no duplicate mounts here.

I was hoping there would be a second 'MMM mount /var' somewhere in the log, but there isn't. Either the second /var mount is done without going through /bin/mount, or the kernel is doing something weird.

Do you get the duplicate /var mount if you boot into single user mode?:
 - boot with "single"
 - check:  grep '/var ' /proc/self/mountinfo

If not, you may be able to narrow the appearance of the duplicate mount down to a specific service by disabling the services.
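
The MMM wrapper lines above already record the PID and command name of whatever invoked mount, so the caller of each logged mount can be pulled straight out of the messages file. A minimal sketch over two sample lines copied from the log above (the extraction pattern is mine, not part of the wrapper):

```shell
# Save two MMM lines from /var/log/messages so the example is reproducible.
cat > /tmp/mmm.sample <<'EOF'
Jul  4 04:11:22 bloody kernel: [   15.083252] logger[511]: MMM 'mount /var' called by 1 (systemd)
Jul  4 04:11:31 bloody logger: MMM 'mount -a -t nonfs,nfs4,cifs,ncpfs,gfs' called by 867 (netfs)
EOF
# Extract "called by PID (comm)" to attribute each mount to its caller.
grep -o 'called by [0-9]* ([a-z.-]*)' /tmp/mmm.sample
```

On the real file, the same grep over all 'MMM' lines shows whether any caller other than PID 1 ever mounted /var.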
Comment 27 Karel Zak 2011-07-04 10:25:49 EDT
>  1) 'mount /var' adding new bind mounts every time it is run. I'll leave this
>     to Karel.

mount(8) does NOT check whether the filesystem is already mounted (the exceptions are mount -a and user mounts). That check is the kernel's job (e.g. ext4 returns EBUSY, cifs supports this feature, etc.).

Anyway, this looks like a duplicate of bug #701176 (try "chkconfig sandbox off" if you have policycoreutils >= 2.0.86-7).
Comment 28 Michal Schmidt 2011-07-04 10:49:04 EDT
(In reply to comment #27)
> The mount(8) does NOT check if the filesystem is already mounted (exception is 
> mount -a and user mounts). This is kernel job (e.g. ext4 returns EBUSY, cifs
> supports this feature, etc.).

I see. The kernel does not seem to have any trouble laying more and more bind mounts one after another. And maybe that's correct behaviour.

> Anyway, it seems like a duplicate to the bug #701176 (try "chkconfig sandbox
> off" if you have policycoreutils >= 2.0.86-7).

I don't see the sandbox service mentioned in Javier's /var/log/messages. And it would show up in the MMM logs too.
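
For context on how the stacked mounts show up: field 5 of each /proc/self/mountinfo line is the mount point, so a path mounted twice appears as a repeated field and falls out of sort | uniq -d. A minimal sketch over a constructed snapshot (the duplicated /var line is fabricated for the example, shaped like the entries in this bug):

```shell
# Field 5 of a mountinfo line is the mount point; the same path
# appearing twice means stacked (duplicate) mounts. Constructed sample:
cat > /tmp/mountinfo.sample <<'EOF'
41 21 8:5 /links/var /var rw,relatime - ext4 /dev/sda5 rw
49 41 8:5 /links/var /var rw,relatime - ext4 /dev/sda5 rw
42 21 8:5 /links/home /home rw,relatime - ext4 /dev/sda5 rw
EOF
awk '{ print $5 }' /tmp/mountinfo.sample | sort | uniq -d
# prints: /var
# Live check: awk '{ print $5 }' /proc/self/mountinfo | sort | uniq -d
```

On a healthy system the live check prints nothing.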
Comment 29 Michal Schmidt 2011-07-04 17:54:54 EDT
Here's a possible trick to find out where the mount comes from:
 1) yum install perf kernel-PAE-debuginfo --enablerepo=\*-debuginfo
 2) Reboot with "emergency" added to the kernel command line.
 3) Give root password when asked.
 4) Verify that "grep '/var' /proc/self/mountinfo" shows nothing.
 5) mount -o remount,rw /
 6) perf probe do_mount dev_name:string dir_name:string
    # There seems to be a bug, the first "perf probe" run may give an error.
    # If it does, just run the same command again and it will succeed.
 7) perf probe touch_mnt_namespace
 8) nohup perf record -e probe:do_mount -e probe:touch_mnt_namespace \
       -aR sleep 120 &
 9) Press CTRL+D to let the system boot into the default target.
10) Login as root
11) Verify that "grep '/var ' /proc/self/mountinfo" now shows the two duplicate
    mounts.
12) killall sleep   # just in case it has not finished yet at this point
13) perf script > log.txt
Comment 30 Michal Schmidt 2011-07-04 18:12:04 EDT
Wait a minute... What's the version of util-linux on your system? This bug could be explained by that 'mount -a -t nonfs,nfs4,cifs,ncpfs,gfs' producing duplicate bind mounts. It should be fixed in util-linux-2.19.1-1.2.fc15.
Comment 31 Javier Perez 2011-07-04 22:24:40 EDT
Hold it! The problem seems to have solved itself; I am not sure at which step.

After comment 17, when I updated systemd, I made two more updates on July 1st and July 2nd as follows:

==================================================
[root@bloody log]# yum history info 144
Loaded plugins: fastestmirror, langpacks, presto, refresh-packagekit
Transaction ID : 144
Begin time     : Fri Jul  1 04:27:52 2011
Begin rpmdb    : 1559:954956f8ffaa2400a7dda4021e8f45a1e84b5163
End time       :            04:31:16 2011 (204 seconds)
End rpmdb      : 1559:0e82364ec63037a17737c74007e914fb9ad1b775
User           : root <root>
Return-Code    : Success
Command Line   : update
Transaction performed with:
    Installed     rpm-4.9.0-9.fc15.i686
    Installed     yum-3.2.29-6.fc15.noarch
    Installed     yum-plugin-fastestmirror-1.1.30-2.fc15.noarch
    Installed     yum-presto-0.6.2-3.fc15.noarch
Packages Altered:
    Updated b43-fwcutter-013-3.fc15.i686
    Update               014-1.fc15.i686
    Updated enblend-4.0-6.fc15.i686
    Update          4.0-7.fc15.i686
    Updated kde-plasma-networkmanagement-1:0.9-0.47.20110323.fc15.1.i686
    Update                               1:0.9-0.53.20110616git.nm09.fc15.i686
    Updated kde-plasma-networkmanagement-libs-1:0.9-0.47.20110323.fc15.1.i686
    Update                                    1:0.9-0.53.20110616git.nm09.fc15.i686
    Updated kde-plasma-networkmanagement-openvpn-1:0.9-0.47.20110323.fc15.1.i686
    Update                                       1:0.9-0.53.20110616git.nm09.fc15.i686
    Updated kde-plasma-networkmanagement-pptp-1:0.9-0.47.20110323.fc15.1.i686
    Update                                    1:0.9-0.53.20110616git.nm09.fc15.i686
    Updated kde-plasma-networkmanagement-vpnc-1:0.9-0.47.20110323.fc15.1.i686
    Update                                    1:0.9-0.53.20110616git.nm09.fc15.i686
    Updated openswan-2.6.32-1.fc15.i686
    Update           2.6.33-1.fc15.i686
    Updated python-slip-0.2.16-1.fc15.noarch
    Update              0.2.17-1.fc15.noarch
    Updated python-slip-dbus-0.2.16-1.fc15.noarch
    Update                   0.2.17-1.fc15.noarch
    Updated python-slip-gtk-0.2.16-1.fc15.noarch
    Update                  0.2.17-1.fc15.noarch
    Updated tzdata-2011d-3.fc15.noarch
    Update         2011h-1.fc15.noarch
    Updated tzdata-java-2011d-3.fc15.noarch
    Update              2011h-1.fc15.noarch
    Updated usbutils-001-3.fc15.i686
    Update           003-2.fc15.i686
    Updated xorg-x11-drv-ati-6.14.1-1.20110504gita6d2dba6.fc15.i686
    Update                   6.14.1-2.20110525gitfe5c42f51.fc15.i686
history info
==============================================================================
[root@bloody log]# yum history info 145
Loaded plugins: fastestmirror, langpacks, presto, refresh-packagekit
Transaction ID : 145
Begin time     : Sat Jul  2 04:21:54 2011
Begin rpmdb    : 1559:0e82364ec63037a17737c74007e914fb9ad1b775
End time       :            04:23:31 2011 (97 seconds)
End rpmdb      : 1559:321a658d6a4428066d2adc8a957ee51dd848921e
User           : root <root>
Return-Code    : Success
Command Line   : update
Transaction performed with:
    Installed     rpm-4.9.0-9.fc15.i686
    Installed     yum-3.2.29-6.fc15.noarch
    Installed     yum-plugin-fastestmirror-1.1.30-2.fc15.noarch
    Installed     yum-presto-0.6.2-3.fc15.noarch
Packages Altered:
    Updated microcode_ctl-1:1.17-14.fc15.i686
    Update                1:1.17-15.fc15.i686
    Updated oxygen-gtk-1.0.4-1.fc15.i686
    Update             1.0.5-1.fc15.i686
    Updated rsyslog-5.7.9-3.fc15.i686
    Update          5.8.2-1.fc15.i686
history info
==============================================================================

Michal, based on the 'MMM' entries, maybe by that stage (comment #25) it had already been fixed, because you did not see the second 'MMM mount /var' in the log file.
I have not done anything from comments #26 to #30 yet.

The only other files I touched were 10-auth.conf, 10-mail.conf and 10-master.conf from /etc/dovecot/conf.d, to fix an access issue with my mail system.

Right now systemd is not generating errors:

[root@bloody log]# systemd-analyze
Startup finished in 2411ms (kernel) + 4586ms (initrd) + 103853ms (userspace) = 110851ms

This is mountinfo:

[root@bloody log]# cat /proc/self/mountinfo
15 21 0:3 / /proc rw,relatime - proc /proc rw
16 21 0:15 / /sys rw,relatime - sysfs /sys rw
17 21 0:5 / /dev rw,nosuid,relatime - devtmpfs udev rw,size=113328k,nr_inodes=28332,mode=755
18 17 0:10 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
19 17 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw
20 21 0:17 / /run rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
21 1 8:2 / / rw,relatime - ext4 /dev/sda2 rw,barrier=1,data=ordered
22 16 0:18 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
23 22 0:19 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
24 22 0:20 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpuset
25 22 0:21 / /sys/fs/cgroup/ns rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,ns
26 22 0:22 / /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpu
27 22 0:23 / /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpuacct
28 22 0:24 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,memory
29 22 0:25 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,devices
30 22 0:26 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,freezer
31 22 0:27 / /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,net_cls
32 22 0:28 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime - cgroup cgroup rw,blkio
33 16 0:29 / /sys/kernel/debug rw,relatime - autofs systemd-1 rw,fd=28,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
34 17 0:30 / /dev/hugepages rw,relatime - autofs systemd-1 rw,fd=29,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
35 15 0:31 / /proc/sys/fs/binfmt_misc rw,relatime - autofs systemd-1 rw,fd=30,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
36 17 0:32 / /dev/mqueue rw,relatime - autofs systemd-1 rw,fd=31,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
37 16 0:33 / /sys/kernel/security rw,relatime - autofs systemd-1 rw,fd=33,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
38 21 0:34 / /media rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
39 21 8:1 / /boot rw,relatime - ext3 /dev/sda1 rw,errors=continue,user_xattr,acl,barrier=0,data=ordered
40 21 8:5 / /usr/local rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
41 21 8:5 /links/var /var rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
42 21 8:5 /links/home /home rw,relatime - ext4 /dev/sda5 rw,barrier=1,data=ordered
43 41 0:17 / /var/run rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
44 41 0:17 /lock /var/lock rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
45 34 0:35 / /dev/hugepages rw,relatime - hugetlbfs hugetlbfs rw
46 36 0:12 / /dev/mqueue rw,relatime - mqueue mqueue rw
47 41 0:36 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw
48 15 0:37 / /proc/fs/nfsd rw,relatime - nfsd nfsd rw

I am not using sandbox:

[root@bloody log]# service sandbox status
sandbox is stopped

policycoreutils is version 2.0.86-7.fc15
util-linux is version 2.19.1-1.2.fc15

What do you want me to undo in order to test whether that update is the one that fixed this bug?
Comment 32 Michal Schmidt 2011-07-05 04:34:35 EDT
(In reply to comment #31)
> After comment 17, when I updated systemd, I made two more updates on July 1st
> and July 2nd as follows:

Do you usually reboot after updates? If not, you may have already installed the fix in an earlier update than the one in comment #17, but without the reboot it did not come into effect.

I am now convinced the double /var mount was due to the util-linux bug, because I could reproduce it by: yum downgrade util-linux libmount libblkid libuuid && reboot
And I have observed the second mount being done using the perf-based tracing.
Comment 33 Javier Perez 2011-07-05 04:57:18 EDT
Might be. I only reboot if the kernel is updated; I assumed everything else automagically resets itself. I usually run this machine at runlevel 3, so I do not need to start Xorg.

Looking back in the yum history, I see I updated util-linux to version 2.19.1-1.2.fc15 on June 24th. I do not remember rebooting the system at that time.

I guess this closes the bug.

Thanks for all your help! I think I learned a lot here!
Comment 34 Michal Schmidt 2011-07-05 06:06:24 EDT

*** This bug has been marked as a duplicate of bug 709319 ***
