Bug 1276286 - No such file or directory: '/var/lib/ceph' when creating initial monitor
Summary: No such file or directory: '/var/lib/ceph' when creating initial monitor
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Installer
Version: 1.3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 1.3.2
Assignee: Ian Colle
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-29 11:04 UTC by Martin Frodl
Modified: 2022-02-21 18:18 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-30 10:24:48 UTC
Embargoed:


Attachments
ceph-deploy install log (15.91 KB, text/plain), 2015-10-30 09:05 UTC, Martin Frodl


Links
Red Hat Issue Tracker RHCEPH-3334 (last updated 2022-02-21 18:18:33 UTC)

Description Martin Frodl 2015-10-29 11:04:18 UTC
Description of problem:

When trying to create a storage cluster by following the official documentation [0], ceph-deploy fails to create the initial monitor because the '/var/lib/ceph' directory is missing.

This is what I see when trying to create a simple single-node cluster on a machine 'myserver':

# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.26): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  prog                          : ceph-deploy
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2658440>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x2649b90>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts myserver
[ceph_deploy.mon][DEBUG ] detecting platform for host myserver ...
[myserver][DEBUG ] connected to host: myserver
[myserver][DEBUG ] detect platform information from remote host
[myserver][DEBUG ] detect machine type
[ceph_deploy.mon][INFO  ] distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[myserver][DEBUG ] determining if provided host has same hostname in remote
[myserver][DEBUG ] get remote short hostname
[myserver][DEBUG ] deploying mon to myserver
[myserver][DEBUG ] get remote short hostname
[myserver][DEBUG ] remote hostname: myserver
[ceph_deploy.mon][ERROR ] OSError: [Errno 2] No such file or directory: '/var/lib/ceph'
[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors

Version-Release number of selected component (if applicable):
ceph-common.x86_64 1:0.87.2-0.el7

[0] http://docs.ceph.com/docs/master/start/quick-ceph-deploy/

Comment 2 Alfredo Deza 2015-10-29 11:39:30 UTC
That directory is meant to be created when Ceph is installed. 

After installing Ceph, those directories should be there. You cannot deploy a monitor if you haven't installed Ceph. On a new box without Ceph installed:

vagrant@vagrant:~$ which ceph
vagrant@vagrant:~$ ls /var/lib/ceph
ls: cannot access /var/lib/ceph: No such file or directory
vagrant@vagrant:~$ sudo apt-get install ceph
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  btrfs-tools ceph-common ceph-fs-common ceph-fuse ceph-mds liblzo2-2
  libradosstriper1
The following NEW packages will be installed:
  btrfs-tools ceph ceph-common ceph-fs-common ceph-fuse ceph-mds liblzo2-2
  libradosstriper1
0 upgraded, 8 newly installed, 0 to remove and 113 not upgraded.
Need to get 6,010 kB/30.3 MB of archives.
After this operation, 135 MB of additional disk space will be used.
Get:1 http://download.ceph.com/debian-hammer/ trusty/main ceph-fs-common amd64 0.94.5-1trusty [813 kB]
...
...
Processing triggers for libc-bin (2.19-0ubuntu6.5) ...
Processing triggers for initramfs-tools (0.103ubuntu4.2) ...
update-initramfs: Generating /boot/initrd.img-3.16.0-30-generic
Processing triggers for ureadahead (0.100.0-16) ...
vagrant@vagrant:~$ which ceph
/usr/bin/ceph
vagrant@vagrant:~$ ls /var/lib/ceph
bootstrap-mds  bootstrap-osd  mds  mon  osd  tmp
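
For reference, the same check on a RHEL 7 node such as 'myserver' from this report -- a minimal sketch, assuming the Ceph yum repositories are already configured; the expected directory listing is taken from the Ubuntu transcript above and may differ slightly between releases:

# rpm -q ceph || yum install -y ceph     # installing the 'ceph' package creates /var/lib/ceph
# ls /var/lib/ceph                       # expect subdirectories like bootstrap-mds, bootstrap-osd, mds, mon, osd, tmp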

Comment 3 Martin Frodl 2015-10-29 12:12:00 UTC
You are right, installing the 'ceph' package did the trick.

Looking at the quickstart documentation [0], though, I can't see this step anywhere. In the full installation manual [1], on the other hand, 'yum install ceph' *is* mentioned. Maybe the quickstart documentation could be fixed, at least?

[0] http://docs.ceph.com/docs/master/start/
[1] http://docs.ceph.com/docs/master/install/install-storage-cluster/

Comment 4 Boris Ranto 2015-10-29 15:18:05 UTC
@Martin: It is mentioned in [0]; just check the Storage Cluster Quick Start, point 4 (Install Ceph). The 'ceph-deploy install' command will run 'yum install ceph' on all the nodes for you.
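
For illustration, roughly what that step looks like when run from the admin node -- a sketch, assuming 'myserver' is the only node, as in this report:

# ceph-deploy install myserver     # ceph-deploy connects to the host and runs the platform's package manager (here 'yum install ceph') there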

Comment 5 Martin Frodl 2015-10-30 09:05:49 UTC
Created attachment 1087878 [details]
ceph-deploy install log

I tried running 'ceph-deploy install' just now, and even though it seemingly succeeds, the ceph package is not installed; see the attached log.

Comment 6 Boris Ranto 2015-10-30 09:56:36 UTC
The command installs Ceph on the remote server, not the local machine; you should run 'rpm -q ceph' on 'myserver'. Also, you don't need to have Ceph installed on the machine that you use for deployment via ceph-deploy commands.
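
In other words -- a quick check, assuming the admin node can reach 'myserver' over ssh:

# ssh myserver rpm -q ceph     # query the remote node, where ceph-deploy installed the package
# rpm -q ceph-deploy           # the local admin node only needs the ceph-deploy package itself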

Comment 7 Martin Frodl 2015-10-30 10:24:48 UTC
Oh, I see, that makes much more sense now.

Just to outline my train of thought: my use case is testing httpd integration with ceph-radosgw, so I am trying to make things as simple as possible and not use more machines than absolutely necessary. That means admin node = monitor node = OSD node. If I understand correctly, there is no need to run 'ceph-deploy install' at all in such a situation, as plain 'yum install ceph' installs everything that is needed.
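
For reference, a minimal sketch of that single-node flow, assuming the Ceph repositories are already enabled on 'myserver' and that 'ceph-radosgw' is the package providing the RADOS Gateway:

# yum install -y ceph ceph-radosgw     # replaces 'ceph-deploy install' when everything runs on one node
# ceph-deploy new myserver             # generate ceph.conf and the initial keyring
# ceph-deploy mon create-initial       # the step that failed before /var/lib/ceph existed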

Thanks for the explanation, I hope you will not mind me closing the bug now.

Comment 8 Boris Ranto 2015-10-30 11:38:32 UTC
You might find this task useful for that purpose:

https://beaker.engineering.redhat.com/tasks/19344

It creates 3 VMs and installs and deploys Ceph on them. There are some quirks, though -- e.g. it installs upstream Ceph instead of the one in the Fedora/RHEL repos, Fedora installs tend to break due to server unreliability, etc.

