Bug 1270019 - osd prepare doesn't activate the disk
Summary: osd prepare doesn't activate the disk
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Installer
Version: 1.3.1
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 1.3.4
Assignee: Loic Dachary
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-08 19:36 UTC by Vasu Kulkarni
Modified: 2020-12-11 11:56 UTC (History)
13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-21 14:30:33 UTC
Embargoed:


Attachments


Links:
Red Hat Knowledge Base (Solution) 2127661 (last updated 2016-01-15 23:24:24 UTC)

Description Vasu Kulkarni 2015-10-08 19:36:58 UTC
Description of problem:

This is seen on the RHEL 7.2 Snapshot 4 build; we wanted to check for early integration issues with 7.2.


Version-Release number of selected component (if applicable):

RHEL 7.2 / Red Hat Ceph Storage 1.3.1

How reproducible:

Always


Steps to Reproduce:
1. Install Ceph
2. Create a new monitor
3. Zap the disks
4. Run osd prepare
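
Concretely, a minimal sketch of the command sequence as run here (see the full output under Additional info; the host and devices are the ones from that log, and the same zap/prepare was repeated for /dev/sdc and /dev/sdd):

ceph-deploy disk zap magna025:/dev/sdb
ceph-deploy osd prepare magna025:/dev/sdb
ceph-deploy osd activate magna025:/dev/sdd    # fails: "No cluster conf found in /etc/ceph with fsid ..."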

Actual results:

[ubuntu@magna025 ~]$ sudo ceph -s
    cluster ed5b6c3c-3be2-4aab-b189-dd59443e9cb9
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {magna025=10.8.128.25:6789/0}
            election epoch 1, quorum 0 magna025
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 



Expected results:
The OSDs should be up and in.

Additional info:
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy disk zap magna025:/dev/sdb
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on magna025
[magna025][DEBUG ] connected to host: magna025 
[magna025][DEBUG ] detect platform information from remote host
[magna025][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[magna025][DEBUG ] zeroing last few blocks of device
[magna025][DEBUG ] find the location of an executable
[magna025][INFO  ] Running command: /usr/sbin/ceph-disk zap /dev/sdb
[magna025][WARNING] Caution: invalid backup GPT header, but valid main header; regenerating
[magna025][WARNING] backup header from main header.
[magna025][WARNING] 
[magna025][DEBUG ] ****************************************************************************
[magna025][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[magna025][DEBUG ] verification and recovery are STRONGLY recommended.
[magna025][DEBUG ] ****************************************************************************
[magna025][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[magna025][DEBUG ] other utilities.
[magna025][DEBUG ] Creating new GPT entries.
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] partx: specified range <1:0> does not make sense
[ceph_deploy.osd][INFO  ] calling partx on zapped device /dev/sdb
[ceph_deploy.osd][INFO  ] re-reading known partitions will display errors
[magna025][INFO  ] Running command: partx -a /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy disk zap magna025:/dev/sdc
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on magna025
[magna025][DEBUG ] connected to host: magna025 
[magna025][DEBUG ] detect platform information from remote host
[magna025][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[magna025][DEBUG ] zeroing last few blocks of device
[magna025][DEBUG ] find the location of an executable
[magna025][INFO  ] Running command: /usr/sbin/ceph-disk zap /dev/sdc
[magna025][WARNING] Caution: invalid backup GPT header, but valid main header; regenerating
[magna025][WARNING] backup header from main header.
[magna025][WARNING] 
[magna025][WARNING] Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
[magna025][WARNING] on the recovery & transformation menu to examine the two tables.
[magna025][WARNING] 
[magna025][WARNING] Warning! One or more CRCs don't match. You should repair the disk!
[magna025][WARNING] 
[magna025][DEBUG ] ****************************************************************************
[magna025][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[magna025][DEBUG ] verification and recovery are STRONGLY recommended.
[magna025][DEBUG ] ****************************************************************************
[magna025][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[magna025][DEBUG ] other utilities.
[magna025][DEBUG ] Creating new GPT entries.
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] partx: specified range <1:0> does not make sense
[ceph_deploy.osd][INFO  ] calling partx on zapped device /dev/sdc
[ceph_deploy.osd][INFO  ] re-reading known partitions will display errors
[magna025][INFO  ] Running command: partx -a /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy disk zap magna025:/dev/sdd
[ceph_deploy.osd][DEBUG ] zapping /dev/sdd on magna025
[magna025][DEBUG ] connected to host: magna025 
[magna025][DEBUG ] detect platform information from remote host
[magna025][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[magna025][DEBUG ] zeroing last few blocks of device
[magna025][DEBUG ] find the location of an executable
[magna025][INFO  ] Running command: /usr/sbin/ceph-disk zap /dev/sdd
[magna025][WARNING] Caution: invalid backup GPT header, but valid main header; regenerating
[magna025][WARNING] backup header from main header.
[magna025][WARNING] 
[magna025][WARNING] Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
[magna025][WARNING] on the recovery & transformation menu to examine the two tables.
[magna025][WARNING] 
[magna025][WARNING] Warning! One or more CRCs don't match. You should repair the disk!
[magna025][WARNING] 
[magna025][DEBUG ] ****************************************************************************
[magna025][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[magna025][DEBUG ] verification and recovery are STRONGLY recommended.
[magna025][DEBUG ] ****************************************************************************
[magna025][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[magna025][DEBUG ] other utilities.
[magna025][DEBUG ] Creating new GPT entries.
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] partx: specified range <1:0> does not make sense
[ceph_deploy.osd][INFO  ] calling partx on zapped device /dev/sdd
[ceph_deploy.osd][INFO  ] re-reading known partitions will display errors
[magna025][INFO  ] Running command: partx -a /dev/sdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy osd prepare magna025:/dev/sdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks magna025:/dev/sdb:
[magna025][DEBUG ] connected to host: magna025 
[magna025][DEBUG ] detect platform information from remote host
[magna025][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to magna025
[magna025][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[magna025][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host magna025 disk /dev/sdb journal None activate False
[magna025][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[magna025][WARNING] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[magna025][WARNING] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdb
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:25506aa6-569a-44db-a735-1e12c8fb8981 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] INFO:ceph-disk:calling partx on prepared device /dev/sdb
[magna025][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdb
[magna025][WARNING] partx: /dev/sdb: error adding partition 2
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[magna025][WARNING] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/25506aa6-569a-44db-a735-1e12c8fb8981
[magna025][WARNING] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/25506aa6-569a-44db-a735-1e12c8fb8981
[magna025][WARNING] DEBUG:ceph-disk:Creating osd partition on /dev/sdb
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:4c79817e-f444-44f7-8f9d-fed419eb170e --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] INFO:ceph-disk:calling partx on created device /dev/sdb
[magna025][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdb
[magna025][WARNING] partx: /dev/sdb: error adding partitions 1-2
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[magna025][WARNING] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[magna025][DEBUG ] meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=60719917 blks
[magna025][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[magna025][DEBUG ]          =                       crc=0        finobt=0
[magna025][DEBUG ] data     =                       bsize=4096   blocks=242879665, imaxpct=25
[magna025][DEBUG ]          =                       sunit=0      swidth=0 blks
[magna025][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
[magna025][DEBUG ] log      =internal log           bsize=4096   blocks=118593, version=2
[magna025][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[magna025][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[magna025][WARNING] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.MizyTk with options noatime,inode64
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.MizyTk
[magna025][WARNING] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.MizyTk
[magna025][WARNING] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.MizyTk/journal -> /dev/disk/by-partuuid/25506aa6-569a-44db-a735-1e12c8fb8981
[magna025][WARNING] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.MizyTk
[magna025][WARNING] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.MizyTk
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] INFO:ceph-disk:calling partx on prepared device /dev/sdb
[magna025][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdb
[magna025][WARNING] partx: /dev/sdb: error adding partitions 1-2
[magna025][INFO  ] checking OSD status...
[magna025][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host magna025 is now ready for osd use.
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy osd prepare magna025:/dev/sdc
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks magna025:/dev/sdc:
[magna025][DEBUG ] connected to host: magna025 
[magna025][DEBUG ] detect platform information from remote host
[magna025][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to magna025
[magna025][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[magna025][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host magna025 disk /dev/sdc journal None activate False
[magna025][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdc
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[magna025][WARNING] INFO:ceph-disk:Will colocate journal with data on /dev/sdc
[magna025][WARNING] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdc
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:e3b23b55-3ca4-4fea-8dd6-80a9ee23f9ff --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdc
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] INFO:ceph-disk:calling partx on prepared device /dev/sdc
[magna025][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdc
[magna025][WARNING] partx: /dev/sdc: error adding partition 2
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[magna025][WARNING] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/e3b23b55-3ca4-4fea-8dd6-80a9ee23f9ff
[magna025][WARNING] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/e3b23b55-3ca4-4fea-8dd6-80a9ee23f9ff
[magna025][WARNING] DEBUG:ceph-disk:Creating osd partition on /dev/sdc
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:ca686a42-e591-4aaa-84a5-1d015e39c93c --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdc
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] INFO:ceph-disk:calling partx on created device /dev/sdc
[magna025][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdc
[magna025][WARNING] partx: /dev/sdc: error adding partitions 1-2
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[magna025][WARNING] DEBUG:ceph-disk:Creating xfs fs on /dev/sdc1
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdc1
[magna025][DEBUG ] meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=60719917 blks
[magna025][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[magna025][DEBUG ]          =                       crc=0        finobt=0
[magna025][DEBUG ] data     =                       bsize=4096   blocks=242879665, imaxpct=25
[magna025][DEBUG ]          =                       sunit=0      swidth=0 blks
[magna025][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
[magna025][DEBUG ] log      =internal log           bsize=4096   blocks=118593, version=2
[magna025][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[magna025][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[magna025][WARNING] DEBUG:ceph-disk:Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.HljSUR with options noatime,inode64
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.HljSUR
[magna025][WARNING] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.HljSUR
[magna025][WARNING] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.HljSUR/journal -> /dev/disk/by-partuuid/e3b23b55-3ca4-4fea-8dd6-80a9ee23f9ff
[magna025][WARNING] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.HljSUR
[magna025][WARNING] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.HljSUR
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdc
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] INFO:ceph-disk:calling partx on prepared device /dev/sdc
[magna025][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdc
[magna025][WARNING] partx: /dev/sdc: error adding partitions 1-2
[magna025][INFO  ] checking OSD status...
[magna025][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host magna025 is now ready for osd use.
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy osd prepare magna025:/dev/sdd
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks magna025:/dev/sdd:
[magna025][DEBUG ] connected to host: magna025 
[magna025][DEBUG ] detect platform information from remote host
[magna025][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to magna025
[magna025][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[magna025][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host magna025 disk /dev/sdd journal None activate False
[magna025][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdd
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[magna025][WARNING] INFO:ceph-disk:Will colocate journal with data on /dev/sdd
[magna025][WARNING] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdd
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:b29e22fc-b639-4eed-943f-db5f92bc0eaa --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdd
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] INFO:ceph-disk:calling partx on prepared device /dev/sdd
[magna025][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdd
[magna025][WARNING] partx: /dev/sdd: error adding partition 2
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[magna025][WARNING] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/b29e22fc-b639-4eed-943f-db5f92bc0eaa
[magna025][WARNING] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/b29e22fc-b639-4eed-943f-db5f92bc0eaa
[magna025][WARNING] DEBUG:ceph-disk:Creating osd partition on /dev/sdd
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:1036dd35-66fb-4ce1-bb21-58c2e4eabec7 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdd
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] INFO:ceph-disk:calling partx on created device /dev/sdd
[magna025][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdd
[magna025][WARNING] partx: /dev/sdd: error adding partitions 1-2
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[magna025][WARNING] DEBUG:ceph-disk:Creating xfs fs on /dev/sdd1
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdd1
[magna025][DEBUG ] meta-data=/dev/sdd1              isize=2048   agcount=4, agsize=60719917 blks
[magna025][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[magna025][DEBUG ]          =                       crc=0        finobt=0
[magna025][DEBUG ] data     =                       bsize=4096   blocks=242879665, imaxpct=25
[magna025][DEBUG ]          =                       sunit=0      swidth=0 blks
[magna025][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
[magna025][DEBUG ] log      =internal log           bsize=4096   blocks=118593, version=2
[magna025][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[magna025][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[magna025][WARNING] DEBUG:ceph-disk:Mounting /dev/sdd1 on /var/lib/ceph/tmp/mnt.6IEYMV with options noatime,inode64
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdd1 /var/lib/ceph/tmp/mnt.6IEYMV
[magna025][WARNING] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.6IEYMV
[magna025][WARNING] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.6IEYMV/journal -> /dev/disk/by-partuuid/b29e22fc-b639-4eed-943f-db5f92bc0eaa
[magna025][WARNING] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.6IEYMV
[magna025][WARNING] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.6IEYMV
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdd
[magna025][DEBUG ] The operation has completed successfully.
[magna025][WARNING] INFO:ceph-disk:calling partx on prepared device /dev/sdd
[magna025][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdd
[magna025][WARNING] partx: /dev/sdd: error adding partitions 1-2
[magna025][INFO  ] checking OSD status...
[magna025][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host magna025 is now ready for osd use.
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy osd activate magna025:/dev/sdd
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks magna025:/dev/sdd:
[magna025][DEBUG ] connected to host: magna025 
[magna025][DEBUG ] detect platform information from remote host
[magna025][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.osd][DEBUG ] activating host magna025 disk /dev/sdd
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[magna025][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdd
[magna025][WARNING] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdd
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_btrfs
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_btrfs
[magna025][WARNING] DEBUG:ceph-disk:Mounting /dev/sdd on /var/lib/ceph/tmp/mnt.1dxJEp with options noatime,user_subvol_rm_allowed
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/mount -t btrfs -o noatime,user_subvol_rm_allowed -- /dev/sdd /var/lib/ceph/tmp/mnt.1dxJEp
[magna025][WARNING] DEBUG:ceph-disk:Cluster uuid is 3787ca52-c644-4fcf-a5a7-c17d38d8c825
[magna025][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[magna025][WARNING] ERROR:ceph-disk:Failed to activate
[magna025][WARNING] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.1dxJEp
[magna025][WARNING] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.1dxJEp
[magna025][WARNING] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid 3787ca52-c644-4fcf-a5a7-c17d38d8c825
[magna025][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdd

Comment 2 Alfredo Deza 2015-10-21 19:47:40 UTC
Vasu, it looks like you have a configuration issue. This line seems to show what is going on:

[magna025][WARNING] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid 3787ca52-c644-4fcf-a5a7-c17d38d8c825
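
A rough way to confirm such a mismatch on the node, assuming the OSD data partition is /dev/sdd1 and /mnt is free (ceph-disk prepare stamps the cluster fsid into a ceph_fsid file in the data dir):

grep fsid /etc/ceph/ceph.conf                     # fsid the local config expects
sudo mount /dev/sdd1 /mnt && cat /mnt/ceph_fsid   # fsid recorded at prepare time
sudo umount /mnt

If the two values differ, activation is refused with the error quoted above.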

Comment 3 Vasu Kulkarni 2015-10-27 22:52:39 UTC
Alfredo,

I will retest this on the 7.2 GA build; my guess is it is something due to the snapshot build.

Comment 6 Vasu Kulkarni 2015-12-03 19:13:47 UTC
Alfredo assigning this back to you :)

On 7.2 GA I see something similar, probably an issue with ceph-disk on 7.2?
After osd prepare the OSDs don't seem to get activated, but this works on 7.1.


2015-12-03 14:07:03,437.437 INFO:teuthology.orchestra.run.magna009:Running: 'cd ~/cdtest ; ceph-deploy osd prepare magna009:/dev/sdd'
2015-12-03 14:07:03,599.599 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
2015-12-03 14:07:03,599.599 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ] Invoked (1.5.27.3): /usr/bin/ceph-deploy osd prepare magna009:/dev/sdd
2015-12-03 14:07:03,600.600 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ] ceph-deploy options:
2015-12-03 14:07:03,600.600 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  username                      : None
2015-12-03 14:07:03,600.600 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  disk                          : [('magna009', '/dev/sdd', None)]
2015-12-03 14:07:03,600.600 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
2015-12-03 14:07:03,601.601 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  verbose                       : False
2015-12-03 14:07:03,601.601 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
2015-12-03 14:07:03,601.601 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
2015-12-03 14:07:03,601.601 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
2015-12-03 14:07:03,601.601 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  quiet                         : False
2015-12-03 14:07:03,602.602 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa642c5af38>
2015-12-03 14:07:03,602.602 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  cluster                       : ceph
2015-12-03 14:07:03,602.602 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
2015-12-03 14:07:03,602.602 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fa642c51398>
2015-12-03 14:07:03,602.602 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
2015-12-03 14:07:03,603.603 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  default_release               : False
2015-12-03 14:07:03,603.603 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.cli][INFO  ]  zap_disk                      : False
2015-12-03 14:07:03,603.603 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks magna009:/dev/sdd:
2015-12-03 14:07:03,639.639 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] connection detected need for sudo
2015-12-03 14:07:03,661.661 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] connected to host: magna009
2015-12-03 14:07:03,661.661 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] detect platform information from remote host
2015-12-03 14:07:03,685.685 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] detect machine type
2015-12-03 14:07:03,689.689 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
2015-12-03 14:07:03,689.689 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.osd][DEBUG ] Deploying osd to magna009
2015-12-03 14:07:03,689.689 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
2015-12-03 14:07:03,691.691 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
2015-12-03 14:07:03,714.714 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.osd][DEBUG ] Preparing host magna009 disk /dev/sdd journal None activate False
2015-12-03 14:07:03,716.716 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][INFO  ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdd
2015-12-03 14:07:03,837.837 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
2015-12-03 14:07:03,837.837 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
2015-12-03 14:07:03,837.837 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
2015-12-03 14:07:03,838.838 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
2015-12-03 14:07:03,838.838 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
2015-12-03 14:07:03,838.838 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
2015-12-03 14:07:03,838.838 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
2015-12-03 14:07:03,840.840 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
2015-12-03 14:07:03,847.847 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
2015-12-03 14:07:03,851.851 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Will colocate journal with data on /dev/sdd
2015-12-03 14:07:03,851.851 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdd
2015-12-03 14:07:03,852.852 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:a573e973-5fda-4d1c-ae6e-646a7977c3ee --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdd
2015-12-03 14:07:05,068.068 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] The operation has completed successfully.
2015-12-03 14:07:05,068.068 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:calling partx on prepared device /dev/sdd
2015-12-03 14:07:05,069.069 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
2015-12-03 14:07:05,069.069 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdd
2015-12-03 14:07:05,069.069 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] partx: /dev/sdd: error adding partition 2
2015-12-03 14:07:05,069.069 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
2015-12-03 14:07:05,733.733 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/a573e973-5fda-4d1c-ae6e-646a7977c3ee
2015-12-03 14:07:05,733.733 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/a573e973-5fda-4d1c-ae6e-646a7977c3ee
2015-12-03 14:07:05,733.733 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] DEBUG:ceph-disk:Creating osd partition on /dev/sdd
2015-12-03 14:07:05,734.734 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:b28774a5-e248-428d-bcf7-f46b4440221e --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdd
2015-12-03 14:07:07,000.000 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] The operation has completed successfully.
2015-12-03 14:07:07,001.001 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:calling partx on created device /dev/sdd
2015-12-03 14:07:07,001.001 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
2015-12-03 14:07:07,001.001 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdd
2015-12-03 14:07:07,001.001 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] partx: /dev/sdd: error adding partitions 1-2
2015-12-03 14:07:07,001.001 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
2015-12-03 14:07:07,264.264 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] DEBUG:ceph-disk:Creating xfs fs on /dev/sdd1
2015-12-03 14:07:07,264.264 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdd1
2015-12-03 14:07:11,639.639 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] meta-data=/dev/sdd1              isize=2048   agcount=4, agsize=60719917 blks
2015-12-03 14:07:11,639.639 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
2015-12-03 14:07:11,639.639 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ]          =                       crc=0        finobt=0
2015-12-03 14:07:11,640.640 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] data     =                       bsize=4096   blocks=242879665, imaxpct=25
2015-12-03 14:07:11,640.640 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ]          =                       sunit=0      swidth=0 blks
2015-12-03 14:07:11,640.640 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
2015-12-03 14:07:11,640.640 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] log      =internal log           bsize=4096   blocks=118593, version=2
2015-12-03 14:07:11,640.640 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
2015-12-03 14:07:11,641.641 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
2015-12-03 14:07:11,641.641 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] DEBUG:ceph-disk:Mounting /dev/sdd1 on /var/lib/ceph/tmp/mnt.7Vx3n4 with options noatime,inode64
2015-12-03 14:07:11,641.641 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdd1 /var/lib/ceph/tmp/mnt.7Vx3n4
2015-12-03 14:07:11,902.902 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.7Vx3n4
2015-12-03 14:07:11,902.902 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.7Vx3n4/journal -> /dev/disk/by-partuuid/a573e973-5fda-4d1c-ae6e-646a7977c3ee
2015-12-03 14:07:11,933.933 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.7Vx3n4
2015-12-03 14:07:11,934.934 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.7Vx3n4
2015-12-03 14:07:12,097.097 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdd
2015-12-03 14:07:13,214.214 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] Warning: The kernel is still using the old partition table.
2015-12-03 14:07:13,214.214 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] The new table will be used at the next reboot.
2015-12-03 14:07:13,215.215 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][DEBUG ] The operation has completed successfully.
2015-12-03 14:07:13,215.215 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:calling partx on prepared device /dev/sdd
2015-12-03 14:07:13,215.215 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:re-reading known partitions will display errors
2015-12-03 14:07:13,215.215 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdd
2015-12-03 14:07:13,216.216 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] partx: /dev/sdd: error adding partitions 1-2
2015-12-03 14:07:18,220.220 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][INFO  ] checking OSD status...
2015-12-03 14:07:18,222.222 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
2015-12-03 14:07:18,488.488 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] there are 3 OSDs down
2015-12-03 14:07:18,488.488 INFO:teuthology.orchestra.run.magna009.stderr:[magna009][WARNING] there are 3 OSDs out
2015-12-03 14:07:18,488.488 INFO:teuthology.orchestra.run.magna009.stderr:[ceph_deploy.osd][DEBUG ] Host magna009 is now ready for osd use.
2015-12-03 14:07:18,497.497 INFO:teuthology.orchestra.run.magna009:Running: 'ls -lt ~/cdtest/'
2015-12-03 14:07:18,554.554 INFO:teuthology.orchestra.run.magna009.stdout:total 92
2015-12-03 14:07:18,554.554 INFO:teuthology.orchestra.run.magna009.stdout:-rw-rw-r--. 1 ubuntu ubuntu 65453 Dec  3 14:07 ceph.log
2015-12-03 14:07:18,554.554 INFO:teuthology.orchestra.run.magna009.stdout:-rw-------. 1 ubuntu ubuntu    71 Dec  3 14:06 ceph.bootstrap-rgw.keyring
2015-12-03 14:07:18,554.554 INFO:teuthology.orchestra.run.magna009.stdout:-rw-------. 1 ubuntu ubuntu    71 Dec  3 14:06 ceph.bootstrap-mds.keyring
2015-12-03 14:07:18,554.554 INFO:teuthology.orchestra.run.magna009.stdout:-rw-------. 1 ubuntu ubuntu    71 Dec  3 14:06 ceph.bootstrap-osd.keyring
2015-12-03 14:07:18,554.554 INFO:teuthology.orchestra.run.magna009.stdout:-rw-------. 1 ubuntu ubuntu    63 Dec  3 14:06 ceph.client.admin.keyring
2015-12-03 14:07:18,555.555 INFO:teuthology.orchestra.run.magna009.stdout:-rw-rw-r--. 1 ubuntu ubuntu   228 Dec  3 14:04 ceph.conf
2015-12-03 14:07:18,555.555 INFO:teuthology.orchestra.run.magna009.stdout:-rw-------. 1 ubuntu ubuntu    73 Dec  3 14:04 ceph.mon.keyring
2015-12-03 14:07:22,556.556 INFO:teuthology.orchestra.run.magna009:Running: 'sudo ceph -s'
2015-12-03 14:07:22,828.828 INFO:teuthology.orchestra.run.magna009.stdout:    cluster 2d27be62-0a7b-4ad3-84b4-1f6ae7600a96
2015-12-03 14:07:22,828.828 INFO:teuthology.orchestra.run.magna009.stdout:     health HEALTH_WARN
2015-12-03 14:07:22,829.829 INFO:teuthology.orchestra.run.magna009.stdout:            64 pgs stuck inactive
2015-12-03 14:07:22,829.829 INFO:teuthology.orchestra.run.magna009.stdout:            64 pgs stuck unclean
2015-12-03 14:07:22,829.829 INFO:teuthology.orchestra.run.magna009.stdout:     monmap e1: 1 mons at {magna009=10.8.128.9:6789/0}
2015-12-03 14:07:22,829.829 INFO:teuthology.orchestra.run.magna009.stdout:            election epoch 2, quorum 0 magna009
2015-12-03 14:07:22,829.829 INFO:teuthology.orchestra.run.magna009.stdout:     osdmap e7: 3 osds: 0 up, 0 in
2015-12-03 14:07:22,829.829 INFO:teuthology.orchestra.run.magna009.stdout:      pgmap v8: 64 pgs, 1 pools, 0 bytes data, 0 objects
2015-12-03 14:07:22,830.830 INFO:teuthology.orchestra.run.magna009.stdout:            0 kB used, 0 kB / 0 kB avail
2015-12-03 14:07:22,830.830 INFO:teuthology.orchestra.run.magna009.stdout:                  64 creating
2015-12-03 14:07:22,837.837 INFO:teuthology.orchestra.run.magna009:Running: 'sudo ceph health'

Comment 7 Vasu Kulkarni 2015-12-03 21:36:22 UTC
So we need an explicit activate here and then it works, but on 7.1 prepare activates the OSDs, and there is a BZ to track that: https://bugzilla.redhat.com/show_bug.cgi?id=1244287


[ubuntu@magna009 cdtest]$ cd ~/cdtest ; ceph-deploy osd activate magna009:/dev/sdb1:/dev/sdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.27.3): /usr/bin/ceph-deploy osd activate magna009:/dev/sdb1:/dev/sdb1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7d804f5f80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f7d804ed398>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('magna009', '/dev/sdb1', '/dev/sdb1')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks magna009:/dev/sdb1:/dev/sdb1
[magna009][DEBUG ] connection detected need for sudo
[magna009][DEBUG ] connected to host: magna009 
[magna009][DEBUG ] detect platform information from remote host
[magna009][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.osd][DEBUG ] activating host magna009 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[magna009][INFO  ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
[magna009][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1
[magna009][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[magna009][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[magna009][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.ZFKYR_ with options noatime,inode64
[magna009][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.ZFKYR_
[magna009][WARNIN] DEBUG:ceph-disk:Cluster uuid is 2d27be62-0a7b-4ad3-84b4-1f6ae7600a96
[magna009][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[magna009][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[magna009][WARNIN] DEBUG:ceph-disk:OSD uuid is ae49b11c-fe5c-4c4e-96a1-31709fe67201
[magna009][WARNIN] DEBUG:ceph-disk:OSD id is 0
[magna009][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[magna009][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.ZFKYR_
[magna009][WARNIN] DEBUG:ceph-disk:Moving mount to final location...
[magna009][WARNIN] INFO:ceph-disk:Running command: /bin/mount -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/osd/ceph-0
[magna009][WARNIN] INFO:ceph-disk:Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.ZFKYR_
[magna009][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
[magna009][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.0
[magna009][DEBUG ] === osd.0 === 
[magna009][WARNIN] libust[16446/16446]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[magna009][WARNIN] create-or-move updated item name 'osd.0' weight 0.9 at location {host=magna009,root=default} to crush map
[magna009][DEBUG ] Starting Ceph osd.0 on magna009...
[magna009][WARNIN] Running as unit run-16491.service.
[magna009][INFO  ] checking OSD status...
[magna009][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[magna009][WARNIN] there are 2 OSDs down
[magna009][WARNIN] there are 2 OSDs out
[magna009][INFO  ] Running command: sudo systemctl enable ceph
[magna009][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[magna009][WARNIN] Executing /sbin/chkconfig ceph on

Comment 8 Vasu Kulkarni 2015-12-07 23:15:12 UTC
Loic,

Assigning this to you. The node to use is magna031; you can also find the logs in the /home/ubuntu/cd folder.

cmds log:
http://fpaste.org/298409/14495298/

ceph.log:
http://fpaste.org/298410/44952992/

Comment 9 Loic Dachary 2015-12-08 17:59:00 UTC
Vasu,

Would it be too much to ask for a similar 7.1 setup? I was planning to use the CentOS 7.1 environment but I can't make it work right now (the lab is moving to RDU).

Thanks !

Comment 10 Loic Dachary 2015-12-08 18:21:56 UTC
Vasu,

The udev event that is supposed to activate the OSD does not happen because 

Dec  8 12:45:55 magna031 kernel: sdc: unknown partition table
Dec  8 12:45:57 magna031 kernel: sdc: sdc2
Dec  8 12:45:57 magna031 python: detected unhandled Python exception in '/usr/sbin/ceph-disk'
Dec  8 12:45:57 magna031 abrt-server: Package 'ceph-osd' isn't signed with proper key
Dec  8 12:45:57 magna031 abrt-server: 'post-create' on '/var/spool/abrt/Python-2015-12-08-12:45:57-32031' exited with 1
Dec  8 12:45:57 magna031 abrt-server: Deleting problem directory '/var/spool/abrt/Python-2015-12-08-12:45:57-32031'

as found in /var/log/messages. I'm not familiar with the "Package 'ceph-osd' isn't signed with proper key" message. Do you know who to ask about it?
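
(For reference, a simple way to pull the relevant lines and to check whether abrt kept a problem directory, assuming the default syslog target:)

sudo grep -E 'ceph-disk|abrt' /var/log/messages | tail -n 20
sudo ls /var/spool/abrt/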

Comment 11 Vasu Kulkarni 2015-12-08 18:34:36 UTC
Loic,

I will set up a similar 7.1 system. Regarding the signed key, I can check with Ken if that's an issue when testing with SELinux; we sign the packages only at the end of the release, so I can try from another repo where we have the signed key.

Thanks

Comment 12 Ken Dreyer (Red Hat) 2015-12-08 18:44:40 UTC
The error from abrt-server is a tangential issue. It just means that abrt cannot generate a backtrace from the unsigned package.

Edit the file /etc/abrt/abrt-action-save-package-data.conf

Set OpenGPGCheck = no

Reload abrtd with the command  `service abrtd reload`
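
A possible one-liner for the same change on the affected node (the sed edit is just a convenience; the file, the setting, and the reload command are the ones named above):

sudo sed -i 's/^OpenGPGCheck *=.*/OpenGPGCheck = no/' /etc/abrt/abrt-action-save-package-data.conf
sudo service abrtd reload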

Comment 13 Loic Dachary 2015-12-08 22:12:08 UTC
@Ken so this error does not actually prevent the action from running? If so, I'm puzzled because I can't seem to find a trace of the output.

Comment 14 Ken Dreyer (Red Hat) 2015-12-08 22:13:51 UTC
"Dec  8 12:45:57 magna031 python: detected unhandled Python exception in '/usr/sbin/ceph-disk'" means that there was a backtrace, but abrt is concealing it from us.

If we can get abrt to properly record the backtrace, we'll have more information.

Comment 15 Loic Dachary 2015-12-08 22:40:34 UTC
@Ken it's not preventing ceph-disk from running, but it is swallowing the output :-) After activating abrt with your help, I got the following backtrace from the
/var/spool/abrt/Python-2015-12-08-17:27:17-1782/backtrace directory. Do you happen to know where in this directory the stdout/stderr can be found?

ceph-disk:153:acquire:IOError: [Errno 30] Read-only file system: '/var/lib/ceph/tmp/ceph-disk.activate.lock'

Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 2994, in <module>
    main()
  File "/usr/sbin/ceph-disk", line 2972, in main
    args.func(args)
  File "/usr/sbin/ceph-disk", line 2171, in main_activate
    activate_lock.acquire()  # noqa
  File "/usr/sbin/ceph-disk", line 153, in acquire
    self.fd = file(self.fn, 'w')
IOError: [Errno 30] Read-only file system: '/var/lib/ceph/tmp/ceph-disk.activate.lock'

Local variables in innermost frame:
self: <__main__.filelock object at 0x1a8fed0>
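
A quick sanity check for the read-only condition, assuming it is reproduced on the same node (the path comes from the traceback above; findmnt and ls are generic):

findmnt -no OPTIONS -T /var/lib/ceph/tmp   # look for "ro" among the mount options
ls -ld /var/lib/ceph/tmp                   # confirm the directory exists and is writable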

Comment 18 Vikhyat Umrao 2016-01-19 08:40:59 UTC
Hello Loic, Vasu and all,

Please correct me if I am wrong :

As per my understanding, what we are seeing in 1.3.1 (ceph-deploy) is the right approach; 1.3 had issues where prepare also activated the OSD.

- prepare should not activate the OSD, prepare should only format the disk and prepare it.
- activate should activate the OSD.

And this is working as expected in 1.3.1.

Now if an administrator wants to prepare and activate the OSD at the same time, then I think he/she should use "ceph-deploy osd create", not "ceph-deploy osd prepare" followed by "ceph-deploy osd activate".
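
For illustration, the two flows would look roughly like this (host and devices taken from the logs above, used here as placeholders):

# one step: prepare and activate together
ceph-deploy osd create magna009:/dev/sdd

# two steps, as exercised in this bug
ceph-deploy osd prepare magna009:/dev/sdd
ceph-deploy osd activate magna009:/dev/sdd1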

Please let me know your inputs.

Regards,
Vikhyat

Comment 19 Moritz Rogalli 2016-09-06 14:02:56 UTC
I had the same exception and found a solution that works for us, if this is still relevant: the lock file is owned by root, but ceph-disk, when triggered by udev, runs as ceph.

cd /var/lib/ceph/tmp/
chown ceph:root ceph-disk.*.lock

Comment 20 Loic Dachary 2017-09-21 14:30:33 UTC
This is no longer relevant. The reason why the lock file was on a read-only file system at some point has not been clarified. There cannot be races between ceph-deploy commands because there is a global activation lock on each machine.

Closing but feel free to re-open if that requires more investigation.

