Thursday, December 5, 2013

Ceph Installation :: Part-3



Creating Block Device from Ceph

  • From the monitor node, use ceph-deploy to install Ceph on your ceph-client1 node.
[root@ceph-mon1 ~]# ceph-deploy install ceph-client1
[ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy install ceph-client1
[ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster ceph hosts ceph-client1
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-client1 ...
[ceph-client1][DEBUG ] connected to host: ceph-client1
[ceph-client1][DEBUG ] detect platform information from remote host
[ceph-client1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 13.04 raring
[ceph-client1][INFO  ] installing ceph on ceph-client1
[ceph-client1][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive apt-get -q install --assume-yes ca-certificates
[ceph-client1][DEBUG ] Reading package lists...
[ceph-client1][DEBUG ] Building dependency tree...
[ceph-client1][DEBUG ] Reading state information...
[ceph-client1][DEBUG ] ca-certificates is already the newest version.
[ceph-client1][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 105 not upgraded.
[ceph-client1][INFO  ] Running command: wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
[ceph-client1][WARNIN] command returned non-zero exit status: 4
[ceph-client1][DEBUG ] add ceph deb repo to sources.list
[ceph-client1][INFO  ] Running command: apt-get -q update
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security Release.gpg
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security Release
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring Release.gpg
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates Release.gpg
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring Release
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates Release
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security/main Sources
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring/main Sources
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring/universe Sources
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring/main amd64 Packages
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security/universe Sources
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring/universe amd64 Packages
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring/main i386 Packages
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring/universe i386 Packages
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security/main amd64 Packages
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring/main Translation-en
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring/universe Translation-en
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security/universe amd64 Packages
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates/main Sources
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates/universe Sources
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates/main amd64 Packages
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security/main i386 Packages
[ceph-client1][DEBUG ] Get:1 http://ceph.com raring Release.gpg [836 B]
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates/universe amd64 Packages
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security/universe i386 Packages
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates/main i386 Packages
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates/universe i386 Packages
[ceph-client1][DEBUG ] Hit http://ceph.com raring Release
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security/main Translation-en
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates/main Translation-en
[ceph-client1][DEBUG ] Hit http://nova.clouds.archive.ubuntu.com raring-updates/universe Translation-en
[ceph-client1][DEBUG ] Hit http://security.ubuntu.com raring-security/universe Translation-en
[ceph-client1][DEBUG ] Hit http://ceph.com raring/main amd64 Packages
[ceph-client1][DEBUG ] Hit http://ceph.com raring/main i386 Packages
[ceph-client1][DEBUG ] Ign http://ceph.com raring/main Translation-en
[ceph-client1][DEBUG ] Fetched 836 B in 15s (55 B/s)
[ceph-client1][DEBUG ] Reading package lists...
[ceph-client1][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get -q -o Dpkg::Options::=--force-confnew --no-install-recommends --assume-yes install -- ceph ceph-mds ceph-common ceph-fs-common gdisk
[ceph-client1][DEBUG ] Reading package lists...
[ceph-client1][DEBUG ] Building dependency tree...
[ceph-client1][DEBUG ] Reading state information...
[ceph-client1][DEBUG ] gdisk is already the newest version.
[ceph-client1][DEBUG ] The following extra packages will be installed:
[ceph-client1][DEBUG ]   binutils libaio1 libboost-thread1.49.0 libgoogle-perftools4 libjs-jquery
[ceph-client1][DEBUG ]   libleveldb1 libnspr4 libnss3 librados2 librbd1 libreadline5 libsnappy1
[ceph-client1][DEBUG ]   libtcmalloc-minimal4 libunwind8 python-ceph python-flask python-jinja2
[ceph-client1][DEBUG ]   python-markupsafe python-werkzeug xfsprogs
[ceph-client1][DEBUG ] Suggested packages:
[ceph-client1][DEBUG ]   binutils-doc javascript-common python-jinja2-doc ipython python-genshi
[ceph-client1][DEBUG ]   python-lxml python-memcache libjs-sphinxdoc xfsdump acl attr quota
[ceph-client1][DEBUG ] Recommended packages:
[ceph-client1][DEBUG ]   btrfs-tools ceph-fuse libcephfs1
[ceph-client1][DEBUG ] The following NEW packages will be installed:
[ceph-client1][DEBUG ]   binutils ceph ceph-common ceph-fs-common ceph-mds libaio1
[ceph-client1][DEBUG ]   libboost-thread1.49.0 libgoogle-perftools4 libjs-jquery libleveldb1 libnspr4
[ceph-client1][DEBUG ]   libnss3 librados2 librbd1 libreadline5 libsnappy1 libtcmalloc-minimal4
[ceph-client1][DEBUG ]   libunwind8 python-ceph python-flask python-jinja2 python-markupsafe
[ceph-client1][DEBUG ]   python-werkzeug xfsprogs
[ceph-client1][DEBUG ] 0 upgraded, 24 newly installed, 0 to remove and 105 not upgraded.
[ceph-client1][DEBUG ] Need to get 40.9 MB of archives.
[ceph-client1][DEBUG ] After this operation, 192 MB of additional disk space will be used.
[ceph-client1][DEBUG ] Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libaio1 amd64 0.3.109-3 [6328 B]
[ceph-client1][DEBUG ] Get:2 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libsnappy1 amd64 1.0.5-2 [13.2 kB]
[ceph-client1][DEBUG ] Get:3 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libleveldb1 amd64 1.9.0-1 [138 kB]
[ceph-client1][DEBUG ] Get:4 http://ceph.com/debian-dumpling/ raring/main librados2 amd64 0.67.4-1raring [1635 kB]
[ceph-client1][DEBUG ] Get:5 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libnspr4 amd64 2:4.9.5-1ubuntu1 [134 kB]
[ceph-client1][DEBUG ] Get:6 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libnss3 amd64 2:3.14.3-0ubuntu1 [1044 kB]
[ceph-client1][DEBUG ] Get:7 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libreadline5 amd64 5.2+dfsg-1 [131 kB]
[ceph-client1][DEBUG ] Get:8 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libunwind8 amd64 1.0.1-4ubuntu2 [55.9 kB]
[ceph-client1][DEBUG ] Get:9 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main binutils amd64 2.23.2-2ubuntu1 [2393 kB]
[ceph-client1][DEBUG ] Get:10 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libboost-thread1.49.0 amd64 1.49.0-3.2ubuntu1 [41.6 kB]
[ceph-client1][DEBUG ] Get:11 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libjs-jquery all 1.7.2+debian-1ubuntu1 [115 kB]
[ceph-client1][DEBUG ] Get:12 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main python-werkzeug all 0.8.3+dfsg-1 [1333 kB]
[ceph-client1][DEBUG ] Get:13 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main python-markupsafe amd64 0.15-1build3 [13.8 kB]
[ceph-client1][DEBUG ] Get:14 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main python-jinja2 amd64 2.6-1build3 [158 kB]
[ceph-client1][DEBUG ] Get:15 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main python-flask all 0.9-1 [56.1 kB]
[ceph-client1][DEBUG ] Get:16 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main xfsprogs amd64 3.1.9 [1238 kB]
[ceph-client1][DEBUG ] Get:17 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libtcmalloc-minimal4 amd64 2.0-4ubuntu1 [163 kB]
[ceph-client1][DEBUG ] Get:18 http://nova.clouds.archive.ubuntu.com/ubuntu/ raring/main libgoogle-perftools4 amd64 2.0-4ubuntu1 [412 kB]
[ceph-client1][DEBUG ] Get:19 http://ceph.com/debian-dumpling/ raring/main librbd1 amd64 0.67.4-1raring [276 kB]
[ceph-client1][DEBUG ] Get:20 http://ceph.com/debian-dumpling/ raring/main python-ceph amd64 0.67.4-1raring [39.7 kB]
[ceph-client1][DEBUG ] Get:21 http://ceph.com/debian-dumpling/ raring/main ceph-common amd64 0.67.4-1raring [6090 kB]
[ceph-client1][DEBUG ] Get:22 http://ceph.com/debian-dumpling/ raring/main ceph amd64 0.67.4-1raring [22.8 MB]
[ceph-client1][DEBUG ] Get:23 http://ceph.com/debian-dumpling/ raring/main ceph-fs-common amd64 0.67.4-1raring [28.2 kB]
[ceph-client1][DEBUG ] Get:24 http://ceph.com/debian-dumpling/ raring/main ceph-mds amd64 0.67.4-1raring [2676 kB]
[ceph-client1][DEBUG ] Fetched 40.9 MB in 12s (3212 kB/s)
[ceph-client1][DEBUG ] Selecting previously unselected package libaio1:amd64.
[ceph-client1][DEBUG ] (Reading database ... 58918 files and directories currently installed.)
[ceph-client1][DEBUG ] Unpacking libaio1:amd64 (from .../libaio1_0.3.109-3_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libsnappy1.
[ceph-client1][DEBUG ] Unpacking libsnappy1 (from .../libsnappy1_1.0.5-2_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libleveldb1:amd64.
[ceph-client1][DEBUG ] Unpacking libleveldb1:amd64 (from .../libleveldb1_1.9.0-1_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libnspr4:amd64.
[ceph-client1][DEBUG ] Unpacking libnspr4:amd64 (from .../libnspr4_2%3a4.9.5-1ubuntu1_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libnss3:amd64.
[ceph-client1][DEBUG ] Unpacking libnss3:amd64 (from .../libnss3_2%3a3.14.3-0ubuntu1_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libreadline5:amd64.
[ceph-client1][DEBUG ] Unpacking libreadline5:amd64 (from .../libreadline5_5.2+dfsg-1_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libunwind8.
[ceph-client1][DEBUG ] Unpacking libunwind8 (from .../libunwind8_1.0.1-4ubuntu2_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package binutils.
[ceph-client1][DEBUG ] Unpacking binutils (from .../binutils_2.23.2-2ubuntu1_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libboost-thread1.49.0.
[ceph-client1][DEBUG ] Unpacking libboost-thread1.49.0 (from .../libboost-thread1.49.0_1.49.0-3.2ubuntu1_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package librados2.
[ceph-client1][DEBUG ] Unpacking librados2 (from .../librados2_0.67.4-1raring_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package librbd1.
[ceph-client1][DEBUG ] Unpacking librbd1 (from .../librbd1_0.67.4-1raring_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libjs-jquery.
[ceph-client1][DEBUG ] Unpacking libjs-jquery (from .../libjs-jquery_1.7.2+debian-1ubuntu1_all.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package python-werkzeug.
[ceph-client1][DEBUG ] Unpacking python-werkzeug (from .../python-werkzeug_0.8.3+dfsg-1_all.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package python-markupsafe.
[ceph-client1][DEBUG ] Unpacking python-markupsafe (from .../python-markupsafe_0.15-1build3_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package python-jinja2.
[ceph-client1][DEBUG ] Unpacking python-jinja2 (from .../python-jinja2_2.6-1build3_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package python-flask.
[ceph-client1][DEBUG ] Unpacking python-flask (from .../python-flask_0.9-1_all.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package python-ceph.
[ceph-client1][DEBUG ] Unpacking python-ceph (from .../python-ceph_0.67.4-1raring_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package ceph-common.
[ceph-client1][DEBUG ] Unpacking ceph-common (from .../ceph-common_0.67.4-1raring_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package xfsprogs.
[ceph-client1][DEBUG ] Unpacking xfsprogs (from .../xfsprogs_3.1.9_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libtcmalloc-minimal4.
[ceph-client1][DEBUG ] Unpacking libtcmalloc-minimal4 (from .../libtcmalloc-minimal4_2.0-4ubuntu1_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package libgoogle-perftools4.
[ceph-client1][DEBUG ] Unpacking libgoogle-perftools4 (from .../libgoogle-perftools4_2.0-4ubuntu1_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package ceph.
[ceph-client1][DEBUG ] Unpacking ceph (from .../ceph_0.67.4-1raring_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package ceph-fs-common.
[ceph-client1][DEBUG ] Unpacking ceph-fs-common (from .../ceph-fs-common_0.67.4-1raring_amd64.deb) ...
[ceph-client1][DEBUG ] Selecting previously unselected package ceph-mds.
[ceph-client1][DEBUG ] Unpacking ceph-mds (from .../ceph-mds_0.67.4-1raring_amd64.deb) ...
[ceph-client1][DEBUG ] Processing triggers for man-db ...
[ceph-client1][DEBUG ] Processing triggers for ureadahead ...
[ceph-client1][DEBUG ] Setting up libaio1:amd64 (0.3.109-3) ...
[ceph-client1][DEBUG ] Setting up libsnappy1 (1.0.5-2) ...
[ceph-client1][DEBUG ] Setting up libleveldb1:amd64 (1.9.0-1) ...
[ceph-client1][DEBUG ] Setting up libnspr4:amd64 (2:4.9.5-1ubuntu1) ...
[ceph-client1][DEBUG ] Setting up libnss3:amd64 (2:3.14.3-0ubuntu1) ...
[ceph-client1][DEBUG ] Setting up libreadline5:amd64 (5.2+dfsg-1) ...
[ceph-client1][DEBUG ] Setting up libunwind8 (1.0.1-4ubuntu2) ...
[ceph-client1][DEBUG ] Setting up binutils (2.23.2-2ubuntu1) ...
[ceph-client1][DEBUG ] Setting up libboost-thread1.49.0 (1.49.0-3.2ubuntu1) ...
[ceph-client1][DEBUG ] Setting up librados2 (0.67.4-1raring) ...
[ceph-client1][DEBUG ] Setting up librbd1 (0.67.4-1raring) ...
[ceph-client1][DEBUG ] Setting up libjs-jquery (1.7.2+debian-1ubuntu1) ...
[ceph-client1][DEBUG ] Setting up python-werkzeug (0.8.3+dfsg-1) ...
[ceph-client1][DEBUG ] Setting up python-markupsafe (0.15-1build3) ...
[ceph-client1][DEBUG ] Setting up python-jinja2 (2.6-1build3) ...
[ceph-client1][DEBUG ] Setting up python-flask (0.9-1) ...
[ceph-client1][DEBUG ] Setting up python-ceph (0.67.4-1raring) ...
[ceph-client1][DEBUG ] Setting up ceph-common (0.67.4-1raring) ...
[ceph-client1][DEBUG ] Setting up xfsprogs (3.1.9) ...
[ceph-client1][DEBUG ] Setting up libtcmalloc-minimal4 (2.0-4ubuntu1) ...
[ceph-client1][DEBUG ] Setting up libgoogle-perftools4 (2.0-4ubuntu1) ...
[ceph-client1][DEBUG ] Setting up ceph (0.67.4-1raring) ...
[ceph-client1][DEBUG ] ceph-all start/running
[ceph-client1][DEBUG ] Setting up ceph-fs-common (0.67.4-1raring) ...
[ceph-client1][DEBUG ] Processing triggers for ureadahead ...
[ceph-client1][DEBUG ] Setting up ceph-mds (0.67.4-1raring) ...
[ceph-client1][DEBUG ] ceph-mds-all start/running
[ceph-client1][DEBUG ] Processing triggers for libc-bin ...
[ceph-client1][DEBUG ] ldconfig deferred processing now taking place
[ceph-client1][DEBUG ] Processing triggers for ureadahead ...
[ceph-client1][INFO  ] Running command: ceph --version
[ceph-client1][DEBUG ] ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
[root@ceph-mon1 ~]# 
  • From the monitor node, use ceph-deploy to copy the Ceph configuration file and the ceph.client.admin.keyring to ceph-client1.
[root@ceph-mon1 ceph]# ceph-deploy admin ceph-client1
[ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy admin ceph-client1
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-client1
[ceph-client1][DEBUG ] connected to host: ceph-client1
[ceph-client1][DEBUG ] detect platform information from remote host
[ceph-client1][DEBUG ] detect machine type
[ceph-client1][DEBUG ] get remote short hostname
[ceph-client1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@ceph-mon1 ceph]#
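If rbd commands on the client later complain that the keyring cannot be read, relaxing the permissions on the pushed admin keyring is usually enough for a test setup; a small sketch, assuming the default path used by ceph-deploy:
root@ceph-client1:~# chmod +r /etc/ceph/ceph.client.admin.keyring    # test setup only; keep it restricted in production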
  • On the ceph-client1 node, create a block device image.
rbd create block1-ceph-client1 --size 200
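Optionally, confirm that the image was created and inspect it before mapping it; both commands below are standard rbd sub-commands, and the output will vary with your cluster:
root@ceph-client1:~# rbd ls                           # list images in the default rbd pool
root@ceph-client1:~# rbd info block1-ceph-client1     # show size, order and block name prefix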
  • On the ceph-client1 node, load the rbd client module.
modprobe rbd
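To verify that the kernel module actually loaded (and, optionally, to have it loaded automatically at boot on this Ubuntu client), something like the following can be used:
root@ceph-client1:~# lsmod | grep rbd            # the rbd and libceph modules should be listed
root@ceph-client1:~# echo rbd >> /etc/modules    # optional: load the module on every boot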
  • On the ceph-client1 node, map the image to a block device.
root@ceph-client1:~# rbd map block1-ceph-client1
root@ceph-client1:~#
root@ceph-client1:~#
root@ceph-client1:~# rbd showmapped
id pool image               snap device
1  rbd  block1-ceph-client1 -    /dev/rbd1
root@ceph-client1:~#
  • Use the block device by creating a file system on the ceph-client1 node.
root@ceph-client1:/dev/rbd/rbd# mkfs.ext4 /dev/rbd/rbd/block1-ceph-client1
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=4096 blocks, Stripe width=4096 blocks
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
   8193, 24577, 40961, 57345, 73729

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

root@ceph-client1:/dev/rbd/rbd#
  • Mount the file system on the ceph-client1 node.


root@ceph-client1:/dev/rbd/rbd# mount /dev/rbd/rbd/block1-ceph-client1 /rbd-1/
root@ceph-client1:/dev/rbd/rbd#
root@ceph-client1:/dev/rbd/rbd#
root@ceph-client1:/dev/rbd/rbd#
root@ceph-client1:/dev/rbd/rbd# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       9.8G  1.3G  8.0G  14% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            487M  8.0K  487M   1% /dev
tmpfs           100M  256K  100M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            498M     0  498M   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/vdb        109G  188M  103G   1% /mnt
/dev/rbd1       190M  1.6M  179M   1% /rbd-1
root@ceph-client1:/dev/rbd/rbd#
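When you are done testing, the block device can be released cleanly in the reverse order; a minimal sketch assuming the same mount point and mapping as above:
root@ceph-client1:~# umount /rbd-1          # unmount the file system
root@ceph-client1:~# rbd unmap /dev/rbd1    # unmap the block device from the client
root@ceph-client1:~# rbd showmapped         # should no longer list the mapping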



Ceph Installation :: Part-2


CEPH Storage Cluster

Installing Ceph Deploy ( ceph-mon1 )

  • Update your repositories and install ceph-deploy on the ceph-mon1 node
[ceph@ceph-mon1 ~]$ sudo yum update && sudo yum install ceph-deploy
Loaded plugins: downloadonly, fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: ftp.funet.fi
 * epel: www.nic.funet.fi
 * extras: ftp.funet.fi
 * updates: mirror.academica.fi
Setting up Update Process
No Packages marked for Update
Loaded plugins: downloadonly, fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: ftp.funet.fi
 * epel: www.nic.funet.fi
 * extras: ftp.funet.fi
 * updates: mirror.academica.fi
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch 0:1.2.7-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================================================================
 Package                             Arch                           Version                          Repository                           Size
===============================================================================================================================================
Installing:
 ceph-deploy                         noarch                         1.2.7-0                          ceph-noarch                         176 k

Transaction Summary
===============================================================================================================================================
Install       1 Package(s)

Total download size: 176 k
Installed size: 553 k
Is this ok [y/N]: y
Downloading Packages:
ceph-deploy-1.2.7-0.noarch.rpm                           75% [===================================            ]  64 kB/s | 133 kB     00:00 ETA 
 Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : ceph-deploy-1.2.7-0.noarch [############################################################################################# ] 1/1
Verifying  : ceph-deploy-1.2.7-0.noarch                                                                                                  1/1 
Installed:
  ceph-deploy.noarch 0:1.2.7-0                                                                                                                 
Complete!
[root@ceph-mon1 /]# rpm -qa | grep -i ceph
ceph-release-1-0.el6.noarch
ceph-deploy-1.2.7-0.noarch
[root@ceph-mon1 /]# 

Creating Ceph Cluster

  • As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and two Ceph OSD nodes. After that, we will expand it by adding two more Ceph Monitors.
  • Tip: The ceph-deploy utility outputs files to the current directory. Ensure you are in the /etc/ceph directory when executing ceph-deploy.
  • Important: Do not call ceph-deploy with sudo or run it as root if you are logged in as a different user, because it will not issue the sudo commands needed on the remote host.
  • Create the cluster using ceph-deploy, then check its output with ls and cat in the current directory. You should see a Ceph configuration file, a keyring, and a log file for the new cluster.
[root@ceph-mon1 ceph]# ceph-deploy new ceph-mon1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy new ceph-mon1
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon1
[ceph_deploy.new][DEBUG ] Monitor ceph-mon1 at 192.168.1.38
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-mon1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.38']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
 [root@ceph-mon1 ceph]# ll
-rw-r--r-- 1 root root 189 Oct 25 13:06 ceph.conf
-rw-r--r-- 1 root root 785 Oct 25 13:06 ceph.log
-rw-r--r-- 1 root root  73 Oct 25 13:06 ceph.mon.keyring
[root@ceph-mon1 ceph]# 
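For reference, the ceph.conf written by ceph-deploy new is tiny at this point. On this ceph-deploy version it generally looks something like the snippet below; treat it as illustrative only, since the exact keys can vary and your fsid will certainly differ:
[root@ceph-mon1 ceph]# cat ceph.conf
[global]
fsid = 91ad085b-81ad-43db-9aa0-f3895a53613e
mon_initial_members = ceph-mon1
mon_host = 192.168.1.38
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true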
  • Install Ceph on the ceph-mon1 node. Don't panic if you see errors due to wget; the wget step fails on this CentOS host and ceph-deploy prints some tracebacks that can be ignored. In the end it installs Ceph on your node and shows the installed version.
[root@ceph-mon1 ceph]# ceph-deploy install ceph-mon1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy install ceph-mon1
[ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster ceph hosts ceph-mon1
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-mon1 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final
[ceph-mon1][INFO  ] installing ceph on ceph-mon1
[ceph-mon1][INFO  ] adding EPEL repository
[ceph-mon1][INFO  ] Running command: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[ceph-mon1][ERROR ] Traceback (most recent call last):
[ceph-mon1][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/hosts/centos/install.py", line 77, in install_epel
[ceph-mon1][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 10, in inner
[ceph-mon1][ERROR ]     def inner(*args, **kwargs):
[ceph-mon1][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/util/wrappers.py", line 6, in remote_call
[ceph-mon1][ERROR ]     This allows us to only remote-execute the actual calls, not whole functions.
[ceph-mon1][ERROR ]   File "/usr/lib64/python2.6/subprocess.py", line 500, in check_call
[ceph-mon1][ERROR ]     retcode = call(*popenargs, **kwargs)
[ceph-mon1][ERROR ]   File "/usr/lib64/python2.6/subprocess.py", line 478, in call
[ceph-mon1][ERROR ]     p = Popen(*popenargs, **kwargs)
[ceph-mon1][ERROR ]   File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
[ceph-mon1][ERROR ]     errread, errwrite)
[ceph-mon1][ERROR ]   File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
[ceph-mon1][ERROR ]     raise child_exception
[ceph-mon1][INFO  ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm
[ceph-mon1][ERROR ] Traceback (most recent call last):
[ceph-mon1][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/util/pkg_managers.py", line 69, in rpm
[ceph-mon1][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 10, in inner
[ceph-mon1][ERROR ]     def inner(*args, **kwargs):
[ceph-mon1][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/util/wrappers.py", line 6, in remote_call
[ceph-mon1][ERROR ]     This allows us to only remote-execute the actual calls, not whole functions.
[ceph-mon1][ERROR ]   File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
[ceph-mon1][ERROR ]     raise CalledProcessError(retcode, cmd)
[ceph-mon1][INFO  ] Running command: su -c 'rpm --import "https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc"'
[ceph-mon1][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ceph-mon1][INFO  ] Retrieving http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ceph-mon1][INFO  ] Preparing...                ##################################################
[ceph-mon1][INFO  ] ceph-release                ##################################################
[ceph-mon1][INFO  ] Running command: yum -y -q install ceph
[ceph-mon1][INFO  ] Running command: ceph --version
[ceph-mon1][INFO  ] ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
[root@ceph-mon1 ceph]#
  • Adding node ceph-mon1 as our first monitor node
[root@ceph-mon1 ceph]# ceph-deploy mon create ceph-mon1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy mon create ceph-mon1
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon1 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final
[ceph-mon1][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon1][DEBUG ] deploying mon to ceph-mon1
[ceph-mon1][DEBUG ] remote hostname: ceph-mon1
[ceph-mon1][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon1][ERROR ] Traceback (most recent call last):
[ceph-mon1][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 10, in inner
[ceph-mon1][ERROR ]     def inner(*args, **kwargs):
[ceph-mon1][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/conf.py", line 12, in write_conf
[ceph-mon1][ERROR ]     line = self.fp.readline()
[ceph-mon1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon1/done
[ceph-mon1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon1/done
[ceph-mon1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring
[ceph-mon1][INFO  ] create the monitor keyring file
[ceph-mon1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph-mon1 --keyring /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring
[ceph-mon1][INFO  ] ceph-mon: mon.noname-a 192.168.1.38:6789/0 is local, renaming to mon.ceph-mon1
[ceph-mon1][INFO  ] ceph-mon: set fsid to 91ad085b-81ad-43db-9aa0-f3895a53613e
[ceph-mon1][INFO  ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-mon1 for mon.ceph-mon1
[ceph-mon1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring
[ceph-mon1][INFO  ] create a done file to avoid re-doing the mon deployment
[ceph-mon1][INFO  ] create the init path if it does not exist
[ceph-mon1][INFO  ] locating `service` executable...
[ceph-mon1][INFO  ] found `service` executable: /sbin/service
[ceph-mon1][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.ceph-mon1
[ceph-mon1][DEBUG ] === mon.ceph-mon1 ===
[ceph-mon1][DEBUG ] Starting Ceph mon.ceph-mon1 on ceph-mon1...
[ceph-mon1][DEBUG ] Starting ceph-create-keys on ceph-mon1...
[ceph-mon1][INFO  ] Running command: ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph-mon1][DEBUG ] ********************************************************************************
[ceph-mon1][DEBUG ] status for monitor: mon.ceph-mon1
[ceph-mon1][DEBUG ] {
[ceph-mon1][DEBUG ]   "election_epoch": 2,
[ceph-mon1][DEBUG ]   "extra_probe_peers": [],
[ceph-mon1][DEBUG ]   "monmap": {
[ceph-mon1][DEBUG ]     "created": "0.000000",
[ceph-mon1][DEBUG ]     "epoch": 1,
[ceph-mon1][DEBUG ]     "fsid": "91ad085b-81ad-43db-9aa0-f3895a53613e",
[ceph-mon1][DEBUG ]     "modified": "0.000000",
[ceph-mon1][DEBUG ]     "mons": [
[ceph-mon1][DEBUG ]       {
[ceph-mon1][DEBUG ]         "addr": "192.168.1.38:6789/0",
[ceph-mon1][DEBUG ]         "name": "ceph-mon1",
[ceph-mon1][DEBUG ]         "rank": 0
[ceph-mon1][DEBUG ]       }
[ceph-mon1][DEBUG ]     ]
[ceph-mon1][DEBUG ]   },
[ceph-mon1][DEBUG ]   "name": "ceph-mon1",
[ceph-mon1][DEBUG ]   "outside_quorum": [],
[ceph-mon1][DEBUG ]   "quorum": [
[ceph-mon1][DEBUG ]     0
[ceph-mon1][DEBUG ]   ],
[ceph-mon1][DEBUG ]   "rank": 0,
[ceph-mon1][DEBUG ]   "state": "leader",
[ceph-mon1][DEBUG ]   "sync_provider": []
[ceph-mon1][DEBUG ] }
[ceph-mon1][DEBUG ] ********************************************************************************
[ceph-mon1][INFO  ] monitor: mon.ceph-mon1 is running
[ceph-mon1][INFO  ] Running command: ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
  • This generates {cluster-name}.mon.keyring and {cluster-name}.client.admin.keyring files in the current directory
[root@ceph-mon1 ceph]# ll
total 28
-rw------- 1 root root    64 Oct 25 16:32 ceph.client.admin.keyring
-rw-r--r-- 1 root root   189 Oct 25 16:25 ceph.conf
-rw-r--r-- 1 root root 10937 Oct 25 16:32 ceph.log
-rw-r--r-- 1 root root    73 Oct 25 16:25 ceph.mon.keyring
-rwxr-xr-x 1 root root    92 Oct  4 14:39  rbdmap 
  • You can now check your cluster status, but you will see health errors; these will be resolved later by adding monitors and OSDs.
[root@ceph-mon1 ceph]# ceph status
  cluster 91ad085b-81ad-43db-9aa0-f3895a53613e
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph-mon1=192.168.1.38:6789/0}, election epoch 2, quorum 0 ceph-mon1
   osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up
 [root@ceph-mon1 ceph]# 
  • Gather the keys. Once you have gathered the keys, your local directory should have the following keyrings:
    • {cluster-name}.client.admin.keyring
    • {cluster-name}.bootstrap-osd.keyring
    • {cluster-name}.bootstrap-mds.keyring
[root@ceph-mon1 ceph]# ceph-deploy gatherkeys ceph-mon1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy gatherkeys ceph-mon1
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.client.admin.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph-mon1 for /var/lib/ceph/bootstrap-osd/ceph.keyring
[ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from ceph-mon1.
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph-mon1 for /var/lib/ceph/bootstrap-mds/ceph.keyring
[ceph_deploy.sudo_pushy][DEBUG ] will use a local connection without sudo
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from ceph-mon1.

[root@ceph-mon1 ceph]# ll
total 36
-rw-r--r-- 1 root root    72 Oct 25 16:33 ceph.bootstrap-mds.keyring
-rw-r--r-- 1 root root    72 Oct 25 16:33 ceph.bootstrap-osd.keyring
-rw------- 1 root root    64 Oct 25 16:32 ceph.client.admin.keyring
-rw-r--r-- 1 root root   189 Oct 25 16:25 ceph.conf
-rw-r--r-- 1 root root 11867 Oct 25 16:33 ceph.log
-rw-r--r-- 1 root root    73 Oct 25 16:25 ceph.mon.keyring
-rwxr-xr-x 1 root root    92 Oct  4 14:39  rbdmap 
[root@ceph-mon1 ceph]# 
  • Modify the Ceph configuration file /etc/ceph/ceph.conf and add the entries below:
[global] 
auth_service_required = cephx
auth_client_required = cephx
auth_cluster_required = cephx

[mon.ceph-mon1]
mon_addr = 192.168.1.38:6789
host = ceph-mon1

[osd]
filestore_xattr_use_omap = true
osd_data = /var/lib/ceph/osd/$cluster-$id
osd_journal_size = 1024
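If you later want the same configuration on the other nodes (once they have /etc/ceph in place), ceph-deploy can push it from the admin node instead of copying it by hand; a sketch using the hostnames of this setup:
[root@ceph-mon1 ceph]# ceph-deploy --overwrite-conf config push ceph-mon2 ceph-mon3 ceph-node1 ceph-node2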
  • Restart the ceph service on the server and check that your monitor comes back up cleanly. Note: updating the ceph.conf file does not require bouncing the ceph service; here we are just testing that the monitor services are OK.
[root@ceph-mon1 ceph]# service ceph restart
=== mon.ceph-mon1 === 
=== mon.ceph-mon1 === 
Stopping Ceph mon.ceph-mon1 on ceph-mon1...kill 27965...done
=== mon.ceph-mon1 === 
Starting Ceph mon.ceph-mon1 on ceph-mon1...
Starting ceph-create-keys on ceph-mon1...
=== mon.ceph-mon1 === 
=== mon.ceph-mon1 === 
Stopping Ceph mon.ceph-mon1 on ceph-mon1...kill 28439...done
=== mon.ceph-mon1 === 
Starting Ceph mon.ceph-mon1 on ceph-mon1...
Starting ceph-create-keys on ceph-mon1...
[root@ceph-mon1 ceph]# 
[root@ceph-mon1 ceph]# service ceph status
=== mon.ceph-mon1 === 
mon.ceph-mon1: running {"version":"0.67.4"}
=== mon.ceph-mon1 === 
mon.ceph-mon1: running {"version":"0.67.4"}
 [root@ceph-mon1 ceph]# 
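You can also ask the monitor itself whether it has formed a quorum; either of the commands below gives a quick sanity check:
[root@ceph-mon1 ceph]# ceph mon stat                             # one-line monmap summary
[root@ceph-mon1 ceph]# ceph quorum_status --format json-pretty   # detailed quorum view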

Preparing OSD node ( ceph-node1 & ceph-node2 )

  • Use the ceph-deploy node ( ceph-mon1 ) to list the available disks on ceph-node1 that will be used as OSDs
[root@ceph-mon1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy disk list ceph-node1
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO  ] Running command: ceph-disk list
[ceph-node1][INFO  ] /dev/vda :
[ceph-node1][INFO  ]  /dev/vda1 swap, swap
[ceph-node1][INFO  ]  /dev/vda2 other, ext4, mounted on /
[ceph-node1][INFO  ] /dev/vdb other, ext3
[root@ceph-mon1 ceph]# 
  • To zap a disk (delete its partition table) in preparation for use with Ceph, execute the following. This will delete all data on the disk.
[root@ceph-mon1 ceph]# ceph-deploy disk zap ceph-node1:vdb
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy disk zap ceph-node1:vdb
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][INFO  ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/vdb
[ceph-node1][DEBUG ] Creating new GPT entries.
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] The operation has completed successfully.
[root@ceph-mon1 ceph]#
  • Prepare the OSDs and deploy them to the OSD node
[root@ceph-mon1 ceph]# ceph-deploy osd prepare ceph-node1:vdb
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare ceph-node1:vdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node1:/dev/vdb:
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node1
[ceph-node1][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][INFO  ] keyring file does not exist, creating one at: /var/lib/ceph/bootstrap-osd/ceph.keyring
[ceph-node1][INFO  ] create mon keyring file
[ceph-node1][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node1 disk /dev/vdb journal None activate False
[ceph-node1][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/vdb
[ceph-node1][INFO  ] The operation has completed successfully.
[ceph-node1][INFO  ] The operation has completed successfully.
[ceph-node1][INFO  ] meta-data=/dev/vdb1              isize=2048   agcount=4, agsize=28770239 blks
[ceph-node1][INFO  ]          =                       sectsz=512   attr=2, projid32bit=0
[ceph-node1][INFO  ] data     =                       bsize=4096   blocks=115080955, imaxpct=25
[ceph-node1][INFO  ]          =                       sunit=0      swidth=0 blks
[ceph-node1][INFO  ] naming   =version 2              bsize=4096   ascii-ci=0
[ceph-node1][INFO  ] log      =internal log           bsize=4096   blocks=56191, version=2
[ceph-node1][INFO  ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[ceph-node1][INFO  ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[ceph-node1][INFO  ] The operation has completed successfully.
[ceph-node1][ERROR ] INFO:ceph-disk:Will colocate journal with data on /dev/vdb
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
[root@ceph-mon1 ceph]#
  • Once you prepare an OSD you may activate it. The activate command will cause your OSD to come up and be placed in the cluster. The activate command uses the path to the partition created when running the prepare command.
[root@ceph-mon1 ceph]# ceph-deploy osd activate ceph-node1:vdb1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy osd activate ceph-node1:vdb
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node1:/dev/vdb:
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-node1 disk /dev/vdb
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-node1][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb
  • If you encounter errors like the ones below during this step, start troubleshooting:

[ceph-node1][ERROR ] 2013-10-25 20:46:47.433307 7f6adc1d9700  0 -- :/1010803 >> 192.168.1.38:6789/0 pipe(0x7f6acc003d40 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6acc003270).fault
[ceph-node1][ERROR ] 2013-10-25 20:46:50.433780 7f6ad43f9700  0 -- :/1010803 >> 192.168.1.38:6789/0 pipe(0x7f6acc00b860 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6acc005090).fault
[ceph-node1][ERROR ] 2013-10-25 20:46:53.378985 7f6add314700  0 monclient(hunting): authenticate timed out after 300
[ceph-node1][ERROR ] 2013-10-25 20:46:53.379057 7f6add314700  0 librados: client.bootstrap-osd authentication error (110) Connection timed out
[ceph-node1][ERROR ] Error connecting to cluster: Error
[ceph-node1][ERROR ] ERROR:ceph-disk:Failed to activate
[root@ceph-mon1 ceph]# 
  • On ceph-mon1, cd /etc/ceph
  • scp ceph.client.admin.keyring ceph-node1:/etc/ceph
  • scp /var/lib/ceph/bootstrap-osd/ceph.keyring ceph-node1:/var/lib/ceph/bootstrap-osd
  • Check the firewall between the nodes
  • Try to activate your OSD again; it should work now
[root@ceph-mon1 ceph]# ceph-deploy osd activate ceph-node1:/dev/vdb1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy osd activate ceph-node1:/dev/vdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node1:/dev/vdb1:
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-node1 disk /dev/vdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-node1][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/vdb1
[ceph-node1][INFO  ] === osd.0 ===
[ceph-node1][INFO  ] Starting Ceph osd.0 on ceph-node1...
[ceph-node1][INFO  ] starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
[ceph-node1][ERROR ] got latest monmap
[ceph-node1][ERROR ] 2013-10-25 21:50:53.768569 7fc6aff527a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph-node1][ERROR ] 2013-10-25 21:50:54.268147 7fc6aff527a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph-node1][ERROR ] 2013-10-25 21:50:54.269488 7fc6aff527a0 -1 filestore(/var/lib/ceph/tmp/mnt.DBPdBc) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[ceph-node1][ERROR ] 2013-10-25 21:50:54.611784 7fc6aff527a0 -1 created object store /var/lib/ceph/tmp/mnt.DBPdBc journal /var/lib/ceph/tmp/mnt.DBPdBc/journal for osd.0 fsid 0ff473d9-0670-42a3-89ff-81bbfb2e676a
[ceph-node1][ERROR ] 2013-10-25 21:50:54.611876 7fc6aff527a0 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.DBPdBc/keyring: can't open /var/lib/ceph/tmp/mnt.DBPdBc/keyring: (2) No such file or directory
[ceph-node1][ERROR ] 2013-10-25 21:50:54.612049 7fc6aff527a0 -1 created new key in keyring /var/lib/ceph/tmp/mnt.DBPdBc/keyring
[ceph-node1][ERROR ] added key for osd.0
[ceph-node1][ERROR ] create-or-move updating item name 'osd.0' weight 0.43 at location {host=ceph-node1,root=default} to crush map
[root@ceph-mon1 ceph]#
[root@ceph-mon1 ceph]# ceph status
  cluster 0ff473d9-0670-42a3-89ff-81bbfb2e676a
   health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
   monmap e1: 1 mons at {ceph-mon1=192.168.1.38:6789/0}, election epoch 2, quorum 0 ceph-mon1
   osdmap e5: 1 osds: 1 up, 1 in
    pgmap v8: 192 pgs: 192 active+degraded; 0 bytes data, 1057 MB used, 438 GB / 439 GB avail
   mdsmap e1: 0/0/1 up
 [root@ceph-mon1 ceph]# 

  • Similarly, add 3 more OSDs on ceph-node1; after successfully adding them, your ceph status will look like the output below (a loop sketch for scripting the remaining disks follows the status).
[root@ceph-mon1 ceph]#  ceph status
  cluster 0ff473d9-0670-42a3-89ff-81bbfb2e676a
   health HEALTH_WARN 192 pgs stuck unclean
   monmap e1: 1 mons at {ceph-mon1=192.168.1.38:6789/0}, election epoch 1, quorum 0 ceph-mon1
   osdmap e63: 4 osds: 4 up, 4 in
    pgmap v112: 192 pgs: 192 active+remapped; 0 bytes data, 2188 MB used, 1755 GB / 1757 GB avail
   mdsmap e1: 0/0/1 up
 [root@ceph-mon1 ceph]# 
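The zap / prepare / activate cycle shown above can be scripted for the remaining disks rather than typed three more times; a rough sketch run from /etc/ceph on ceph-mon1, where vdc, vdd and vde are placeholder device names for the extra disks on ceph-node1:
for disk in vdc vdd vde ; do
    ceph-deploy disk zap ceph-node1:${disk}            # wipe the partition table (destroys data!)
    ceph-deploy osd prepare ceph-node1:${disk}         # partition, format and register the OSD
    ceph-deploy osd activate ceph-node1:/dev/${disk}1  # bring the OSD up and into the cluster
done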
  • Once you have added 4 OSDs on ceph-node1, repeat these steps to add 4 OSDs on ceph-node2. Before creating the OSDs, make sure you install the Ceph packages on ceph-node2 ( ceph-deploy install ceph-node2 ).
  • After these steps your cluster should have 8 OSDs running and the cluster health should be OK, with all PGs active+clean.
[root@ceph-mon1 ceph]# ceph status
  cluster 0ff473d9-0670-42a3-89ff-81bbfb2e676a
   health HEALTH_OK
   monmap e1: 1 mons at {ceph-mon1=192.168.1.38:6789/0}, election epoch 1, quorum 0 ceph-mon1
   osdmap e87: 8 osds: 8 up, 8 in
    pgmap v224: 192 pgs: 192 active+clean; 0 bytes data, 2363 MB used, 3509 GB / 3512 GB avail
   mdsmap e1: 0/0/1 up
[root@ceph-mon1 ceph]# 

Scaling the cluster by adding monitors ( ceph-mon2 & ceph-mon3 )

  • Create monitors on ceph-mon2 and ceph-mon3. Before creating the monitors, make sure you install the Ceph packages on ceph-mon2 & ceph-mon3.
[root@ceph-mon1 ceph]# ceph-deploy mon create ceph-mon2                        
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy mon create ceph-mon2
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon2
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon2 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
[ceph_deploy.lsb][WARNIN] lsb_release was not found - inferring OS details
[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final
[ceph-mon2][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon2][DEBUG ] deploying mon to ceph-mon2
[ceph-mon2][DEBUG ] remote hostname: ceph-mon2
[ceph-mon2][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon2/done
[ceph-mon2][INFO  ] create a done file to avoid re-doing the mon deployment
[ceph-mon2][INFO  ] create the init path if it does not exist
[ceph-mon2][INFO  ] locating `service` executable...
[ceph-mon2][INFO  ] found `service` executable: /sbin/service
[ceph-mon2][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.ceph-mon2
[ceph-mon2][DEBUG ] === mon.ceph-mon2 ===
[ceph-mon2][DEBUG ] Starting Ceph mon.ceph-mon2 on ceph-mon2...
[ceph-mon2][DEBUG ] failed: 'ulimit -n 32768;  /usr/bin/ceph-mon -i ceph-mon2 --pid-file /var/run/ceph/mon.ceph-mon2.pid -c /etc/ceph/ceph.conf '
[ceph-mon2][DEBUG ] Starting ceph-create-keys on ceph-mon2...
[ceph-mon2][WARNIN] No data was received after 7 seconds, disconnecting...
[ceph-mon2][INFO  ] Running command: ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon2.asok mon_status
[ceph-mon2][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph-mon2][WARNIN] monitor: mon.ceph-mon2, might not be running yet
[ceph-mon2][INFO  ] Running command: ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon2.asok mon_status
[ceph-mon2][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph-mon2][WARNIN] ceph-mon2 is not defined in `mon initial members`
[ceph-mon2][WARNIN] monitor ceph-mon2 does not exist in monmap
[ceph-mon2][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
[ceph-mon2][WARNIN] monitors may not be able to form quorum
[root@ceph-mon1 ceph]# 
  • You might encounter some warnings or errors that need to be fixed
  • Check the monitor logs on ceph-mon1, under the /var/log/ceph directory
  • You might need to manually add the monitor to the cluster
#### Manually log in on the monitor node and execute commands like the ones below
ceph mon add ceph-mon2 192.168.1.33:6789
ceph-mon -i ceph-mon2 --public-addr 192.168.1.33:6789
service ceph status
service ceph restart
ps -ef | grep ceph
  • If the ceph service does not show as running in the service ceph status command, but ps -ef | grep ceph shows the monitor process running, kill the process manually and restart the ceph services; the monitor should now start working
# service ceph status
# ps -ef | grep ceph
# kill -9 6554

### Finally you should see that your ceph cluster is healthy, with all OSDs and monitors up and running

[root@ceph-mon1 ~]# ceph status
  cluster 0ff473d9-0670-42a3-89ff-81bbfb2e676a
   health HEALTH_OK
   monmap e3: 3 mons at {ceph-mon1=192.168.1.38:6789/0,ceph-mon2=192.168.1.33:6789/0,ceph-mon3=192.168.1.31:6789/0}, election epoch 10, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
   osdmap e97: 8 osds: 8 up, 8 in
    pgmap v246: 192 pgs: 192 active+clean; 0 bytes data, 2352 MB used, 3509 GB / 3512 GB avail
   mdsmap e1: 0/0/1 up
[root@ceph-mon1 ~]#


Please follow Ceph Installation :: Part-3 for the next steps in the installation



Ceph Installation :: Part-1

Ceph Installation Step by Step
This quick-start setup helps to deploy Ceph with 3 monitors and 2 OSD nodes, with 4 OSDs on each node. We are using commodity hardware running CentOS 6.4.

Ceph-mon1 : First Monitor + Ceph-deploy machine (will be used to deploy ceph to other nodes )
Ceph-mon2 : Second Monitor ( for monitor quorum )
Ceph-mon3 : Third Monitor ( for monitor quorum )
Ceph-node1 : OSD node 1 with 10G X 1 for OS , 440G X 4 for 4 OSD
Ceph-node2 : OSD node 2 with 10G X 1 for OS , 440G X 4 for 4 OSD
Ceph-Deploy Version is 1.3.2 , Ceph Version 0.67.4 ( Dumpling )


Preflight Checklist 

All the Ceph Nodes may require some basic configuration work prior to deploying a Ceph Storage Cluster.


CEPH node setup

  • Create a user on each Ceph Node.
sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
  • Add root privileges for the user on each Ceph Node.
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers
sudo chmod 0440 /etc/sudoers
  • Configure your ceph-deploy node ( ceph-mon1 ) with password-less SSH access to each Ceph Node. Leave the passphrase empty and repeat this step for the ceph and root users.
[ceph@ceph-admin ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa): yes         
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
48:86:ff:4e:ab:c3:f6:cb:7f:ba:46:33:10:e6:22:52 ceph@ceph-admin.csc.fi
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|    E.  o        |
|   .. oo .       |
|  . .+..o        |
|   . .o.S.       |
|       .  +      |
|     .  o. o     |
|      ++ .. .    |
|     ..+*+++     |
+-----------------+

  • Copy the key to each Ceph Node. ( Repeat this step for ceph and root users )
[ceph@ceph-mon1 ~]$ ssh-copy-id ceph@ceph-node2
The authenticity of host 'ceph-node2 (192.168.1.38)' can't be established.
RSA key fingerprint is ac:31:6f:e7:bb:ed:f1:18:9e:6e:42:cc:48:74:8e:7b.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
Warning: Permanently added 'ceph-node2,192.168.1.38' (RSA) to the list of known hosts.
ceph@ceph-node2's password: 
Now try logging into the machine, with "ssh 'ceph@ceph-node2'", and check in:  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[ceph@ceph-mon1 ~]$ 
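Optionally, add the Ceph nodes to the deploying user's ~/.ssh/config so that ssh (and therefore ceph-deploy) picks the right user automatically; an illustrative snippet, to be adjusted to your own hostnames:
Host ceph-node1
    Hostname ceph-node1
    User ceph
Host ceph-node2
    Hostname ceph-node2
    User ceph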
  • Ensure connectivity using ping with hostnames. For convenience we have used the local hosts file; update the hosts file of every node with the details of the other nodes (an example fragment follows). PS: Use of DNS is recommended.
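An illustrative /etc/hosts fragment for this setup; the monitor addresses are the ones used throughout this guide, while the OSD node addresses below are placeholders you must replace with your own:
192.168.1.38    ceph-mon1
192.168.1.33    ceph-mon2
192.168.1.31    ceph-mon3
192.168.1.x     ceph-node1    # placeholder, use your real address
192.168.1.y     ceph-node2    # placeholder, use your real address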
  • Packages are cryptographically signed with the release.asc key. Add the release key to your system’s list of trusted keys to avoid a security warning:
sudo rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
  • Ceph may require additional third-party libraries. To add the EPEL repository, execute the following:
su -c 'rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm'
sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
  • Install the release packages. Dumpling is the most recent stable release of Ceph (at the time I am writing this).
su -c 'rpm -Uvh http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm'
  • Add Ceph to yum by creating the repository file /etc/yum.repos.d/ceph.repo for Ceph:
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-dumpling/el6/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-dumpling/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-dumpling/el6/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
  • For best results, create directories on your nodes for maintaining the configuration generated by Ceph. These should get auto-created by Ceph; however, in my case this gave me problems, so I am creating them manually.
mkdir -p /etc/ceph /var/lib/ceph/{tmp,mon,mds,bootstrap-osd} /var/log/ceph
  • By default, daemons bind to ports within the 6800:7100 range; you may configure this range at your discretion. Before configuring your iptables rules, check the default iptables configuration. Since we are performing a test deployment, we can simply disable iptables on the Ceph nodes (an example follows); before moving to production this needs to be revisited.
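For this test setup iptables was simply switched off; in production you would instead open the monitor port and the OSD/MDS port range. A rough sketch of both options on CentOS 6, assuming the default INPUT chain (adjust to your own rule set):
# test deployment: stop the firewall on every Ceph node
sudo service iptables stop
sudo chkconfig iptables off

# production alternative: allow the monitor port and the 6800:7100 daemon range, then save
sudo iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 6800:7100 -j ACCEPT
sudo service iptables save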

Please follow Ceph Installation :: Part-2 for the next steps in the installation



Ceph Storage :: Introduction


What is CEPH

Ceph is an open-source, massively scalable, software-defined storage system which provides object, block and file system storage from a single clustered platform. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available. The data is replicated, making it fault tolerant. Ceph runs on commodity hardware. The system is designed to be self-healing, self-managing and self-awesome :-)

CEPH Internals
  • OSD: An Object Storage Daemon (OSD) stores data, handles data replication, recovery, backfilling and rebalancing, and provides some monitoring information to Ceph Monitors by checking other Ceph OSD Daemons for a heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to achieve an active + clean state when the cluster makes two copies of your data.
  • Monitor: A Ceph Monitor maintains maps of the cluster state, including the monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map. Ceph maintains a history (called an “epoch”) of each state change in the Monitors, Ceph OSD Daemons, and PGs.
  • MDS: A Ceph Metadata Server (MDS) stores metadata on behalf of the Ceph Filesystem . Ceph Metadata Servers make it feasible for POSIX file system users to execute basic commands like ls, find, etc. without placing an enormous burden on the Ceph Storage Cluster.
Note :: Please use http://ceph.com/docs/master/ and other official Inktank and Ceph community resources as the primary source of information on Ceph. This entire blog is an attempt to help beginners set up a Ceph cluster and to share my troubleshooting with you.