Friday, January 24, 2014

How Data Is Stored In CEPH Cluster


This is something you would definitely be wondering about: how is data stored inside a Ceph cluster?

Here is an easy-to-understand walkthrough of Ceph data storage.

## POOLS : A Ceph cluster has POOLS ; pools are the logical groups for storing objects . These pools are made up of PGs ( Placement Groups ). At pool creation time we have to provide the number of placement groups the pool is going to contain and the number of object replicas ( the default value is used if not otherwise specified ).

  • Creating a pool ( pool-A ) with 128 placement groups
# ceph osd pool create pool-A 128
pool 'pool-A' created
  • Listing pools
# ceph osd lspools
0 data,1 metadata,2 rbd,36 pool-A,
  • Find out the total number of placement groups used by the pool
# ceph osd pool get pool-A pg_num
pg_num: 128
  • Find out the replication level used by the pool ( see the rep size value )
# ceph osd dump | grep -i pool-A
pool 36 'pool-A' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 4051 owner 0
  • Changing the replication level of a pool ( compare with the step above ; rep size has changed )
# ceph osd pool set pool-A size 3
set pool 36 size to 3
# ceph osd dump | grep -i pool-A
pool 36 'pool-A' rep size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 4054 owner 0

This means all the objects of pool-A will be replicated 3 times on 3 different OSDs.

Now , let's put some data in pool-A ; the data will be stored in the form of objects  :-) that's the thumb rule.

# dd if=/dev/zero of=object-A bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.0222705 s, 471 MB/s

# dd if=/dev/zero of=object-B bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.0221176 s, 474 MB/s
  • Putting some objects in pool-A
# rados -p pool-A put object-A  object-A
# rados -p pool-A put object-B  object-B
  • Checking which objects the pool contains
# rados -p pool-A ls

## PG ( Placement Group ): The Ceph cluster maps objects --> PGs . These PGs , containing the objects , are spread across multiple OSDs , which improves reliability.

## Object : An object is the smallest unit of data storage in a Ceph cluster . Each and every thing is stored in the form of objects , which is why a Ceph cluster is also known as an Object Storage Cluster. Objects are mapped to PGs , and these objects and their copies are always spread across different OSDs. This is how Ceph is designed.
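Conceptually, the object-to-PG step can be sketched in a few lines of Python. This is an illustrative sketch only: real Ceph uses the rjenkins hash (and a stable modulo), and CRUSH then maps each PG to OSDs, so the PG IDs printed here will not match a real cluster.

```python
import hashlib

def object_to_pg(pool_id, object_name, pg_num):
    # Hash the object name (real Ceph uses the rjenkins hash; SHA-1 here
    # is only for illustration) and take it modulo pg_num to pick a PG
    # within the pool. The result is written as "<pool_id>.<pg_hex>".
    h = int(hashlib.sha1(object_name.encode()).hexdigest(), 16)
    return "%d.%x" % (pool_id, h % pg_num)

# CRUSH (not shown) would then map each PG to a list of OSDs.
print(object_to_pg(36, "object-A", 128))
print(object_to_pg(36, "object-B", 128))
```

The key point is that the PG is derived deterministically from the object name and pg_num; no central lookup table is needed.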

  • Locating an object : which PG does it belong to , and where is it stored ?
# ceph osd map pool-A object-A
osdmap e4055 pool 'pool-A' (36) object 'object-A' -> pg 36.b301e3e8 (36.68) -> up [122,63,62] acting [122,63,62]
# ceph osd map pool-A object-B
osdmap e4055 pool 'pool-A' (36) object 'object-B' -> pg 36.47f173fb (36.7b) -> up [153,110,118] acting [153,110,118]
Now , we have already created pool-A , changed its replication level to 3 , and added objects ( object-A and object-B ) to pool-A . Observe the above output ; it gives a lot of information :

  1. OSD map version id is e4055
  2. pool name is pool-A
  3. pool id is 36
  4. object name ( which was queried , object-A and object-B )
  5. Placement Group id to which this object belongs is  ( 36.68 ) and ( 36.7b )
  6. Our pool-A has its replication level set to 3 , so every object of this pool should have 3 copies on different OSDs ; here the 3 copies of object-A reside on OSD.122 , OSD.63 and OSD.62
  • Log in to the Ceph nodes containing OSDs 122 , 63 and 62
  • You can see your OSD mounted
# df -h /var/lib/ceph/osd/ceph-122
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdj1             2.8T  1.8T  975G  65% /var/lib/ceph/osd/ceph-122
  • Browse to the directory where ACTUAL OBJECTS are stored
# pwd
  • Under this directory , if you run an ls command you will see the PG ID . In our case the PG id is 36.68 for object-A
# ls -la | grep -i 36.68
drwxr-xr-x 1 root root    54 Jan 24 16:45 36.68_head
  • Browse into the PG head directory and run ls ; there you go , you have reached your OBJECT.
# pwd
# ls -l
total 10240
-rw-r--r-- 1 root root 10485760 Jan 24 16:45 object-A__head_B301E3E8__24

Moral of the Story

  • A Ceph storage cluster can have more than one pool.
  • Each pool SHOULD have multiple Placement Groups . The more PGs , the better your cluster performance and the more reliable your setup.
  • A PG contains multiple objects.
  • A PG is spread over multiple OSDs , i.e. objects are spread across OSDs. The first OSD mapped to a PG is its primary OSD and the other OSDs of the same PG are its secondary OSDs.
  • An object maps to exactly one PG.
  • Many PGs can be mapped to one OSD.

How many PGs do you need for a POOL :

               (OSDs * 100)
Total PGs = -------------------
             Replication count
# ceph osd stat
     osdmap e4055: 154 osds: 154 up, 154 in

Applying formula gives me  = ( 154 * 100 ) / 3 = 5133.33

Now , round this value up to the next power of 2 ; this gives you the number of PGs you should have for a pool with a replication size of 3 and a total of 154 OSDs in the entire cluster.

Final Value = 8192 PG
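The calculation above can be sketched as a small helper (an illustrative sketch; the function name is my own):

```python
def recommended_pg_count(num_osds, replication):
    # Apply the (OSDs * 100) / replication rule of thumb,
    # then round up to the next power of 2.
    raw = num_osds * 100 / replication
    pgs = 1
    while pgs < raw:
        pgs *= 2
    return pgs

print(recommended_pg_count(154, 3))  # 8192, matching the value above
```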

Friday, January 10, 2014

CephFS with a dedicated pool


This blog is about configuring a dedicated pool ( a user-defined pool ) for CephFS. If you are looking to configure CephFS , please visit the
CephFS Step by Step blog

  • Create a new pool for cephfs ( obviously you can use your existing pool )
# rados mkpool cephfs
  • Grab pool id
# ceph osd dump | grep -i cephfs
pool 34 'cephfs' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 860 owner 0
  • Assign the pool to MDS
# ceph mds add_data_pool 34 
    • Mount your cephfs share
    # mount -t ceph /cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs
    • Check the current layout of cephfs ; you will notice the default layout.data_pool is set to 0 , which means your cephfs will store data in pool 0 , i.e. the data pool
    # cephfs /cephfs/ show_layout
    layout.data_pool:     0
    layout.object_size:   4194304
    layout.stripe_unit:   4194304
    layout.stripe_count:  1
    • Set a new layout for data_pool in cephfs , use pool id of the pool that we have created above.
    # cephfs /cephfs/ set_layout -p 34
    # cephfs /cephfs/ show_layout
    layout.data_pool:     34
    layout.object_size:   4194304
    layout.stripe_unit:   4194304
    layout.stripe_count:  1
    • Remount your cephfs share
    # umount /cephfs
    # mount -t ceph /cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs
    • Check the objects present in the cephfs pool ; there should be no objects , as this is a fresh pool that does not contain any data yet . But if you look for objects in any other pool , it should contain objects.
    # rados --pool=cephfs ls
    # rados --pool=metadata ls
    • Go to your cephfs directory and create some files ( put data in your file ) .
    # cd /cephfs/
    # vi test
    • Recheck for objects in cephfs pool , now it will show you objects .
    # rados --pool=cephfs ls
    To summarize : we created a new pool named "cephfs" , changed the layout of cephfs to store its data in the new pool "cephfs" , and finally saw cephfs data getting stored in the pool named cephfs ( i know , that's a lot of cephfs ; read it again if you were sleeping and didn't follow the cephfs )

    Thursday, January 9, 2014

    Kraken :: The First Free Ceph Dashboard in Town

    Kraken :: The Free Ceph Dashboard is Finally Live

    Kraken is a Free ceph dashboard for monitoring and statistics. Special thanks to Donald Talton for this beautiful dashboard. 

    Installing Kraken

    •  Install Prerequisites 
    # yum install git
    # yum install django
    # yum install python-pip
    # pip install requests
    Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/site-packages
    Cleaning up...
    # pip install django
    Requirement already satisfied (use --upgrade to upgrade): django in /usr/lib/python2.7/site-packages
    Cleaning up...
    # yum install screen
    • Create a new user account 
    # useradd kraken
    • Clone kraken from github
    # cd /home/kraken
    # git clone
    Cloning into 'krakendash'...
    remote: Counting objects: 511, done.
    remote: Compressing objects: 100% (276/276), done.
    remote: Total 511 (delta 240), reused 497 (delta 226)
    Receiving objects: 100% (511/511), 1.53 MiB | 343.00 KiB/s, done.
    Resolving deltas: 100% (240/240), done.
    • Execute the two scripts one by one ; these will get launched in screens . Use Ctrl-A and D to detach from a screen

    # ./
    [detached from 14662.api]
    # ./
    [detached from 14698.django]
    # ps -ef | grep -i screen
    root     14662     1  0 07:29 ?        00:00:00 SCREEN -S api sudo ceph-rest-api -c /etc/ceph/ceph.conf --cluster ceph -i admin
    root     14698     1  0 07:30 ?        00:00:00 SCREEN -S django sudo python krakendash/kraken/ runserver
    root     14704 14472  0 07:30 pts/0    00:00:00 grep --color=auto -i screen
    • Open your browser and navigate to http://localhost:8000/



    kraken pools

    • Great you have a Ceph GUI dashboard running now :-)
    • Watch this space for new features of Kraken

    Thursday, January 2, 2014

    Zero To Hero Guide : : For CEPH CLUSTER PLANNING

    What it is all about :

    When you think about or discuss Ceph , the most common question that strikes your mind is "What Hardware Should I Select For My CEPH Storage Cluster ?" and yes , if this question really crossed your mind , congratulations , you seem to be serious about Ceph technology. And you should be , because CEPH IS THE FUTURE OF STORAGE.

    Ceph runs on commodity hardware , Ohh Yeah !! everyone knows that by now . It is designed to build multi-petabyte storage clusters while providing enterprise-ready features : no single point of failure , scaling to exabytes , self-managing and self-healing ( saves operational cost ) , running on commodity hardware ( no vendor lock-in , saves capital investment ).

    Ceph Overview :-

    The soul of the Ceph storage cluster is RADOS ( Reliable Autonomic Distributed Object Store ). Ceph uses the powerful CRUSH ( Controlled Replication Under Scalable Hashing ) algorithm to optimize data placement and for self-management and self-healing. The RESTful interface is provided by the Ceph Object Gateway (RGW) aka the RADOS Gateway , and virtual disks are provisioned by the Ceph Block Device (RBD).

    Ceph Overview - Image Credit : Inktank

    Ceph Components :-

    # Ceph OSDs ( Object Storage Daemons ) store data as objects , manage data replication , recovery and rebalancing , and provide state information to the Ceph Monitors. It is recommended to use 1 OSD per physical disk.

    # Ceph MONs ( Monitors ) maintain the overall health of the cluster by keeping the cluster map state , including the Monitor map , OSD map , Placement Group ( PG ) map , and CRUSH map. Monitors receive state information from other components to maintain these maps and circulate them to the other Monitor and OSD nodes.

    # Ceph RGW ( Object Gateway / Rados Gateway ) RESTful API interface compatible with Amazon S3 , OpenStack Swift .

    # Ceph RBD ( RADOS Block Device ) provides block storage to VMs / bare metal as well as regular clients , and supports OpenStack and CloudStack . It includes enterprise features like snapshots , thin provisioning , and compression.

    # CephFS ( File System ) distributed POSIX NAS storage.

    Few Thumb Rules :-

    • Run OSDs on a dedicated storage node ( a server with multiple disks ) ; the actual data is stored in the form of objects.
    • Run Monitors on separate dedicated hardware , or co-located with Ceph client nodes ( other than OSD nodes ) such as RGW or CephFS nodes . For production it's recommended to run Monitors on dedicated low-cost servers , since Monitors are not resource-hungry.

    Monitor Hardware Configuration :-

    The Monitor maintains the health of the entire cluster ; it keeps PG logs and OSD logs . A minimum of three monitor nodes is recommended for cluster quorum. Ceph monitor nodes are not resource-hungry ; they can work well with fairly low CPU and memory. A 1U server with a low-cost processor such as the E5-2603 , 16GB RAM and a 1GbE network should be sufficient in most cases. If the PG , Monitor and OSD logs are stored on the local disk of the monitor node , make sure you have a sufficient amount of local storage so that it does not fill up.

    Unhealthy clusters require more storage for logs ; they can reach gigabytes and even hundreds of gigabytes if the cluster is left unhealthy for a very long time . If verbose output is set on monitor nodes , they are bound to generate a huge amount of logging information. Refer to the Ceph documentation for monitor log settings.

    It's recommended to run monitors on distant nodes rather than all on one node , or on virtual machines residing on physically separated machines , to prevent a single point of failure.

    The Planning Stage :-

    Deploying a Ceph cluster in production requires a little bit of homework ; you should gather the information below so that you can design a better , more reliable and scalable Ceph cluster to fit your IT needs. These are very specific to your needs and your IT environment , and this information will help you design your storage requirements better.

    • Business Requirement
      • Budget ?
      • Do you need Ceph cluster for day to day operation or SPECIAL 
    • Technical Requirement
      • What applications will be running on your ceph cluster ?
      • What type of data will be stored on your ceph cluster ?
      • Should the ceph cluster be optimized for capacity or for performance ?
      • What should be usable storage capacity ?
      • What is expected growth rate ?
      • How many IOPS should the cluster support ?
      • How much throughput should the cluster support ?
      • How much data replication ( reliability level ) you need ?

    Collect as much information as possible during the planning stage ; this will give all the answers required to construct a better Ceph cluster.

    The Physical Node and clustering technique:-

    In addition to the information collected above , also take into account the rack density , power budget , and data center space cost to size the optimal node configuration. Ceph replicates data across multiple nodes in a storage cluster to provide data redundancy and higher availability. It's important to consider :

    • Should the replicated node be on the same rack or multiple racks to avoid SPOF ?
    • Should the OSD traffic stay within the rack or span across rack in a dedicated or shared network ?
    • How many nodes failure can be tolerated ?
    • If the nodes are separated across multiple racks , network traffic increases , and the impact of latency and the number of network switch hops should be considered.
    Ceph will automatically recover by re-replicating data from the failed nodes using secondary copies present on other nodes in the cluster . A node failure thus has several effects :

    • Total cluster capacity is reduced by some fractions.
    • Total cluster throughput is reduced by some fractions.
    • The cluster enters a write heavy recovery processes.

    A general rule of thumb to calculate the recovery time in a Ceph cluster , given 1 disk per OSD node , is :

    Recovery Time in seconds = disk capacity in Gigabits / ( network speed *(nodes-1) )
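The formula can be sketched as a quick calculation. The specific numbers below (a 4 TB disk, 10 Gbps network, 5 nodes) are hypothetical examples of my own, not figures from the post:

```python
def recovery_time_seconds(disk_capacity_gigabits, network_speed_gbps, nodes):
    # The surviving (nodes - 1) peers re-replicate the lost disk in
    # parallel, so aggregate bandwidth is network_speed * (nodes - 1).
    return disk_capacity_gigabits / (network_speed_gbps * (nodes - 1))

# Hypothetical example: a 4 TB disk is roughly 32000 gigabits; with a
# 10 Gbps network and 5 nodes, recovery takes about 13 minutes.
print(recovery_time_seconds(32000, 10, 5))  # 800.0
```

Notice that adding nodes shortens recovery time, which is one reason wider clusters are more resilient.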

    # POC Environment -- Can have a minimum of 3 physical nodes with 10 OSDs each. This provides 66% cluster availability upon a physical node failure and 97% uptime upon an OSD failure. RGW and Monitor nodes can be put on OSD nodes , but this may impact performance and is not recommended for production.

    # Production Environment -- A minimum of 5 physically separated nodes and a minimum of 100 OSDs . At 4TB per OSD the cluster capacity is over 130TB usable with 3x replication , and this provides 80% uptime on a physical node failure and 99% uptime on an OSD failure. RGW and Monitors should be on separate nodes.
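A rough back-of-the-envelope sketch of where the POC figures above come from (assuming availability is read as the fraction of capacity that survives the failure; the helper names are my own):

```python
def fraction_remaining_after_node_loss(nodes):
    # Losing one whole node leaves (nodes - 1) of nodes' worth of capacity.
    return (nodes - 1) / nodes

def fraction_remaining_after_osd_loss(total_osds):
    # Losing a single OSD leaves (total_osds - 1) of total_osds.
    return (total_osds - 1) / total_osds

# POC sizing from above: 3 nodes with 10 OSDs each
print(round(100 * fraction_remaining_after_node_loss(3)))      # 67 (the post's ~66%)
print(round(100 * fraction_remaining_after_osd_loss(3 * 10)))  # 97
```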

    Based on the outcome of the planning phase and the physical node and clustering stage , have a look at the hardware available in the market as per your budget.

    OSD CPU selection :-

    < Under Construction ... Stay Tuned >