
Friday, January 10, 2014

CephFS with a Dedicated Pool

This blog post is about configuring a dedicated ( user-defined ) pool for CephFS. If you are looking to set up CephFS itself, please visit the CephFS Step by Step blog post.


  • Create a new pool for cephfs ( obviously, you can also use an existing pool )
# rados mkpool cephfs
  • Grab pool id
# ceph osd dump | grep -i cephfs
pool 34 'cephfs' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 860 owner 0
# 
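If you just want the pool IDs , listing the pools is a quick alternative ( the exact output format may vary between releases ) :
# ceph osd lspools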
  • Assign the pool to the MDS ( add it as a CephFS data pool )
# ceph mds add_data_pool 34 
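To confirm that the pool was really added , you can check the MDS map ; the data pools are listed in the ceph mds dump output ( the exact field name can differ between versions ) :
# ceph mds dump | grep -i pool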
    • Mount your cephfs share
    # mount -t ceph 192.168.100.101:/ /cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs
    
    • Check the current layout of cephfs ; you will notice that the default layout.data_pool is set to 0 , which means cephfs stores its data in pool 0 , i.e. the default 'data' pool
    # cephfs /cephfs/ show_layout
    layout.data_pool:     0
    layout.object_size:   4194304
    layout.stripe_unit:   4194304
    layout.stripe_count:  1
    
    • Set a new layout for data_pool in cephfs , using the pool ID of the pool we created above ( see the note after this step if set_layout complains ) .
    # cephfs /cephfs/ set_layout -p 34
    # cephfs /cephfs/ show_layout
    layout.data_pool:     34
    layout.object_size:   4194304
    layout.stripe_unit:   4194304
    layout.stripe_count:  1
    [root@na_csc_fedora19 ~]#
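    Note : some builds of the cephfs tool refuse to change only the pool and return "Invalid argument" ; in that case , passing the existing stripe values explicitly along with the pool usually works ( the flags below are taken from the tool's help output ; double-check them on your version ) :
    # cephfs /cephfs/ set_layout -p 34 -s 4194304 -u 4194304 -c 1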
    
    • Remount your cephfs share
    # umount /cephfs
    # mount -t ceph 192.168.100.101:/ /cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs
    
    • Check for objects in the cephfs pool ; there should be none , as this is a fresh pool that does not contain any data yet . If you list objects in any other in-use pool ( the metadata pool , for example ) , you will see objects . A quick way to sanity-check the new pool directly with rados is shown after these listings .
    # rados --pool=cephfs ls
    #
    # rados --pool=metadata ls
    1.00000000.inode
    100.00000000
    100.00000000.inode
    1.00000000
    2.00000000
    200.00000000
    200.00000001
    600.00000000
    601.00000000
    602.00000000
    603.00000000
    604.00000000
    605.00000000
    606.00000000
    607.00000000
    608.00000000
    609.00000000
    mds0_inotable
    mds0_sessionmap
    mds_anchortable
    mds_snaptable
    #
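    As a quick sanity check of the new pool itself , independent of CephFS , you can write and remove an object directly with rados ( the object name testobj and the input file /etc/hosts below are just for illustration ) :
    # rados --pool=cephfs put testobj /etc/hosts
    # rados --pool=cephfs ls
    # rados --pool=cephfs rm testobj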
    
    • Go to your cephfs directory and create a file with some data in it .
    # cd /cephfs/
    # vi test
    
    • Recheck for objects in the cephfs pool ; it will now show objects .
    # rados --pool=cephfs ls
    10000000005.00000000
    #
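    The object name is the file's inode number in hexadecimal followed by the stripe index , so you can tie an object back to its file ; a minimal check , assuming the test file we just created :
    # ls -i /cephfs/test
    # printf '%x\n' <inode-number-printed-by-ls>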
    To summarize : we created a new pool named "cephfs" , changed the layout of CephFS so that it stores its data in the new pool "cephfs" , and finally saw CephFS data being stored in the pool named cephfs ( I know, that is a lot of "cephfs" ; read it again if you were dozing off and lost track ) .



    Monday, December 23, 2013

    Ceph Filesystem ( CephFS ) :: Step by Step Configuration


    CephFS 

    Ceph Filesystem is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. This is the only Ceph component that is not yet ready for production ; I would call it ready for pre-production.


    Internals
    ( Image credit : http://ceph.com/docs/master/cephfs/ )

    Requirements for CephFS


    • You need a running ceph cluster with at least one MDS node . An MDS is required for CephFS to work ( a quick way to confirm the MDS is up is shown right after this list ) .
    • If you don't have an MDS , configure one :
      • # ceph-deploy mds create <MDS-NODE-ADDRESS>
    Note : If you are running short of hardware or want to save hardware , you can run the MDS service on your existing monitor nodes ; the MDS service does not need many resources .
    • A Ceph client to mount CephFS
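    A minimal check that the MDS is registered and active ( the wording of the output differs slightly between releases ) :
    # ceph mds stat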

    Configuring CephFS
    • Install ceph on client node
    [root@storage0101-ib ceph]# ceph-deploy install na_fedora19
    [ceph_deploy.cli][INFO  ] Invoked (1.3.2): /usr/bin/ceph-deploy install na_fedora19
    [ceph_deploy.install][DEBUG ] Installing stable version emperor on cluster ceph hosts na_csc_fedora19
    [ceph_deploy.install][DEBUG ] Detecting platform for host na_fedora19 ...
    [na_csc_fedora19][DEBUG ] connected to host: na_csc_fedora19
    [na_csc_fedora19][DEBUG ] detect platform information from remote host
    [na_csc_fedora19][DEBUG ] detect machine type
    [ceph_deploy.install][INFO  ] Distro info: Fedora 19 Schrödinger’s Cat
    [na_csc_fedora19][INFO  ] installing ceph on na_fedora19
    [na_csc_fedora19][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
    [na_csc_fedora19][INFO  ] Running command: rpm -Uvh --replacepkgs --force --quiet http://ceph.com/rpm-emperor/fc19/noarch/ceph-release-1-0.fc19.noarch.rpm
    [na_csc_fedora19][DEBUG ] ########################################
    [na_csc_fedora19][DEBUG ] Updating / installing...
    [na_csc_fedora19][DEBUG ] ########################################
    [na_csc_fedora19][INFO  ] Running command: yum -y -q install ceph
    
    [na_csc_fedora19][ERROR ] Warning: RPMDB altered outside of yum.
    [na_csc_fedora19][DEBUG ] No Presto metadata available for Ceph
    [na_csc_fedora19][INFO  ] Running command: ceph --version
    [na_csc_fedora19][DEBUG ] ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
    [root@storage0101-ib ceph]#
    • Create a new pool for CephFS
    # rados mkpool cephfs
    • Create a new keyring (client.cephfs) for cephfs 
    # ceph auth get-or-create client.cephfs mon 'allow r' osd 'allow rwx pool=cephfs' -o /etc/ceph/client.cephfs.keyring
    • Extract the secret key from the keyring
    # ceph-authtool -p -n client.cephfs /etc/ceph/client.cephfs.keyring > /etc/ceph/client.cephfs
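    To verify that the key was created and to see its capabilities , you can query it back from the cluster :
    # ceph auth get client.cephfs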
    • Copy the secret file to the client node under /etc/ceph . This allows the filesystem to be mounted when cephx authentication is enabled
    # scp client.cephfs na_fedora19:/etc/ceph
    client.cephfs                                                                100%   41     0.0KB/s   00:00
    • List all the keys on the ceph cluster
    # ceph auth list                                               


    Option-1 : Mount CephFS with Kernel Driver


    • On the client machine , add a mount entry in /etc/fstab . Provide the IP address of your ceph monitor node and the path of the secret key file that we created above
    192.168.200.101:6789:/ /cephfs ceph name=cephfs,secretfile=/etc/ceph/client.cephfs,noatime 0 2    
    • Mount the cephfs mount point ; you might see a "mount: error writing /etc/mtab: Invalid argument" error , but you can ignore it and check df -h
    [root@na_fedora19 ceph]# mount /cephfs
    mount: error writing /etc/mtab: Invalid argument
    
    [root@na_fedora19 ceph]#
    [root@na_fedora19 ceph]# df -h
    Filesystem              Size  Used Avail Use% Mounted on
    /dev/vda1               7.8G  2.1G  5.4G  28% /
    devtmpfs                3.9G     0  3.9G   0% /dev
    tmpfs                   3.9G     0  3.9G   0% /dev/shm
    tmpfs                   3.9G  288K  3.9G   1% /run
    tmpfs                   3.9G     0  3.9G   0% /sys/fs/cgroup
    tmpfs                   3.9G  2.6M  3.9G   1% /tmp
    192.168.200.101:6789:/  419T  8.5T  411T   3% /cephfs
    [root@na_fedora19 ceph]#

    Option-2 : Mounting CephFS as FUSE
    • Copy the ceph configuration file ( ceph.conf ) from the monitor node to the client node and make sure it has permissions of 644
    # scp ceph.conf na_fedora19:/etc/ceph
    # chmod 644 ceph.conf
    • Copy the secret file from the monitor node to the client node under /etc/ceph . This allows the filesystem to be mounted when cephx authentication is enabled ( we have already done this earlier )
    # scp client.cephfs na_fedora19:/etc/ceph
    client.cephfs                                                                100%   41     0.0KB/s   00:00
    • Make sure you have the "ceph-fuse" package installed on the client machine
    # rpm -qa | grep -i ceph-fuse
    ceph-fuse-0.72.2-0.fc19.x86_64 
    • To mount the Ceph Filesystem as FUSE , use the ceph-fuse command
    [root@na_fedora19 ceph]# ceph-fuse -m 192.168.100.101:6789  /cephfs
    ceph-fuse[3256]: starting ceph client
    ceph-fuse[3256]: starting fuse
    [root@na_csc_fedora19 ceph]#
    
    [root@na_fedora19 ceph]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vda1       7.8G  2.1G  5.4G  28% /
    devtmpfs        3.9G     0  3.9G   0% /dev
    tmpfs           3.9G     0  3.9G   0% /dev/shm
    tmpfs           3.9G  292K  3.9G   1% /run
    tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
    tmpfs           3.9G  2.6M  3.9G   1% /tmp
    ceph-fuse       419T  8.5T  411T   3% /cephfs
    [root@na_fedora19 ceph]#
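    If you want the FUSE mount to come back after a reboot , ceph-fuse can also be driven from /etc/fstab ; a minimal sketch , assuming the /cephfs mount point used above and that the keyring for client.cephfs is available on the client ( check the fuse.ceph fstab syntax for your release ) :
    id=cephfs,conf=/etc/ceph/ceph.conf  /cephfs  fuse.ceph  defaults  0 0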