Ceph version: Quincy 17.2.5 and later 


Ceph NFS can use CephFS volumes or RGW buckets as its backing storage.


Use CephFS as the NFS backing storage


You must have a CephFS volume first.


You can create multiple NFS clusters, each using any CephFS volume as its backing storage. Every NFS cluster can have multiple exports for clients to mount as a folder.


Create a CephFS volume

A CephFS volume is an abstraction for a CephFS file system.



$ # command : ceph fs volume create <name> [<placement>]

$ # the placement includes the number of Metadata Servers (MDS) to create and the hostnames these MDS will run on
$ # example
$ ceph fs volume create ambedded --placement '2 ambedded-101 ambedded-102'

The placement option lets you assign explicit hosts for deploying the metadata servers.
The Ceph Orchestrator will automatically create and configure the MDS daemons for your file system. You can create additional CephFS volumes the same way.
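You can list the CephFS volumes that already exist in the cluster:

$ # list all CephFS volumes
$ ceph fs volume ls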

Every CephFS volume has its own MDS daemons and pools.

A highly available CephFS needs at least one active MDS and one standby MDS.
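If a volume currently has no standby, one way to get one is to rescale its MDS service with the orchestrator. A minimal sketch, assuming the 'ambedded' volume and the hosts used in this article:

$ # scale the MDS service for the 'ambedded' volume to two daemons
$ # (one active + one standby)
$ ceph orch apply mds ambedded --placement '2 ambedded-101 ambedded-102'
$ # confirm the standby appears
$ ceph fs status ambedded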



root@ambedded-102:~# ceph fs status
cephfs - 5 clients
======
RANK  STATE              MDS                 ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  cephfs.ambedded-101.hdeoyc  Reqs:    0 /s    19     21     16      7   
               POOL                   TYPE     USED  AVAIL  
cephfs_pool_66aaada7ab389.metadata  metadata   375k   242G  
  cephfs_pool_66aaada7ab389.data      data    36.0k   242G  
ambedded - 3 clients
========
RANK  STATE               MDS                  ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ambedded.ambedded-104.rcpuxy  Reqs:    0 /s    10     13     12      3   
        POOL            TYPE     USED  AVAIL  
cephfs.ambedded.meta  metadata  96.0k   242G  
cephfs.ambedded.data    data       0    242G  
openstack-manila - 2 clients
================
RANK  STATE                   MDS                      ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  openstack-manila.ambedded-102.yzmohc  Reqs:    0 /s    10     13     12      2   
            POOL                TYPE     USED  AVAIL  
cephfs.openstack-manila.meta  metadata  96.0k   242G  
cephfs.openstack-manila.data    data       0    242G  
       STANDBY MDS          
cephfs.ambedded-104.etmjxw  
MDS version: ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)




Create an NFS cluster

You can create multiple Ceph NFS clusters, with or without HA. HA is achieved by deploying haproxy and keepalived.


NFS cluster create command

$ ceph nfs cluster create CLUSTER_ID [PLACEMENT] [--port PORT_NUMBER] [--ingress --virtual-ip IP_ADDRESS/CIDR_PREFIX]


If you use the ingress and virtual IP options, Ceph NFS will deploy haproxy and keepalived services for a highly available NFS cluster. Otherwise, only a single NFS service will be deployed.

The placement option lets you assign explicit hosts for deploying the NFS services.


root@ambedded-101:~# ceph nfs cluster create nfs-cluster-1 'ambedded-102 ambedded-104' --ingress --virtual-ip 172.18.2.109/24

root@ambedded-101:~# ceph nfs cluster info nfs-cluster-1
{
    "nfs-cluster-1": {
        "virtual_ip": "172.18.2.109",
        "backend": [
            {
                "hostname": "ambedded-102",
                "ip": "172.18.2.102",
                "port": 12049
            }
        ],
        "port": 2049,
        "monitor_port": 9049
    }
}
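One way to confirm that the ingress services were actually deployed is to ask the orchestrator. A quick check, assuming the cluster created above:

$ # list the nfs and ingress services
$ ceph orch ls nfs
$ ceph orch ls ingress
$ # list the haproxy and keepalived daemons behind the virtual IP
$ ceph orch ps --daemon_type haproxy
$ ceph orch ps --daemon_type keepalived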



root@ambedded-101:~# ceph nfs cluster create nfs-cephfs

root@ambedded-101:~# ceph nfs cluster ls
nfs-cephfs
nfs-cluster-1

root@ambedded-101:~# ceph nfs cluster info nfs-cephfs
{
    "nfs-cephfs": {
        "virtual_ip": null,
        "backend": [
            {
                "hostname": "ambedded-101",
                "ip": "172.18.2.101",
                "port": 2049
            }
        ]
    }
}




The next step is to create NFS exports.

Each NFS cluster can have multiple exports, and you can choose the CephFS volume for each export. For example:


nfs cluster-1  export-1 on volume-a
nfs cluster-1  export-2 on volume-b
nfs cluster-2  export-a on volume-a
nfs cluster-2  export-b on volume-c



$ # Command for creating nfs export on cephfs 

$ ceph nfs export create cephfs <cluster_id> <pseudo_path> <fsname> [<path>] [--readonly] [--client_addr <value>...] [--squash <value>] [--sectype <value>...]


# example


root@ambedded-101:~# ceph nfs export create cephfs nfs-cluster-1 /user-a ambedded /

root@ambedded-101:~# ceph nfs export ls nfs-cluster-1 --detailed
[
  {
    "export_id": 2,
    "path": "/",
    "cluster_id": "nfs-cluster-1",
    "pseudo": "/user-a",
    "access_type": "RW",
    "squash": "none",
    "security_label": true,
    "protocols": [
      4
    ],
    "transports": [
      "TCP"
    ],
    "fsal": {
      "name": "CEPH",
      "user_id": "nfs.nfs-cluster-1.2",
      "fs_name": "ambedded"
    },
    "clients": []
  }
]
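You can also fetch the configuration of a single export by its pseudo path:

$ # show one export of the cluster by pseudo path
$ ceph nfs export info nfs-cluster-1 /user-a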


root@ambedded-101:~# ceph nfs export create cephfs nfs-cluster-1 /user-b openstack-manila /
{
    "bind": "/user-b",
    "fs": "openstack-manila",
    "path": "/",
    "cluster": "nfs-cluster-1",
    "mode": "RW"
}
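The optional flags from the command spec above can restrict an export. A sketch, using a hypothetical pseudo path /user-c and client subnet:

$ # a read-only export of the 'ambedded' volume, limited to one subnet
$ ceph nfs export create cephfs nfs-cluster-1 /user-c ambedded / --readonly --client_addr 172.18.2.0/24
$ # remove an export you no longer need
$ ceph nfs export rm nfs-cluster-1 /user-c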



Mount NFS on a client

https://www.ibm.com/docs/zh-tw/storage-ceph/7?topic=ncem-exporting-ceph-file-system-namespaces-over-nfs-protocol


apt-get update

apt-get install nfs-common


mkdir -p /mnt/nfs-cluster


# command
# mount an NFS export with ingress (via the virtual IP)
mount -t nfs $virtual_ip:/user-a /mnt/nfs-cluster
mount -t nfs 172.18.2.109:/user-a /mnt/nfs-cluster

# mount an NFS export without ingress (standalone NFS gateway)
mount -t nfs -o port=2049 172.18.2.101:/ambedded /mnt/nfs
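To make the mount persistent across reboots, you can add it to /etc/fstab. A sketch using the ingress virtual IP and mount point from above:

# /etc/fstab entry (one line); _netdev delays mounting until the network is up
172.18.2.109:/user-a  /mnt/nfs-cluster  nfs  defaults,_netdev  0  0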


Check NFS client mount


root@uvsvm-108:/mnt# df -h
Filesystem              Size  Used Avail Use% Mounted on
udev                    3.9G     0  3.9G   0% /dev
tmpfs                   795M  1.4M  794M   1% /run
/dev/md127               63G   11G   49G  18% /
tmpfs                   3.9G   39M  3.9G   1% /dev/shm
tmpfs                   5.0M     0  5.0M   0% /run/lock
tmpfs                   3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                   795M     0  795M   0% /run/user/0
172.18.2.109:/user-a    243G     0  243G   0% /mnt/nfs-cluster
172.18.2.101:/ambedded  243G     0  243G   0% /mnt/nfs
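A quick write test confirms the export is usable, not just mounted:

# create and list a test file on the NFS mount
touch /mnt/nfs-cluster/test-file
ls -l /mnt/nfs-cluster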



Reference 

https://www.ibm.com/docs/zh-tw/storage-ceph/7?topic=orchestrator-implementing-ha-cephfsnfs-service