This page shows how to migrate a replicated pool to an erasure-coded pool, using an RGW data pool as the example.


Environment:

  • ceph version: 17.2.6
  • uvs version: 3.17.5



Prerequisites:

  1. A running Ceph cluster
  2. The Ceph admin keyring
  3. The Ceph client package
  4. An erasure code profile (see the sketch below)
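
If the erasure code profile does not exist yet, it can be created up front. The profile name below matches the value passed to the pool create command in Step 1; the 4+2 layout and the OSD failure domain are assumptions inferred from the "ec42_osd" name, so adjust them to your cluster.

# Assumed 4+2 layout with OSD failure domain; verify before use
ceph osd erasure-code-profile set ec42_osd k=4 m=2 crush-failure-domain=osd
ceph osd erasure-code-profile get ec42_osd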


Step 1: Create the new erasure-coded pool


data_pool="taipei.rgw.buckets.data"
ec_pool="taipei.rgw.buckets.data.ec42"
ec_rule="ec42_osd"
pg_size="128"

# Create the erasure-coded pool (pg_num and pgp_num both set to $pg_size)
# and tag it for use by RGW
ceph osd pool create "$ec_pool" "$pg_size" "$pg_size" erasure "$ec_rule"
ceph osd pool application enable "$ec_pool" rgw
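
A quick read-only check to confirm the new pool exists and uses the intended erasure code profile:

# Show the pool definition and its erasure code profile
ceph osd pool ls detail | grep "$ec_pool"
ceph osd pool get "$ec_pool" erasure_code_profile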


Step 2: Create the cache tier


data_pool="taipei.rgw.buckets.data"
ec_pool="taipei.rgw.buckets.data.ec42"

# Attach the existing replicated pool as a cache tier in front of the EC pool
ceph osd tier add "$ec_pool" "$data_pool" --force-nonempty
# readproxy mode stops promoting new objects into the cache so it can be drained
ceph osd tier cache-mode "$data_pool" readproxy
# A hit set type must be set on the cache pool for the cache-tiering agent
ceph osd pool set "$data_pool" hit_set_type bloom
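
To confirm the tier relationship took effect, the data pool should now be listed as a tier of the EC pool (read-only check):

# The data pool entry should show "tier_of" pointing at the EC pool's id
ceph osd dump | grep "$data_pool"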


Step 3: Move data from the data pool to the EC pool

data_pool="taipei.rgw.buckets.data"

# Check how much data is stored in each pool
ceph df

# Fast mode: setting target_max_bytes to a tiny value makes the cache-tiering
# agent flush and evict aggressively. Only use this when there is no client I/O,
# because it will block client requests.
ceph osd pool set "$data_pool" target_max_bytes 100

# Set it back to 0 when you want to resume client I/O
ceph osd pool set "$data_pool" target_max_bytes 0

# Slow mode: to keep serving client I/O while draining the pool gradually,
# flush and evict manually instead
rados -p "$data_pool" cache-flush-evict-all
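
The migration is done when the data pool's STORED size drops to roughly zero and the EC pool has grown by about the same amount. A simple way to keep an eye on it:

# Re-run periodically; the replicated pool should shrink toward zero
watch -n 60 'ceph df | grep "taipei.rgw.buckets.data"'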

Step 4: Switch over to the erasure-coded pool

data_pool="taipei.rgw.buckets.data"
ec_pool="taipei.rgw.buckets.data.ec42"
rgw_service="rgw.asia.taipei"

# Stop the RGW service first so there is no client I/O during the switch
ceph orch stop "$rgw_service"

# Flush and evict until the data pool holds no more objects
rados -p "$data_pool" cache-flush-evict-all

# Remove the cache tier relationship
ceph osd tier remove "$ec_pool" "$data_pool"

# Rename the pools so the EC pool takes over the original name

ceph osd pool rename "$data_pool" "$data_pool".old
ceph osd pool rename "$ec_pool" "$data_pool"

# Start the service again

ceph orch start "$rgw_service"
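
After RGW is back up and objects are readable again, the old replicated pool can be checked and, optionally, removed. Deleting a pool is destructive and requires mon_allow_pool_delete, so treat this as an optional cleanup sketch:

# Confirm the renamed old pool is empty
rados -p "taipei.rgw.buckets.data.old" ls | head
ceph df

# Optional cleanup; requires: ceph config set mon mon_allow_pool_delete true
ceph osd pool delete "taipei.rgw.buckets.data.old" "taipei.rgw.buckets.data.old" --yes-i-really-really-mean-it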


Reference:

https://ceph.io/en/news/blog/2015/ceph-pool-migration/