We recommend a minimum of three Mars 400 appliances for your initial Ceph cluster.

We usually refer to the Mars 400 as an appliance or a chassis rather than a server node, because each Mars 400 actually contains eight Arm 64-bit server nodes inside a single 1U enclosure.

Each node can be configured as a Ceph monitor, an OSD, or an MDS (the metadata server for the Ceph file system).

With three Mars 400 appliances, you can deploy three monitors, one on each chassis. If any one chassis goes completely out of service, the remaining two monitors can still maintain quorum and keep the cluster running. Most of the other Arm nodes are usually deployed as OSD nodes; this is the typical layout for object storage and block storage use cases.
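With cephadm-managed clusters, the one-monitor-per-chassis layout can be expressed as a placement specification. This is a minimal sketch; the hostnames (`mars1-node1`, `mars2-node1`, `mars3-node1`) are hypothetical examples standing in for one node picked from each Mars 400:

```shell
# Pin the three monitor daemons to one node in each chassis
# (hostnames are illustrative; substitute your own node names)
ceph orch apply mon --placement="3 mars1-node1 mars2-node1 mars3-node1"

# Verify that all three monitors have joined the quorum
ceph quorum_status --format json-pretty
```

With this layout, the loss of any single chassis removes only one monitor, so the remaining two still form a majority quorum.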

If you are going to use the Ceph file system, you need at least one node running as an active MDS. The active MDS should not be co-located with a monitor or an OSD.
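On a cephadm-managed cluster, creating the file system and placing the MDS on a dedicated node can be sketched as follows. The file system name `cephfs` and the hostname `mars1-node8` are illustrative assumptions:

```shell
# Create a CephFS volume (this also deploys MDS daemons by default)
ceph fs volume create cephfs

# Pin the MDS to a dedicated node that runs no monitor or OSD
# (hostname is illustrative; substitute your own node name)
ceph orch apply mds cephfs --placement="1 mars1-node8"

# Check that the MDS is active
ceph fs status cephfs
```

Keeping the active MDS on its own node avoids memory and CPU contention with OSD and monitor daemons.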

You can configure the CRUSH map with three chassis buckets and place each node's OSDs under its own chassis. All your replica-3 pools can then use chassis as the failure domain. If you want to use erasure coding for your pools, you can use host as the failure domain instead.
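The chassis layout above can be sketched with standard Ceph CLI commands. The bucket name `chassis1`, the node name `mars1-node2`, the rule, profile, and pool names are all hypothetical examples:

```shell
# Create a chassis bucket and attach it under the default root
ceph osd crush add-bucket chassis1 chassis
ceph osd crush move chassis1 root=default

# Move a host (and its OSDs) into that chassis bucket
ceph osd crush move mars1-node2 chassis=chassis1

# Replicated rule with chassis as the failure domain,
# and a replica-3 pool that uses it
ceph osd crush rule create-replicated rep3-chassis default chassis
ceph osd pool create rbd-pool 128 128 replicated rep3-chassis

# Erasure-code profile with host as the failure domain,
# and an EC pool that uses it
ceph osd erasure-code-profile set ec42-host k=4 m=2 crush-failure-domain=host
ceph osd pool create ec-pool 128 128 erasure ec42-host
```

Repeat the bucket creation and host moves for the other two chassis so that each replica of a replica-3 pool lands in a different Mars 400.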

However, if you have only three chassis and the CRUSH rule failure domain is chassis, your replica-3 pools will not be able to self-heal back to full redundancy when a whole chassis is out of service, because only two chassis remain for three replicas. Deploying four or more chassis in the Ceph cluster makes it more robust.

You do not have to deploy more monitors when you expand your cluster beyond three Mars 400 appliances; three monitors are sufficient for most cluster sizes.