
I installed Ceph on servers "A" and "B", and I would like to mount it from server "C" or "D".

But I ran into the error below.

ceph-fuse[4628]: ceph mount failed with (95) Operation not supported

My server configuration is as follows.

A Server: ubuntu16.04(ceph-server) 10.1.1.54
B Server: ubuntu16.04(ceph-server) 10.1.1.138
C Server: AmazonLinux(client)
D Server: ubuntu16.04(client)

and ceph.conf:

[global]
fsid = 44f299ac-ff11-41c8-ab96-225d62cb3226
mon_initial_members = node01, node02
mon_host = 10.1.1.54,10.1.1.138
auth cluster required = none
auth service required = none
auth client required = none
auth supported = none
osd pool default size = 2
public network = 10.1.1.0/24

Ceph itself appears to be installed correctly:

ceph health

HEALTH_OK

ceph -s

  cluster 44f299ac-ff11-41c8-ab96-225d62cb3226
     health HEALTH_OK
     monmap e1: 2 mons at {node01=10.1.1.54:6789/0,node02=10.1.1.138:6789/0}
            election epoch 12, quorum 0,1 node01,node02
     osdmap e41: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v100: 64 pgs, 1 pools, 306 bytes data, 4 objects
            69692 kB used, 30629 MB / 30697 MB avail
                  64 active+clean

An error occurred when using the ceph-fuse command.

sudo ceph-fuse -m 10.1.1.138:6789 /mnt/mycephfs/ --debug-auth=10 --debug-ms=10
ceph-fuse[4628]: starting ceph client
2017-11-02 08:57:22.905630 7f8cfdd60f00 -1 init, newargv = 0x55779de6af60 newargc=11
ceph-fuse[4628]: ceph mount failed with (95) Operation not supported
ceph-fuse[4626]: mount failed: (95) Operation not supported

I got an error saying "ceph mount failed with (95) Operation not supported"

I added the option "--auth-client-required=none":

sudo ceph-fuse -m 10.1.1.138:6789 /mnt/mycephfs/ --debug-auth=10 --debug-ms=10 --auth-client-required=none
ceph-fuse[4649]: starting ceph client
2017-11-02 09:03:47.501363 7f1239858f00 -1 init, newargv = 0x5597621eaf60 newargc=11

The behavior changed: now there is no response at all (the command hangs).

I got the error below when mounting without the ceph-fuse command.

sudo mount -t ceph 10.1.1.138:6789:/ /mnt/mycephfs

can't read superblock

Somehow, it seems the client still needs to authenticate, even with "auth supported = none".

In that case, how can I pass authentication from servers "C" or "D"?

Please let me know if there is a possible cause other than authentication.


2 Answers


I think you need more steps, such as creating the file system, so you should re-check your installation steps against your purpose. Ceph has multiple components for its different services (object storage, block storage, file system, and API), and each service requires its own configuration steps.

This installation guide may be helpful for your case:

https://github.com/infn-bari-school/cloud-storage-tutorials/wiki/Ceph-cluster-installation-(jewel-on-CentOS)

If you want to build a Ceph file system for testing, you can build a small CephFS with the following installation steps. I'll skip the details of the steps and CLI usage; you can get more information from the official documentation.

Environment information

  • Ceph version: Jewel, 10.2.9
  • OS: CentOS 7.4

Prerequisites before installing the Ceph file system.

  • This configuration requires 4 nodes:

    • ceph-admin node: deploy monitor, metadata server
    • ceph-osd0: osd service
    • ceph-osd1: osd service
    • ceph-osd2: osd service
  • Enable NTP on all nodes

  • The OS user used for deploying Ceph components requires privilege escalation (e.g. sudoers)
  • SSH public key configuration (direction: ceph-admin -> OSD nodes)

Install the ceph-deploy tool on the ceph-admin node.

# yum install -y ceph-deploy

Deploying the Ceph components required for the Ceph file system

  1. Create the cluster on the ceph-admin node as a normal OS user (the one used for deploying Ceph components)

    $ mkdir ./cluster

    $ cd ./cluster

    $ ceph-deploy new ceph-admin

  2. Modify the ceph.conf in the cluster directory.

    $ vim ceph.conf

    [global]

    ..snip...

    mon_initial_members = ceph-admin

    mon_host = $MONITORSERVER_IP_OR_HOSTNAME

    auth_cluster_required = cephx

    auth_service_required = cephx

    auth_client_required = cephx

    # the number of replicas for objects in the pool, default value is 3

    osd pool default size = 3

    public network = $YOUR_SERVICE_NETWORK_CIDR

  3. Install the monitor and OSD services on the related nodes.

    $ ceph-deploy install --release jewel ceph-admin ceph-osd0 ceph-osd1 ceph-osd2

  4. initiate monitor service

    $ ceph-deploy mon create-initial

  5. Create the OSD devices

    ceph-deploy osd create ceph-osd{0..2}:vdb
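The step-2 ceph.conf can be rendered with concrete values before deploying. A minimal shell sketch, where the monitor IP and network CIDR are example values only:

```shell
# Render the step-2 ceph.conf with concrete values (example values only).
MONITORSERVER_IP_OR_HOSTNAME=10.1.1.54
YOUR_SERVICE_NETWORK_CIDR=10.1.1.0/24
cat > ceph.conf <<EOF
[global]
mon_initial_members = ceph-admin
mon_host = ${MONITORSERVER_IP_OR_HOSTNAME}
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# the number of replicas for objects in the pool, default value is 3
osd pool default size = 3
public network = ${YOUR_SERVICE_NETWORK_CIDR}
EOF
```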

Adding metadata server component for Ceph file system service.

  1. Add the metadata server (this service is required only for the Ceph file system)

    ceph-deploy mds create ceph-admin

  2. check the status

    ceph mds stat

  3. Create the pools for CephFS

    ceph osd pool create cephfs_data_pool 64

    ceph osd pool create cephfs_meta_pool 64

  4. Create the Ceph file system

    ceph fs new cephfs cephfs_meta_pool cephfs_data_pool
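On pool sizing: the pg_num of 64 used above is a reasonable small-test value. The rule of thumb often cited in the Ceph docs is (OSDs × 100) / replicas, rounded up to a power of two, then divided among the pools; a quick shell sketch for this 3-OSD cluster:

```shell
# Rule of thumb from the Ceph docs: total PGs ~= (OSDs * 100) / replicas,
# rounded up to the next power of two, then divided among your pools.
osds=3
replicas=3
target=$(( osds * 100 / replicas ))   # 100
pgs=1
while [ "$pgs" -lt "$target" ]; do
  pgs=$(( pgs * 2 ))
done
echo "$pgs"   # 128 total; e.g. 64 per pool for the two CephFS pools
```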

Mount the Ceph file system

  1. The ceph-fuse package is required on the node that performs the mount.

  2. mount as the cephFS

    ceph-fuse -m MONITOR_SERVER_IP_OR_HOSTNAME:PORT_NUMBER <LOCAL_MOUNTPOINT>
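To make the mount persistent across reboots, an /etc/fstab entry can be used. This is a sketch based on the jewel-era documented syntax; the admin user, monitor address, and paths are assumptions for this setup:

```
# ceph-fuse client
id=admin,conf=/etc/ceph/ceph.conf  /mnt/mycephfs  fuse.ceph  defaults,_netdev  0 0

# or the kernel client
10.1.1.138:6789:/  /mnt/mycephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 2
```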

End...


I solved this problem by fixing three settings.

1.

I reverted the auth settings in ceph.conf to the following:

auth cluster required = cephx
auth service required = cephx
auth client required = cephx

2.

The public network setting was wrong.

Before (wrong):

public network = 10.1.1.0/24

After (fixed):

public network = 10.0.0.0/8

My client IP address was 10.1.0.238, which is outside 10.1.1.0/24... It was a careless mistake.
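A mismatch like this is easy to catch with a quick check. Below is a small pure-bash sketch (`ip_to_int` and `in_cidr` are ad-hoc helper names; `ipcalc` or Python's `ipaddress` module are sturdier) that tests whether an IP falls inside a CIDR:

```shell
# Pure-bash CIDR membership check (bash-only: uses <<< and $(( )) arithmetic).
ip_to_int() {
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
in_cidr() {
  local ip net bits mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 10.1.0.238 10.1.1.0/24 && echo inside || echo outside   # outside
in_cidr 10.1.0.238 10.0.0.0/8  && echo inside || echo outside   # inside
```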

3.

I changed the secret option to the secretfile option, and everything was fine.

This case failed:

sudo mount -t ceph 10.1.1.138:6789:/ /mnt/mycephfs -o name=client.admin,secret=`sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring`

output:

mount error 1 = Operation not permitted

But this case succeeded:

sudo mount -vvvv -t ceph 10.1.1.138:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret

output:

parsing options: rw,name=admin,secretfile=admin.secret
mount: error writing /etc/mtab: Invalid argument

※ The "Invalid argument" error (about writing /etc/mtab) seems safe to ignore.

Apparently, both are the same key:

sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
AQBd9f9ZSL46MBAAqwepJDC5tuIL/uYp0MXjCA==


cat admin.secret 
AQBd9f9ZSL46MBAAqwepJDC5tuIL/uYp0MXjCA==

I don't know the reason, but I could mount using the secretfile option.
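For reference, the secretfile must contain only the bare base64 key, which is exactly what `ceph-authtool -p` prints. A standalone sketch of the same extraction with `sed`, for a client that doesn't have ceph-authtool installed (the keyring body below just reproduces the format and the key already shown above):

```shell
# A keyring stores the key in an INI-style section; this reproduces the
# format (and the key shown above) as a standalone example.
cat > ceph.client.admin.keyring <<'EOF'
[client.admin]
    key = AQBd9f9ZSL46MBAAqwepJDC5tuIL/uYp0MXjCA==
EOF

# secretfile= expects ONLY the bare base64 key -- exactly what
# `ceph-authtool -p` prints. sed can do the same extraction:
sed -n 's/^[[:space:]]*key[[:space:]]*=[[:space:]]*//p' \
    ceph.client.admin.keyring > admin.secret
chmod 600 admin.secret
cat admin.secret   # AQBd9f9ZSL46MBAAqwepJDC5tuIL/uYp0MXjCA==
```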