
I am running a 3-node Ceph cluster on Ubuntu Server 14.04. My problem is that 192 placement groups (PGs) are stuck in the state active+remapped. All nodes are online and all OSDs are up and in.

How can I get these PGs back to active+clean?

root@node1:~# ceph status
    cluster 776020a6-5c44-49c8-93e4-4a83703d4315
     health HEALTH_WARN 192 pgs stuck unclean
     monmap e1: 3 mons at {node1=192.168.178.101:6789/0,node2=192.168.178.102:6789/0,node3=192.168.178.103:6789/0}, election epoch 14, quorum 0,1,2 node1,node2,node3
     osdmap e235: 12 osds: 12 up, 12 in
      pgmap v341719: 392 pgs, 5 pools, 225 GB data, 70891 objects
            597 GB used, 6604 GB / 7201 GB avail
             200 active+clean
             192 active+remapped
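
In case it helps with diagnosis, the individual stuck PGs and the full health warning can be listed with the standard ceph CLI (output omitted here for brevity):

root@node1:~# ceph pg dump_stuck unclean
root@node1:~# ceph health detail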

root@node1:~# ceph osd tree
# id    weight  type name       up/down reweight
-9      7.02    root erasure
-3      2.34            host node2-erasure
7       0.9                     osd.7   up      1
6       0.9                     osd.6   up      1
-5      2.34            host node3-erasure
11      0.9                     osd.11  up      1
10      0.9                     osd.10  up      1
-7      2.34            host node1-erasure
2       0.9                     osd.2   up      1
3       0.9                     osd.3   up      1
-8      7.02    root cache
-2      2.34            host node2-cache
5       0.27                    osd.5   up      1
4       0.27                    osd.4   up      1
-4      2.34            host node3-cache
9       0.27                    osd.9   up      1
8       0.27                    osd.8   up      1
-6      2.34            host node1-cache
0       0.27                    osd.0   up      1
1       0.27                    osd.1   up      1
-1      0       root default

Does anyone have an idea?

Best regards, schlussbilanz

Can you upload the output of ceph report somewhere? It will tell more about the details. - Loic Dachary

1 Answer


My solution was to reweight all osd.{num} to the same value.
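
A rough sketch of that fix, assuming ceph osd crush reweight is the command meant here, and using 1.0 purely as an illustrative uniform weight (pick a value that matches your disk sizes):

root@node1:~# for i in $(seq 0 11); do ceph osd crush reweight osd.$i 1.0; done   # osd.0 .. osd.11
root@node1:~# ceph -w   # watch recovery until all PGs report active+clean

Equal weights give CRUSH an even distribution target within each root, which is presumably why equalizing them let the 192 remapped PGs settle back to active+clean.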