1. Remove the old ceph-release package from all nodes

On an upgrade, ceph-deploy install complains that the ceph.repo file it wants to write conflicts with the one already installed:

[ceph1][DEBUG ] Preparing... ########################################
[ceph1][WARNIN] file /etc/yum.repos.d/ceph.repo from install of ceph-release-1-1.el7.noarch conflicts with file from package ceph-release-1-1.el7.noarch

Remove the conflicting package first:

yum remove ceph-release-1-1.el7.noarch
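Since the package has to be removed from every node, the cleanup can be scripted. A minimal sketch, assuming passwordless root ssh to ceph1, ceph2 and ceph3:

# remove the old ceph-release package on each node
for host in ceph1 ceph2 ceph3; do
    ssh "$host" yum remove -y ceph-release-1-1.el7.noarch
done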
2. Install the software on the first node, ceph1
ceph-deploy install --release luminous ceph1
[ceph1][DEBUG ] Complete!
[ceph1][INFO ] Running command: ceph --version
[ceph1][DEBUG ] ceph version 12.0.2 (5a1b6b3269da99a18984c138c23935e5eb96f73e)
3. Restart services on ceph1
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
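Before touching the next node, it is worth confirming that the restarted daemons are healthy again; a quick check is to grep the status output:

# the restarted mon should be back in the quorum and the mgr active
ceph -s | grep -E 'quorum|mgr'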
4. Install the software on the other nodes, ceph2 and ceph3
ceph-deploy install --release luminous ceph2 ceph3
and restart the services there as well
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
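These restarts can also be driven from ceph1 over ssh, one node at a time so that the monitor quorum is never lost. A sketch, again assuming passwordless root ssh:

# restart mon and mgr on the remaining nodes, one node at a time
for host in ceph2 ceph3; do
    ssh "$host" systemctl restart ceph-mon.target ceph-mgr.target
    sleep 30                       # rough pause; let the mon rejoin the quorum
    ceph -s | grep quorum          # expect quorum 0,1,2 ceph1,ceph2,ceph3
done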
5. Check the cluster status with ceph -s
     health HEALTH_OK
     monmap e3: 3 mons at {ceph1=10.1.16.1:6789/0,ceph2=10.1.16.2:6789/0,ceph3=10.1.16.3:6789/0}
            election epoch 96, quorum 0,1,2 ceph1,ceph2,ceph3
      fsmap e27: 1/1/1 up {0=ceph1=up:active}
        mgr active: ceph1 standbys: ceph3, ceph2
     osdmap e4649: 12 osds: 12 up, 12 in
      pgmap v226087: 1608 pgs, 4 pools, 2765 GB data, 691k objects
            5578 GB used, 5151 GB / 10729 GB avail
                1608 active+clean
  client io 10384 B/s wr, 0 op/s rd, 2 op/s wr
Then restart the OSDs node by node, waiting until the corresponding OSDs are back up before continuing
systemctl restart ceph-osd.target
[root@ceph1 ceph]# ceph osd tree
ID WEIGHT   TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 10.47839 root default
-2  3.49280     host ceph1
 0  0.87320         osd.0       up  1.00000          1.00000
 3  0.87320         osd.3       up  1.00000          1.00000
 6  0.87320         osd.6       up  1.00000          1.00000
 9  0.87320         osd.9       up  1.00000          1.00000
-3  3.49280     host ceph2
 1  0.87320         osd.1       up  1.00000          1.00000
 4  0.87320         osd.4       up  1.00000          1.00000
 7  0.87320         osd.7       up  1.00000          1.00000
10  0.87320         osd.10      up  1.00000          1.00000
-4  3.49280     host ceph3
12  0.87320         osd.12      up  1.00000          1.00000
13  0.87320         osd.13      up  1.00000          1.00000
14  0.87320         osd.14      up  1.00000          1.00000
15  0.87320         osd.15      up  1.00000          1.00000
and wait for the cluster to become healthy again before moving to the next node (after the last node, the flag warning handled in step 6 may keep it in HEALTH_WARN)
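A minimal sketch of this node-by-node restart, assuming root ssh access. It waits for all 12 OSDs to come back up and the placement groups to settle rather than for HEALTH_OK, since the flag warning from step 6 appears once every OSD runs luminous:

for host in ceph1 ceph2 ceph3; do
    ssh "$host" systemctl restart ceph-osd.target
    # wait for all 12 OSDs of this cluster to report up again
    until ceph osd stat | grep -q '12 up, 12 in'; do
        sleep 10
    done
    # and check that the placement groups are back to active+clean
    ceph -s | grep active+clean
done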
6. Set the osdmap flag
After the OSD restarts the cluster shows the warning:
health HEALTH_WARN
all OSDs are running luminous or later but the 'require_luminous_osds' osdmap flag is not set
Execute:
ceph osd set require_luminous_osds
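To confirm the flag took effect, check the osdmap flags line and the health (the flag name matches the 12.0.2 development build used here):

# the new flag should now be listed and the warning should clear
ceph osd dump | grep flags
ceph health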
7. Restart the remaining services
systemctl restart ceph-mds.target
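After the MDS restart, it is worth confirming that the daemon is back to up:active before the final check:

# the fsmap should again report the MDS as up:active
ceph mds stat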
8. Finally, check the status
[root@ceph1 ceph]# ceph -s
    cluster 2d856a9e-ff9e-4322-bedb-56f3be26b607
     health HEALTH_OK
     monmap e3: 3 mons at {ceph1=10.1.16.1:6789/0,ceph2=10.1.16.2:6789/0,ceph3=10.1.16.3:6789/0}
            election epoch 96, quorum 0,1,2 ceph1,ceph2,ceph3
      fsmap e33: 1/1/1 up {0=ceph1=up:active}
        mgr active: ceph1 standbys: ceph3, ceph2
     osdmap e4674: 12 osds: 12 up, 12 in
      pgmap v227520: 1608 pgs, 4 pools, 2765 GB data, 691k objects
            5573 GB used, 5156 GB / 10729 GB avail
                1608 active+clean
  client io 17394 B/s wr, 0 op/s rd, 2 op/s wr