

I. Environment Preparation

1. Architecture

Official documentation: http://docs.ceph.org.cn/start/quick-start-preflight/

(Architecture diagram: Ceph distributed storage installation and deployment process)

2. Create ceph.repo

[root@admin-node yum.repos.d]# cat ceph.repo

[ceph]

name=ceph

baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/

gpgcheck=0

priority=1

[ceph-noarch]

name=cephnoarch

baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/

gpgcheck=0

priority=1

[root@admin-node yum.repos.d]#
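
Before installing anything, it is worth confirming that yum actually sees the two new repos; a quick sanity check with stock yum commands:

yum clean all

yum repolist | grep -i ceph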

3. Install the ceph-deploy tool on the admin node

yum install ceph-deploy

4. Configure NTP time synchronization

yum install chrony -y && systemctl start chronyd && systemctl status chronyd && systemctl enable chronyd && egrep -v "#|^$" /etc/chrony.conf

[root@osd-node2 ~]# egrep -v "#|^$" /etc/chrony.conf

server 10.100.50.120 iburst

driftfile /var/lib/chrony/drift

makestep 1.0 3

rtcsync

logdir /var/log/chrony
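
Every node here syncs against 10.100.50.120. To confirm that chrony has selected that server and is tracking it, the standard chronyc subcommands can be used (if 10.100.50.120 runs chronyd itself, its own config also needs an allow rule for the subnet, e.g. allow 10.100.50.0/24; that server-side detail is not shown in this guide):

chronyc sources -v

chronyc tracking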

5. Create a deployment user on each node

We recommend creating a dedicated user for ceph-deploy on every Ceph node in the cluster, but do not name it "ceph". Using the same user name across the whole cluster simplifies operations (it is not required), but avoid well-known user names, since attackers use them for brute-force attempts (e.g. root, admin, {productname}). The following steps describe how to create a user with passwordless sudo; replace {username} with a name of your choice.

useradd -d /home/{username} -m {username}

passwd {username}
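
This guide uses sysadmin as that user. As a concrete sketch, assuming that name, the full setup on each node might look like this (the NOPASSWD rule matches the line added via visudo later in this guide):

useradd -d /home/sysadmin -m sysadmin

passwd sysadmin

echo "sysadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/sysadmin

sudo chmod 0440 /etc/sudoers.d/sysadmin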

6. Configure passwordless SSH login

1) Generate a key pair; do not do this as sudo or the root user:

[sysadmin@admin-node ~]$ ssh-keygen -t rsa      (-t selects the key type)

Generating public/private rsa key pair.

Enter file in which to save the key (/home/sysadmin/.ssh/id_rsa):      (press Enter)

Created directory '/home/sysadmin/.ssh'.

Enter passphrase (empty for no passphrase):      (press Enter)

Enter same passphrase again:      (press Enter)

Your identification has been saved in /home/sysadmin/.ssh/id_rsa.      (private key path)

Your public key has been saved in /home/sysadmin/.ssh/id_rsa.pub.      (public key path)

The key fingerprint is:

SHA256:2jiM64WbNMHu3n6wrHIz3dED8ezKDcFCRnZ4erps8uo sysadmin@admin-node

The key's randomart image is:

(RSA 2048 randomart image omitted)

[sysadmin@admin-node ~]$

2) Copy the public key to the other nodes:

ssh-copy-id sysadmin@10.100.50.128

ssh-copy-id sysadmin@10.100.50.129

ssh-copy-id sysadmin@10.100.50.130

Test the login:

[sysadmin@admin-node ~]$ ssh 10.100.50.130

Last login: Fri May 4 14:25:14 2018 from 10.100.50.127

[sysadmin@osd-node2 ~]$

7. Configure /etc/hosts name resolution

[root@admin-node ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.100.50.127 admin-node

10.100.50.128 mon-node1

10.100.50.129 osd-node1

10.100.50.130 osd-node2

8. Edit ~/.ssh/config on the admin node to simplify SSH logins

[sysadmin@admin-node ~]$ cat ~/.ssh/config

Host mon-node1

Hostname mon-node1

User sysadmin

Host osd-node1

Hostname osd-node1

User sysadmin

Host osd-node2

Hostname osd-node2

User sysadmin

Login test:

[sysadmin@admin-node ~]$ ssh osd-node2

Last login: Fri May 4 16:55:54 2018 from admin-node

[sysadmin@osd-node2 ~]$

II. Installation and Deployment

1. Create a working directory to hold the configuration files and keyrings

/home/sysadmin/my-cluster
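
A minimal way to create it (as the sysadmin user; ceph-deploy writes ceph.conf, logs, and keyrings into the current directory, so the later commands are run from here):

mkdir ~/my-cluster

cd ~/my-cluster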

2. Disable requiretty

Change Defaults !visiblepw to Defaults visiblepw (via sudo visudo).

The error message looks like this:

[ceph_deploy.cli][INFO ] gpg_url : None

[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts mon-node1

[ceph_deploy.install][DEBUG ] Detecting platform for host mon-node1 ...

[mon-node1][DEBUG ] connection detected need for sudo

We trust you have received the usual lecture from the local System

Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.

#2) Think before you type.

#3) With great power comes great responsibility.

sudo: no tty present and no askpass program specified

[ceph_deploy][ERROR ] RuntimeError: connecting to host: mon-node1 resulted in errors: IOError cannot send (already closed?)

[sysadmin@admin-node my-cluster]$

On some distributions (e.g. CentOS), ceph-deploy fails if requiretty is set by default on your Ceph nodes. Disable it like this:

Run sudo visudo, find the Defaults requiretty setting, and change it to Defaults:ceph !requiretty so that ceph-deploy can log in as the ceph user and run sudo.

3. How to remove Ceph

If you run into trouble at some point and want to start over, the following commands clear the configuration:

ceph-deploy purgedata {ceph-node} [{ceph-node}]

ceph-deploy forgetkeys

The following command removes the Ceph packages as well:

ceph-deploy purge {ceph-node} [{ceph-node}]

After a purge you must reinstall Ceph.
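
For the nodes used in this guide, a complete reset would therefore look like:

ceph-deploy purge mon-node1 osd-node1 osd-node2

ceph-deploy purgedata mon-node1 osd-node1 osd-node2

ceph-deploy forgetkeys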

4. Create the cluster

1) Install Ceph on all nodes

If the error reads: [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-source'

You need to fix the EPEL repo, run yum makecache, or remove the stale repo cache files from /etc/yum.repos.d/, e.g.: rm -rf ceph.repo.rpmnew ceph.repo.rpmsave epel.repo.rpmnew

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

ceph-deploy install osd-node1

If you see the error below, delete ceph.repo.rpmnew, epel-testing.repo, ceph.repo.rpmsave, and epel.repo.rpmnew from /etc/yum.repos.d.

[osd-node1][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority

[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-source'
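
The install command above targets a single node; ceph-deploy install accepts several hosts at once, so the whole cluster can be covered in one invocation, e.g.:

ceph-deploy install admin-node mon-node1 osd-node1 osd-node2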

2) Run the following command to create the monitor cluster; for now there is only one monitor:

ceph-deploy new mon-node1

If you hit the error: sudo: no tty present and no askpass program specified

[ceph_deploy][ERROR ] RuntimeError: connecting to host: mon-node1 resulted in errors: IOError cannot send (already closed?)

run sudo visudo and add the following line:

sysadmin ALL=(ALL) NOPASSWD: ALL

3) Configure the monitor and gather the keys; this must be run from the my-cluster directory:

[sysadmin@admin-node my-cluster]$ ceph-deploy mon create-initial

..........................

.........................

[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring

[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring

[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpeANrgr

[sysadmin@admin-node my-cluster]$ ls

ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log

ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring

[sysadmin@admin-node my-cluster]$

4) If problems occur during deployment and you redeploy, you may see the following error:

[mon-node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon-node1.asok mon_status

[mon-node1][ERROR ] no valid command found; 10 closest matches:

[mon-node1][ERROR ] config set <var> <val> [<val>...]

[mon-node1][ERROR ] version

[mon-node1][ERROR ] git_version

[mon-node1][ERROR ] help

[mon-node1][ERROR ] config show

[mon-node1][ERROR ] get_command_descriptions

[mon-node1][ERROR ] config get <var>

[mon-node1][ERROR ] perfcounters_dump

[mon-node1][ERROR ] 2

[mon-node1][ERROR ] config diff

[mon-node1][ERROR ] admin_socket: invalid command

[ceph_deploy.mon][WARNIN] mon.mon-node1 monitor is not yet in quorum, tries left: 2

[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying

[mon-node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon-node1.asok mon_status

[mon-node1][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory

[ceph_deploy.mon][WARNIN] mon.mon-node1 monitor is not yet in quorum, tries left: 1

[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying

[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:

[ceph_deploy.mon][ERROR ] mon-node1

Solution:

(1) Remove the packages

[sysadmin@admin-node my-cluster]$ ceph-deploy purge mon-node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy purge mon-node1

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f31db696ef0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] host : ['mon-node1']

[ceph_deploy.cli][INFO ] func : <function purge at 0x7f31dbf89500>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.install][INFO ] note that some dependencies *will not* be removed because they can cause issues with qemu-kvm

[ceph_deploy.install][INFO ] like: librbd1 and librados2

[ceph_deploy.install][DEBUG ] Purging on cluster ceph hosts mon-node1

[ceph_deploy.install][DEBUG ] Detecting platform for host mon-node1 ...

[mon-node1][DEBUG ] connection detected need for sudo

[mon-node1][DEBUG ] connected to host: mon-node1

[mon-node1][DEBUG ] detect platform information from remote host

[mon-node1][DEBUG ] detect machine type

[ceph_deploy.install][INFO ] Distro info: CentOS linux 7.5.1804 Core

[mon-node1][INFO ] Purging Ceph on mon-node1

[mon-node1][INFO ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw

[mon-node1][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave

[mon-node1][INFO ] Running command: sudo yum clean all

[mon-node1][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities

[mon-node1][DEBUG ] Cleaning repos: base epel extras updates

[mon-node1][DEBUG ] Cleaning up everything

[mon-node1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos

[mon-node1][DEBUG ] Cleaning up list of fastest mirrors

[sysadmin@admin-node my-cluster]$

(2) Delete the keys

[sysadmin@admin-node my-cluster]$ ls

ceph.conf ceph-deploy-ceph.log ceph.mon.keyring

[sysadmin@admin-node my-cluster]$ ceph-deploy forgetkeys

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy forgetkeys

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe4eaed35a8>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function forgetkeys at 0x7fe4eb7c2e60>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[sysadmin@admin-node my-cluster]$ ls

ceph.conf ceph-deploy-ceph.log

(3) Purge the deployment data

[sysadmin@admin-node my-cluster]$ ceph-deploy purgedata mon-node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy purgedata mon-node1

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f51e0d0a878>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] host : ['mon-node1']

[ceph_deploy.cli][INFO ] func : <function purgedata at 0x7f51e15fd578>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.install][DEBUG ] Purging data from cluster ceph hosts mon-node1

[mon-node1][DEBUG ] connection detected need for sudo

[mon-node1][DEBUG ] connected to host: mon-node1

[mon-node1][DEBUG ] detect platform information from remote host

[mon-node1][DEBUG ] detect machine type

[mon-node1][DEBUG ] find the location of an executable

[mon-node1][DEBUG ] connection detected need for sudo

[mon-node1][DEBUG ] connected to host: mon-node1

[mon-node1][DEBUG ] detect platform information from remote host

[mon-node1][DEBUG ] detect machine type

[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core

[mon-node1][INFO ] purging data on mon-node1

[mon-node1][INFO ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph

[mon-node1][INFO ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/

[sysadmin@admin-node my-cluster]$

(4) Reinstall and redeploy Ceph

5. Adjust the replica count and related settings

[sysadmin@admin-node my-cluster]$ vim ceph.conf

Add the information below to ceph.conf: it sets the public network, allows a clock drift of up to 2 s between monitors (the default is 0.05 s), and sets the default replica count to 2.

public network = 10.100.50.0/24

mon clock drift allowed = 2

osd pool default size = 2
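
For reference, a sketch of what the complete ceph.conf looks like after this edit. The [global] header, fsid, and mon entries are generated by ceph-deploy new (the fsid below is the cluster ID this cluster reports later in ceph -s); note that changed settings still have to be pushed to the nodes, as shown in step 7:

[global]
fsid = 77524d79-bc18-471a-8956-f5045579cc74
mon_initial_members = mon-node1
mon_host = 10.100.50.128
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 10.100.50.0/24
mon clock drift allowed = 2
osd pool default size = 2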

6. Format the OSD disks

List the disks on the OSD nodes:

[sysadmin@admin-node my-cluster]$ ceph-deploy disk list osd-node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy disk list osd-node1

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : list

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x111dcb0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function disk at 0x1108398>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] disk : [('osd-node1', None, None)]

[osd-node1][DEBUG ] connection detected need for sudo

[osd-node1][DEBUG ] connected to host: osd-node1

[osd-node1][DEBUG ] detect platform information from remote host

[osd-node1][DEBUG ] detect machine type

[osd-node1][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.4.1708 Core

[ceph_deploy.osd][DEBUG ] Listing disks on osd-node1...

[osd-node1][DEBUG ] find the location of an executable

[osd-node1][INFO ] Running command: sudo /usr/sbin/ceph-disk list

[osd-node1][DEBUG ] /dev/dm-0 other, ext4, mounted on /

[osd-node1][DEBUG ] /dev/sda :

[osd-node1][DEBUG ] /dev/sda3 other, LVM2_member

[osd-node1][DEBUG ] /dev/sda2 swap, swap

[osd-node1][DEBUG ] /dev/sda1 other, ext4, mounted on /boot

[osd-node1][DEBUG ] /dev/sdb other, unknown

[osd-node1][DEBUG ] /dev/sdc other, unknown

[osd-node1][DEBUG ] /dev/sdd other, unknown

[osd-node1][DEBUG ] /dev/sr0 other, unknown

[sysadmin@admin-node my-cluster]$

Use sdb on each OSD node as the journal disk and sdc as the data disk; partition and format each of them, using the GPT partition table and XFS filesystem that Ceph recommends. The commands for sdc follow; sdb is handled the same way (see the sketch after them).

parted -a optimal --script /dev/sdc -- mktable gpt

parted -a optimal --script /dev/sdc -- mkpart primary xfs 0% 100%

mkfs.xfs /dev/sdc1
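
The same steps for the journal disk, assuming sdb as described above (the disk listing in the next step indeed shows sdb1 formatted as xfs):

parted -a optimal --script /dev/sdb -- mktable gpt

parted -a optimal --script /dev/sdb -- mkpart primary xfs 0% 100%

mkfs.xfs /dev/sdb1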

7. Prepare the OSD disks

Check the disks:

[sysadmin@admin-node my-cluster]$ ceph-deploy disk list osd-node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy disk list osd-node1

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : list

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1580cb0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function disk at 0x156b398>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] disk : [('osd-node1', None, None)]

[osd-node1][DEBUG ] connection detected need for sudo

[osd-node1][DEBUG ] connected to host: osd-node1

[osd-node1][DEBUG ] detect platform information from remote host

[osd-node1][DEBUG ] detect machine type

[osd-node1][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.4.1708 Core

[ceph_deploy.osd][DEBUG ] Listing disks on osd-node1...

[osd-node1][DEBUG ] find the location of an executable

[osd-node1][INFO ] Running command: sudo /usr/sbin/ceph-disk list

[osd-node1][DEBUG ] /dev/dm-0 other, ext4, mounted on /

[osd-node1][DEBUG ] /dev/sda :

[osd-node1][DEBUG ] /dev/sda3 other, LVM2_member

[osd-node1][DEBUG ] /dev/sda2 swap, swap

[osd-node1][DEBUG ] /dev/sda1 other, ext4, mounted on /boot

[osd-node1][DEBUG ] /dev/sdb :

[osd-node1][DEBUG ] /dev/sdb1 other, xfs

[osd-node1][DEBUG ] /dev/sdc :

[osd-node1][DEBUG ] /dev/sdc1 other, xfs

[osd-node1][DEBUG ] /dev/sdd other, unknown

[osd-node1][DEBUG ] /dev/sr0 other, unknown

[sysadmin@admin-node my-cluster]$

[sysadmin@admin-node my-cluster]$ ceph-deploy disk list osd-node2

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy disk list osd-node2

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : list

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1122cb0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function disk at 0x110d398>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] disk : [('osd-node2', None, None)]

[osd-node2][DEBUG ] connection detected need for sudo

[osd-node2][DEBUG ] connected to host: osd-node2

[osd-node2][DEBUG ] detect platform information from remote host

[osd-node2][DEBUG ] detect machine type

[osd-node2][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.4.1708 Core

[ceph_deploy.osd][DEBUG ] Listing disks on osd-node2...

[osd-node2][DEBUG ] find the location of an executable

[osd-node2][INFO ] Running command: sudo /usr/sbin/ceph-disk list

[osd-node2][DEBUG ] /dev/dm-0 other, ext4, mounted on /

[osd-node2][DEBUG ] /dev/sda :

[osd-node2][DEBUG ] /dev/sda3 other, LVM2_member

[osd-node2][DEBUG ] /dev/sda2 swap, swap

[osd-node2][DEBUG ] /dev/sda1 other, ext4, mounted on /boot

[osd-node2][DEBUG ] /dev/sdb :

[osd-node2][DEBUG ] /dev/sdb1 other, xfs

[osd-node2][DEBUG ] /dev/sdc :

[osd-node2][DEBUG ] /dev/sdc1 other, xfs

[osd-node2][DEBUG ] /dev/sdd other, unknown

[osd-node2][DEBUG ] /dev/sr0 other, unknown

[sysadmin@admin-node my-cluster]$

The following commands must be run from /etc/ceph on the admin node (note the argument format: node:data-disk:journal-disk):

ceph-deploy osd prepare osd-node1:sdc1:sdb1

ceph-deploy osd prepare osd-node2:sdc1:sdb1

If you get the error below, the config file has not been synchronized to all nodes and needs to be pushed manually: ceph-deploy --overwrite-conf config push admin-node osd-node1 osd-node2

[ceph_deploy.osd][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite

[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

8. Activate the OSD disks

ceph-deploy osd activate osd-node1:sdc1:sdb1

ceph-deploy osd activate osd-node2:sdc1:sdb1

If you run into the error below, it is a permissions problem: the owner and group of the disk devices must be changed to ceph (not the deployment account you created, but the 'ceph' account that the Ceph packages create themselves).

[osd-node1][WARNIN] ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'0', '--monmap', '/var/lib/ceph/tmp/mnt.wMliCA/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.wMliCA', '--osd-journal', '/var/lib/ceph/tmp/mnt.wMliCA/journal', '--osd-uuid', u'14a500fd-a030-427a-b007-f16f6f4bbd4d', '--keyring', '/var/lib/ceph/tmp/mnt.wMliCA/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2018-05-23 10:47:41.254049 7fece10c8800 -1 filestore(/var/lib/ceph/tmp/mnt.wMliCA) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.wMliCA/journal: (13) Permission denied

[osd-node1][WARNIN] 2018-05-23 10:47:41.254068 7fece10c8800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13

[osd-node1][WARNIN] 2018-05-23 10:47:41.254123 7fece10c8800 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.wMliCA: (13) Permission denied

[osd-node1][WARNIN]

[osd-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc1

Change the owner and group of the devices to ceph:

[root@osd-node1 yum.repos.d]# cat /etc/passwd

root:x:0:0:root:/root:/bin/bash

bin:x:1:1:bin:/bin:/sbin/nologin

......

sysadmin:x:1000:1000:sysadmin:/home/sysadmin:/bin/bash

ceph:x:167:167:Ceph daemons:/var/lib/ceph:/sbin/nologin

[root@osd-node1 yum.repos.d]# chown ceph:ceph /dev/sdb1

[root@osd-node1 yum.repos.d]# chown ceph:ceph /dev/sdc1

[root@osd-node1 yum.repos.d]# ll /dev/sdb1

brw-rw---- 1 ceph ceph 8, 17 May 23 10:29 /dev/sdb1

[root@osd-node1 yum.repos.d]# ll /dev/sdc1

brw-rw---- 1 ceph ceph 8, 33 May 23 10:33 /dev/sdc1

[root@osd-node1 yum.repos.d]#
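
Note that ownership set with chown on a device node does not survive a reboot. One common way to make it persistent is a udev rule, e.g. a file such as /etc/udev/rules.d/99-ceph-osd.rules (the file name is illustrative) containing:

KERNEL=="sdb1", OWNER="ceph", GROUP="ceph", MODE="0660"
KERNEL=="sdc1", OWNER="ceph", GROUP="ceph", MODE="0660"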

9. Prepare and activate the disks in one step

ceph-deploy osd create osd-node1:sdc1:sdb1

ceph-deploy osd create osd-node2:sdc1:sdb1

10. View the OSD tree

[sysadmin@admin-node ceph]$ ceph osd tree

ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 0.03897 root default

-2 0.01949 host osd-node1

0 0.01949 osd.0 up 1.00000 1.00000

-3 0.01949 host osd-node2

1 0.01949 osd.1 up 1.00000 1.00000

[sysadmin@admin-node ceph]$

If you get the error below: my keyrings all live under /home/sysadmin/my-cluster/, so copy all of those files into /etc/ceph/ and change their permissions to 755.

[sysadmin@admin-node my-cluster]$ ceph osd tree

2018-05-23 11:08:38.913940 7fbd0e84d700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory

2018-05-23 11:08:38.913953 7fbd0e84d700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication

2018-05-23 11:08:38.913955 7fbd0e84d700 0 librados: client.admin initialization error (2) No such file or directory

Error connecting to cluster: ObjectNotFound

[sysadmin@admin-node my-cluster]$

[root@admin-node ceph]# cp /home/sysadmin/my-cluster/* /etc/ceph/

cp: overwrite ‘/etc/ceph/ceph.bootstrap-mds.keyring’? y

cp: overwrite ‘/etc/ceph/ceph.conf’? y

[root@admin-node ceph]# chmod 755 *

[root@admin-node ceph]# ll

total 172

-rwxr-xr-x 1 root root 113 May 23 11:08 ceph.bootstrap-mds.keyring

-rwxr-xr-x 1 root root 71 May 23 11:08 ceph.bootstrap-mgr.keyring

-rwxr-xr-x 1 root root 113 May 23 11:08 ceph.bootstrap-osd.keyring

-rwxr-xr-x 1 root root 113 May 23 11:08 ceph.bootstrap-rgw.keyring

-rwxr-xr-x 1 root root 129 May 23 11:08 ceph.client.admin.keyring

-rwxr-xr-x 1 root root 225 May 23 11:08 ceph.conf

-rwxr-xr-x 1 root root 142307 May 23 11:08 ceph-deploy-ceph.log

-rwxr-xr-x 1 root root 73 May 23 11:08 ceph.mon.keyring

-rwxr-xr-x. 1 root root 92 Oct 4 2017 rbdmap

-rwxr-xr-x 1 root root 0 May 23 10:33 tmpAlNIWB

[root@admin-node ceph]#

11. Check the cluster status

[sysadmin@admin-node my-cluster]$ ceph -s

cluster 77524d79-bc18-471a-8956-f5045579cc74

health HEALTH_OK

monmap e1: 1 mons at {mon-node1=10.100.50.128:6789/0}

election epoch 3, quorum 0 mon-node1

osdmap e11: 2 osds: 2 up, 2 in

flags sortbitwise,require_jewel_osds

pgmap v19: 64 pgs, 1 pools, 0 bytes data, 0 objects

68400 kB used, 40869 MB / 40936 MB avail

64 active+clean

[sysadmin@admin-node my-cluster]$ ceph osd tree

ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 0.03897 root default

-2 0.01949 host osd-node1

0 0.01949 osd.0 up 1.00000 1.00000

-3 0.01949 host osd-node2

1 0.01949 osd.1 up 1.00000 1.00000

[sysadmin@admin-node my-cluster]$
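
A few other standard checks that confirm the cluster is healthy, all stock ceph subcommands:

ceph health

ceph df

ceph quorum_status --format json-pretty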
