The surest way to install the latest version of cephadm is to add the Ceph package repository:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | \
gpg --dearmor -o /etc/apt/trusted.gpg.d/cephadm.gpg
echo deb https://download.ceph.com/debian-reef/ $(lsb_release -sc) main \
> /etc/apt/sources.list.d/cephadm.list
apt update

Confirm the version:

apt-cache policy cephadm

cephadm:
  Installed: (none)
  Candidate: 18.2.0-1jammy
  Version table:
     18.2.0-1jammy 500
        500 https://download.ceph.com/debian-reef jammy/main amd64 Packages
     17.2.6-0ubuntu0.22.04.1 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages
     17.2.5-0ubuntu0.22.04.3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-security/universe amd64 Packages
     17.1.0-0ubuntu3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages

With the Ceph repository in place, the candidate version (18.2.0-1jammy) now comes from download.ceph.com rather than the Ubuntu archives. Install it:

apt install cephadm
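Before bootstrapping the cluster in the next step, it may be worth confirming what was just installed and letting cephadm run its own host pre-flight checks; the two commands below are a quick sanity pass, not a required step:

cephadm version          # should report the 18.2.x reef release installed above
sudo cephadm check-host  # same container engine, lvm2 and time-sync checks the bootstrap runs

If check-host prints "Host looks OK", the bootstrap's own verification should pass as well.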
Initialize Ceph Cluster Monitor On Ceph Admin Node

Your nodes are now ready to deploy a Ceph storage cluster. To begin with, switch to the cephadmin user:

su - cephadmin
whoami

Output:

cephadmin

It is now time to bootstrap the Ceph cluster, which creates the first Ceph monitor daemon on the Ceph admin node. Run the command below, substituting the IP address with that of your Ceph admin node.

sudo cephadm bootstrap --mon-ip 192.168.122.240

Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 3.4.4 is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: 70d227de-83e3-11ee-9dda-ff8b7941e415
Verifying IP 192.168.122.240 port 3300 ...
Verifying IP 192.168.122.240 port 6789 ...
Mon IP `192.168.122.240` is in CIDR network `192.168.122.0/24`
Mon IP `192.168.122.240` is in CIDR network `192.168.122.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v18...
Ceph version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.122.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Verifying port 8765 ...
Verifying port 8443 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-admin...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
             URL: https://ceph-admin:8443/
            User: admin
        Password: hnrpt41gff
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/70d227de-83e3-11ee-9dda-ff8b7941e415/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
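One line in the bootstrap output above is worth acting on: "Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network". If your hosts have a second interface on a dedicated replication subnet, the bootstrap can be pointed at it so that OSD replication, recovery, and heartbeat traffic stays off the client-facing network; a minimal sketch, assuming a hypothetical 10.10.10.0/24 internal subnet:

sudo cephadm bootstrap --mon-ip 192.168.122.240 \
     --cluster-network 10.10.10.0/24

On a single-network lab setup like the one shown here, the default behavior is fine and no extra flag is needed.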