Proxmox, Ceph, and Linux Helps

Passwordless SSH

If you don’t want to be prompted for a password each time rsync makes a connection — and you don’t — make sure that you have rsync set up to log in using an SSH key rather than a password. To do this, create an SSH key on the local machine (use ed25519 or rsa; DSA keys are deprecated and rejected by modern OpenSSH) using

ssh-keygen -t ed25519

and press Enter when prompted for a passphrase. After the key is created, use

ssh-copy-id -i ~/.ssh/id_ed25519.pub USERNAME@REMOTEHOST

to copy the public key to the remote host.
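
To confirm the key is actually being used (USERNAME and REMOTEHOST are placeholders):

ssh USERNAME@REMOTEHOST 'echo key login works'     # should not prompt for a password
rsync -avz -e ssh /home/USERNAME/docs/ USERNAME@REMOTEHOST:/backup/docs/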


Check Whether Your Home Directory Is Encrypted

ls -A /home

If a “.ecryptfs” folder is there, then home directories are encrypted with eCryptfs, and sshd cannot read ~/.ssh/authorized_keys until after you have logged in. Use one of the two workarounds below.

Option 1: Keep authorized_keys Outside the Encrypted Home

sudo mkdir /etc/ssh/USERNAME
sudo chmod 0755 /etc/ssh/USERNAME
sudo cp /home/USERNAME/.ssh/authorized_keys /etc/ssh/USERNAME
sudo chmod 644 /etc/ssh/USERNAME/authorized_keys

sudo nano /etc/ssh/sshd_config

AuthorizedKeysFile /etc/ssh/%u/authorized_keys

sudo service ssh restart
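
The %u token expands to the login user name. To verify that sshd picked up the change (sshd -T prints the effective configuration):

sudo sshd -T | grep -i authorizedkeysfile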

Option 2: Put authorized_keys in the Unencrypted Underlay of the Home Directory

/sbin/umount.ecryptfs_private      # unmount the encrypted view of $HOME
cd $HOME
chmod 700 .
mkdir -m 700 .ssh
chmod 500 .
echo $YOUR_REAL_PUBLIC_KEY > .ssh/authorized_keys
/sbin/mount.ecryptfs_private       # remount the encrypted home


Rsync Common Examples

rsync -avz /home/ root@REMOTEHOST:/home/     # REMOTEHOST is a placeholder
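
A few more patterns that come up often (paths and host names are placeholders):

rsync -avz --delete /home/ root@REMOTEHOST:/home/          # mirror: also remove files deleted at the source
rsync -avz -e "ssh -p 2222" /home/ root@REMOTEHOST:/home/  # remote sshd on a non-standard port
rsync -avzn /home/ root@REMOTEHOST:/home/                  # dry run: show what would transfer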

Cluster Recovery

If you have major problems with your Proxmox VE host, e.g. hardware issues, it can be helpful to copy the pmxcfs database file /var/lib/pve-cluster/config.db to a new Proxmox VE host. On the new host (with nothing running), stop the pve-cluster service and replace its config.db file (required permissions: 600). Then adapt /etc/hostname and /etc/hosts to match the lost Proxmox VE host, reboot, and check. (And don't forget your VM/CT data.)
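
A minimal sketch of those steps, assuming the old config.db was saved to /root/config.db:

service pve-cluster stop
cp /root/config.db /var/lib/pve-cluster/config.db
chmod 600 /var/lib/pve-cluster/config.db
nano /etc/hostname /etc/hosts     # match the lost host's name and addresses
reboot
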
Remove Cluster configuration

The recommended way is to reinstall the node after you removed it from your cluster. This makes sure that all secret cluster/ssh keys and any shared configuration data is destroyed.
In some cases, you might prefer to put a node back into local mode without reinstalling, which is described here:
stop the cluster file system in /etc/pve/
# service pve-cluster stop
start it again but forcing local mode
# pmxcfs -l
remove the cluster config
# rm /etc/pve/cluster.conf
# rm /etc/cluster/cluster.conf
# rm /var/lib/pve-cluster/corosync.authkey
stop the cluster file system again
# service pve-cluster stop
restart pve services (or reboot)
# service pve-cluster start
# service pvedaemon restart
# service pveproxy restart
# service pvestatd restart

Static/Dynamic IP Ubuntu

sudo nano /etc/network/interfaces


For DHCP:

auto eth0
iface eth0 inet dhcp

For a static address (example values; substitute your own network):

auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
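
Apply the change (on releases using ifupdown; a reboot also works):

sudo ifdown eth0 && sudo ifup eth0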

Windows 8.1 Accessible Samba Shares

sudo useradd -s /bin/true SAMBAUSERNAME     # no shell login; Samba access only
sudo smbpasswd -L -a SAMBAUSERNAME          # add the user locally and set a Samba password
sudo smbpasswd -L -e SAMBAUSERNAME          # enable the user
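
A minimal share definition for /etc/samba/smb.conf (a sketch; the share name and path are placeholders, and smbd needs a restart afterwards):

[myshare]
   path = /srv/myshare
   valid users = SAMBAUSERNAME
   read only = no

sudo service smbd restart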

Add Ubuntu User

sudo adduser NEWUSERNAME
sudo gpasswd -a NEWUSERNAME sudo     # add the new user to the sudo group

Install Webmin

sudo nano /etc/apt/sources.list

Add these lines:

deb https://download.webmin.com/download/repository sarge contrib
deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib

CTRL+X to exit and then type Y to save

wget http://www.webmin.com/jcameron-key.asc
sudo apt-key add jcameron-key.asc
sudo apt-get update
sudo apt-get install webmin


Editing the CEPH CRUSH Map

ceph osd getcrushmap -o crushmap1             # dump the compiled map
cp crushmap1 crushmap_org                     # keep an untouched backup
crushtool -d crushmap1 -o crushmap1.txt       # decompile to editable text
nano crushmap1.txt
crushtool -c crushmap1.txt -o crushmap_new    # recompile
ceph osd setcrushmap -i crushmap_new          # inject the new map
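
For orientation, a replicated rule in the decompiled crushmap1.txt looks roughly like this (hammer-era syntax; names vary per cluster):

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}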


High Availability (HA): Reset rgmanager

Rejoin the fence group and check its status:

fence_tool join
fence_tool ls

After an HA event, you need to re-enable rgmanager to allow management of the VMs, such as moving them from one computer to another:
/etc/init.d/rgmanager start

If you are going to reboot the Proxmox server for kernel updates, first stop rgmanager to prevent a fencing event and a power cut-off from the APC PDU:

/etc/init.d/rgmanager stop
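
To check what the cluster stack thinks is running (clustat ships with the rgmanager/Red Hat cluster packages used by PVE 3.x):

clustat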



Proxmox from USB:

1) Use Rufus to copy the ISO to a USB drive
2) Copy the actual *.iso file to the root of your USB drive so you can mount it later (I renamed mine proxmox.iso to make it easy)
3) Boot from USB
4) Type ‘debug’ before pressing Enter on the Proxmox boot screen
5) At the command prompt, mount the ‘proxmox.iso’ file you copied to the drive earlier by doing the following:

# fdisk -l

Find out what the path to your USB stick is (look at the sizes in GB). In my case it is /dev/sdg1 for the partition on my stick.

# mount /dev/sdg1 /mnt
# mount -o loop -t iso9660 /mnt/proxmox.iso /mnt
# cd /mnt
# chroot /mnt sbin/unconfigured.sh

(unconfigured.sh is the installer's start script on the ISO.)

6) The Proxmox install should start and be lightning fast compared to a CD if you are using a fast USB 3.0 drive.



Create Proxmox Cluster

1) On ONE machine only, type:

pvecm create <clustername>

2) On the other machines, tell them to join the original:

pvecm add <original machine ip>

3) See the cluster status:

pvecm status

Update Proxmox Using Free GPL Code With No Subscription


nano /etc/apt/sources.list.d/pve-enterprise.list

Comment out with a #

#deb https://enterprise.proxmox.com/debian wheezy pve-enterprise

CTRL+X to exit and then type Y to save

nano /etc/apt/sources.list

Erase everything and add these lines:

deb http://ftp.debian.org/debian jessie main contrib

# PVE pve-no-subscription repository provided by proxmox.com, NOT recommended for production use
deb http://download.proxmox.com/debian jessie pve-no-subscription

# security updates
deb http://security.debian.org jessie/updates main contrib

CTRL+X to exit and then type Y to save
Repeat this on all your nodes.
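
Then pull in the updates:

apt-get update
apt-get dist-upgrade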

Remove Subscription Nag from Proxmox

Backup the file

cp /usr/share/pve-manager/ext4/pvemanagerlib.js /usr/share/pve-manager/ext4/pvemanagerlib.js_bak
nano /usr/share/pve-manager/ext4/pvemanagerlib.js

CTRL+W and search for: data.status

You will find a line that looks like:

if (data.status !== 'Active') {

change it to:

if (false) {

CTRL+X to exit and then type Y to save
Repeat this on all your nodes.
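
The same edit as a one-liner, if you prefer (a sketch; back up the file first as above, and run it from a script or after set +H, since “!” inside double quotes can trigger history expansion in an interactive shell). Note that pve-manager package updates rewrite this file, so the change must be re-applied after upgrades:

sed -i "s/data.status !== 'Active'/false/" /usr/share/pve-manager/ext4/pvemanagerlib.js
service pveproxy restart     # then force-reload the page in your browser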

Install Ceph on Proxmox

First, make sure you set up a second subnet so the Ceph cluster's private replication traffic is separate from the public traffic going out to the VMs (e.g. one /24 subnet for each).

-Run on all nodes:

pveceph install -version hammer

-Run on ONE node

pveceph init --network <public subnet CIDR>
nano /etc/pve/ceph.conf

Change the public AND the cluster (private) network entries to your public subnet until you create the monitors; you will switch the cluster network back to the private subnet later on.

pveceph createmon

Then you can create more monitors from the proxmox web gui. Make sure you have at least three.
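
Once the monitors are up, you can check quorum from any node:

ceph mon stat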

nano /etc/pve/ceph.conf

After the monitors are created, change the cluster (private) network back to your private subnet and leave the public one alone this time.
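
The relevant lines in /etc/pve/ceph.conf end up looking something like this (the subnets here are examples):

[global]
         public network = 192.168.1.0/24
         cluster network = 10.10.10.0/24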

Setup OSDs for Ceph with BTRFS on Proxmox

Prepare the journal SSD disk (the mkpart commands below run inside the parted prompt, and named partitions require a GPT label, so run mklabel gpt first on a blank disk; the sizes are examples giving three 15 GB journal partitions):

parted /dev/SSD
mkpart journal-2 1 15G
mkpart journal-3 15G 30G
mkpart journal-4 30G 45G

To clear the disk if it is already partitioned:

ceph-disk zap /dev/sdX

Setup the osd:

pveceph createosd /dev/sdX -fstype btrfs
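
To put the OSD's journal on one of the SSD partitions prepared above (a sketch: -journal_dev was the pveceph option in this era, and the device names are assumptions; here the data disk is /dev/sdb and the journal partition is /dev/sda2):

pveceph createosd /dev/sdb -journal_dev /dev/sda2 -fstype btrfs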

Connect Proxmox to Ceph cluster RBD

Run on any Proxmox node:

# mkdir /etc/pve/priv/ceph 
# cd /etc/pve/priv/ceph
# scp <ceph-admin>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<storageID>.keyring

Note that storageID is the name of the storage we are going to create through the Proxmox GUI. We are going to use cephrbd01 as the Proxmox RBD storage name. 
If your storage name is cephrbd01, then you would fill out the RBD storage info like:

ID: cephrbd01
Pool: <ceph pool name>
Monitor Host: <mon1 ip>:6789;<mon2 ip>:6789;<mon3 ip>:6789
User name: admin
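
The resulting entry in /etc/pve/storage.cfg looks something like this (the monitor IPs are examples):

rbd: cephrbd01
        monhost 192.168.1.201:6789;192.168.1.202:6789;192.168.1.203:6789
        pool rbd
        username admin
        content images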

Remove offending SSH Keys

(you have a new machine with the same IP as an old machine, or you rebuilt one)

Find out which line the stale key is on by looking at the warning message when you try to connect via SSH.

Put that line number between the quotes, keeping the ‘d’ (delete) after it:

sed -i '6d' ~/.ssh/known_hosts
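
Equivalently, you can let OpenSSH find and remove the entry by host name or IP:

ssh-keygen -R HOSTNAME_OR_IP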

Fix Grub Options for incompatible graphics

nano /etc/default/grub
Uncomment the GRUB_TERMINAL=console line (under “# Uncomment to disable graphical terminal”) so GRUB uses a plain text console instead of graphical video.
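
A sketch of the relevant lines, plus the command that makes the change take effect (GRUB_TERMINAL is a standard /etc/default/grub option):

# Uncomment to disable graphical terminal (grub-pc only)
GRUB_TERMINAL=console

update-grub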
