CentOS 7 - Install Kernel-based Virtual Machine (KVM)


  1. Our server setup.

  2. KVM installation and preparation.

The first thing to do is to check whether the host machine supports VM extensions. On the x86 platform, those are either AMD-V or Intel's VT-x. To check whether the installed CPUs support these extensions, look for the vmx (for VT-x) or svm (for AMD-V) flag in the cpuinfo output:
[root@node1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
2

When the output is 0, meaning that neither vmx nor svm is found in the flags, your CPU most likely doesn't support those extensions and there is little you can do. When the extensions are listed, be sure to check that they are enabled in the system's BIOS, since a disabled setting would cause problems later on. If your CPU doesn't support VM extensions, you are limited to plain QEMU software emulation without KVM acceleration, which delivers much worse performance in comparison. For this tutorial, I'll assume that the VM extensions are supported and enabled in the BIOS of the host system.
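The flag check above can also be scripted. A minimal sketch, with the logic in a helper function (vt_check is our own name, not part of any package) so it can be exercised without reading /proc/cpuinfo directly:

```shell
# Classify the virtualization extension from a CPU flags line.
# The flags string is passed as an argument for easy testing.
vt_check() {
  case " $1 " in
    *" vmx "*) echo "Intel VT-x" ;;
    *" svm "*) echo "AMD-V" ;;
    *)         echo "none" ;;
  esac
}

# On a live host, feed it the first flags line of /proc/cpuinfo:
#   vt_check "$(grep -m1 '^flags' /proc/cpuinfo)"
```

Because cpuinfo flags are space-separated, matching on a space-padded token avoids false positives from flags that merely contain "vmx" or "svm" as a substring.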

Install KVM.
[root@node1 ~]# yum -y install kvm qemu-kvm libvirt virt-install virt-manager xauth dejavu-lgc-sans-fonts qemu-kvm-tools

All components are now installed, but before KVM can be used it's a good idea to reboot so that the kvm modules get loaded and the new network settings are applied.
[root@node1 ~]# reboot

After the reboot, we should check that the necessary kernel modules are loaded, which means that KVM can successfully use the VM extensions of our CPU:
[root@node1 ~]# lsmod | grep kvm
kvm_intel 170181 0
kvm 554609 1 kvm_intel
irqbypass 13503 1 kvm
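If you want to perform this check from a script, one option is to scan the lsmod output for the vendor-specific module. A hedged sketch (the kvm_module_loaded helper is our own; it reads lsmod-style text from stdin so the parsing can be tested without a KVM host):

```shell
# Report which vendor KVM module appears in lsmod-style output on stdin:
# "kvm_intel", "kvm_amd", or "none" if neither is loaded.
kvm_module_loaded() {
  awk '$1 == "kvm_intel" || $1 == "kvm_amd" { print $1; found = 1 }
       END { if (!found) print "none" }'
}

# On a live host:
#   lsmod | kvm_module_loaded
```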

You will notice that there are additional bridge interfaces. The virtual network (virbr0) is used for Network Address Translation (NAT), which allows guests to access network services. However, NAT slows things down and is only recommended for desktop installations.
[root@node1 ~]# ip addr
...
16: virbr0: mtu 1500 qdisc noqueue state DOWN qlen 1000
  link/ether 52:**:**:**:35:b9 brd ff:ff:ff:ff:ff:ff
  inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
   valid_lft forever preferred_lft forever
17: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000

To disable virbr0 and virbr0-nic, enter:
[root@node1 ~]# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

[root@node1 ~]# virsh net-destroy default
Network default destroyed

[root@node1 ~]# virsh net-undefine default
Network default has been undefined

[root@node1 ~]# systemctl restart libvirtd

[root@node1 ~]# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
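The removal steps above can be wrapped in one guarded helper so they are safe to re-run. This is a sketch of our own: the remove_default_net function and the VIRSH override hook (which lets the sequence be dry-run with a stub instead of the real binary) are additions, while the virsh commands themselves are exactly those shown above.

```shell
# Allow virsh to be overridden for dry runs; defaults to the real binary.
VIRSH=${VIRSH:-virsh}

# Remove the NAT 'default' network only if it is still defined.
remove_default_net() {
  if $VIRSH net-list --all --name | grep -qx default; then
    $VIRSH net-destroy default || true   # may already be inactive
    $VIRSH net-undefine default
  else
    echo "default network already removed"
  fi
}

# Usage on a live host:
#   remove_default_net
```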

Check whether a default storage pool exists by using the command-line utility virsh, which comes with the libvirt package.
[root@node1 ~]# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     yes

# If we have a default storage pool then we remove it.
[root@node1 ~]# virsh pool-destroy default
Pool default destroyed

[root@node1 ~]# virsh pool-undefine default
Pool default has been undefined

Before we define the new default storage pool, create the directory that will back it.
[root@node1 ~]# mkdir -p /vm/images
[root@node1 ~]# chmod 711 -R /vm
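The 711 mode is deliberate: the owner gets full access, while group and other users (such as the account QEMU runs under) get execute-only permission, which allows traversing the directory to reach image files without being able to list its contents. A small demonstration in a throwaway directory rather than the real /vm:

```shell
# Show what mode 711 looks like, using a temporary directory.
d=$(mktemp -d)
chmod 711 "$d"
stat -c '%a' "$d"   # prints the octal mode: 711 (rwx--x--x)
rmdir "$d"
```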

Define the new directory-backed default storage pool.
[root@node1 ~]# virsh pool-define-as default dir - - - - "/vm/images"
Pool default defined

# List pools
[root@node1 ~]# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              inactive   no

# Start the inactive pool 'default'
[root@node1 ~]# virsh pool-start default
Pool default started

# Autostart the pool 'default'
[root@node1 ~]# virsh pool-autostart default
Pool default marked as autostarted

Create an additional storage pool directory with the name 'storage-1TB'.
[root@node1 ~]# virsh pool-define-as storage-1TB dir - - - - "/vm/storage-1TB"
Pool storage-1TB defined

# List pools
[root@node1 ~]# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     yes
 storage-1TB          inactive   no

# Start the inactive pool 'storage-1TB'
[root@node1 ~]# virsh pool-start storage-1TB
Pool storage-1TB started

# Autostart the pool 'storage-1TB'
[root@node1 ~]# virsh pool-autostart storage-1TB
Pool storage-1TB marked as autostarted

# List pools
[root@node1 ~]# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     yes
 storage-1TB          active     yes
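The define/start/autostart sequence is the same for every directory pool, so it can be wrapped in a single helper. This is our own sketch: the create_dir_pool function and the VIRSH override hook (for stubbing virsh in a dry run) are additions; the commands inside are the ones used above, plus a mkdir -p so the backing directory is guaranteed to exist before the pool is started.

```shell
# Allow virsh to be overridden for dry runs; defaults to the real binary.
VIRSH=${VIRSH:-virsh}

# Define, start, and autostart a directory-backed storage pool.
create_dir_pool() {
  name=$1 path=$2
  mkdir -p "$path"
  $VIRSH pool-define-as "$name" dir - - - - "$path"
  $VIRSH pool-start "$name"
  $VIRSH pool-autostart "$name"
}

# Usage on a live host:
#   create_dir_pool storage-1TB /vm/storage-1TB
```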

By default, virtual disk images for the KVM guests are placed in /var/lib/libvirt. In our case we changed the default storage pool directory, and SELinux will, by default, prevent access to the new location, so its security context needs to be changed before KVM can use it. To change the SELinux context when storing the images in another location (/vm in our case):
[root@node1 ~]# semanage fcontext -a -t virt_image_t "/vm(/.*)?"
[root@node1 ~]# restorecon -R /vm


  3. Check the KVM installation.

Check if we can connect to KVM by asking for a simple list of systems:
[root@node1 ~]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

[root@node1 ~]# 


  4. Optimize your KVM server.

Tuned is a daemon that monitors and collects data on system load and activity. By default, tuned won't dynamically change settings; however, you can modify how the tuned daemon behaves and allow it to adjust settings on the fly based on activity.

On CentOS 7, tuned is installed and activated by default.
[root@node1 ~]# yum list tuned

Installed Packages
tuned.noarch       2.7.1-3.el7_3.1     @updates

Show the active profile using the command tuned-adm.
[root@node1 ~]# tuned-adm active
Current active profile: balanced

Show information about the current profile using the command tuned-adm.
[root@node1 ~]# tuned-adm profile_info
Profile name:
balanced

Profile summary:
General non-specialized tuned profile

Profile description:

Show the profile virtual-host.
[root@node1 ~]# tuned-adm profile_info virtual-host
Profile name:
virtual-host

Profile summary:
Optimize for running KVM guests

Profile description:

Activate the virtual-host profile:
[root@node1 ~]# tuned-adm profile virtual-host
[root@node1 ~]# tuned-adm active
Current active profile: virtual-host
