OnApp is a cloud management software suite that makes it easy to deploy and manage your own cloud infrastructure. It provides a simple web-based GUI on top of proven virtualization technologies such as Xen, KVM and VMware. Though OnApp does have thorough documentation, it can be a bit confusing at times. This tutorial consolidates the necessary information into one complete workflow to help you deploy your new cloud infrastructure.

The first thing you need to do is install the base operating system on your controller server, hypervisors, and backup server (if you want to configure one). For this tutorial we will be creating Xen hypervisors and installing CentOS 5.9 on all servers. It is possible to install CentOS 6.x on the controller server; however, I will be sticking with CentOS 5.9 for consistency. So go ahead and download CentOS 5.9 from the link below:

Network Configuration

Now that you have installed your base operating systems, it is worth considering how you want to set up your network. OnApp recommends using 4 separate networks: storage, management, provisioning and appliance. The storage, management, and provisioning networks can theoretically be bundled together; however, the appliance network needs to be on a separate interface so that its interface can be left with a blank configuration.

Storage network

This network provides the connection between the SAN and the hypervisors. It handles the iSCSI, Fibre Channel or ATAoE traffic. It must be a local network.

  • OnApp recommends this network runs at least 1Gbit; however, you may want to consider 10Gbit to achieve maximum performance.
  • To achieve better performance and redundancy over 1Gbit you should consider NIC
    teaming/bonding and LACP or MPIO over multiple subnets.
  • It is important that the switch connecting the hypervisors to the SAN supports jumbo frames:
    the storage network on the hypervisors and the SAN(s) must have MTU set to 9000 (a sample ifcfg snippet follows this list).
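
As an illustration of the jumbo-frame requirement, a storage interface config on a CentOS 5 hypervisor might look like the following. The interface name and addressing here are placeholders of my own; adjust them for your storage subnet:

/etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
BOOTPROTO=static
IPADDR=10.0.10.11
NETMASK=255.255.255.0
MTU=9000
ONBOOT=yes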

Management network

This network connects the various elements of the cloud together. OnApp Cloud uses this network for internal communication between the Control Panel server and the hypervisors, so its stability is crucial to the cloud.

  • This network should be Internet-facing to enable remote access to the server for installation and ongoing maintenance. We recommend firewalling it down to your sysadmin team once the installation is complete, allowing only access to the OnApp web interface on port 80/http or 443/https.
  • It is also possible to use a local network here if you have the necessary NAT rules in place on
    your network to get in and out.
  • We recommend this network runs at least 1Gbit.
  • The IP addresses that are assigned to each node should not overlap with the IP addresses range for public networking: see the Appliance Network section for more information.

If your management network is behind a firewall, please make sure that ports 22/80/5555/30000-40000 are open to the world for the Control Panel server, and port 22 for all other servers.
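
For example, a minimal set of iptables rules for the Control Panel server could look like this (a sketch only; the rule style mirrors the iSCSI rule used later in this tutorial, so adapt it to your existing firewall policy):

# iptables -I INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
# iptables -I INPUT -p tcp -m tcp --dport 5555 -j ACCEPT
# iptables -I INPUT -p tcp -m tcp --dport 30000:40000 -j ACCEPT
# service iptables save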

Provisioning network

This network connects the backup SAN to the Control Panel and the hypervisors. It is used for the OnApp backup service that backs up VMs.

  • This network is recommended for optimum performance within the cloud, but is optional:
    backups can also be performed over the management network if required.
  • We recommend this network runs at least 1Gbit.

Appliance network

The appliance network interface is used for VM networking only. It is assigned a pool of external IP addresses, which OnApp Cloud then assigns to VMs as they are provisioned.
It is important to understand that this interface will not provide the actual hypervisor OS installation with an Internet connection, since the public interface is managed fully by OnApp Cloud and so by default requires a blank config – for example:

/etc/sysconfig/network-scripts/ifcfg-ethX
ONBOOT=no
BOOTPROTO=none

OnApp Cloud will bridge this port and assign virtual interfaces to it as VMs are provisioned and/or
additional network interfaces are added to VMs from the Web UI, or via the OnApp API.
You will also need an Internet-facing IP address range for use on the public interfaces. Addresses in this range will be allocated to Virtual Machines you create within OnApp.
Note that all hypervisors will have this public interface, so the IP address range must be portable
between hypervisors.

Storage Server Configuration

This tutorial is based on using a separate storage server that will be set up as an iSCSI target. If you want to set up OnApp Storage, which utilises the hypervisors' local disks for storage, please see the official OnApp documentation:

http://cdn.onapp.com/files/docs/onapp_storage_beta_preparation_guide.pdf

iSCSI is an IP-based storage networking standard for linking data storage facilities. It is a Storage Area Network (SAN) protocol which allows you to create a centralized storage server while providing hosts (such as database or web servers) with the illusion of locally attached disks. iSCSI can operate over long distances using existing network infrastructure.

I recommend creating 2TB LUNs. It is possible to create larger volumes; however, this presents increased risk and can be slower. More LUNs can mean more IO paths, so several smaller LUNs can actually perform faster than a single very large LUN. OnApp recommends RAID 10 for the storage node for its superior performance and redundancy, but you can get away with RAID 5 or 6 if you are more cost conscious, though it will adversely affect performance.
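
As a rough sketch of carving out a 2TB volume to export (assuming /dev/sdc is the array presented by your RAID controller; device names and sizes will differ on your system):

# parted -s /dev/sdc mklabel gpt
# parted -s /dev/sdc mkpart primary 0 2000GB
# partprobe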

Setting up the iSCSI target (host)

First up a few definitions to get us started (the iSCSI terminology can be a little confusing):

iSCSI Target is the iSCSI storage host
iSCSI Initiator is the client that connects to the storage

For ease of use and consistency, I will be setting up the iSCSI target on a CentOS 5.9 installation, since this is the same distro required for the other components of OnApp. There are other, more storage-oriented operating systems out there, such as Solaris and Open-E, that you might also want to check out.

Now, on the CentOS server where the storage is located, run the following command:

# yum -y install scsi-target-utils

Start tgtd

To start the tgtd, enter:

# /etc/init.d/tgtd start

Define the iSCSI target name:

iSCSI Qualified Name (IQN) is defined as

  • literal iqn
  • date (yyyy-mm) that the naming authority took ownership of the domain
  • reversed domain name of the authority (org.giving, com.example, fm.di)
  • Optional “:” prefixing a storage target name specified by the naming authority.
Example IQNs:

    iqn.2013-01.com.example:storage:diskarrays-sn-a8675309
    iqn.2013-01.com.example
    iqn.2013-01.com.example:storage.tape1.sys1.xyz
    iqn.2013-01.com.example:storage.disk2.sys1.xyz

Create the target:

# tgtadm --lld iscsi --op new --mode target --tid=1 --targetname iqn.2013-01.com.example:storage.disk1.sys1.xyz

View the current configuration:

# tgtadm --lld iscsi --op show --mode target

Sample Output:

Target 1: iqn.2013-01.com.example:storage.disk1.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Online: No
            Poweron/Reset: Yes
            Removable media: No
            Backing store: No backing store
    Account information:
    ACL information:

Add a logical unit to the target (where /dev/sdc1 is the target volume):

# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdc1

To enable the target to accept any initiators, enter:

# tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
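
If you would rather not accept connections from any initiator, you can instead bind a specific initiator address (192.168.0.20 here is just a placeholder for a hypervisor's storage IP):

# tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.0.20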

iSCSI communicates over port #3260, so enter the following command to open network port #3260:

# iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT
# service iptables save
# service iptables restart

Make sure tgtd starts on reboot:

# chkconfig tgtd on

Now, the only way I could get the above configuration to persist between reboots was to add the following lines to /etc/rc.local, which is run at startup.

tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2013-01.com.example:storage.disk1.sys1.xyz
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdc1
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
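
Depending on the version of scsi-target-utils you end up with, you may also be able to make the target persistent by declaring it in /etc/tgt/targets.conf (this is a sketch of the tgt-admin syntax; check that your build actually ships this file before relying on it):

/etc/tgt/targets.conf
<target iqn.2013-01.com.example:storage.disk1.sys1.xyz>
    backing-store /dev/sdc1
</target>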

For reference, if you want to delete a specific target enter the following command:

# tgtadm --lld iscsi --op delete --mode target --tid=1

Setting up the iSCSI initiators (clients)

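If the initiator tools are not already installed on the client, install them and discover the target first. A quick sketch, using the same portal IP as the login example below:

# yum -y install iscsi-initiator-utils
# service iscsi start
# iscsiadm --mode discovery --type sendtargets --portal 192.168.0.5:3260
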
Login to the iscsi target session

# iscsiadm --mode node --targetname iqn.2013-01.com.example:storage.disk1.sys1.xyz --portal 192.168.0.5:3260 --login

where 192.168.0.5 is the IP address of your iSCSI target (host).

You may need to restart iSCSI to probe the partitions and check the disks:

# service iscsi restart
# partprobe
# fdisk -l

To log out of an iSCSI target, run the following command:

# iscsiadm -m node -T iqn.2013-01.com.example:storage.disk1.sys1.xyz -p 192.168.0.5 -u

Configuring the Controller Server

Now that you have a fresh copy of CentOS 5.x installed on your controller server, it's time to install the OnApp software.

First, make sure your server is updated by running the following command:

# yum -y update

Download the OnApp YUM repository file

# rpm -Uvh http://rpm.repo.onapp.com/repo/centos/5/onapp-repo.noarch.rpm

Download the OnApp 3.0.0 repository config file

# wget -N http://rpm.repo.onapp.com/repo/centos/5/OnApp-3.0.0.repo -P /etc/yum.repos.d/

Remove all cached files from any enabled repositories by issuing the following command:

# yum clean all

Install OnApp Control Panel installer package

# yum install onapp-cp-install

Custom Control Panel configuration:

Edit the /onapp/onapp-cp.conf file to set Control Panel custom values, such as:

  • OnApp to MySQL database connection data: connection timeout, pool, encoding, unix socket
  • MySQL server configuration data (if MySQL is running on the same server as the CP): wait timeout, maximum number of connections
  • The maximum number of requests queued to a listen socket (net.core.somaxconn value for sysctl.conf)
  • The root of OnApp database backups directory (temporary directory on the CP box where MySQL backups are placed)

# vi /onapp/onapp-cp.conf

Run Control Panel installer

sudo /onapp/onapp-cp-install/onapp-cp-install.sh

Install OnApp license to activate the Control Panel

Log in with admin as the login and changeme as the password.
Enter a valid license key via the Web UI (you’ll be prompted to do so).

Installing the Backup Server

Official OnApp documentation: https://onappdev.atlassian.net/wiki/display/3GetStarted/Backup+Server+installation

Add a backup server to the web UI

  • Log into your Control Panel.
  • Go to the Settings menu and click the Backup Servers icon.
  • Click the Add New Backup Server button.
  • Fill in the form that appears:
    • Give your backup server a label.
    • Enter the backup server IP address (IPv4).
    • Set the backup server capacity (in GB).
  • Tick the Enabled box to enable the backup server.
  • Click the Add Backup Server button to finish


Download the OnApp repository

# wget http://rpm.repo.onapp.com/repo/centos/5/onapp-repo.noarch.rpm
# rpm -Uvh onapp-repo.noarch.rpm

Download the OnApp 3.0.0 repository config file

# wget -N http://rpm.repo.onapp.com/repo/centos/5/OnApp-3.0.0.repo -P /etc/yum.repos.d/
# yum clean all

Install the OnApp Backup Server installer package

# yum install onapp-bk-install

Check and set Backup Server default settings

Edit the Backup Server default settings (such as the template and backup directories, and the NTP server) by editing the /onapp/onapp-bk.conf file:

# vim /onapp/onapp-bk.conf

Run the installer:

# sh /onapp/onapp-bk-install/onapp-bk-install.sh

To get information about the installer and its options, such as package updates, template downloads and non-interactive mode, run the installer with the '-h' option:

# /onapp/onapp-bk-install/onapp-bk-install.sh -h

Usage: /onapp/onapp-bk-install/onapp-bk-install.sh [-c CONFIG_FILE] [-a] [-y] [-t] [-h]

Options
-c CONFIG_FILE Custom installer configuration file. Otherwise, the preinstalled one is used.
-a Non-interactive mode. Automatic installation process.
-y Update all packages on the box with ‘yum update’. The update will be processed if the -a option is used.
-t Download of Base, Load Balancer and CDN templates. The download is initiated if ‘-a’ option is used.
-h Print this info.

Use the -y option carefully, as it updates all packages on the box with 'yum update'.
OnApp recommends downloading the Base, Load Balancer and CDN templates while running the installer. You may rerun the installer later with the -t option.
The -a option switches the installer into non-interactive mode (no questions will be asked). The package update (-y) and template download (-t) are only processed when -a is used.

Set up the iSCSI initiator (see above)

Configure the data store

# pvcreate --metadatasize=50M /dev/sdb -ff
# vgcreate onapp-a0c462om5xj5tp /dev/sdb

Here /dev/sdb is the block device to use and a0c462om5xj5tp is the data store identifier taken from the Control Panel (see the Data store installation section below).

Edit /etc/sysconfig/iptables and add the following lines to open port 8080:

-A RH-Firewall-1-INPUT -p udp -m udp --dport 8080 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
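
Then reload the firewall so the new rules take effect (assuming the stock CentOS iptables service):

# service iptables restart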

Add your backup server on the Control Panel Server

Login to your Control Panel Server:
Settings → Configuration → Backups/Templates → enter the template and backup paths that you set in /onapp/onapp-bk.conf above → enter the backup server's IP in the 'SSH file transfer server' field.

Hypervisor Installation

The following steps assume you have completed the base installation of CentOS 5.x.

Add the hypervisor to your cloud using the OnApp Control Panel

Settings → Hypervisors → Add New Hypervisor
Make sure the hypervisor is visible in the Control Panel, and at this point showing as inactive.

Make sure your OS is up to date

# yum -y update

Enable IPv6:

This step is required regardless of whether you’ll be using IPv6 or not.

Edit /etc/modprobe.conf and comment out the following lines:

alias ipv6 off
options ipv6 disable=1
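
After commenting them out, the two entries in /etc/modprobe.conf should look like this:

# alias ipv6 off
# options ipv6 disable=1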

Next, edit /etc/sysconfig/network and replace

NETWORKING_IPV6=no

with

NETWORKING_IPV6=yes

These settings won’t take effect until you reboot, but do not reboot now. We’ll do that later.

Applying a blank config to the appliance interface (eth1)

Now set the appliance network interface (used for VM networking, eth1 in this example) to have a blank config by editing /etc/sysconfig/network-scripts/ifcfg-ethX (where X is the number of your network interface) and entering the following:

ONBOOT=no
BOOTPROTO=none

Download the OnApp repository

# wget http://rpm.repo.onapp.com/repo/centos/5/onapp-repo.noarch.rpm
# rpm -Uvh onapp-repo.noarch.rpm
# yum clean all

Install the OnApp hypervisor installer package

# yum install onapp-hv-install

Edit custom hypervisor configuration

Edit the /onapp/onapp-hv.conf file to set hypervisor custom values, such as NTP time sync server, Xen Dom0 memory configuration data and number of loopback interfaces:

# vi /onapp/onapp-hv.conf

Custom values must be set before the installer script runs.

Run the OnApp hypervisor installer script

# /onapp/onapp-hv-install/onapp-hv-xen-install.sh

The installer supports the following options:

Usage: /onapp/onapp-hv-install/onapp-hv-xen-install.sh [-c CONFIG_FILE] [-a] [-y] [-o] [-t] [-h]
Options
-c CONFIG_FILE Custom installer configuration file. Otherwise, the preinstalled one is used.
-a Non-interactive mode. Automatic installation process.
-y Update all packages on the box with ‘yum update’. The update will be processed if the -a option is used.
-o Xen + Open vSwitch installation
-t Download recovery templates and ISO(s) used to provision FreeBSD guests.
-h Print this info.

Configure the hypervisor for your cloud

# /onapp/onapp-hv-install/onapp-hv-config.sh -h <CP_HOST_IP> -p <HV_HOST_IP> -f <FILE_TRANSFER_SERVER_IP>

<CP_HOST_IP> is the IP address of the Control Panel server.
<HV_HOST_IP> is the IP address of the hypervisor.
<FILE_TRANSFER_SERVER_IP> is the IP address of the server that will hold your backups and templates.
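
For example, a filled-in invocation might look like the following (all three addresses are placeholders for your own management network IPs):

# /onapp/onapp-hv-install/onapp-hv-config.sh -h 192.168.0.10 -p 192.168.0.20 -f 192.168.0.30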

Update necessary sysctl variables

Edit your /etc/sysctl.conf file. If the net.ipv4.netfilter.ip_conntrack_max entry exists, update its value; if it doesn't exist, add the following line to /etc/sysctl.conf:

net.ipv4.netfilter.ip_conntrack_max = 256000
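
You can load the new value immediately with sysctl (if the conntrack module is not yet loaded this may report an unknown key, but the reboot in the next step will apply it in any case):

# sysctl -p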

Reboot the hypervisor to complete the installation

# sudo shutdown -r now

Generate SSH keys

OnApp Control Panel Server uses SSH keys to authenticate with the hypervisors and backup server(s) in the cloud. The following commands need to be performed on the Control Panel server. Be ready to enter the passwords for your hypervisors and backup server.

Login to the Control Panel Server

wget http://downloads.repo.onapp.com/install-all-keys.sh
/bin/sh install-all-keys.sh
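
To confirm that key-based authentication is working, try SSHing from the Control Panel server to a hypervisor; it should log in without prompting for a password (the IP below is a placeholder):

# ssh root@192.168.0.20 uptime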

Data store installation

PLEASE NOTE:

    • To configure an integrated storage datastore, please consult the Admin guide.
    • This process assumes you have already configured the hypervisors to see the iSCSI/ATAoE block device they are connecting to, and that the SAN disk is shown when running fdisk -l.
    • All hypervisors need access to the same datastore. Ensure that you have the block device visible on all hypervisors.
    • VERY IMPORTANT: only perform this procedure once per data store!
    • ALSO IMPORTANT: take care when choosing the disk/partition you wish to use for storing VM data!

Add the new data store to OnApp via the WebUI

To create a data store:

  • Go to your Control Panel's Settings menu.
  • Click the Data Stores icon.
  • Click the Create Data Store link at the bottom of the screen.
  • On the screen that appears:
    • Enter a label and IP address for your data store.
    • Move the slider to the right to enable a data store. When disabled, OnApp will not allow new disks to be created automatically on that data store. This is useful to prevent an established data store from becoming too full. It also lets you prevent the automatic creation of root disks on ‘special’ data stores (high speed, etc).
    • Click Next.
    • Set disk capacity in GB.
    • If required, you can also bind the data store with a local hypervisor. This is helpful if you wish that the data store and a hypervisor were located on the same physical server thus decreasing the time needed for a hypervisor-data store connection.
    • If required, you can also assign the data store to a data store zone. The drop-down menu lists all data store zones set up in the cloud (to add or edit data store zones, see the section on Data store zones in the Settings section of this guide)
    • Select the lvm data store type.
  • When you’ve finished configuring the store, click the Create Data Store button.
  • To use the data store, you have to assign it either to a hypervisor or a hypervisor zone.

Find the data store's unique identifier

This will be needed to create your volume group below.

Read the IDENTIFIER from the data stores screen: http://xxx.xxx.xxx.xxx/settings/data_stores

Next we will create a physical volume (PV) and a volume group (VG), but what does this mean? PVs are the physical hard drives that are accessible on your system (/dev/sda1 or /dev/sdb3, for example). On these PVs you create one or more volume groups (VGs), and in each volume group you can create one or more logical volumes (LVs). Logical volumes can exceed the size of a single physical volume when multiple PVs are used.

Create the physical volume

SSH into a hypervisor that is able to connect to this datastore. Create the physical volume:

# pvcreate --metadatasize 50M /dev/xxx

Replace xxx with the real device.

Create the volume group

# vgcreate onapp-IDENTIFIER /dev/xxx

Replace xxx with the real device and IDENTIFIER with the info from the datastore page in the UI.

Test hypervisor/volume group visibility

Now that you have the new datastore formatted, you should be able to see the volume group from all hypervisors. To test this, run pvscan and vgscan on all hypervisors (as shown below). Make sure you can see all identifiers on all hypervisors.
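
On each hypervisor, this is simply:

# pvscan
# vgscan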

Create a new data store zone

  • Go to your Control Panel's Settings menu and click the Data store zones icon.
  • Click the Add New Data store zone button.
  • On the screen that follows, give your data store zone a name (label) and then click the Save button.

Add data stores to the data store zone by clicking the + button corresponding with the data store you wish to add.

Downloading individual templates

You can download individual templates by logging into the server on which you will store your templates: if you have set up a backup server, your templates will live there; otherwise templates will be stored on the Control Panel server.

You can browse http://templates.repo.onapp.com/Linux/ to see all the templates that are available. Note that free versions of OnApp only allow you to install a small number of "base" templates.

Log in to your 'template' server and run the following command:

# wget http://templates.repo.onapp.com/Linux/centos-6.3-x64-1.4-xen.kvm.kvm_virtio.tar.gz

Control Panel cloud configuration

Create hypervisors and hypervisor zones

    1. Create a new hypervisor zone:
      • Go to your Control Panel’s Settings menu and click the Hypervisor Zones icon.
      • Click the Add New Hypervisor Zone button.
      • On the screen that follows, give your hypervisor zone a name (label).
      • Make sure that the disable failover option is selected.
      • Click the Save button to finish.
    2. Add your new hypervisor to the control panel:
      • Go to your Control Panel’s Settings menu and click the Hypervisors icon.
      • Click the Add New Hypervisor button and fill in the form on the screen that appears:
        • The hypervisor’s IP address should be its IP on the management network
        • Make sure that “disable failover” is selected
        • Make sure that you select the “Enable” option
      • Click the Add Hypervisor button to finish. You can view the hypervisor under the main Hypervisors menu.
    3. Add that hypervisor to your new hypervisor zone:
      • Go to your Control Panel’s Settings menu and click the Hypervisor Zones icon
      • Click the label of the zone you want to add a hypervisor to.
      • The screen that appears will show you all hypervisors in the cloud, organized into two lists – those assigned to the zone already, and those that are unassigned.
      • In the unassigned list, find the hypervisor you want to add to the zone, and click the Add icon next to it.
Create networks and network zones

      1. Create a new network zone
        • Go to your Control Panel’s Settings menu and click the Network zones icon.
        • Click the Add New Network zone button.
        • On the screen that follows, give your network zone a name (label) and then click the Save button.
      2. Create a new network
        • Go to your Control Panel’s Settings menu and click the Networks icon.
        • Click the Add New Network button at the end of the list.
        • On the screen that follows, give the new network a name (label), a VLAN number, and assign it to a network zone if required.
        • Click the Add Network button to finish.

The network label is simply your choice of a human-readable name – "public", "external", "1Gb", "10Gb" etc.

The VLAN field only needs to be given a value if you are tagging the IP addresses you will add to this network with a VLAN ID (IEEE 802.1Q). If you plan to tag IP addresses in this way, you need to make sure the link to the public interface on the hypervisors is a trunked network port. If you are not VLAN tagging addresses, this field can be left blank and the public port on the hypervisor can be an access port.

    1. Add that network to your new network zone
      • Go to your Control Panel’s Settings menu and click the Network Zones icon.
      • Click the label of the zone you want to add a network to.
      • The screen that appears will show you all networks in the cloud, organized into two lists – those assigned to the zone already, and those that are unassigned.
      • In the unassigned list, find the network you want to add to the zone, and click the Add icon next to it.
    2. Add a range of IP addresses to the new network
      • Go to your Control Panel’s Settings menu.
      • Click the Networks icon: the screen that appears shows every network available in your cloud.
      • Click the name (label) of the network you want to add addresses to. On the screen that follows you’ll see a list of all IP addresses currently assigned to this network.
      • Click the Add New IP Address button at the bottom of the screen, and complete the form that appears:
        • IP Address – add a range of addresses. For example:
          • '192.168.0.2-254' or '192.168.0.2-192.168.0.254' (IPv4), or '2001:db8:8:800:200C:417A-427A' (IPv6).
      • Netmask – for example: '255.255.255.0' (IPv4) or '24' (IPv6).
      • Gateway – enter a single IP to specify a gateway. If you leave this blank the address will be added without a gateway.
      • Don’t use as primary during VM build – If you tick this box, the IP addresses you add will never be assigned as primary IPs. Primary IPs are only allocated to VMs when the VM is built, so with this box ticked, the address range will never be assigned to a newly built VM.
    3. Click the Add New IP Address button to finish.
    4. You can add up to 1,000 IP addresses at once. To add more than 1,000 addresses, repeat the procedure again.
    5. Join datastores to hypervisors
      • Go to your Control Panel’s Settings menu and click the Hypervisors icon.
      • Click the label of the hypervisor you want to manage data stores for.
      • On the screen that appears, click the Manage Data Stores link in the Actions section.
      • On the screen that follows, you’ll see a list of all data stores currently associated with this hypervisor:
        • To add a data store join, choose a data store from the drop-down menu and click the Add Data Store button.
        • To remove a data store join, click the Delete icon next to it. You’ll be asked for confirmation before the store is removed.
    6. Join networks to hypervisors
      • Go to your Control Panel’s Settings menu and click the Hypervisors icon.
      • Click the label of the hypervisor you want to manage networks for.
      • On the screen that appears, click the Manage Networks link in the Actions section.
      • On the screen that follows, you’ll see a list of all networks currently associated with this hypervisor:
        • To add a new network join, choose a network from the drop-down menu, enter its interface name (eth0, eth1) and click the Add Network button.
        • To remove a network join, click the Delete icon next to it. You’ll be asked for confirmation before the network is removed.

Note that when you join the network to a hypervisor you must specify the relevant NIC: this should be a dedicated NIC with a blank config that is patched to route the network in question.

That's it! You should now be able to create VMs. One last thing, though: after creating VMs I noticed that I could ping them, but they had no internet access. On investigation it turned out that the VMs had no DNS servers configured. The solution was to add Google's public DNS servers by adding the following lines to /etc/resolv.conf:

nameserver 8.8.8.8
nameserver 8.8.4.4
