
Cloud Area Padovana

Users Guide

Edition 1.28

Legal Notice

Copyright (c) 2014 INFN - "Istituto Nazionale di Fisica Nucleare" - Italy
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Abstract

This document explains how to use the services of the Cloud Area Padovana.
It was heavily inspired by the CERN Cloud User Guide
1. Overview of the Cloud Area Padovana
1.1. Projects
1.2. Network access
1.3. Getting help
1.4. Acknowledge Cloud Area Padovana in your paper
2. Registration
2.1. Registration in the Production service
2.1.1. Apply for an account
2.1.2. Apply for other projects
2.1.3. Manage project membership requests (only for project managers)
2.1.4. Manage project members (only for project managers)
3. Getting Started
3.1. Access the Cloud through the Dashboard
3.2. Creating a keypair
3.3. Setting security group(s)
3.4. Setting/changing password
3.5. Switching between projects
3.6. Accessing the Cloud through the Openstack command line tools
3.7. Accessing the Cloud through the euca2ools EC2 command line tools
4. Managing Virtual Machines
4.1. Creating Virtual Machines
4.1.1. Improve reliability: creating Virtual Machines from Volumes
4.2. Accessing Virtual Machines
4.3. Accessing other hosts/services from Virtual Machines
4.4. Allocating a public (floating) IP address to a Virtual Machine
4.5. Flavors
4.6. Stopping and Starting VMs
4.7. Contextualisation
4.8. Resizing Virtual Machines
4.9. Snapshotting Virtual Machines
4.10. Deleting Virtual Machines
5. Managing Containers
5.1. What is Docker?
5.2. How to run Docker Container in the Cloud
5.3. How to Upload Docker Images
6. Managing Storage
6.1. Ephemeral storage
6.2. Volumes
6.2.1. Create a Volume
6.2.2. Using (attaching) a Volume
6.2.3. Detaching a Volume
6.2.4. Deleting a Volume
6.2.5. Sharing a volume between multiple (virtual) machines
6.3. Accessing storage external to the Cloud
7. Orchestration
7.1. Creating a template
7.2. Creating a stack
8. Managing Images
8.1. Public Images
8.2. User Provided Images
8.3. Sharing Images
8.4. Building Images
8.4.1. Enabling INFN Padova LDAP based authentication on the Virtual Machine
8.5. Deleting Images
9. Creating Batch Clusters on the Cloud
9.1. Intro: the idea
9.2. Prerequisites
9.3. The cluster configuration
9.4. Start the elastic cluster
9.5. How slave nodes are installed and configured
9.6. Use the elastic cluster
9.7. Use the elastic cluster to run docker containers
9.8. How to find the EC2 (AMI) id of an image
10. Some basics on Linux administration
10.1. Introduction
10.2. Setting up 'sudo'
10.3. Managing software through the package manager
10.3.1. Managing software on RH distributions
10.3.2. Managing software on DEB distributions
10.4. Adding a user to your VM
10.5. Formatting/resizing a volume you just attached
10.6. Automatically remount volumes on reboot

Chapter 1. Overview of the Cloud Area Padovana

The Cloud Area Padovana is an OpenStack based cloud.
It implements an IaaS (Infrastructure as a Service): it allows the instantiation of Virtual Machines (VMs) with the desired environment (in terms of operating system, installed software, etc.) and the desired flavor (in terms of processors, memory size, etc.).
Even if it is a single, logical Cloud service, its resources are spread in two different locations: INFN Padova and INFN Laboratori Nazionali di Legnaro (LNL).
The Cloud Area Padovana is currently based on the Mitaka version of the OpenStack middleware.

1.1. Projects

Projects (also known as tenants) are organizational units in the cloud. A project is used to map or isolate users and their resources.
Projects are given quotas on resource usage, in terms of virtual machines, cores, memory, storage, etc.
A project can have a single user as member (personal project), but the typical use case is a shared project, with multiple users as members, which can map to experiments, organizations, research groups, etc. A user can be on multiple projects at the same time and switch between them.
In the Cloud Area Padovana projects usually map to experiments or other research groups who "own" resources in the cloud service. Each project has a manager (usually the team leader), who is responsible for managing (accepting or refusing) membership requests for the project.
A special shared project, called guest, is used for all other users: it has a small quota to allow testing the Cloud infrastructure.

Warning

Personal private projects are instead discouraged and are created only when there are convincing reasons.

1.2. Network access

Virtual Machines created in the Cloud Area Padovana are by default “visible” from the Local Area Networks (LANs) of both Padova and LNL. This means that e.g. users can access the Cloud VMs via ssh directly from their desktops located in Padova or Legnaro. It is then possible to control which services/ports can be accessed using the security groups (discussed later) and firewalls on the relevant VMs.
To log on a VM of the Cloud from the Internet, it is necessary to go through a gate machine. For this purpose users can rely on the existing gate hosts in Padova and Legnaro. Users who don't have an account on such gate hosts can rely on a Cloud specific gate host (cld-gate.pd.infn.it): please contact if you need an account on that machine.
If needed, e.g. if a VM should host a service accessible from the Internet, such a VM can be given a public IP. If this is the case please contact .
From a VM of the Cloud it is possible to access the Internet, while by default it is not possible to access a host/service hosted in Padova or Legnaro. If, for some reason, you need to access some services hosted in Padova or Legnaro from the Cloud, please contact .

1.3. Getting help

If you have problems using the Cloud Area Padovana private cloud, please contact the admins at .

Warning

The Cloud Area Padovana support team provides support on the Cloud infrastructure, but doesn't provide support on the virtual machines you instantiated on the Cloud.
The INFN-Padova Computing and Network service can provide support only to INFN-Padova users and only for instances created using "blessed" images (as described in Section 8.1, “Public Images”).
Changes and planned interventions to the service will be posted on the . All users registered to the Cloud are members of this mailing list.

1.4. Acknowledge Cloud Area Padovana in your paper

If you wish to give credit for the use of the Cloud Area Padovana infrastructure in your scientific publication or elsewhere, the following quote can be used:
"Cloud Area Padovana is acknowledged for the use of computing and storage facilities."

Chapter 2. Registration

To be able to use the Cloud Area Padovana service, first of all you need to apply for an account. The procedure to be followed is described in this chapter.

2.1. Registration in the Production service

2.1.1.  Apply for an account

The registration procedure in the production service is managed through the OpenStack Horizon web interface.
Go to https://cloud-areapd.pd.infn.it/dashboard in a browser. The following page should appear:

2.1.1.1.  Apply for an account using INFN-AAI

If you already have an account on the INFN Authentication and Authorization Infrastructure (INFN AAI) and therefore you have access to the INFN portal, from the OpenStack dashboard click on the icon on the left (INFN AAI).
Once logged in, click on the Register button. A form such as the one in the following picture should appear.
Fill the form with the name of your organization (e.g. 'INFN Padova'), your phone number, the name of a contact person in Padova/Legnaro (e.g. your team leader), and if needed some notes.
For what concerns the Project Action (projects have been discussed in Section 1.1, “Projects”) you have three options:
  • Select Existing Projects
  • Create new project
  • Use guest project
Choose Select Existing Projects if you want to apply for membership in one or more existing projects (choose them in the relevant box).
Select Create new project if instead you want to ask for the creation of a new project (and you are the leader of the experiment/research group associated with this project). In this case you will also have to specify a Project name and a Project Description. You will also have to specify whether this project must be private (a personal project where you will be the only member) or not.

Note

Public (i.e. not private) projects are projects where other users can apply for membership. They are supposed to be used for experiments or other research groups.

Warning

Personal private projects are discouraged and are created only when there are convincing reasons.

Note

The person who asks for the creation of a new project is automatically defined as the manager of this project, i.e. he/she will have to manage the membership requests for this project. So the request to create a new project should be done by the relevant experiment/group leader.
Select Use guest project if you want to apply for membership in the guest project, i.e. the project with a small quota used to test the Cloud infrastructure.
When you have filled the form, click on the Read the AUP button. The following window will appear.
Read the AUP that you need to accept (by clicking the Accept button).
Finally click on the Register button and you are done.
Your request will be managed by the Cloud administrator and by the manager(s) of the project(s) for which you applied for membership. You will get an e-mail when your request is approved (and therefore you can start using the Cloud Area Padovana) or if for some reason your request is refused.

2.1.1.2.  Apply for an account using User and Password

If and only if you don't have an account on the INFN Authentication and Authorization Infrastructure (INFN AAI), click on the icon on the right. The following image should appear:
Click then on the Register button. A form such as the one of the following image will appear.
Fill the form with your personal data: First Name, Last Name, Email Address, Organization (e.g. 'INFN Padova') and Phone number. Choose a User name (usually your family name; please note that it could be changed by the Cloud admins during the registration process) and a Password. Specify the name of a contact person in Padova/Legnaro (e.g. your team leader) and, if needed, provide some other info in the 'Notes' field.
For what concerns the Project Action you have three options:
  • Select Existing Projects
  • Create new project
  • Use guest project
Choose Select Existing Projects if you want to apply for membership in one or more existing projects (choose them in the relevant box).
Select Create new project if instead you want to ask for the creation of a new project. In this case you will also have to specify a Project name and a Project Description. You will also have to specify whether this project must be private (a personal project where you will be the only member) or not.

Warning

Personal private projects are discouraged and are created only when there are convincing reasons.

Note

The person who asks for the creation of a new project is automatically defined as the manager of this project, i.e. he/she will have to manage the membership requests for this project. So the request to create a new project should be done by the relevant experiment/group leader.
Select Use guest project if you want to apply for membership in the guest project, i.e. the project with a small quota used to test the Cloud infrastructure.
When you have filled the form, click on the Read the AUP button. The following window will appear.
Read the AUP that you need to accept (by clicking the Accept button).
Finally click on the Register button and you are done.
Your request will be managed by the Cloud administrator and by the manager(s) of the project(s) for which you applied for membership. You will get an e-mail when your request is approved (and therefore you can start using the Cloud Area Padovana) or if for some reason your request is refused.

2.1.2. Apply for other projects

After you have been given an account on the Cloud Area Padovana, at any time you can ask for the creation of a new project or for membership in an already existing project.
This can be done using the OpenStack dashboard clicking on Identity and then Projects (on the left). You should see something like:
Then click on Subscribe to project on the top right. A window such as this one will appear:
Using this window you can then ask for the creation of a new project or for affiliation with already existing projects.

2.1.3. Manage project membership requests (only for project managers)

If you are the manager of a project, you will receive membership requests for this project that you will have to manage (approving or refusing them).
When a user applies to be member of a project that you manage, you will receive an e-mail such as this one:
To manage such requests, open the OpenStack web dashboard, i.e. go to https://cloud-areapd.pd.infn.it/dashboard in a browser. Log in (either using the INFN-AAI credentials, or using the username and password) and then click on Identity and then Subscriptions (on the left):
An image such as the following one, with the list of the pending requests, will appear.
To manage a membership request click on the corresponding Process button. A window such as the following one will appear:
Click on the Approve button to approve the request. Otherwise, to reject the request, click on the Reject button.

2.1.4. Manage project members (only for project managers)

If you are the manager of a project, you can list the members of your project and, if needed, change their role.
Open the OpenStack web dashboard, i.e. go to https://cloud-areapd.pd.infn.it/dashboard in a browser. Log in (either using the INFN-AAI credentials, or using the username and password), click on Identity and then Project Members (on the left). The list of users affiliated to your project will appear:
You can also change the role of a specific user (by clicking on 'Toggle Role') from 'Project User' to 'Project manager' or vice versa.

Note

If a user is promoted to Project manager, she will then be allowed to manage affiliation requests to the project, as described in Section 2.1.3, “Manage project membership requests (only for project managers)”.
From this window you can also remove a specific user from the project you manage.

Chapter 3. Getting Started

3.1. Access the Cloud through the Dashboard

Once you have been given an account, you can access the functionality provided by the Cloud. There are several ways of interacting with the Cloud. The simplest one is the dashboard, a web based GUI.
To access the production service of the Cloud Area Padovana via the dashboard, you must simply go to https://cloud-areapd.pd.infn.it/dashboard in a browser.
You can now log in either using the INFN-AAI credentials or using username and password.

3.2. Creating a keypair

You can now proceed to create a keypair. This is a secret key which will allow you to interact with your virtual machine once it is created. This key should be handled with the same care as a password or an ssh key, so it should only be stored in a secure directory such as a private area in your home folder.
The steps are as follows:
  • Go to the Access & Security on the left hand side menu
  • In the keypairs section, select Create a keypair. You will need to give the keypair a name, such as my_key.
On completion of the operation, a file my_key.pem will be downloaded to your browser. This should be stored in a safe location. To keep it private, run: chmod 600 my_key.pem
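A keypair can also be created from the command line, after configuring the OpenStack client as described in Section 3.6, “Accessing the Cloud through the Openstack command line tools”. A minimal sketch (the key name and path are just examples):
$ nova keypair-add my_key > ~/private/my_key.pem    # the private key is printed on stdout
$ chmod 600 ~/private/my_key.pem                    # keep the private key protected
$ nova keypair-list                                 # verify that the keypair is registered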

3.3. Setting security group(s)

Security groups are sets of IP filter rules that define networking access and are applied to all instances within a project using that group. As described in Section 4.1, “Creating Virtual Machines”, when you create an instance you have to specify the security group to be used.
To set such IP rules, users can either add rules to the default security group or create a new security group with the desired rules.
For example the following procedure enables SSH and ICMP (ping) access to instances using the default security group (an equivalent command-line sketch is shown after the list). The rules apply to all instances within a given project using this security group, and should be set (just once) for every project, unless there is a reason to prohibit SSH or ICMP access to the instances. This procedure can be adjusted as necessary to add additional security group rules to a project, if needed.
  • Log in to the dashboard, choose a project, and click Access & Security. The Security Groups tab shows the security groups that are available for this project.
  • Select the default security group and click Edit Rules.
  • To allow SSH access, click Add Rule.
  • In the Add Rule dialog box, enter the following values:
    Rule: SSH
    Remote: CIDR
    CIDR: 0.0.0.0/0

    Note

    To accept requests from a particular range of IP addresses, specify the IP address block in the CIDR box.
  • Click Add.
  • To add an ICMP rule, click Add Rule.
  • In the Add Rule dialog box, enter the following values:
    Rule: All ICMP
    Direction: Ingress
    Remote: CIDR
    CIDR: 0.0.0.0/0
  • Click Add.
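The same two rules can also be added from the command line with the nova client (a sketch operating on the default security group, assuming the client is configured as in Section 3.6):
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0     # allow SSH from any address
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0    # allow all ICMP (ping)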

Note

If you need to enable some services on a Virtual Machine, besides setting the specific IP rules through security groups, be sure that the relevant ports are also enabled (e.g. via iptables) on the Virtual Machine.
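For example, on a RHEL 6 based VM using iptables, opening a port could look like the following sketch (port 80 is just an example; run as root on the VM):
# allow incoming connections on TCP port 80
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# make the rule persistent across reboots
service iptables save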

3.4. Setting/changing password

Note

If you log on to the production cloud through INFN-AAI and you don't need to access the cloud service using the OpenStack command line tools discussed in Section 3.6, “Accessing the Cloud through the Openstack command line tools”, you can safely ignore this section.
If you access the production cloud through username and password, and you want to change such password, from the OpenStack dashboard select Settings (on the top) and then Activate Password.
If instead you log in through INFN-AAI, you don't need to know this OpenStack password. You need to know it only if you need to access the service using the OpenStack command line tools discussed in Section 3.6, “Accessing the Cloud through the Openstack command line tools”.
To set such a password (or to change it) use the same procedure: from the OpenStack dashboard select Settings (on the top) and then Activate Password.

Important

This is the password to authenticate with the Cloud service. It has nothing to do with the password of the Cloud virtual machines nor with the INFN-AAI password.

3.5. Switching between projects

As introduced in Section 1.1, “Projects”, a user can be on multiple projects at the same time. It is possible to switch between them using the Dashboard, clicking on CURRENT PROJECT as shown in the following figure.

3.6. Accessing the Cloud through the Openstack command line tools

Most of the functionality provided by the Cloud can be accessed through the dashboard web interface, but it is also possible to use the OpenStack command line tools. The OpenStack End User Guide contains extended information on the command line tools in OpenStack. The documentation at http://docs.openstack.org/user-guide/content/install_clients.html explains how to install the OpenStack client.
The OpenStack client is also installed on lx.pd.infn.it.
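If you prefer to install the clients yourself, following the installation guide linked above, something like this sketch (e.g. inside a Python virtualenv) should work:
$ pip install python-novaclient python-glanceclient python-cinderclient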
The OpenStack tools require a set of shell environment variables in order to run. These variables can be obtained and stored in an 'rc' file, much like .profile when logging into a linux server. The environment variables differ for each project you work on. If you log into the dashboard, you will find an Access & Security menu on the left hand side. Then choose the API Access tab to select the options related to using the program interfaces. Select either Download OpenStack RC file v2.0 or Download OpenStack RC file v3 to download the rc file for your current project (v2.0 or v3).
The v3 openrc file requires a fairly recent version of the OpenStack client.

Warning

Because of a bug, if you downloaded the v2.0 rc file, you have to edit it and replace "v3" with "v2.0" in the OS_AUTH_URL variable setting.
This file is different for each of the projects you are working on. This file should be saved onto the machine where you want to run the commands from. If you use csh rather than bash/zsh for your shell, you would need to create a new version using 'setenv' rather than export.
In the production cloud, since services are secured using SSL, you need to edit the RC file adding the line:
export OS_CACERT=/etc/grid-security/certificates/INFN-CA-2015.pem
and you need to make sure that this file exists on the relevant client machine. The content of this '/etc/grid-security/certificates/INFN-CA-2015.pem' file must be the certificate of the INFN-CA in PEM format. This file is provided by the ca_INFN-CA-2015 package (that can be downloaded e.g. from the EUGridPMA web site). The certificate can also be downloaded from the INFN CA web site.
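For reference, the relevant lines of an edited v2.0 RC file could look like the following sketch (the OS_AUTH_URL value here is hypothetical: keep the one from your downloaded file, with "v3" replaced by "v2.0" as explained above):
export OS_AUTH_URL=https://cloud-areapd.pd.infn.it:5000/v2.0
export OS_CACERT=/etc/grid-security/certificates/INFN-CA-2015.pem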
To test that everything works, source the rc file and enter your password to authenticate. To set/change your password, please refer to Section 3.4, “Setting/changing password”.
The OpenStack command line tools (nova for compute resources, glance for images, etc.) can then be used:
  $ . Muon\ Tomography-openrc.sh
  Please enter your OpenStack Password:
  
  $ nova list
  +--------------------------------------+----------+--------+------------+-------------+----------------------------------+
  | ID                                   | Name     | Status | Task State | Power State | Networks                         |
  +--------------------------------------+----------+--------+------------+-------------+----------------------------------+
  | 178c541d-ee63-4678-9f22-99d799cd715e | dock01   | ACTIVE | -          | Running     | Muon-Tomography-lan=10.64.18.105 |
  | 2bcf0cd1-5289-43ef-8d2e-bd18385f1df9 | mutom1   | ACTIVE | -          | Running     | Muon-Tomography-lan=10.64.18.118 |
  | 0e237a8f-82e6-48b8-8b5c-44ad9a39a8de | pin8     | ACTIVE | -          | Running     | Muon-Tomography-lan=10.64.18.90  |
  | 1f862ffa-d469-40aa-87a8-ef0d0a842919 | provaNFS | ACTIVE | -          | Running     | Muon-Tomography-lan=10.64.18.120 |
  +--------------------------------------+----------+--------+------------+-------------+----------------------------------+

Note

When you source the rc script you are asked for a password. If the password is wrong, you will be told (with an authentication error) only when you issue some OpenStack command.
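If a recent python-openstackclient is installed, a quick way to verify the credentials immediately after sourcing the rc file is to request a token:
$ openstack token issue    # fails at once with an authentication error if the password is wrong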

3.7. Accessing the Cloud through the euca2ools EC2 command line tools

The Cloud Area Padovana also exposes an EC2 interface, which is a de-facto standard for computational clouds.
The euca2ools are command line tools that can be used to interact with an EC2 based cloud.
You can install the euca2ools package as follows:
CentOS
  # yum install euca2ools
Ubuntu
  # apt-get install euca2ools
The euca2ools are also installed on lx.pd.infn.it.
The euca2ools require a set of shell environment variables in order to run. These environment variables differ for each project you work on.
If you log into the dashboard, you will find an Access & Security menu on the left hand side. Then choose the API Access Tab to select the options related to using the program interfaces. Select the Download EC2 Credentials option on the top right to download the zip file for your current project. This zip file will be downloaded to the browser.
This file should be saved onto the machine where you want to run the commands from and unzipped into a private directory, e.g:
$ unzip alice-x509.zip 
Archive:  alice-x509.zip
extracting: pk.pem                  
extracting: cert.pem                
extracting: cacert.pem              
extracting: ec2rc.sh
ec2rc.sh gives the variables for accessing the Cloud with EC2 APIs. If you use a C shell based shell, you would need to adapt this using setenv.
To test it, you can e.g. try the following command that lists the existing instances for your project:
$ . ec2rc.sh 
$ euca-describe-instances
RESERVATION  r-41vjiwom  ae4b3654ea08441fabe232390ae908b6
INSTANCE  i-000015b8  ami-00000061  mysql-server  running  venaruzzo-cloudpp  0  m1.small  2014-09-02T08:44:24.000Z  nova  aki-00000064  ari-00000067  10.62.13.6  instance-store
RESERVATION  r-bf7v6v6m  ae4b3654ea08441fabe232390ae908b6
INSTANCE  i-000015b5  ami-00000012  server-6cbb5ded-ba65-4850-8078-1e9f1836e7de  running  venaruzzo-cloudpp  0  m1.small  2014-09-01T10:41:34.000Z  nova  10.62.13.2  instance-store
BLOCKDEVICE  /dev/vdb  vol-0000001f  false
RESERVATION  r-yanua8rg  ae4b3654ea08441fabe232390ae908b6
INSTANCE  i-0000153b  ami-00000061  xrootd-server  running  venaruzzo-cloudpp  0  m1.small  2014-08-27T14:09:35.000Z  nova  aki-00000064  ari-00000067  10.62.13.4  instance-store
BLOCKDEVICE  /dev/vdb  vol-00000031  false
RESERVATION  r-sll3yfjl  ae4b3654ea08441fabe232390ae908b6
INSTANCE  i-000015b2  ami-0000001e  monitoring-server  running  venaruzzo-cloudpp  0  m1.small  2014-09-01T09:34:31.000Z  nova  10.62.13.5  instance-store

Warning

For some euca2ools distributions sourcing the ec2rc.sh script is not enough. You need to explicitly specify the access and secret keys, and the endpoint, with the relevant command line options, e.g.:
    euca-describe-instances -I ${EC2_ACCESS_KEY} -S ${EC2_SECRET_KEY} -U ${EC2_URL}

Chapter 4. Managing Virtual Machines

4.1. Creating Virtual Machines

To create a Virtual Machine (VM) using the dashboard, you need to have already logged into the dashboard, created your private key (as explained in Section 3.2, “Creating a keypair”) and set the security group (as discussed in Section 3.3, “Setting security group(s)”) to be used for this VM.
To create a VM, the steps are as follows:
  • On the left hand menu, select the Project that must be used for this VM.
  • Go to Instances on the left hand menu. This will display a list of VMs currently running in your project.
  • Select the + Launch Instance button. A new menu appears.
    Here you can enter:
    • Instance name is the name of the machine you want to create.
    • Flavor is the size of the machine you want to create, specified in terms of VCPUs (number of virtual CPUs), disk space for the system disk, and RAM size. Selecting small flavors is more economical, and the flavor of a virtual machine can be changed later if required. Flavors are discussed in Section 4.5, “Flavors”.
    • Instance Count is the number of virtual machines to be started.
    • As Instance Boot Source select Boot from Image or Boot from Snapshot and then specify its name.
  • Switch the tab for Access & Security.
    • As Keypair select the keypair you created. This will allow you to log in as root using this SSH key.
    • You can also specify the admin (usually root) password of the instance.

      Warning

      Please note that this setting of the admin password is not guaranteed to always work (the image might not support the “injection” of this password). It is therefore suggested to use the ssh-key mechanism.
    • Specify the Security group to be used for this VM (security groups are discussed in Section 3.3, “Setting security group(s)”).
  • Switch the tab for Networking.
    You should see one network called <ProjectName>-lan and, if the possibility to use public IP numbers was requested for your project, also another network called <ProjectName>-wan. Select the former one if your VM doesn't need to be visible on the Internet. Select instead the <ProjectName>-wan network if your VM must have a public IP. You will then need to allocate a public (floating) IP address to this instance, as explained in Section 4.4, “Allocating a public (floating) IP address to a Virtual Machine”.

    Warning

    By default the possibility to use public IP numbers is disabled and therefore by default the <ProjectName>-wan network doesn't exist. If public IPs are needed for your project, please contact .
  • Select Launch to start the creation of the virtual machine. When this returns to the overview screen, there will be a line with the instance name, IP address and status. The status should be 'Active' once the install is complete.
Once the status of the machine is 'Active', you can watch the console to see it installing and booting. On the right hand side of the table, under More, there is a pull down menu. Options include View Log and Console.
For a Linux machine, select Console to access the console of the VM.
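The same creation can be performed from the command line. A minimal sketch with the nova client, reusing an image and a flavor name that appear later in this guide (adapt them, as well as the network, to your project; the network ID can be found e.g. with nova net-list):
$ nova boot my_vm --image "SL66-x86_64-20150521" --flavor cldareapd.xsmall \
    --key_name my_key --security-groups default \
    --nic net-id=<ProjectName-lan-network-id>
$ nova list    # the new instance should eventually reach the ACTIVE status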

Note

Virtual Machines instantiated on the Cloud by default aren't registered in the DNS. This means that you'll have to refer to them using their IP numbers.
For Virtual Machines supposed to have a long life, INFN Padova users may ask (contacting ) to have them registered in the DNS. If possible (i.e. if the chosen names are sensible enough and there are no ambiguities) the names registered in the DNS will be the same as the ones chosen as Instance names.

4.1.1. Improve reliability: creating Virtual Machines from Volumes

By default Virtual Machines are instantiated using the local disk of the Cloud compute node. This means that, in case of failure of the compute node, the content of the virtual machine may be lost.
For production servers which are not fully redundant and load balanced, it is therefore advisable to use an external storage volume for the system disk of the virtual machine, to improve availability. A further advantage is that, if the compute node hosting the virtual machine has to be switched off, e.g. for maintenance, the Cloud administrator can first live-migrate the instance to another Cloud compute node basically without any service interruption.
The procedure is the following:
  • Create a volume using the desired image as source.
  • Launch a Virtual Machine with that volume as the system disk.
To create a volume from an image, go to the Volumes tab of the Dashboard, and click on Create Volume.
After having set the Volume Name, set Image for Volume Source and specify the image name in the Use image as a source field.
Set the Size of the root disk (Size (GiB) field).
Complete the operation clicking on Create Volume.

Note

Please select 'ceph' (the default) as the type for the volume.
Once the volume has been created, you can launch the Virtual Machine.
In the Launch Instance form, that appears after having clicked on the Launch Instance button, please select Boot from volume in the Instance Boot Source field, and specify the name of the previously created volume in the Volume field.
Then proceed as explained for the creation of a "normal" instance.
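The same two steps can also be done from the command line. A sketch with the cinder and nova clients (names, IDs and size are examples):
$ cinder create --image-id <image-id> --name rootvol 10    # 10 GB bootable volume from an image
$ nova boot my_vm --flavor cldareapd.xsmall --key_name my_key \
    --boot-volume <volume-id>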

Note

You can create only one virtual machine from a volume created using an image as source.

4.2. Accessing Virtual Machines

Assuming you created a Linux virtual machine using the <ProjectName>-lan network, you can now log to this VM from the Local Area Networks (LANs) of both Padova and LNL.
Assuming you stored your my_key keypair in ~/private and the IP number of this VM is 10.64.15.3:
$ ssh -i ~/private/my_key root@10.64.15.3
The username to be used depends on the used image. It can be 'root' or it can be a sudo-enabled user (e.g. for the centos Cloud images available on the internet, the 'centos' user must be used).

Warning

Please note that, unless you asked for the registration of your VMs in the DNS, you must use their IP addresses to access them.
If it is necessary to log on this VM from the Internet, you need to go through a gate machine. For this purpose you can rely on the existing gate hosts in Padova and Legnaro. If you don't have an account on one of such gate hosts, you can use a Cloud specific gate host (cld-gate.pd.infn.it): contact if you need an account on such machine.
If needed, e.g. if a VM should host a service accessible from the Internet, such a VM can be given a public IP. For this purpose a public (floating) IP address must be associated to the instance, as explained in Section 4.4, “Allocating a public (floating) IP address to a Virtual Machine”.

Note

Please note that by default the possibility to use public IP numbers is disabled. If public IPs are needed for your project, please contact .
To control which services/ports of your virtual machine can be accessed, please use the security groups (discussed in Section 3.3, “Setting security group(s)”) and firewalls (e.g. iptables) on the relevant VM.

4.3. Accessing other hosts/services from Virtual Machines

From a VM of the Cloud it is possible to access the Internet, but by default it is not possible to access a host/service hosted in Padova or Legnaro. If, for some reason, you need to access some services hosted in Padova or Legnaro from the Cloud, please contact .

4.4. Allocating a public (floating) IP address to a Virtual Machine

When an instance is created in OpenStack, it is automatically assigned a fixed IP address. If a VM needs to be visible on the Internet, in addition to the fixed IP address a public (floating) IP address must be attached to the instance. Floating IP addresses can have their associations modified at any time, regardless of the state of the instances involved.
To assign a floating IP to a VM, first of all create the VM as described in Section 4.1, “Creating Virtual Machines”, using the <ProjectName>-wan network. The following procedure then details the reservation of a floating IP address from an existing pool of public addresses and the association of that address with a specific instance.
  • From the dashboard click Access & Security.
  • Click the Floating IPs tab, which shows the floating IP addresses allocated to instances.
  • Click Allocate IP to Project.
  • Choose Ext as the pool from which to pick the IP address and click Allocate IP.
  • Click on Associate for the just allocated floating IP.
  • In the Manage Floating IP Associations dialog box, choose the following options:
    • The IP Address field is filled automatically, but you can add a new IP address by clicking the + button
    • In the Ports to be associated field, select a port from the list (the list shows all the instances with their fixed IP addresses).
  • Click Associate.
To disassociate an IP address from an instance, click the Disassociate button.
To release the floating IP address back into the pool of addresses, click the More button and select the Release Floating IP option.
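The equivalent operations with the nova client (a sketch, using the Ext pool mentioned above):
$ nova floating-ip-create Ext                         # reserve an address from the Ext pool
$ nova floating-ip-associate my_vm <floating-ip>      # attach it to the instance
$ nova floating-ip-disassociate my_vm <floating-ip>   # detach it when no longer needed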

Note

By default the possibility to use public IP numbers is disabled and therefore by default it is not possible to allocate a floating IP to an instance. If public IPs are needed for your project, please contact , specifying the relevant service(s) and the port(s) that need to be open.

4.5. Flavors

As shown in Section 4.1, “Creating Virtual Machines”, when an instance has to be created it is necessary to specify the flavor to be used for this VM.
Flavors define the virtual machine size such as:
  • Number of virtual CPU cores (VCPUs)
  • Amount of memory
  • Disk space
Information about the flavors can be seen in the Flavor Details box that appears in the Dashboard when you launch a new instance.

Warning

For what concerns VCPUs, please note that the Cloud Area Padovana is configured to allow some “overbooking” so that a physical core is mapped to 4 VCPUs.
A disk size of 0 means that the size of the disk is the same as that in the image. For other cases, it may be necessary to resize the partitions.

Note

If you find that a specific flavor you require is not available, please contact .

4.6. Stopping and Starting VMs

VMs can be stopped and started in different ways, available in the Dashboard menu (Instances → More).

Warning

The cleanest way to shut down (or reboot) an instance is however to log on the VM and issue the shutdown/reboot command from the shell. In fact, if the Soft Reboot or Hard Reboot or Shutdown actions are chosen, there could be problems with networking when the VM is later restarted.
Pause/Unpause allows for temporary suspension of the VM. The VM is kept in memory but is not allocated any CPU time.
Suspend/Resume stores the VM onto disk and recovers it later. This is faster than stop/start, and the VM returns to the place it was when the suspend was performed rather than going through a new boot cycle.
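The same operations are also available from the command line. A sketch with the nova client:
$ nova stop my_vm       # shut the instance off
$ nova start my_vm      # boot it again
$ nova pause my_vm      # keep the VM in memory, without CPU time
$ nova unpause my_vm
$ nova suspend my_vm    # store the VM state onto disk
$ nova resume my_vm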

4.7. Contextualisation

Contextualisation is the process to configure a virtual machine after it has been installed. Typical examples would be to create additional users, install software packages or call a configuration management system. These steps can be used to take a reference image and customise it further. Contextualisation is only run once when the VM is created.
The public images provided in the Cloud Area Padovana include a contextualisation feature using the open source cloud-init package.
With cloud-init, data to be used for contextualisation are called user data.
Using the Openstack command line tool, the --user_data option of the nova boot command must be used, e.g.:
nova boot my_vm --image "SL65-Padova-x86_64-20141023-QCOW2" \
  --flavor m1.xsmall --user_data my_data.txt --key_name my_key
For example to run a command during contextualisation, the #cloud-config directive can be used:
cat > cern-config-users.txt << EOF
#cloud-config
runcmd:
 - [ /usr/bin/yum, "install", -y, "cern-config-users" ]
 - [ /usr/sbin/cern-config-users, --setup-all ]
EOF
User data can be provided as a gzip file, which is needed when the user data is larger than 16384 bytes, e.g.:
cat > userdata4zip.txt <<EOF
#!/bin/sh
wget -O /tmp/geolist.txt http://frontier.cern.ch/geolist.txt
EOF
gzip -c userdata4zip.txt > userdata4zip.gz

nova boot my_server --image "SL65-Padova-x86_64-20141023-QCOW2" \
  --flavor m1.xsmall --user_data userdata4zip.gz --key_name my_key
With the #include or Content-Type: text/x-include-url directives, it is possible to specify a list of URLs, one URL per line. The user data fetched from the URLs can be a plain text file, a gzip file or a MIME multi-part script. E.g.:
cat > userdata.txt <<EOF
#! /bin/bash
wget -O /tmp/robots.txt http://www.ubuntu.com/robots.txt
EOF

cat > userdata4include.txt <<EOF
#include
# entries are one url per line. comment lines beginning with '#' are allowed
# urls are passed to urllib.urlopen, so the format must be supported there
http://frontier.cern.ch/userdata.txt
EOF
cloud-init also supplies a method called "multiple part" to supply user data in multiple ways, which means you can use a userdata script and cloud-config (or other methods recognized by cloud-init) at the same time. cloud-init provides a script (write-mime-multipart) to generate a final userdata file. Here is an example:
cat > userdata4script <<EOF
#! /bin/bash
mkdir -p /tmp/rdu
echo "Hello World!" > helloworld.txt
EOF
 
cat userdata4config
#cloud-config
runcmd:
 - [ wget, "http://slashdot.org", -O, /tmp/index.html ]
 
cat userdata4include
#include
# entries are one url per line. comment lines beginning with '#' are allowed
# urls are passed to urllib.urlopen, so the format must be supported there
http://frontier.cern.ch/userdata.txt
Then use write-mime-multipart to generate userdata4multi.txt and use it to launch an instance:
write-mime-multipart -o userdata4multi.txt userdata4script userdata4config userdata4include

nova boot my_server --image "SL65-Padova-x86_64-20141023-QCOW2" \
  --flavor m1.xsmall --user_data userdata4multi.txt --key_name my_key
A lot of documentation (along with examples) about cloud-init is available on the Internet, such as in the Ubuntu Documentation.

4.8. Resizing Virtual Machines

If the size of a virtual machine needs to be changed, such as adding more memory or cores, this can be done using the resize operations. Using resize, you can select a new flavor for your virtual machine. The operation will reboot the virtual machine and will take several minutes of downtime, so it should be scheduled accordingly.
To resize a VM using the Horizon graphical Interface:
  • Detach any attached volume as described in Section 6.2.3, “Detaching a Volume”

    Warning

    Failure in doing so might lead to VM and/or Volume corruption!
  • Select the Instances menu and the Resize Instance option on the actions.
  • In the Resize Instance box select the desired flavor.
  • After the new flavor has been selected, the status will become 'resize' or 'migrating'.
  • The status will change after several minutes to 'Confirm' or 'Revert Resize/Migrate'. You may need to refresh the web browser page to reflect the new status.
  • Select Confirm Resize/Migrate if you wish to change the instance to the new configuration.
The status will then change to 'Active' once completed.
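The same workflow is available from the command line. A sketch with the nova client (the new flavor name is a placeholder):
$ nova resize my_vm <new-flavor>    # the instance goes through the resize/migrating states
$ nova resize-confirm my_vm         # confirm the new configuration...
$ nova resize-revert my_vm          # ...or revert to the previous flavor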

4.9. Snapshotting Virtual Machines

It is possible to create a snapshot of a VM, and then the saved snapshot can be used to start new instance(s).
The new instance can also have a different flavor (but not a smaller one, as far as the disk size is concerned) than the source VM. It is therefore suggested to use the smallest (in terms of disk) possible flavor for the source VM (for SL6.x related images this is cloudareapd.xsmall).
Before taking a snapshot, you need to clean the network configuration of the “source” VM. For SL6.x related OS this can be done using the following instructions, once logged as root on the “source” VM:
/bin/rm /etc/udev/rules.d/70-persistent-net.rules
/bin/rm /lib/udev/write_net_rules
Before doing the snapshot (that can be done using the web-based dashboard, selecting the Instances menu and the Create Snapshot option on the actions), it is recommended to shut down the VM to make sure that all data is flushed to disk. Please don't perform this shutdown through OpenStack: instead do it within the VM (i.e. log as root on the VM and issue shutdown -h now or poweroff).
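A snapshot can also be triggered from the command line. A minimal sketch with the nova and glance clients:
$ nova image-create my_vm my_vm-snapshot    # create a snapshot of the my_vm instance
$ glance image-list                         # the snapshot appears among the images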

Warning

In the production Cloud it is possible to save in the Image service only snapshots with a maximum size of 25 GB.

4.10. Deleting Virtual Machines

VMs can be deleted using the Terminate Instance option in the OpenStack dashboard.
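From the command line the same operation is a one-liner with the nova client:
$ nova delete my_vm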

Important

Note: This will immediately terminate the instance, delete all content of the virtual machine and erase the ephemeral disk. This operation is not recoverable.

Chapter 5. Managing Containers

Warning

The support for containers in OpenStack described in this chapter is implemented through a software component (nova-docker) that is not supported anymore. In particular it won't be available in the next OpenStack release that will be installed.
The Cloud Area Padovana provides an experimental service to run Docker Containers in a cloud environment.

5.1. What is Docker?

Docker is a tool that allows you to run applications in a container. In this way the applications and all their dependencies, packaged in a virtual container, are fully portable and ready to run as designed in any environment. Containers are lightweight and run without the extra load of a hypervisor, so many applications and environments can run on a single kernel without interfering with one another. Docker operates at the OS level, so it can still be run inside a VM!
For more information about Docker and how to install it, please look at Docker Documentation.

5.2. How to run Docker Container in the Cloud

Docker containers can be run on the Cloud in the very same way used to instantiate virtual machines, but first you need to register a docker image in the OpenStack image service (glance). This is explained in the next section.

5.3. How to Upload Docker Images

To instantiate a Docker container the same way you launch a VM (see Section 4.1, “Creating Virtual Machines”), you need a docker image saved in glance, i.e. the OpenStack imaging service.
To save a docker image in glance, you need to operate on a computer where both Docker and the glance client (configured with your project credentials) are available.
Please follow these commands to load a docker image in your project.
First of all download a docker image from Docker Registry:
docker pull <image-name>
Then save it and create an image in Glance (see Chapter 8, Managing Images for a reference on "how to create and upload images"):
docker save <image-name> | glance image-create --container-format=docker --disk-format=raw --name <image-name>
For example:
docker pull centos 

docker save centos | glance image-create --container-format=docker --disk-format=raw --name centos
 
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 363070ea9529855747a3a511b5f0b785     |
| container_format | docker                               |
| created_at       | 2016-11-07T07:59:39Z                 |
| disk_format      | raw                                  |
| id               | e7a6b90b-19ed-4560-8945-6ee42e5c3fed |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | centos                               |
| owner            | cbf5e7bab17d4e8cbad235d8eba924c8     |
| protected        | False                                |
| size             | 204124160                            |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-11-07T07:59:52Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+


Warning

Be sure that the <image-name> in Docker is the same as in Glance.

Note

If an image is not compatible with all flavors, it is recommended to specify the options --min-disk and/or --min-ram in the glance command:
docker save <image-name> | glance image-create --container-format=docker --disk-format=raw --min-disk DISK_GB --min-ram DISK_RAM --name <image-name>
--min-disk DISK_GB: the minimum size of the disk needed to boot the image, in gigabytes.
--min-ram DISK_RAM: the minimum amount of RAM needed to boot the image, in megabytes.
An instance using a docker image has to be scheduled on a Cloud compute node running nova-docker. To do so it's necessary to define this property on the image, with the command:
glance image-update --property hypervisor_type=docker <image-id> 
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 363070ea9529855747a3a511b5f0b785     |
| container_format | docker                               |
| created_at       | 2016-11-07T07:59:39Z                 |
| disk_format      | raw                                  |
| hypervisor_type  | docker                               |
| id               | e7a6b90b-19ed-4560-8945-6ee42e5c3fed |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | centos                               |
| owner            | cbf5e7bab17d4e8cbad235d8eba924c8     |
| protected        | False                                |
| size             | 204124160                            |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-11-07T08:02:42Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+

In this example <image-id> is e7a6b90b-19ed-4560-8945-6ee42e5c3fed.

Note

There should be a long lived process running in the docker image, otherwise the instance will not be able to spawn successfully. This can be done by using CMD or ENTRYPOINT in the Dockerfile, or by specifying the command through the glance image property 'os_command_line', e.g.:
glance image-update --property os_command_line='/usr/sbin/sshd -D' <image-id> 
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 363070ea9529855747a3a511b5f0b785     |
| container_format | docker                               |
| created_at       | 2016-11-07T07:59:39Z                 |
| disk_format      | raw                                  |
| hypervisor_type  | docker                               |
| id               | e7a6b90b-19ed-4560-8945-6ee42e5c3fed |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | centos                               |
| os_command_line  | /usr/sbin/sshd -D                    |
| owner            | cbf5e7bab17d4e8cbad235d8eba924c8     |
| protected        | False                                |
| size             | 204124160                            |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-11-07T08:37:20Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+

Warning

With respect to this example, you need a docker image with openssh package installed, which provides the '/usr/sbin/sshd' executable (for a better understanding on how to set up a docker image with SSHD service see this link).
Once the image is correctly loaded and updated you can run the docker image the same way you launch a VM (see Section 4.1, “Creating Virtual Machines”).

Chapter 6. Managing Storage

There are several ways of handling disk storage in the Cloud Area Padovana:
  • Ephemeral storage exists only for the life of a virtual machine instance. It will persist across reboots of the guest operating system but when the instance is deleted so is the associated storage. The size of the ephemeral storage is defined in the virtual machine flavor.
  • Volumes are persistent virtualized block devices independent of any particular instance. Volumes may be attached to a single instance at a time, but may be detached or re-attached to a different instance while retaining all data, much like a USB drive. The size of the volume can be selected when it is created within the quota limits for the particular project. Please also consider that in the Cloud Area Padovana volumes are implemented through storage systems which are more reliable than the ones used for ephemeral storage.

6.1. Ephemeral storage

Ephemeral storage exists only for the life of a virtual machine instance. It will persist across reboots of the guest operating system but when the instance is deleted so is the associated storage. The size of the ephemeral storage is defined in the virtual machine flavor.
Among the flavor details (that are listed in the Dashboard when a VM has to be launched or can be seen using the nova flavor-list command), there is an attribute called 'Ephemeral'. When you use a flavor with an ephemeral disk size different from zero, the instance is booted with an extra virtual disk whose size is indicated by the ephemeral value. This ephemeral disk can be useful where you want to partition the second disk or have a specific disk configuration which is not possible within the system disk configuration.

Warning

Please note that backups are not performed on ephemeral storage systems.

6.2. Volumes

Volumes are persistent virtualized block devices independent of any particular instance. Volumes may be attached to a single instance at a time (i.e. not like a distributed filesystem such as Lustre or Gluster), but they may be detached or re-attached to a different instance while retaining all data, much like a USB drive.

6.2.1. Create a Volume

The steps to add a Volume are:
Using the Dashboard, click on Volumes and then Create Volume. In the “Create Volume” window specify the name of the volume (testvol in the example below) and the desired size (50 GB in the example). As Volume Source specify “No source, empty volume”.
Multiple volume types exist, and you might also want/need to specify the type to be used for the volume to be created (in the example above the iscsi-infnpd volume type was chosen).
If not specified, the default ceph volume type will be used.
You might need to specify the volume type in particular because different quotas for the different volume types might have been set for your project. Unfortunately the OpenStack dashboard shows only the overall quota. To see the quota for each volume type you need to use the OpenStack CLI (as explained in Section 3.6, “Accessing the Cloud through the Openstack command line tools”) and run the cinder quota-usage ${OS_PROJECT_ID} command.
E.g.:
$ cinder quota-usage ${OS_PROJECT_ID}
+------------------------+--------+----------+-------+
| Type                   | In_use | Reserved | Limit |
+------------------------+--------+----------+-------+
| backup_gigabytes       | 0      | 0        | 1000  |
| backups                | 0      | 0        | 10    |
| gigabytes              | 105    | 0        | 300   |
| gigabytes_ceph         | 70     | 0        | 100   |
| gigabytes_gluster      | 0      | 0        | -1    |
| gigabytes_iscsi-infnpd | 35     | 0        | 200   |
| per_volume_gigabytes   | 0      | 0        | 1000  |
| snapshots              | 0      | 0        | 10    |
| snapshots_ceph         | 0      | 0        | -1    |
| snapshots_gluster      | 0      | 0        | -1    |
| snapshots_iscsi-infnpd | 0      | 0        | -1    |
| volumes                | 7      | 0        | 20    |
| volumes_ceph           | 5      | 0        | -1    |
| volumes_gluster        | 0      | 0        | -1    |
| volumes_iscsi-infnpd   | 2      | 0        | -1    |
+------------------------+--------+----------+-------+
In this example the project was given 300 GB. For the iscsi-infnpd volume type the quota is 200 GB, while for the ceph volume type it is possible to use at most 100 GB.

Warning

If you try to create a volume using a type for which the quota is over, you will see a generic 'Unable to create volume' error message.

6.2.2. Using (attaching) a Volume

The new defined volume will appear in the Volumes tab.
To attach this volume to an existing instance, click on Edit attachments, select the relevant Virtual Machine, and click on "Attach Volume".
Log in to the instance and check if the disk has been added:
grep vdb /proc/partitions
 252       48   12582912 vdb
If needed, create a file system on it (this will erase the disk!):
mkfs -t ext4 /dev/vdb
Mount the volume:
mount /dev/vdb /mnt

6.2.3. Detaching a Volume

To detach a volume from an instance, first of all log into the virtual machine that has the volume mounted, and unmount it:
umount /mnt
Then, using the Dashboard, click on Volumes, click on Edit attachments for the relevant volume and select Detach Volume. The detached volume can then be attached to another VM, as described above (don't re-create the file system, otherwise you will lose the content of the volume!).

6.2.4. Deleting a Volume

If a volume is not needed any longer, to completely remove it (note that this step cannot be reverted!):
  • if needed, detach the volume from the associated instance
  • using the Dashboard, click on Volumes, select the relevant volume and then select Delete Volumes

Warning

Please note that backups are not performed on volumes.

6.2.5. Sharing a volume between multiple (virtual) machines

As discussed in Section 6.2, “Volumes”, a volume may be attached to a single instance. However it can be shared with other virtual machines of the Cloud (and/or with other hosts) using NFS.
Once a volume has been created, attached to an instance (acting as the NFS server) and formatted (the procedure has been explained in Section 6.2, “Volumes”), create the mount point and mount the volume on this virtual machine:
mkdir /dataNfs
mount /dev/vdb /dataNfs
Ensure that on this virtual machine the needed packages are installed. For RHEL 6 based systems these are:
nfs-utils
nfs-utils-lib
rpcbind
For RHEL 7 based systems these are:
nfs-utils
rpcbind
Insert the correct export directive in the /etc/exports file. For example if the volume must be visible to all the virtual machines of the same subnet 10.62.15.* (check the subnet with the ifconfig command) the content of the /etc/exports file should be:
/dataNfs 10.62.15.*(async,rw,no_root_squash)
Check the firewall on the virtual machine. Ensure that the other instances can access to ports 111, 2049, 875 UPD and TCP.
Check the security group (see Section 3.3, “Setting security group(s)”): access to ports 111, 875 and 2049 (IPv4 Ingress TCP and UDP) should be guaranteed.
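For reference, the following is a minimal sketch of iptables rules (RHEL 6 style) opening these ports; the 10.62.15.0/24 subnet is the one of the example above and must be adapted to your project network:
# Open the NFS-related ports (portmapper, rquotad, nfsd) to the project subnet
iptables -I INPUT -s 10.62.15.0/24 -p tcp -m multiport --dports 111,875,2049 -j ACCEPT
iptables -I INPUT -s 10.62.15.0/24 -p udp -m multiport --dports 111,875,2049 -j ACCEPT
# Persist the rules across reboots (RHEL 6)
service iptables save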
(Re)start the relevant services. For RHEL 6 based systems:
service nfs restart
For RHEL 7 based systems:
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl restart nfs-server
systemctl restart nfs-lock
systemctl restart nfs-idmap
To mount the volume on the other VMs, check that the rpcbind and nfs-utils packages are installed, and then issue a mount command such as this one:
mount -t nfs 10.62.15.4:/dataNfs /mnt

Note

Please note that this procedure can also be used to mount a volume on hosts outside the Cloud: it is just necessary to specify the IP address of these hosts in the /etc/exports file on the instance acting as NFS server.
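For example (a sketch, using the documentation address 192.0.2.10 as a stand-in for the real IP of the external host), the /etc/exports file would become:
/dataNfs 10.62.15.*(async,rw,no_root_squash) 192.0.2.10(async,rw,no_root_squash)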

6.3. Accessing storage external to the Cloud

As explained in Section 1.2, “Network access”, by default it is not possible to access a host/service hosted in Padova or Legnaro from an instance of the Cloud. This also means that by default a virtual machine of the Cloud Area Padovana can not mount a file system exported from a storage server hosted in Padova or Legnaro.
Accessing a storage system external to the Cloud from a virtual machine in an efficient way requires some non-negligible effort with respect to network configuration. If however there is such a need, please contact .

Chapter 7. Orchestration

The Cloud Area Padovana provides an orchestration service, implemented through the OpenStack Heat component, that allows you to spin up multiple instances, and other cloud services in an automated fashion.
In Heat parlance, a stack is the collection of objects, or resources, that will be created by Heat. This might include instances (VMs), networks, subnets, routers, ports, security groups, security group rules, etc.
Heat uses the idea of a template to define a stack. If you want to have a stack that creates two instances connected by a private network, then your template would contain the definitions for two instances, a network, a subnet, and two network ports.
Both native HOT templates and AWS CloudFormation (CFN) templates are supported. Templates in HOT (Heat Orchestration Template) format are typically, but not necessarily, expressed in YAML, while CFN (AWS CloudFormation) templates are typically expressed in JSON.

7.1. Creating a template

This is a working example of a HOT template which:
  • creates a virtual machine connected to a project network;
  • creates a storage volume;
  • attaches this volume to the previously created VM;
  • associates a floating IP to this virtual machine.
heat_template_version: 2013-05-23

description: Template which creates a VM and a cinder volume; then it attaches the volume to this VM; a floating IP is given to the VM.

parameters:
  instance_name:
    type: string
    description: VM Name
    constraints:
    - allowed_pattern: "[a-zA-Z0-9-]+"

resources:

  my_volume:
    type: OS::Cinder::Volume
    properties:
      name: "myVolume"
      size: 3

  my_instance:
    type: OS::Nova::Server
    properties:
      name: { get_param: instance_name }
      key_name: sgaravat-cloud-areap
      image: SL66-x86_64-20150521
      flavor: cldareapd.xsmall
      security_groups: [ssh]
      admin_pass: heattest
      networks: [{"network": OCP-wan}]

  my_volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: my_volume }
      instance_uuid: { get_resource: my_instance }

  my_fip:
    type: OS::Nova::FloatingIP
    properties:
      pool: Ext

  my_fip_association:
    type: OS::Nova::FloatingIPAssociation
    properties:
      floating_ip: { get_resource: my_fip }
      server_id: { get_resource: my_instance }

outputs:
  instance_fixed_ip:
    description: fixed ip assigned to the server
    value: { get_attr: [my_instance, first_address] }

Templates have three sections:
# This is required.
heat_template_version: 2013-05-23

parameters:
  # parameters go here

resources:
  # resources go here (this section is required)

outputs:
  # outputs go here
The resources section specifies what resources Heat should create:
resources:
  my_resource_id:
    type: a_resource_type
    properties:
      property1: ...
      property2: ...
Hardcoded values can be replaced with parameters. The actual value to be used is then specified when the stack is created. In our example a parameter is used for the name of the VM to be created:
parameters:
  instance_name:
    type: string
    description: VM Name
    constraints:
    - allowed_pattern: "[a-zA-Z0-9-]+"
Sometimes we want to extract information about a stack. In our example the output is the fixed IP of the VM created by the stack:
outputs:
  instance_fixed_ip:
    description: fixed ip assigned to the server
    value: { get_attr: [my_instance, first_address] }
Heat templates also allow inserting user data via cloud-init, e.g.:
server01:
    type: OS::Nova::Server
    properties:
      image: sl66
      flavor: cldareapd.xsmall
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            yum install -y httpd
            service httpd start
            iptables -I INPUT 4 -m state --state NEW -p tcp --dport 80 -j ACCEPT
            service iptables save
            service iptables restart
Resource startup order can be managed in Heat, as explained in this page. For example it is possible to create a VM only after another one has been successfully started.
The Heat Orchestration Template (HOT) specification is available here.

7.2. Creating a stack

To create a stack using the browser please select the Orchestration menu and choose Stacks.
Then click Launch Stack. You will be prompted to select a template.
You will then be asked to fill in the parameters of the template and launch the stack.
Then you can follow the status of your stack on the dashboard.
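A stack can also be created from the command line with the Heat client. The following is a sketch, assuming the template of Section 7.1 has been saved as my_stack.yaml and that the openrc file has already been sourced:
$ heat stack-create -f my_stack.yaml -P instance_name=myvm my_stack
The status of the stack can then be followed with heat stack-list, and its details (including the outputs) inspected with heat stack-show my_stack.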

Chapter 8. Managing Images

In a cloud environment, Virtual Machines are instantiated from images. These images are registered in an Image Management Service, in our case provided by the Glance OpenStack component.

8.1. Public Images

Some images in the Cloud Area Padovana are provided by the Cloud administrators. These images are public, and visible to all users. They appear in the Public tab of the Images & Snapshots menu.
The SL6x-INFNPadova-x86-64-<date> and CentOS7x-INFNPadova-x86-64-<date> images are basic SL6.x / CentOS 7.x images which also include cloud-init to perform contextualization based on the user data specified when the VMs are instantiated. They also configure CVMFS and the relevant squid servers.
Such images also configure the Padova LDAP server for user authentication. This means that it is just necessary to “enable” the relevant accounts on the VM, adding to the /etc/passwd file:
+name1::::::
+name2::::::
...
and creating their home directories.
Changes done in /etc/passwd might not be applied immediately by the system. In this case a:
nscd -i passwd
should help.

Note

The SL6x-INFNPadova-x86-64-<date> and CentOS7x-INFNPadova-x86-64-<date> images also allow INFN-Padova system administrators to log in (with admin privileges) to the instance.
The INFN-Padova Computing and Network service () can provide support only for instances created using such images (and only to INFN-Padova users).

8.2. User Provided Images

Along with the standard images, users can provide their own images and upload them to the Cloud Image service: these images are private, meaning that they are only available to the users of the project they are uploaded for.

Note

Users are not allowed to publish public (i.e. available to all projects) images. This means that you must not use the --public flag with the glance image-create command, otherwise you will get a rather cryptic error message such as:
Request returned failure status.
403 Forbidden
Access was denied to this resource.
    (HTTP 403)
Many open source projects such as Ubuntu and Fedora now produce pre-built images which can be used for certain clouds. When available, using these images is much easier than building your own.
Building your own images is also possible, as described in Section 8.4, “Building Images”.
Taking the Ubuntu image as an example: after downloading the image from the relevant web site, to upload it using the command line tools it is first of all necessary to authenticate to OpenStack by sourcing the openrc file, and then to issue the following command:
glance image-create --name=ubuntu-trusty --disk-format=qcow2 \
--container-format=bare --property hypervisor_type=qemu \
--file trusty-server-cloudimg-amd64-disk1.img
Once loaded, the image can then be used to create virtual machines.
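For instance (a sketch: the flavor, key pair and VM name are example values to adapt), a VM can be booted from the uploaded image with:
nova boot --image ubuntu-trusty --flavor cldareapd.xsmall --key-name my_key my-ubuntu-vm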
Some system software is delivered in ISO image format. For example, these steps show how to create an image from the FreeDOS ISO available at http://www.freedos.org/download/ and downloaded to fd11src.iso:
glance image-create --name freedos11 --disk-format=iso \
--container-format=bare --file=fd11src.iso

Note

In the production Cloud only images with a maximum size of 25 GB are allowed.

Warning

No backup is currently done on user provided images and on snapshots. Therefore users with private images should keep a copy of the images they have uploaded in their private archives.

8.3. Sharing Images

As mentioned before, users are not allowed to publish public images. However images can be shared between different projects. This is currently only possible via the command line tools.
If an image has been uploaded to your currently active project, using the procedure described in Section 8.2, “User Provided Images”, you can then use the glance member-create operations to share that image with another project.
To share an image, first source the project profile for the project containing the image you want to share and find its id with the glance image-show command (e0d35322-d509-4202-9b40-7d74e7aca0b6 in the example):
$ glance image-show ubuntu-trusty
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| Property 'hypervisor_type' | qemu                                 |
| checksum                   | 2538873a402068e7c48ba5e0b896d222     |
| container_format           | bare                                 |
| created_at                 | 2014-10-25T05:06:13                  |
| deleted                    | False                                |
| disk_format                | qcow2                                |
| id                         | e0d35322-d509-4202-9b40-7d74e7aca0b6 |
| is_public                  | False                                |
| min_disk                   | 0                                    |
| min_ram                    | 0                                    |
| name                       | ubuntu-trusty                        |
| owner                      | beaeede3841b47efb6b665a1a667e5b1     |
| protected                  | False                                |
| size                       | 255853056                            |
| status                     | active                               |
| updated_at                 | 2014-10-25T05:07:01                  |
+----------------------------+--------------------------------------+
You now need to find the id of the project you wish to share the image with. This will generally be done by looking at the openrc file of that project and finding the OS_TENANT_ID variable (in this example, it is 65bf648d-8d07-48af-8c3a-fa162a5b283f).
Using this tenant ID as an argument of the command line of member-create, the image (e0d35322-d509-4202-9b40-7d74e7aca0b6 in this example) can then be shared:
glance  member-create e0d35322-d509-4202-9b40-7d74e7aca0b6 65bf648d-8d07-48af-8c3a-fa162a5b283f # share it
 
glance member-list --image-id e0d35322-d509-4202-9b40-7d74e7aca0b6 # check it is shared

8.4. Building Images

Users can also build custom images, which can then be uploaded to the Cloud Image service as described in Section 8.2, “User Provided Images”.
There are several tools providing support for image creation. Some of them are described in the Openstack documentation.
Oz is one such tool: it automates the process of creating a virtual machine image file. Besides the official Oz documentation, some notes (and examples) to build Scientific Linux 6.x and CentOS 6.x resizable images in QCOW2 format are available here.
The public SL6.x images of the Cloud Area Padovana are built using Oz. The relevant source files are available here.
A machine where Oz is installed is available to the users of the Cloud Area Padovana. To have access to this machine, please contact the administrators at . Oz requires a configuration file that should be created as follows, replacing USERNAME with your login name:
$ cat ~/.oz/oz.cfg

[paths]
output_dir = /tmp/USERNAME/oz/images
data_dir = /tmp/USERNAME/oz/data
screenshot_dir = /tmp/USERNAME/oz/screenshot

[libvirt]
uri=qemu:///system

[cache]
original_media = yes
modified_media = no
jeos = no
The directories specified in this configuration file must be manually created.
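For the configuration above, they can be created with:
mkdir -p /tmp/USERNAME/oz/images /tmp/USERNAME/oz/data /tmp/USERNAME/oz/screenshot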

8.4.1. Enabling INFN Padova LDAP based authentication on the Virtual Machine

When creating a custom image, it might be necessary to enable an LDAP server to manage user authentication. This section explains how to enable INFN Padova's LDAP server for user authentication on the VMs of the Cloud. To do that, the following LDAP client configurations, targeted to SL6.x systems, need to be available on the image used to start the VMs.
First of all, the following packages must be installed:
  • openssl
  • openldap
  • openldap-clients
  • pam-ldap
  • nss-pam-ldapd
  • nss-tools
  • nscd
Then the following files (included in the ldap.tar file) must be installed on the Virtual Machine:
  • /etc/openldap/cacerts/cacert.pem
  • /etc/openldap/ldap.conf
  • /etc/pam_ldap.conf
  • /etc/nsswitch.conf
  • /etc/nslcd.conf
  • /etc/pam.d/system-auth-ac
  • /etc/pam.d/password-auth-ac
To do that, it is enough to log on the VM and:
cd /
tar xvf /path/ldap.tar
Make sure that the following links exist:
/etc/pam.d/password-auth -> password-auth-ac
/etc/pam.d/system-auth -> system-auth-ac
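If they are missing, they can be created as follows:
cd /etc/pam.d
ln -s password-auth-ac password-auth
ln -s system-auth-ac system-auth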
Then it is necessary to start the nslcd and nscd services:
service nslcd start
service nscd start
chkconfig nslcd on
chkconfig nscd on
Then it is just necessary to “enable” the relevant accounts on the VM, adding to the /etc/passwd file:
+name1::::::
+name2::::::
...
and creating their home directories.
Changes done in /etc/passwd might not be applied immediately by the system. In this case a:
nscd -i passwd
should help.

Note

Please note that the SL6x-INFNPadova-x86-64-<date> and CentOS7x-INFNPadova-x86-64-<date> images already have the LDAP client properly configured to use the Padova LDAP server. Using these images it is just necessary to enable the relevant users in /etc/passwd and create their home directories.

8.5. Deleting Images

Images that are no longer used can be deleted. Deletion of images is permanent and cannot be reversed.
To delete an image, log in to the dashboard and select the appropriate project from the drop down menu at the top left. On the Project tab, open the Compute tab and click the Images category. Select the images that you want to delete and click Delete Images. In the Confirm Delete Images dialog box, click Delete Images to confirm the deletion.

Warning

Don't delete an image if there are virtual machines created using this image, otherwise these VMs won't be able to start if hard rebooted.

Chapter 9. Creating Batch Clusters on the Cloud

The virtual machines provided by the Cloud Area Padovana can also be used to implement batch clusters where users can run their applications (that can be normal jobs or docker containers).
In this chapter we explain how to implement a dynamic batch cluster based on HTCondor.

9.1. Intro: the idea

You create on the cloud a virtual machine that acts as a master for a dynamic batch system (implemented using HTCondor). When you create the master, using the instructions reported in Section 4.1, “Creating Virtual Machines”, you will need to specify some user-data to describe the cluster configuration, as described below.
The master node will be able to spawn (using the EC2 euca2ools) new slave nodes (where jobs are executed) when jobs are submitted to the batch system. The elastic cluster will provide a number of virtual resources that scales up or down depending on your need. The total number of active virtual nodes is dynamic.
The master node will act also as submitting machine: you can log in on this node and submit jobs to the batch system. These jobs will be run on the slave nodes, get done, and eventually the slaves will be released.

Note

The master and the slaves must use the same image (which should have all the needed software). However the master can use a different flavor with respect to the slave nodes.

9.2. Prerequisites

  • You should be registered in the Cloud as member of a project.
  • You need to have created an SSH key-pair, as explained in Section 3.2, “Creating a keypair”. This will allow you to log in to the master and to the slave nodes.
  • You need to download the EC2 credentials of the project you want to use (see Section 3.7, “Accessing the Cloud through the euca2ools EC2 command line tools”). You can download them from the dashboard as follows:
    Open the dashboard, select the project (drop down menu on top left of the dashboard), go to Access & Security (under the 'Compute' section), go to the 'Api Access' section and there click on [Download EC2 credentials].
    You'll get a zip file with a name like: Project-x509.zip , where Project is the one you have chosen. The zip contains the following files:
    $ unzip Project-x509.zip
    Archive:  Project-x509.zip
    extracting: pk.pem
    extracting: cert.pem
    extracting: cacert.pem
    extracting: ec2rc.sh
    
    Extract all these files somewhere safe. The content of your ec2rc.sh file is something like:
    $ cat ec2rc.sh
    
    #!/bin/bash
    
    NOVARC=$(readlink -f "${BASH_SOURCE:-${0}}" 2>/dev/null) || NOVARC=$(python -c 'import os,sys; print os.path.abspath(os.path.realpath(sys.argv[1]))' "${BASH_SOURCE:-${0}}")
    NOVA_KEY_DIR=${NOVARC%/*}
    export EC2_ACCESS_KEY=<access_key>
    export EC2_SECRET_KEY=<secret_key>
    export EC2_URL=https://cloud-areapd.pd.infn.it:8773/services/Cloud
    export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
    export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
    export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
    export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
    export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
    
    alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
    alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
    
    where EC2_ACCESS_KEY and EC2_SECRET_KEY are different for each project and user. The elastiq service uses these values to instantiate and to kill VMs in the specific project.
  • You need to identify the image to be used for the master and for the slaves. Currently supported operating systems are RHEL6.x and derivatives (SL6.x, CentOS6.x, etc.), RHEL7.x and derivatives, and Ubuntu. uCernVM based images are also supported. For such an image you also need to know the relevant EC2 (AMI) id (see Section 9.8, “How to find the EC2 (AMI) id of an image”).
  • You need to set a specific security group to be used for the master node. This security group must include the following rules:
    Direction    Ether Type    IP Protocol    Port Range         Remote IP Prefix
    Egress       IPv4          Any            Any                0.0.0.0/0	
    Egress       IPv6          Any            Any                ::/0	
    Ingress      IPv4          ICMP           Any                0.0.0.0/0 	
    Ingress      IPv4          TCP            22 (SSH)           0.0.0.0/0	
    Ingress      IPv4          TCP            9618               0.0.0.0/0	
    Ingress      IPv4          TCP            41000 - 42000      0.0.0.0/0
    

    Note

    Instead of modifying the rules of an existing security group, we suggest creating a new security group named e.g. "master_security_group". Security groups are discussed in Section 3.3, “Setting security group(s)”.
    The slave nodes will instead use the default security group of your project. This group must include the following rule:
    Direction  Ether Type  IP Protocol  Port Range   Remote IP Prefix   Remote Security Group
    Ingress    IPv4        Any          Any          -                 <master_security_group>
    where <master_security_group> is the name of the security group that was chosen for the master node.
  • You need to download the ECM software. As explained in Section 9.3, “The cluster configuration”, it will be used to create the batch cluster configuration:
    $ git clone https://github.com/CloudPadovana/ECM.git
    

9.3. The cluster configuration

Properly configure the ecm.conf file stored in the ECM directory (created when you cloned the ECM software via git):
$ cat ecm.conf
FLAVOR_VMS=<flavor_name>
MAX_VMS=<max_vms>
MIN_VMS=<min_vms>
JOBS_PER_VM=<jobs_per_vm>
IDLE_TIME=<idle_time>
KEY_NAME=<ssh_key_name>
Where:
  • <FLAVOR_VMS> is the name of the flavor to be used for the slave nodes. Flavors have been discussed in Section 4.5, “Flavors”. Available flavors are listed in the dashboard when you launch a VM. Two useful flavors are the following:
    • cldareapd.small
      1 VCPU, 2GB RAM
    • cldareapd.medium
      2 VCPU, 4GB RAM
  • <MAX_VMS> is the maximum number of slave nodes that can be instantiated.
  • <MIN_VMS> is the minimum number of slave nodes (never terminated, always available).
  • <JOBS_PER_VM> is the maximum number of jobs that will be run in a single slave node.

    Important

    You have to verify that the number of jobs per VM is compatible with the number of VCPUs of the selected flavor.
  • <IDLE_TIME> is the time (in seconds) after which inactive VMs will be killed.
  • <KEY_NAME> is the ssh key (previously created, as explained in Section 3.2, “Creating a keypair”) to be injected in the batch cluster nodes.
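For reference, this is an example of a filled-in ecm.conf (a sketch: the flavor, the numbers and the key name are example values to adapt to your project):
FLAVOR_VMS=cldareapd.medium
MAX_VMS=10
MIN_VMS=1
JOBS_PER_VM=2
IDLE_TIME=600
KEY_NAME=my_key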

Note

The batch system will use each VCPU as a separate job slot. So if you have a flavor with 4 VCPUs, and you submit 1 job, the master will create 1 slave and use 1 of the 4 available VCPUs. If you submit 4 jobs, again the master will create 1 slave, and will use all the 4 VCPUs. Larger flavors mean fewer machines to be created but possibly a sub-optimal usage of resources.

9.4. Start the elastic cluster

To start the elastic cluster, you only have to instantiate the master node. When you create the master, you will need to specify some user-data describing the cluster configuration. The ecm.py script creates such a user-data file, using as input the ecm.conf file previously filled in (see Section 9.3, “The cluster configuration”).
First of all you have to set the relevant EC2 credentials:
$ source ec2rc.sh
Then launch the ecm.py script and follow the instructions:
$ python ecm.py
Choose the operating system installed in the image that you will use for both the master and the worker nodes.
Choose the Operating System (OS) type that you want to use for your master and worker nodes:
1: Scientific Linux 6
2: Ubuntu
3: uCernVM
4: CentOS 7
OS type ?

Important

You can choose only one of these operating systems, and you have to use it to instantiate both the master node and the worker nodes.
Select the image that you want to use
Select the image for your Scientific Linux 6 based master and your Scientific Linux 6 based WNs:
1: SL68-x86_64-20161107
2: SL67-x86_64-20151017
3: SL66-x86_64-20150521
4: SL66-x86_64-20150309
5: SL66-x86_64-20150131
6: SL65-x86_64-20151029
7: Other image. [WARNING] You have to know the EC2-id of image

Warning

If you choose "Other image" you have to manually insert the image id in EC2 format (see Section 9.8, “How to find the EC2 (AMI) id of an image”).
The script will now print something like:
    Now you can use the master-scientific-2016-12-04-13.39.05 file to
    instantiate the master node of your elastic cluster.
Now you have to start the master node. As explained in Section 4.1, “Creating Virtual Machines”, go to 'Instances' and create a new instance with [Launch Instances].
In the details select:
[details]
  • Instance Name => whatever you like
  • Flavor => whatever you like; can be different from the flavor chosen for the slave nodes
  • Image name => The same image chosen for the slaves.
[Access and Security]
  • Key pair => The key-pair that will be used to log on the nodes of the batch cluster
  • Security Group => the security group for the master (choose only this one)
[post creation]
  • Customization Script Source => the user_data_file created by the ecm.py script
Then press launch.
Once you have requested the creation of the master node, after some minutes you will see the master virtual machine and some slave nodes (depending on the MIN_VMS attribute you defined in ecm.conf) being created. Get the IP address of the master, and log in on this machine using the key you imported/created before, e.g.:
$ ssh -i ~/.ssh/id_rsa root@10.64.YY.XXX
For security reasons, as root you can not submit jobs to the HTCondor cluster. So make sure that a 'normal' account exists on the master node. If needed, you can create one using the command:
# adduser <username>
Create a password for this account:
# passwd <username>
You might also consider relying on LDAP based user authentication, as explained in Section 8.4.1, “Enabling INFN Padova LDAP based authentication on the Virtual Machine”.
You have to attach any external disks, create home directories, etc., as you would do on a normal machine.

9.5. How slave nodes are installed and configured

The instantiation of slave nodes is managed by the elastiq service running in the master node.
The minimum and maximum number of nodes is set by the user in ecm.conf: the total number of active nodes will change dynamically with the job load.
The installation of condor and its configuration on the slaves is automatically provided by ECM. Inside the user_data_file created by the ecm.py script there is the parameter SLAVE_USERDATA, whose value is the script for the installation and configuration of the slave, encoded in base64. The original unencoded script used for condor installation and configuration on the slaves is stored in the ECM/slave_files directory. There is a file for each operating system, depending on the system selected for the master. These files provide the basic configuration for condor, to support both the Vanilla and Docker universes.
Generally the user doesn't need to modify these files. If, for some reason, you need to modify the condor configuration or to install additional packages on the slaves, this script can be modified: ECM will take care of encoding it in base64 and adding the new value to the SLAVE_USERDATA parameter in the user_data_file.

9.6. Use the elastic cluster

Log on the master node using your unprivileged account. Check if condor is running with the command:
# condor_q
-- Schedd: 10-64-20-181.virtual-analysis-facility : <10.64.20.181:41742>
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD

0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
Check the status of the cluster:
# condor_status
Name               OpSys      Arch   State     Activity LoadAv Mem   ActvtyTime

slot1@10-64-22-84. LINUX      X86_64 Unclaimed Idle      0.000 1977  2+12:44:58
slot2@10-64-22-84. LINUX      X86_64 Unclaimed Idle      0.000 1977  2+12:45:25
                     Total Owner Claimed Unclaimed Matched Preempting Backfill

        X86_64/LINUX     2     0       0         2       0          0        0

               Total     2     0       0         2       0          0        0
Create your HTCondor classad.
A simple example is the following:
$ cat test.classad

Universe = vanilla
Executable = /home/<username>/test.sh
Log = test.log.$(Cluster)$(Process)
Output = test.out.$(Cluster)$(Process)
Error = test.err.$(Cluster)$(Process)
Queue <number_of_jobs_to_submit>
where test.sh is the executable you want to run.
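For instance, a trivial test.sh (a hypothetical example, just to check that jobs run on the slaves) could be:
#!/bin/sh
# Print the name of the slave node executing the job, then keep the slot busy for a minute
echo "Running on $(hostname)"
sleep 60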
Submit your jobs issuing the command:
$ condor_submit test.classad
and check their status with:
$ condor_q

Note

You can find documentation about HTCondor here.

9.7. Use the elastic cluster to run docker containers

The HTCondor elastic cluster can also be used to run docker containers. You don't need to install docker on your images: this is done by ECM.
Once the cluster is created, check if Docker is enabled on the slaves:
# condor_status -l | grep -i Docker
StarterAbilityList = "HasTDP,HasEncryptExecuteDirectory,HasFileTransferPluginMethods,HasJobDeferral,HasJICLocalConfig,HasJICLocalStdin,HasPerFileEncryption,HasDocker,HasFileTransfer,HasReconnect,HasVM,HasMPI,HasRemoteSyscalls,HasCheckpointing"
DockerVersion = "Docker version 1.7.1, build 786b29d/1.7.1"
HasDocker = true
StarterAbilityList = "HasTDP,HasEncryptExecuteDirectory,HasFileTransferPluginMethods,HasJobDeferral,HasJICLocalConfig,HasJICLocalStdin,HasPerFileEncryption,HasDocker,HasFileTransfer,HasReconnect,HasVM,HasMPI,HasRemoteSyscalls,HasCheckpointing"
DockerVersion = "Docker version 1.7.1, build 786b29d/1.7.1"
HasDocker = true
The following is a simple example which runs a docker container downloaded from Docker Hub:
$ cat test-docker.classad

universe = docker
docker_image = debian
executable = /bin/cat  
arguments = /etc/hosts 
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
Log = test-docker.log.$(Cluster).$(Process)
Output = test-docker.out.$(Cluster).$(Process)
Error = test-docker.err.$(Cluster).$(Process)
request_memory = 100M
Queue <number_of_jobs_to_submit>
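The classad is then submitted exactly like an ordinary vanilla universe job:
$ condor_submit test-docker.classad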

9.8. How to find the EC2 (AMI) id of an image

As explained above, to use the Elastic batch cluster you need to know the EC2 (AMI) id of the image you want to use.
First of all you need to install the euca2ools and to download the EC2 credentials for your project, as explained in Section 3.7, “Accessing the Cloud through the euca2ools EC2 command line tools”.
Uncompress (unzip) the EC2 zip file, and source the ec2rc.sh script to set the correct environment:
$ source ec2rc.sh
To discover the EC2 id of your image or snapshot, use the 'euca-describe-images' command:
$ euca-describe-images

IMAGE   ami-0000031b    None (uCernVM 2.3-0)    beaeede3841b47efb6b665a1a667e5b1        available       public                  machine                         instance-store
IMAGE   ami-00000447    snapshot        36b1ddb5dab8404dbe7fc359ec95ecf5        available       public                  machine                         instance-store
Please note that for some euca2ools distributions sourcing the ec2rc.sh script is not enough: you need to explicitly specify the access and secret keys, and the endpoint, with the relevant command line options:
  euca-describe-images -I ${EC2_ACCESS_KEY} -S ${EC2_SECRET_KEY} -U ${EC2_URL}
In the example above:
  • the EC2 image id of the uCernVM 2.3-0 image is ami-0000031b
  • there is a snapshot whose EC2 id is ami-00000447.

Note

If snapshots appear in the output of euca-describe-images, you will notice that no name is associated with their ami-id. To obtain a nicely formatted list of (ami-id, name) couples you can use the following command:
$ euca-describe-images --debug 2>&1 | grep 'imageId\|name' | sed 'N;s/\n/ /'

      <imageId>ami-00000002</imageId>       <name>cirros</name>
      <imageId>ami-0000000d</imageId>       <name>Fedora 20 x86_64</name>
      <imageId>ami-00000010</imageId>       <name>Centos 6 x86_64</name>
      <imageId>ami-00000013</imageId>       <name>Centos 7 x86_64</name>
      <imageId>ami-0000001b</imageId>       <name>ubuntu-14.04.3-LTSx86_64</name>
      <imageId>ami-0000005d</imageId>       <name>matlab-2015a-glnxa64</name>
      <imageId>ami-00000057</imageId>       <name>Win7-Pro-X86_64-ENU</name>
      <imageId>ami-00000027</imageId>       <name>Fedora 23 x86_64</name>
      <imageId>ami-00000069</imageId>       <name>Win7-Photoscan</name>
      <imageId>ami-0000002d</imageId>       <name>photo-slave</name>
      <imageId>ami-0000004e</imageId>       <name>s-medium-snap</name>
      <imageId>ami-0000005a</imageId>       <name>uCernVM 3.6.5</name>
      <imageId>ami-000000ae</imageId>       <name>archlinux</name>
      <imageId>ami-000000c6</imageId>       <name>ubuntu-16.04.1-LTS x86_64</name>
      <imageId>ami-000000c9</imageId>       <name>Fedora 25 x86_64</name>
      <imageId>ami-000000d2</imageId>       <name>x2go-thinclient-server</name>
      <imageId>ami-000000d5</imageId>       <name>Win7-test</name>
      <imageId>ami-000000db</imageId>       <name>matlab-2016b</name>
      <imageId>ami-000000d8</imageId>       <name>ubuntu-16.04.1+Matlab_2016b</name>
...
      Note that items may appear multiple times...
...
You can also see all the information of an image, e.g.:
$ euca-describe-images --debug ami-00000447
or:
$ euca-describe-images -I ${EC2_ACCESS_KEY} -S ${EC2_SECRET_KEY} -U ${EC2_URL} --debug ami-00000447
The returned output will be something like:
  <requestId>req-c56c3694-c555-464a-b21d-2c86ccc020be</requestId>
  <imagesSet>
    <item>
      <description/>
      <imageOwnerId>36b1ddb5dab8404dbe7fc359ec95ecf5</imageOwnerId>
      <isPublic>true</isPublic>
      <imageId>ami-00000447</imageId>
      <imageState>available</imageState>
      <architecture/>
      <imageLocation>snapshot</imageLocation>
      <rootDeviceType>instance-store</rootDeviceType>
      <rootDeviceName>/dev/sda1</rootDeviceName>
      <imageType>machine</imageType>
      <name>cernvm_230_ldap_pd</name>
    </item>
  </imagesSet>
</DescribeImagesResponse>

Chapter 10. Some basics on Linux administration

10.1. Introduction

 
The possession of great power necessarily implies great responsibility.
 
 --William Lamb, 2nd Viscount Melbourne
The Cloud Area Padovana provides an infrastructure where your virtual machines can live. After you have activated your virtual machine(s) you are on your own for most of the day-to-day administration tasks.

Warning

We will only focus on Linux VMs, showing differences between the RedHat (CentOS, Fedora, ... ) and Debian (Ubuntu, Mint, ... ) distributions.
Throughout this chapter we will address the former with RH and the latter with DEB.
Some of these everyday tasks might be:
  • using your VM as root only when needed;
  • installing/removing software;
  • adding another user to your VM;
  • formatting the volume you just attached to your VM;
  • automatically remounting the volume on next reboot.
In the following sections we provide some very small introductory instructions on performing such tasks.

10.2. Setting up 'sudo'

Nobody (not even administrators) uses a Unix/Linux system always as root.
If you do, you should stop immediately (no joke!).
Normally you work with a user with limited privileges and, when needed, you use su (which stands for 'switch user') to become root, perform the privileged task, and then go back to the normal user.
A more flexible approach is using sudo (Super User DO) to perform the 'one shot' task or to allow certain users to perform only some tasks as the superuser. The configuration of sudo is performed by modifying the /etc/sudoers file or (better) by adding a file in the /etc/sudoers.d directory.
Follow these instructions to allow the user paolo (change this to your username) to perform any command as the superuser providing his own password, or to modify your user privileges (in this case there is already a file with your username in the /etc/sudoers.d directory):
  • Become the root user:
    • RH: using su and providing the root password
    • DEB: using sudo su - and providing your password
  • Create the file /etc/sudoers.d/paolo using your preferred editor;
  • Add this line inside the file:
          paolo            ALL = (ALL) ALL 
    
    If you want the user to do everything without even providing the password put this line instead:
          paolo            ALL = NOPASSWD: ALL
    
  • change the file permissions: chmod 0440 /etc/sudoers.d/paolo
Now when paolo logs in to the VM he is allowed to perform superuser tasks by prefixing the command with sudo.
If you want to limit the user to perform only certain commands (or classes of commands, e.g. installing/deinstalling software) you can look at the sudo documentation on your VM using
man sudoers
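For example (a sketch; the allowed command list is hypothetical), a line like the following in /etc/sudoers.d/paolo would let paolo run only the package managers as root:
      paolo            ALL = (root) /usr/bin/yum, /usr/bin/apt-get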

10.3. Managing software through the package manager

RedHat and Debian based linux distributions both have their own software management system. On RH, software is packaged in rpm format (RPM stands for RedHat Package Manager) while DEB uses deb packages.
Package contents are not only copied onto the VM when installed, but are also recorded in a database that can be queried to search for new software, to find out which package installed which files, and so on. (Note: there is no such functionality on Windows servers...)
Rather than manipulating the packages directly with the rpm (RH) or dpkg (DEB) commands, you will mostly use a command line package manager such as yum or apt.
We will now try to install the wget package on a RH and a DEB system.

10.3.1. Managing software on RH distributions

Let's try to install the wget software on CentOS or Fedora linux.
We will use yum (dnf on Fedora 21) and rpm for the task.
Since we will be performing operations as the superuser, if you haven't already, please set up sudo first.
  1. Query the system to search for wget (no need to be root for that):
    [paolo@maz03 ~]$ yum search wget
    Loaded plugins: fastestmirror
    Trying other mirror.
    base                                                     | 3.6 kB     00:00     
    extras                                                   | 3.4 kB     00:00     
    updates                                                  | 3.4 kB     00:00     
    (1/4): base/7/x86_64/group_gz                              | 154 kB   00:00     
    (2/4): extras/7/x86_64/primary_db                          |  87 kB   00:00     
    (3/4): updates/7/x86_64/primary_db                         | 3.9 MB   00:00     
    (4/4): base/7/x86_64/primary_db                            | 5.1 MB   00:01     
    Determining fastest mirrors
     * base: mi.mirror.garr.it
     * extras: mi.mirror.garr.it
     * updates: mi.mirror.garr.it
    ============================== N/S matched: wget ===============================
    wget.x86_64 : A utility for retrieving files using the HTTP or FTP protocols
    
      Name and summary matches only, use "search all" for everything.
    
  2. Install the wget package (note that we are using sudo here):
    [paolo@maz03 ~]$ sudo yum install wget
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
     * base: mi.mirror.garr.it
     * extras: mi.mirror.garr.it
     * updates: mi.mirror.garr.it
    Resolving Dependencies
    --> Running transaction check
    ---> Package wget.x86_64 0:1.14-10.el7_0.1 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package        Arch             Version                   Repository      Size
    ================================================================================
    Installing:
     wget           x86_64           1.14-10.el7_0.1           base           545 k
    
    Transaction Summary
    ================================================================================
    Install  1 Package
    
    Total download size: 545 k
    Installed size: 2.0 M
    Is this ok [y/d/N]: y
    Downloading packages:
    wget-1.14-10.el7_0.1.x86_64.rpm                            | 545 kB   00:00     
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : wget-1.14-10.el7_0.1.x86_64                                  1/1 
      Verifying  : wget-1.14-10.el7_0.1.x86_64                                  1/1 
    
    Installed:
      wget.x86_64 0:1.14-10.el7_0.1                                                 
    
    Complete!
    
  3. Query the rpm database to see what has been installed:
    [paolo@maz03 ~]$ rpm -ql wget
    /etc/wgetrc
    /usr/bin/wget
    /usr/share/doc/wget-1.14
    /usr/share/doc/wget-1.14/AUTHORS
    /usr/share/doc/wget-1.14/COPYING
    /usr/share/doc/wget-1.14/MAILING-LIST
    /usr/share/doc/wget-1.14/NEWS
    /usr/share/doc/wget-1.14/README
    /usr/share/doc/wget-1.14/sample.wgetrc
    /usr/share/info/wget.info.gz
    /usr/share/locale/be/LC_MESSAGES/wget.mo
    .....
    .....
    /usr/share/locale/sv/LC_MESSAGES/wget.mo
    /usr/share/locale/tr/LC_MESSAGES/wget.mo
    /usr/share/locale/uk/LC_MESSAGES/wget.mo
    /usr/share/locale/vi/LC_MESSAGES/wget.mo
    /usr/share/locale/zh_CN/LC_MESSAGES/wget.mo
    /usr/share/locale/zh_TW/LC_MESSAGES/wget.mo
    /usr/share/man/man1/wget.1.gz
    
  4. You now decide you don't need wget anymore. Remove the package (root needed!):
    [paolo@maz03 ~]$ sudo yum remove wget
    Loaded plugins: fastestmirror
    Resolving Dependencies
    --> Running transaction check
    ---> Package wget.x86_64 0:1.14-10.el7_0.1 will be erased
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package       Arch            Version                     Repository      Size
    ================================================================================
    Removing:
     wget          x86_64          1.14-10.el7_0.1             @base          2.0 M
    
    Transaction Summary
    ================================================================================
    Remove  1 Package
    
    Installed size: 2.0 M
    Is this ok [y/N]: y
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Erasing    : wget-1.14-10.el7_0.1.x86_64                                  1/1 
      Verifying  : wget-1.14-10.el7_0.1.x86_64                                  1/1 
    
    Removed:
      wget.x86_64 0:1.14-10.el7_0.1                                                 
    
    Complete!
    

10.3.2. Managing software on DEB distributions

Let's try to install the wget software on Debian or Ubuntu linux.
We will use apt and dpkg for the task.
Since we will be performing operations as the superuser, if you haven't already, please set up sudo first.
  1. Update your local cache of available softwares (superuser privileges needed):
    ubuntu@maz03:~$ sudo apt-get update
    sudo: unable to resolve host maz03
    Ign http://nova.clouds.archive.ubuntu.com trusty InRelease
    Ign http://nova.clouds.archive.ubuntu.com trusty-updates InRelease
    Hit http://nova.clouds.archive.ubuntu.com trusty Release.gpg
    Get:1 http://nova.clouds.archive.ubuntu.com trusty-updates Release.gpg [933 B]
    Hit http://nova.clouds.archive.ubuntu.com trusty Release
    Ign http://security.ubuntu.com trusty-security InRelease
    .....
    .....
    Fetched 10.2 MB in 3s (3257 kB/s)                                              
    Reading package lists... Done
    
  2. Query the cache for wget (no privileges needed).
    Note that DEB systems also query descriptions and 'related' software.
    ubuntu@maz03:~$ apt-cache search wget
    devscripts - scripts to make the life of a Debian Package maintainer easier
    texlive-latex-extra - TeX Live: LaTeX additional packages
    wget - retrieves files from the web
    abcde - A Better CD Encoder
    apt-mirror - APT sources mirroring tool
    apt-zip - Update a non-networked computer using apt and removable media
    axel - light download accelerator - console version
    axel-dbg - light download accelerator - debugging symbols
    axel-kapt - light download accelerator - graphical front-end
    filetea - Web-based file sharing system
    getdata - management of external databases
    libcupt3-0-downloadmethod-wget - alternative front-end for dpkg -- wget download method
    puf - Parallel URL fetcher
    pwget - downloader utility which resembles wget (implemented in Perl)
    snarf - A command-line URL grabber
    wput - tiny wget-like ftp-client for uploading files
    
  3. Install wget as the superuser:
    ubuntu@maz03:~$ sudo apt-get install wget
    sudo: unable to resolve host maz03
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following NEW packages will be installed:
      wget
    0 upgraded, 1 newly installed, 0 to remove and 25 not upgraded.
    Need to get 269 kB of archives.
    After this operation, 651 kB of additional disk space will be used.
    Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty-updates/main wget amd64 1.15-1ubuntu1.14.04.1 [269 kB]
    Fetched 269 kB in 0s (1218 kB/s)
    Selecting previously unselected package wget.
    (Reading database ... 51118 files and directories currently installed.)
    Preparing to unpack .../wget_1.15-1ubuntu1.14.04.1_amd64.deb ...
    Unpacking wget (1.15-1ubuntu1.14.04.1) ...
    Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
    Processing triggers for install-info (5.2.0.dfsg.1-2) ...
    Setting up wget (1.15-1ubuntu1.14.04.1) ...
    
  4. Query the deb database and see what files have been installed by wget:
    ubuntu@maz03:~$ dpkg -L wget
    /.
    /usr
    /usr/bin
    /usr/bin/wget
    /usr/share
    /usr/share/man
    /usr/share/man/man1
    /usr/share/man/man1/wget.1.gz
    /usr/share/info
    /usr/share/info/wget.info.gz
    /usr/share/doc
    /usr/share/doc/wget
    /usr/share/doc/wget/copyright
    /usr/share/doc/wget/AUTHORS
    /usr/share/doc/wget/NEWS.gz
    /usr/share/doc/wget/MAILING-LIST
    /usr/share/doc/wget/README
    /usr/share/doc/wget/changelog.Debian.gz
    /etc
    /etc/wgetrc
    
  5. You now decide you don't need wget anymore. Remove the wget software from the system (keep config files).
    Note: you can alternatively 'purge' the software completely, as shown in the 'purge' step below.
    ubuntu@maz03:~$ sudo apt-get remove wget
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following packages will be REMOVED:
      wget
    0 upgraded, 0 newly installed, 1 to remove and 25 not upgraded.
    After this operation, 651 kB disk space will be freed.
    Do you want to continue? [Y/n] Y
    (Reading database ... 51129 files and directories currently installed.)
    Removing wget (1.15-1ubuntu1.14.04.1) ...
    Processing triggers for install-info (5.2.0.dfsg.1-2) ...
    Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
    
  6. Discover which files have been left behind by the wget software:
    ubuntu@maz03:~$ dpkg -L wget
    /etc
    /etc/wgetrc
    
  7. Completely remove (purge) all the files installed by wget:
    ubuntu@maz03:~$ sudo apt-get purge wget
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following packages will be REMOVED:
      wget*
    0 upgraded, 0 newly installed, 1 to remove and 25 not upgraded.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue? [Y/n] Y
    (Reading database ... 51119 files and directories currently installed.)
    Removing wget (1.15-1ubuntu1.14.04.1) ...
    Purging configuration files for wget (1.15-1ubuntu1.14.04.1) ...
    

10.4. Adding a user to your VM

You may need to give access to your VM to another user. Given that there are no graphical tools or fancy icons for the task, you are going to use some command line tools.
We are going to add the user 'pemazzon' (Paolo E. Mazzon) to your system.
  1. $ sudo useradd -m -c 'Paolo E. Mazzon' pemazzon
    
    The meaning of parameters is:
    • -m = create a 'home directory' for the user under /home
    • -c = set this as a description of the user
  2. Warning

    It may be necessary to enable password authentication through ssh. Check the file /etc/ssh/sshd_config and be sure that you have
    ChallengeResponseAuthentication yes
    inside. If you modified that file restart the ssh service using
    DEB systems: sudo restart ssh
    or
    RH systems: sudo systemctl restart sshd
  3. Set a password for the user: choose a password that will be valid just for the first login, since you will force the user to change it immediately.
    $ sudo passwd pemazzon
    
    ... enter twice the
        password you want to
        set for the user ...
    
  4. Force the user to change his password on first logon:
    $ sudo chage -d 0 pemazzon
    
  5. Mail the user the password you have set.

10.5. Formatting/resizing a volume you just attached

We already showed in Section 6.2.2, “Using (attaching) a Volume” how to start using a volume you have attached to your VM. Here we give some more details.
If you just created an empty volume you first need to create a filesystem on it before you can put any data in it. The volume you just attached is merely 'raw space' and has no notion of files and directories.
You may also think about partitioning your volume, e.g. to split volume space in 'slices', as you may have done installing linux.
Given that in the Cloud Area Padovana you can add as many volumes as you want (up to your volume quota, of course), partitioning a volume is simply not recommended.
Suppose now that you have filled the volume space. You have the option to resize it from the cloud dashboard but the result may not be the one you expect until you do some operations from inside your VM.
We are going to resize the volume 'test' from 2 to 4 GB and use the newly available space on a VM.
We will create the volume from scratch. Obviously you can jump directly to the resize steps below if you are resizing an existing volume.
  1. Create a 2 GB volume named 'test' and attach it to one of your VM as described in Section 6.2, “Volumes”
  2. Create a filesystem and mount it as described in Section 6.2.2, “Using (attaching) a Volume”
  3. Check the available space is 2 GB and the filesystem is filling up the partition
    ubuntu@maz03:~$ sudo fdisk -l /dev/vdb
    
    Disk /dev/vdb: 2154 MB, 2154823680 bytes
    15 heads, 30 sectors/track, 9352 cylinders, total 4208640 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/vdb doesn't contain a valid partition table
    
    ubuntu@maz03:~$ df -k /mnt
    Filesystem     1K-blocks  Used Available Use% Mounted on
    /dev/vdb         2005688  3096   1880992   1% /mnt
    
Let's resize the volume
  1. Umount it first from the VM (if mounted):
    ubuntu@maz03:~$ sudo umount /dev/vdb
    
  2. Detach it from the VM using the dashboard: use "Edit Attachments" and confirm your request.
  3. When the volume is detached the "Extend Volume" option will be available. Select it...
  4. ... and grow the volume to, say, 4GB:
  5. Now attach the volume again to the VM and let's check, from inside the VM, what's happening:
    ubuntu@maz03:~$ sudo mount /dev/vdb /mnt
    ubuntu@maz03:~$ sudo fdisk -l /dev/vdb
    
    Disk /dev/vdb: 4309 MB, 4309647360 bytes
    16 heads, 63 sectors/track, 8350 cylinders, total 8417280 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/vdb doesn't contain a valid partition table
    
    The disk size is now 4309 MB, so the system recognizes that the volume has grown.
    Let's check the available space:
    ubuntu@maz03:~$ df -k /mnt
    Filesystem     1K-blocks  Used Available Use% Mounted on
    /dev/vdb         2005688  3096   1880992   1% /mnt
    
    we see here that it is still 2 GB! This is due to the fact that the filesystem has not been touched by the resize operation: the volume service of the cloud has no knowledge of what's inside your volume.
    To use the new space we need to resize the filesystem, obviously from inside the VM, to let it span the whole volume:
    ubuntu@maz03:~$ sudo umount /mnt
    ubuntu@maz03:~$ sudo resize2fs /dev/vdb
    resize2fs 1.42.9 (4-Feb-2014)
    old_desc_blocks = 1, new_desc_blocks = 1
    The filesystem on /dev/vdb is now 1052160 blocks long.
    
    ubuntu@maz03:~$ sudo mount /dev/vdb /mnt
    ubuntu@maz03:~$ df -k /mnt
    Filesystem     1K-blocks  Used Available Use% Mounted on
    /dev/vdb         4078888  4120   3873956   1% /mnt
    
    You can now see you have all 4 GB available.

10.6. Automatically remount volumes on reboot

Connecting a volume to your VM using the 'mount' command is a one-shot solution. If you need to reboot your VM for some reason you will have to re-issue the command.
Forgetting to do so might cause the following:
  1. You write data under the /mnt directory (or wherever you mount your volume) thinking you are writing on your volume with, say, 1 TB of space;
  2. The volume is not mounted there so you are writing instead on the same space where your operating system lives;
  3. You eventually fill up your filesystem and your VM crashes/starts malfunctioning;
  4. Your VM might not boot anymore and you have to call for help.
We will now create an entry in the /etc/fstab file to remount the volume(s) upon reboot.

Warning

A big warning! DO NOT edit the /etc/fstab file by transferring it to a Windows machine and then back to your VM. Bad things will happen...
The /mnt directory is normally used as the 'mount point' for various devices. Normally you would create a directory under /mnt for each device and attach the device on that directory. Obviously this is not mandatory: you can mount filesystems almost everywhere (e.g. /data, /opt/myprograms and so on).
All the operations will be performed as the superuser.
  1. Acquire root privileges
    ubuntu@maz03:~$ sudo su -
    root@maz03:~#
    
  2. Create the 'mount point'
    root@maz03:~# mkdir -p /mnt/volume1
    
  3. Edit the /etc/fstab file: we will use the 'nano' editor for that:
    root@maz03:~# nano /etc/fstab
    
  4. Add a line telling the system you want to mount the device /dev/vdb under /mnt/volume1 (you have already created an ext4 filesystem on it).
    Your file should end with a line like the following (a sketch, assuming the /dev/vdb device and the ext4 filesystem used above):
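    # device    mount point    fs type  options   dump  pass
    /dev/vdb    /mnt/volume1   ext4     defaults  0     2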
  5. Write your file to disk by pressing CTRL+o ...
    ... and confirming with enter.
  6. Exit the editor by pressing CTRL+x. Go back to your normal user by issuing the 'exit' command or by pressing CTRL+d
Now your volume will appear under the '/mnt/volume1' directory every time your VM boots up. You can also mount the volume by just issuing
sudo mount /mnt/volume1
The system will look up the /etc/fstab file and mount the correct volume corresponding to the /mnt/volume1 mount point.