Nova: NFS shared data store

If you want to use features like live migration you will need a shared data store. One option (not the best, but good enough for testing) is to share a folder from one server using NFS.

By default, instances are stored locally on each compute host under /var/lib/nova/instances. To create a shared data store we are going to follow these steps:

  • Install and configure NFS on the controller node
  • Share the folder /var/lib/nova/instances with our compute hosts
  • Allow the NFS port through the firewall
  • Mount the shared folder over the local /var/lib/nova/instances on each compute node

To install and enable NFS, execute the following commands

yum install nfs-utils
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

Now edit the file /etc/exports and add the following lines, replacing the IPs with those of your compute nodes

/var/lib/nova/instances 192.168.3.21(rw,sync,no_root_squash)
/var/lib/nova/instances 192.168.3.41(rw,sync,no_root_squash)

Now execute the following commands to export the folders and restart the NFS service (the restart is not strictly necessary, as the first command should export the shares on its own, but I have occasionally run into issues and had to restart the service).

exportfs -avr
systemctl restart nfs-server

The next step is to allow the NFS connections. Add the following lines to the file /etc/sysconfig/iptables after the other INPUT rules

-A INPUT -s 192.168.3.21/32 -p tcp -m multiport --dports 2049 -m comment --comment "NFS shared with compute node 1" -j ACCEPT
-A INPUT -s 192.168.3.21/32 -p udp -m multiport --dports 2049 -m comment --comment "NFS shared with compute node 1" -j ACCEPT
-A INPUT -s 192.168.3.41/32 -p tcp -m multiport --dports 2049 -m comment --comment "NFS shared with compute node 2" -j ACCEPT
-A INPUT -s 192.168.3.41/32 -p udp -m multiport --dports 2049 -m comment --comment "NFS shared with compute node 2" -j ACCEPT

Restart the iptables service to apply the new rules

systemctl restart iptables
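
To double-check that the rules were loaded, you can list the active INPUT rules and filter for the NFS port (2049); this is just a quick sanity check:

iptables -L INPUT -n -v | grep 2049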

Now, from the compute nodes, execute the following command to check that the folders are exported correctly

showmount -e 192.168.3.11

Now you can proceed to mount the share on each compute node. To do so, add the following line to your /etc/fstab file

192.168.3.11:/var/lib/nova/instances /var/lib/nova/instances nfs4 defaults 0 0

And mount the file system

mount -a
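
To confirm the share is mounted correctly you can check the mount table; 192.168.3.11 is the controller IP used in the fstab entry above:

df -h /var/lib/nova/instances
mount | grep /var/lib/nova/instances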

Now you should have a shared data store where your instances will be saved and you should be able to perform live migrations.
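
Once the rest of the live migration prerequisites are in place on the nova side (this post only covers the shared storage part), you can trigger a migration from the controller with the nova client; the instance and host names below are placeholders for your own environment:

nova live-migration <instance-name> <target-compute-host>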

Cinder: change the quota for a tenant

First of all you can list the default quota applied to a tenant by running the following command. In this example you are looking at the admin tenant.

[root@os-controller-01 ~(keystone_admin)]# cinder quota-defaults admin
+---------------------+-------+
|       Property      | Value |
+---------------------+-------+
|      gigabytes      |  1000 |
| gigabytes_glusterfs |   -1  |
|   gigabytes_iscsi   |   -1  |
|    gigabytes_lvm    |   -1  |
|      snapshots      |   10  |
| snapshots_glusterfs |   -1  |
|   snapshots_iscsi   |   -1  |
|    snapshots_lvm    |   -1  |
|       volumes       |   10  |
|  volumes_glusterfs  |   -1  |
|    volumes_iscsi    |   -1  |
|     volumes_lvm     |   -1  |
+---------------------+-------+

To show the currently applied quota you can run the command cinder quota-show admin. If the default quota has not been changed, this will show the same values.

[root@os-controller-01 ~(keystone_admin)]# cinder quota-show admin
+---------------------+-------+
|       Property      | Value |
+---------------------+-------+
|      gigabytes      | 1000  |
| gigabytes_glusterfs |   -1  |
|   gigabytes_iscsi   |   -1  |
|    gigabytes_lvm    |   -1  |
|      snapshots      |   10  |
| snapshots_glusterfs |   -1  |
|   snapshots_iscsi   |   -1  |
|    snapshots_lvm    |   -1  |
|       volumes       |   10  |
|  volumes_glusterfs  |   -1  |
|    volumes_iscsi    |   -1  |
|     volumes_lvm     |   -1  |
+---------------------+-------+

Now if you want to change the limit of gigabytes from 1000 to 500 you can use the following command:

[root@os-controller-01 ~(keystone_admin)]# cinder quota-update --gigabytes 500 admin
+---------------------+-------+
|       Property      | Value |
+---------------------+-------+
|      gigabytes      |  500  |
| gigabytes_glusterfs |   -1  |
|   gigabytes_iscsi   |   -1  |
|    gigabytes_lvm    |   -1  |
|      snapshots      |   10  |
| snapshots_glusterfs |   -1  |
|   snapshots_iscsi   |   -1  |
|    snapshots_lvm    |   -1  |
|       volumes       |   10  |
|  volumes_glusterfs  |   -1  |
|    volumes_iscsi    |   -1  |
|     volumes_lvm     |   -1  |
+---------------------+-------+

Now the quota has been changed. If you look at the defaults using the cinder quota-defaults admin command it will still show 1000, but if you look at the currently applied quota using the cinder quota-show admin command it will show 500.
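
The same command accepts several limits at once, so you can, for example, raise the volume and snapshot counts or put the gigabytes back to their default in a single call (the values here are just illustrative):

cinder quota-update --volumes 20 --snapshots 20 --gigabytes 1000 admin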

Cinder: create a volume and attach it to one instance

To create a new block storage volume execute the following command

[root@os-controller-01 .ssh(keystone_admin)]# cinder create 10 --display-name sbv_a-instance02_data_01
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-05-08T18:41:23.006885      |
| display_description |                 None                 |
|     display_name    |       sbv_a-instance02_data_01       |
|      encrypted      |                False                 |
|          id         | 7d257820-c696-44ec-8074-804eae087b3a |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |       Display Name       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
| 7d257820-c696-44ec-8074-804eae087b3a | available | sbv_a-instance02_data_01 |  10  |     None    |  false   |             |
| e10b8be0-bfd6-4f19-87d6-936ecd5a8194 | available |       sbv_data_01        |  10  |     None    |  false   |             |
+--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+

With this command we have created a 10GB block volume called sbv_a-instance02_data_01. Run cinder list to check on the creation; when the status says available, it has finished.

Before attaching the volume to the instance, let's check which disks the instance currently has. This will make it easier to locate the new disk once it is attached.

$ sudo fdisk -l

Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     2088449     1036192+  83  Linux

By issuing fdisk -l you can see there is only one disk for now. Now you can proceed to attach the volume to the instance by running the following command, where the first parameter is the instance name and the second the volume ID. Finally, by listing the volumes we can see it has been attached.

[root@os-controller-01 .ssh(keystone_admin)]# nova volume-attach a-instance02 7d257820-c696-44ec-8074-804eae087b3a
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 7d257820-c696-44ec-8074-804eae087b3a |
| serverId | 8c13516c-0f47-4938-a943-9bc3847889e9 |
| volumeId | 7d257820-c696-44ec-8074-804eae087b3a |
+----------+--------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |       Display Name       | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------------------+------+-------------+----------+--------------------------------------+
| 7d257820-c696-44ec-8074-804eae087b3a |   in-use  | sbv_a-instance02_data_01 |  10  |     None    |  false   | 8c13516c-0f47-4938-a943-9bc3847889e9 |
| e10b8be0-bfd6-4f19-87d6-936ecd5a8194 | available |       sbv_data_01        |  10  |     None    |  false   |                                      |
+--------------------------------------+-----------+--------------------------+------+-------------+----------+--------------------------------------+

The volume has been attached to the instance as the device /dev/vdb. If you run the fdisk -l command again you should see the new disk.

$ sudo fdisk -l

Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     2088449     1036192+  83  Linux

Disk /dev/vdb: 10.7 GB, 10737418240 bytes
16 heads, 63 sectors/track, 20805 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vdb doesn't contain a valid partition table

Finally you can use this disk like any other: create a partition table and a partition, format it, and mount it wherever you wish.
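
Inside the instance, a standard Linux guest would let you do something like the following; the device name /dev/vdb matches the fdisk output above and /mnt/data is just an example mount point (the cirros image ships a very minimal toolset, so the exact utilities may differ there):

sudo fdisk /dev/vdb              # interactively create a partition, e.g. /dev/vdb1
sudo mkfs.ext4 /dev/vdb1         # format the new partition
sudo mkdir -p /mnt/data          # create a mount point
sudo mount /dev/vdb1 /mnt/data   # mount the new file system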

Neutron: assign a floating IP to one instance

Floating IPs allow instances to be reachable from outside the internal network. A floating IP is just an IP from the external network NATed to the instance's IP on the internal network.

The first thing you need to do is take an IP for yourself from the pool allowed for the external network. First check which network is your external one and then create a floating IP on it

[root@os-controller-01 .ssh(keystone_admin)]# neutron net-list
+--------------------------------------+-----------+-----------------------------------------------------+
| id                                   | name      | subnets                                             |
+--------------------------------------+-----------+-----------------------------------------------------+
| 6a598fa3-b0cd-4e6b-bec2-471f594dd5a9 | all-ext   | aa1910f3-f615-41f0-acce-7d03273b7680 192.168.3.0/24 |
| 7e9df0d3-cd37-4018-a65e-2550f37827a7 | admin-int | b08a5adb-fbd5-4b04-8c83-eee52fb17609 172.16.1.0/24  |
| 290b74fe-5451-45af-a24c-820f3efd0b8c | t1-int    | 71480213-f924-472b-8e28-b3c97ddc2af9 172.16.2.0/24  |
+--------------------------------------+-----------+-----------------------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# neutron floatingip-create all-ext
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.3.6                          |
| floating_network_id | 6a598fa3-b0cd-4e6b-bec2-471f594dd5a9 |
| id                  | b99d1dd7-54b4-4447-88cd-584dc836482d |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 861d3f9259414e2aaaf42a03a11691e0     |
+---------------------+--------------------------------------+

Now you can check that the IP has been created but has not yet been assigned to your instance

[root@os-controller-01 .ssh(keystone_admin)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 6c9a5099-3ea8-40c4-9105-a18c6df41142 | 172.16.1.2       | 192.168.3.3         | a72704e3-c177-41c7-9663-be1a493760a5 |
| aa840dff-5847-4204-81c5-bf82e68e22e4 | 172.16.2.4       | 192.168.3.5         | ff7da9e6-e63d-404b-a798-a725404c49cc |
| b99d1dd7-54b4-4447-88cd-584dc836482d |                  | 192.168.3.6         |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+

The last step is to associate the floating IP with the instance's internal port. You will need to know the port ID; to get it, execute the following command and filter by the instance's internal IP using grep.

[root@os-controller-01 .ssh(keystone_admin)]# neutron  port-list | grep 172.16.1.4
| 18d8e7ba-35ab-430e-a593-2bafb3d79091 |      | fa:16:3e:39:fd:aa | {"subnet_id": "b08a5adb-fbd5-4b04-8c83-eee52fb17609", "ip_address": "172.16.1.4"}  |

Finally, associate the floating IP with the port by running the following command

[root@os-controller-01 .ssh(keystone_admin)]# neutron floatingip-associate b99d1dd7-54b4-4447-88cd-584dc836482d 18d8e7ba-35ab-430e-a593-2bafb3d79091
Associated floating IP b99d1dd7-54b4-4447-88cd-584dc836482d

Just check that the association has been done properly

[root@os-controller-01 .ssh(keystone_admin)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 6c9a5099-3ea8-40c4-9105-a18c6df41142 | 172.16.1.2       | 192.168.3.3         | a72704e3-c177-41c7-9663-be1a493760a5 |
| aa840dff-5847-4204-81c5-bf82e68e22e4 | 172.16.2.4       | 192.168.3.5         | ff7da9e6-e63d-404b-a798-a725404c49cc |
| b99d1dd7-54b4-4447-88cd-584dc836482d | 172.16.1.4       | 192.168.3.6         | 18d8e7ba-35ab-430e-a593-2bafb3d79091 |
+--------------------------------------+------------------+---------------------+--------------------------------------+

Now you should be able to ping and SSH to your instance from the external network.
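
For example, from a machine on the external network, and assuming you have copied over the private key (admin-key4.key) used when the instance was launched, the check could look like this for a cirros guest:

ping -c 3 192.168.3.6
ssh -i admin-key4.key cirros@192.168.3.6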

Nova: launch an instance

The command to launch instances is nova boot. Let's check the help to see how this command works:

[root@os-controller-01 .ssh(keystone_admin)]# nova help boot
usage: nova boot [--flavor <flavor>] [--image <image>]
                 [--image-with <key=value>] [--boot-volume <volume_id>]
                 [--snapshot <snapshot_id>] [--min-count <number>]
                 [--max-count <number>] [--meta <key=value>]
                 [--file <dst-path=src-path>] [--key-name <key-name>]
                 [--user-data <user-data>]
                 [--availability-zone <availability-zone>]
                 [--security-groups <security-groups>]
                 [--block-device-mapping <dev-name=mapping>]
                 [--block-device key1=value1[,key2=value2...]]
                 [--swap <swap_size>]
                 [--ephemeral size=<size>[,format=<format>]]
                 [--hint <key=value>]
                 [--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>]
                 [--config-drive <value>] [--poll]
                 <name>

Boot a new server.

Positional arguments:
  <name>                        Name for the new server

Optional arguments:
  --flavor <flavor>             Name or ID of flavor (see 'nova flavor-list').
  --image <image>               Name or ID of image (see 'nova image-list').
  --image-with <key=value>      Image metadata property (see 'nova image-
                                show').
  --boot-volume <volume_id>     Volume ID to boot from.
  --snapshot <snapshot_id>      Snapshot ID to boot from (will create a
                                volume).
  --min-count <number>          Boot at least <number> servers (limited by
                                quota).
  --max-count <number>          Boot up to <number> servers (limited by
                                quota).
  --meta <key=value>            Record arbitrary key/value metadata to
                                /meta.js on the new server. Can be specified
                                multiple times.
  --file <dst-path=src-path>    Store arbitrary files from <src-path> locally
                                to <dst-path> on the new server. You may store
                                up to 5 files.
  --key-name <key-name>         Key name of keypair that should be created
                                earlier with the command keypair-add
  --user-data <user-data>       user data file to pass to be exposed by the
                                metadata server.
  --availability-zone <availability-zone>
                                The availability zone for server placement.
  --security-groups <security-groups>
                                Comma separated list of security group names.
  --block-device-mapping <dev-name=mapping>
                                Block device mapping in the format <dev-
                                name>=<id>:<type>:<size(GB)>:<delete-on-
                                terminate>.
  --block-device key1=value1[,key2=value2...]
                                Block device mapping with the keys: id=UUID
                                (image_id, snapshot_id or volume_id only if
                                using source image, snapshot or volume)
                                source=source type (image, snapshot, volume or
                                blank), dest=destination type of the block
                                device (volume or local), bus=device's bus
                                (e.g. uml, lxc, virtio, ...; if omitted,
                                hypervisor driver chooses a suitable default,
                                honoured only if device type is supplied)
                                type=device type (e.g. disk, cdrom, ...;
                                defaults to 'disk') device=name of the device
                                (e.g. vda, xda, ...; if omitted, hypervisor
                                driver chooses suitable device depending on
                                selected bus), size=size of the block device
                                in GB (if omitted, hypervisor driver
                                calculates size), format=device will be
                                formatted (e.g. swap, ntfs, ...; optional),
                                bootindex=integer used for ordering the boot
                                disks (for image backed instances it is equal
                                to 0, for others need to be specified) and
                                shutdown=shutdown behaviour (either preserve
                                or remove, for local destination set to
                                remove).
  --swap <swap_size>            Create and attach a local swap block device of
                                <swap_size> MB.
  --ephemeral size=<size>[,format=<format>]
                                Create and attach a local ephemeral block
                                device of <size> GB and format it to <format>.
  --hint <key=value>            Send arbitrary key/value pairs to the
                                scheduler for custom use.
  --nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>
                                Create a NIC on the server. Specify option
                                multiple times to create multiple NICs. net-
                                id: attach NIC to network with this UUID
                                (either port-id or net-id must be provided),
                                v4-fixed-ip: IPv4 fixed address for NIC
                                (optional), v6-fixed-ip: IPv6 fixed address
                                for NIC (optional), port-id: attach NIC to
                                port with this UUID (either port-id or net-id
                                must be provided).
  --config-drive <value>        Enable config drive
  --poll                        Report the new server boot progress until it
                                completes.

You will need to know which flavor, image, key pair, security group and network to use for the new instance. Let's list this information before issuing the command to create the instance.

[root@os-controller-01 .ssh(keystone_admin)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@os-controller-01 .ssh(keystone_admin)]# glance image-list
+--------------------------------------+----------------+-------------+------------------+-----------+--------+
| ID                                   | Name           | Disk Format | Container Format | Size      | Status |
+--------------------------------------+----------------+-------------+------------------+-----------+--------+
| 64a194f4-6626-4e82-b656-48229004187e | centos7        | qcow2       | bare             | 899874816 | active |
| 58aa158b-c1c0-4ca6-a210-bb8f4cd4307f | centos7_x86-64 | qcow2       | bare             | 899874816 | active |
| daa2bd98-1af6-48a0-b334-98bc8578aaec | cirros         | qcow2       | bare             | 13200896  | active |
| 1b98d3fd-29fa-41a4-808e-9299693e50c5 | cirros2        | qcow2       | bare             | 13287936  | active |
+--------------------------------------+----------------+-------------+------------------+-----------+--------+
[root@os-controller-01 .ssh(keystone_admin)]# nova keypair-list
+------------+-------------------------------------------------+
| Name       | Fingerprint                                     |
+------------+-------------------------------------------------+
| admin-key1 | 02:33:0b:f5:3b:de:d8:a4:2d:9d:4b:8b:25:a1:89:db |
| admin_key2 | 12:cc:5d:92:67:56:71:95:3b:28:7e:88:06:95:49:5e |
| admin_key3 | 2b:96:67:40:29:f0:aa:7b:f9:0c:ab:82:43:77:c6:b7 |
| admin-key4 | 54:0f:f4:2a:5b:f7:13:63:5a:9b:1f:86:82:57:59:ae |
+------------+-------------------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# neutron security-group-list
+--------------------------------------+-------------+--------------------+
| id                                   | name        | description        |
+--------------------------------------+-------------+--------------------+
| 00a4c13e-1e19-4d92-990f-43c0985ae771 | t1-sg1      | icmp and ssh       |
| 08e461c8-cc1e-4464-be88-4290b11c5cf4 | admin-sg1   | ssh and icmp       |
| 0f479ff1-ed81-4319-a63b-e30fff46e7ed | ossecgroup1 | Allows SSH and WEB |
| 3d198411-ddf0-4eb8-a57d-366c0eafd393 | default     | default            |
| 4cb6846a-6eca-47c1-b1e4-10200ecc5559 | default     | default            |
| 720851b6-04c5-41f7-8dc6-a24c7ff2d94c | admin-sg2   |                    |
| 8ade4bc6-bcd4-4dc5-af01-5768845438d5 | default     | default            |
| ce6c007c-3c16-4ccb-a435-51c5ea26b5e3 | default     | default            |
| d934d790-bee4-4cd3-b036-85163f601ac1 | default     | default            |
| ea4a825e-0eb8-4ce7-b0a8-2bc5dfeadcc6 | default     | default            |
| ecb0a4a8-aa6e-49df-856e-c538f81683f7 | test1       | ping ssh           |
| f19ec6d9-9de5-48c6-9d50-4884b44e273b | default     | default            |
| fc6d334d-5a58-4af1-a3dd-2fd6cc04c91e | default     | default            |
+--------------------------------------+-------------+--------------------+
[root@os-controller-01 .ssh(keystone_admin)]# neutron net-list
+--------------------------------------+-----------+-----------------------------------------------------+
| id                                   | name      | subnets                                             |
+--------------------------------------+-----------+-----------------------------------------------------+
| 6a598fa3-b0cd-4e6b-bec2-471f594dd5a9 | all-ext   | aa1910f3-f615-41f0-acce-7d03273b7680 192.168.3.0/24 |
| 7e9df0d3-cd37-4018-a65e-2550f37827a7 | admin-int | b08a5adb-fbd5-4b04-8c83-eee52fb17609 172.16.1.0/24  |
| 290b74fe-5451-45af-a24c-820f3efd0b8c | t1-int    | 71480213-f924-472b-8e28-b3c97ddc2af9 172.16.2.0/24  |
+--------------------------------------+-----------+-----------------------------------------------------+

Now we can proceed to launch the instance by issuing the following command:

[root@os-controller-01 .ssh(keystone_admin)]# nova boot --flavor m1.tiny --image cirros2 --key-name admin-key4 --security-groups admin-sg2 --nic net-id=7e9df0d3-cd37-4018-a65e-2550f37827a7 a-instance02
+--------------------------------------+------------------------------------------------+
| Property                             | Value                                          |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                         |
| OS-EXT-AZ:availability_zone          |                                                |
| OS-EXT-SRV-ATTR:host                 | -                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000b                              |
| OS-EXT-STS:power_state               | 0                                              |
| OS-EXT-STS:task_state                | scheduling                                     |
| OS-EXT-STS:vm_state                  | building                                       |
| OS-SRV-USG:launched_at               | -                                              |
| OS-SRV-USG:terminated_at             | -                                              |
| accessIPv4                           |                                                |
| accessIPv6                           |                                                |
| adminPass                            | hyjcNWq7AGio                                   |
| config_drive                         |                                                |
| created                              | 2016-05-08T17:27:10Z                           |
| flavor                               | m1.tiny (1)                                    |
| hostId                               |                                                |
| id                                   | 8c13516c-0f47-4938-a943-9bc3847889e9           |
| image                                | cirros2 (1b98d3fd-29fa-41a4-808e-9299693e50c5) |
| key_name                             | admin-key4                                     |
| metadata                             | {}                                             |
| name                                 | a-instance02                                   |
| os-extended-volumes:volumes_attached | []                                             |
| progress                             | 0                                              |
| security_groups                      | admin-sg2                                      |
| status                               | BUILD                                          |
| tenant_id                            | 861d3f9259414e2aaaf42a03a11691e0               |
| updated                              | 2016-05-08T17:27:10Z                           |
| user_id                              | 96702222956b4ca2bd1ffabfd5450d05               |
+--------------------------------------+------------------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# nova list
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name         | Status  | Task State | Power State | Networks                          |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
| 8c13516c-0f47-4938-a943-9bc3847889e9 | a-instance02 | BUILD   | spawning   | NOSTATE     | admin-int=172.16.1.4              |
| f57225b3-7e2f-4953-9991-35ccead4775e | a-instance1  | SHUTOFF | -          | Shutdown    | admin-int=172.16.1.2, 192.168.3.3 |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# nova list
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name         | Status  | Task State | Power State | Networks                          |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
| 8c13516c-0f47-4938-a943-9bc3847889e9 | a-instance02 | ACTIVE  | -          | Running     | admin-int=172.16.1.4              |
| f57225b3-7e2f-4953-9991-35ccead4775e | a-instance1  | SHUTOFF | -          | Shutdown    | admin-int=172.16.1.2, 192.168.3.3 |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+

You will need to run nova list to check whether the deployment has finished. Once it has, the status will change from BUILD to ACTIVE. Now we should verify that we can connect to the new instance. As the new instance does not yet have a floating IP assigned, the only way to reach it is from another instance connected to the same router. I have another instance called a-instance1 connected to the same network, so let's power it on and try to ping and access the new instance from it.

[root@os-controller-01 .ssh(keystone_admin)]# nova start a-instance1
[root@os-controller-01 .ssh(keystone_admin)]# nova list
+--------------------------------------+--------------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks                          |
+--------------------------------------+--------------+--------+------------+-------------+-----------------------------------+
| 8c13516c-0f47-4938-a943-9bc3847889e9 | a-instance02 | ACTIVE | -          | Running     | admin-int=172.16.1.4              |
| f57225b3-7e2f-4953-9991-35ccead4775e | a-instance1  | ACTIVE | -          | Running     | admin-int=172.16.1.2, 192.168.3.3 |
+--------------------------------------+--------------+--------+------------+-------------+-----------------------------------+

As the other instance already has a floating IP assigned, you can access it using that floating IP. But let's say it did not have one: you could open the console attached to the instance. You only have to log in to the Horizon portal, go to the project section, then Compute/Instances, and in the actions menu for the instance a-instance1 click Console. Now you can log in using the default user and password that cirros prints in the console and try to ping the IP of the new instance.
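
From the a-instance1 console the check could look like this; 172.16.1.4 is the fixed IP that nova list reported for a-instance02, and the SSH login uses the default cirros credentials shown in the console banner:

ping -c 3 172.16.1.4
ssh cirros@172.16.1.4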

The post Neutron: assign a floating IP to one instance shows how to assign a floating IP to the instance you just created.

Create security groups

A security group is a firewall applied at the instance level. Each tenant can create its own security groups and apply them to instances during deployment or later. For this example you are going to create a new security group called admin-sg2 that allows ping and SSH from everywhere (that is, ingress traffic).

[root@os-controller-01 .ssh(keystone_admin)]# neutron security-group-create admin-sg2
Created a new security_group:
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                | Value                                                                                                                                                                                                                                                                                                                         |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| description          |                                                                                                                                                                                                                                                                                                                               |
| id                   | 720851b6-04c5-41f7-8dc6-a24c7ff2d94c                                                                                                                                                                                                                                                                                          |
| name                 | admin-sg2                                                                                                                                                                                                                                                                                                                     |
| security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "861d3f9259414e2aaaf42a03a11691e0", "port_range_max": null, "security_group_id": "720851b6-04c5-41f7-8dc6-a24c7ff2d94c", "port_range_min": null, "ethertype": "IPv4", "id": "9caa9a3d-5e87-4cca-9c10-5e3d390e410c"} |
|                      | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "861d3f9259414e2aaaf42a03a11691e0", "port_range_max": null, "security_group_id": "720851b6-04c5-41f7-8dc6-a24c7ff2d94c", "port_range_min": null, "ethertype": "IPv6", "id": "3d25f8c1-78ab-4ffd-a750-8bbed3d5d1bf"} |
| tenant_id            | 861d3f9259414e2aaaf42a03a11691e0                                                                                                                                                                                                                                                                                              |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# neutron security-group-rule-list | grep -i admin-sg2
| 3d25f8c1-78ab-4ffd-a750-8bbed3d5d1bf | admin-sg2      | egress    |          |                  |              |
| 9caa9a3d-5e87-4cca-9c10-5e3d390e410c | admin-sg2      | egress    |          |                  |              |

As you can see, the group has been created. If you check the rules, as in the second command, you will see that by default all egress traffic is allowed when a security group is created.

Now let's add the rules that will allow us to ping and connect via SSH to the instance.

[root@os-controller-01 .ssh(keystone_admin)]# neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 admin-sg2
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | d5ea6c83-072e-4090-9029-f8646702ed27 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 720851b6-04c5-41f7-8dc6-a24c7ff2d94c |
| tenant_id         | 861d3f9259414e2aaaf42a03a11691e0     |
+-------------------+--------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# neutron security-group-rule-create --protocol icmp --remote-ip-prefix 0.0.0.0/0 admin-sg2
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | f4cadfe5-0be3-455e-8ea7-d9ad73253c63 |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 720851b6-04c5-41f7-8dc6-a24c7ff2d94c |
| tenant_id         | 861d3f9259414e2aaaf42a03a11691e0     |
+-------------------+--------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# neutron security-group-rule-list | grep -i admin-sg2
| 3d25f8c1-78ab-4ffd-a750-8bbed3d5d1bf | admin-sg2      | egress    |          |                  |              |
| 9caa9a3d-5e87-4cca-9c10-5e3d390e410c | admin-sg2      | egress    |          |                  |              |
| d5ea6c83-072e-4090-9029-f8646702ed27 | admin-sg2      | ingress   | tcp      | 0.0.0.0/0        |              |
| f4cadfe5-0be3-455e-8ea7-d9ad73253c63 | admin-sg2      | ingress   | icmp     | 0.0.0.0/0        |              |
[root@os-controller-01 .ssh(keystone_admin)]# neutron help | grep -i security-group
  security-group-create          Create a security group.
  security-group-delete          Delete a given security group.
  security-group-list            List security groups that belong to a given tenant.
  security-group-rule-create     Create a security group rule.
  security-group-rule-delete     Delete a given security group rule.
  security-group-rule-list       List security group rules that belong to a given tenant.
  security-group-rule-show       Show information of a given security group rule.
  security-group-show            Show information of a given security group.
  security-group-update          Update a given security group.
[root@os-controller-01 .ssh(keystone_admin)]# neutron security-group-rule-show d5ea6c83-072e-4090-9029-f8646702ed27
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | d5ea6c83-072e-4090-9029-f8646702ed27 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 720851b6-04c5-41f7-8dc6-a24c7ff2d94c |
| tenant_id         | 861d3f9259414e2aaaf42a03a11691e0     |
+-------------------+--------------------------------------+

You have created the rules and used the neutron help to see how to inspect the configuration of a specific rule. Now you have a security group ready to be used when you deploy an instance.
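
Security groups can also be attached to an instance that is already running; a quick sketch using the instance from this post (nova client commands of this era):

nova add-secgroup a-instance02 admin-sg2
nova list-secgroup a-instance02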

Nova: create key pairs for SSH access

When an OpenStack instance is deployed you have to select a key pair that will be used for the instance, much like Amazon Web Services instances are configured. Therefore, to log in you do not need a password, just the correct key.

To create a new key pair, execute the following commands

[root@os-controller-01 ~(keystone_admin)]# nova keypair-add admin_key3 > admin_key3.key
[root@os-controller-01 ~(keystone_admin)]# ls -l
total 56
-rw-r--r--  1 root root  1680 May  8 14:07 admin_key3.key
-rw-------. 1 root root  1480 Mar 28 18:34 anaconda-ks.cfg
drwxr-xr-x  2 root root   124 Apr 24 19:32 cp24042016
-rw-------  1 root root   211 Apr  3 17:53 keystonerc_admin
-rw-------  1 root root   175 Apr  3 17:53 keystonerc_demo
-rw-------  1 root root   230 Apr 24 13:26 keystonerc_testuser1
-rw-------  1 root root   207 May  2 15:07 keystonerc_user1
-rw-------  1 root root 30365 Apr 24 19:35 os_deployment.txt
[root@os-controller-01 ~(keystone_admin)]# nova keypair-list
+------------+-------------------------------------------------+
| Name       | Fingerprint                                     |
+------------+-------------------------------------------------+
| admin-key1 | 02:33:0b:f5:3b:de:d8:a4:2d:9d:4b:8b:25:a1:89:db |
| admin_key2 | 12:cc:5d:92:67:56:71:95:3b:28:7e:88:06:95:49:5e |
| admin_key3 | 2b:96:67:40:29:f0:aa:7b:f9:0c:ab:82:43:77:c6:b7 |
+------------+-------------------------------------------------+

In the first command you create the new key pair and save the private part in the file named admin_key3.key. With the ls you can see the file has been created in your current directory. This is the file you need to download to your computer in order to be able to log in later. In the last command, nova keypair-list, you can see the new key has been added and is ready to use when you deploy a new instance.
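
Before using the private key remember to restrict its permissions, otherwise SSH will refuse it; the floating IP below is just a placeholder for your instance's address:

chmod 600 admin_key3.key
ssh -i admin_key3.key cirros@<floating-ip>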

There is another option: maybe you already have a key that you want to use in your OpenStack environment. You can import this key instead of generating a new one. For the purpose of this example you are going to create a key using the ssh-keygen command instead of the nova command you used before, and then import it into your OpenStack environment.

[root@os-controller-01 ~(keystone_admin)]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /root/.ssh/admin-key4.key
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/admin-key4.key.
Your public key has been saved in /root/.ssh/admin-key4.key.pub.
The key fingerprint is:
54:0f:f4:2a:5b:f7:13:63:5a:9b:1f:86:82:57:59:ae root@os-controller-01.wired.local
The key's randomart image is:
+--[ RSA 2048]----+
|         .+      |
|         . +     |
|        .   o  . |
|       .   .  +  |
|        S o .o=. |
|         +...=o= |
|        .. o.E=o |
|          . . .o.|
|                .|
+-----------------+
[root@os-controller-01 ~(keystone_admin)]# cd .ssh
[root@os-controller-01 .ssh(keystone_admin)]# ls -l
total 20
-rw------- 1 root root 1679 May  8 14:26 admin-key4.key
-rw-r--r-- 1 root root  415 May  8 14:26 admin-key4.key.pub
-r-------- 1 root root  827 Apr 11 19:59 authorized_keys
-rw------- 1 root root 1679 Mar 28 21:28 id_rsa
-rw-r--r-- 1 root root  415 Mar 28 21:28 id_rsa.pub
[root@os-controller-01 .ssh(keystone_admin)]# nova keypair-add --pub_key admin-key4.key.pub admin-key4
[root@os-controller-01 .ssh(keystone_admin)]# nova keypair-list
+------------+-------------------------------------------------+
| Name       | Fingerprint                                     |
+------------+-------------------------------------------------+
| admin-key1 | 02:33:0b:f5:3b:de:d8:a4:2d:9d:4b:8b:25:a1:89:db |
| admin_key2 | 12:cc:5d:92:67:56:71:95:3b:28:7e:88:06:95:49:5e |
| admin_key3 | 2b:96:67:40:29:f0:aa:7b:f9:0c:ab:82:43:77:c6:b7 |
| admin-key4 | 54:0f:f4:2a:5b:f7:13:63:5a:9b:1f:86:82:57:59:ae |
+------------+-------------------------------------------------+

With this you already have the key that you will need when you deploy your new instance.

Change the default volume group created for cinder volumes

The default configuration created by PackStack is explained in the post called Cinder: default configuration applied by PackStack. Here you are going to change it to get more space and better performance. It's important to understand that we are going to remove the current volume group, so it is better to do this at the beginning, before you start creating volumes for your instances. Otherwise you will lose your previously created block volumes.

The first thing to do is add some extra disks to the controller node. These disks should have some type of RAID protection to be safe in case a disk fails. If you are deploying your test environment on an enterprise platform you probably have a storage array where you can create these disks and export/present them to the controller node; the disks (called virtual volumes on HP 3PAR arrays, virtual disks on HP EVA, and something else on other arrays) have probably been created by your storage administrator with some sort of RAID protection. For our home lab we are not so lucky, so we are just going to create two additional virtual disks with the virtual machine manager and attach them to the controller node. After that we will rename the current cinder volume group and create a new one using the new disks.

I have added two 100GB disks to my controller node. To check that they are seen by the controller node, run fdisk -l:

[root@os-controller-01 ~(keystone_admin)]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0006a53c

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     1026047      512000   83  Linux
/dev/vda2         1026048    83886079    41430016   8e  Linux LVM

Disk /dev/mapper/rhel-root: 38.2 GB, 38214303744 bytes, 74637312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/rhel-swap: 4160 MB, 4160749568 bytes, 8126464 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/vdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

You can see that the two newly added disks, which were not present before, are /dev/vdb and /dev/vdc. You are going to use these disks to create your new cinder volume group, but first let's rename the current one

[root@os-controller-01 ~(keystone_admin)]# vgrename cinder-volumes cinder-volumes-old
  Volume group "cinder-volumes" successfully renamed to "cinder-volumes-old"
[root@os-controller-01 ~(keystone_admin)]# vgs
  VG                 #PV #LV #SN Attr   VSize  VFree 
  cinder-volumes-old   1   0   0 wz--n- 20.60g 20.60g
  rhel                 1   2   0 wz--n- 39.51g 44.00m

Now let's create the new volume group. You will use the whole disks; you are not going to create partitions on them, even though that is also a perfectly valid option. First of all, create the physical volumes on these disks:

[root@os-controller-01 ~(keystone_admin)]# pvcreate /dev/vdb /dev/vdc
  Physical volume "/dev/vdb" successfully created
  Physical volume "/dev/vdc" successfully created

Now you have to create the volume group

[root@os-controller-01 ~(keystone_admin)]# vgcreate cinder-volumes /dev/vdb /dev/vdc
   Volume group "cinder-volumes" successfully created
[root@os-controller-01 ~(keystone_admin)]# vgs
  VG                 #PV #LV #SN Attr   VSize   VFree  
  cinder-volumes       2   0   0 wz--n- 199.99g 199.99g
  cinder-volumes-old   1   0   0 wz--n-  20.60g  20.60g
  rhel                 1   2   0 wz--n-  39.51g  44.00m

Now you can see you have a new VG named cinder-volumes. If you create a new volume it should land there; let's check it

[root@os-controller-01 ~(keystone_admin)]# cinder create 10 --display-name sbv_data_01
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-05-08T12:37:21.583465      |
| display_description |                 None                 |
|     display_name    |             sbv_data_01              |
|      encrypted      |                False                 |
|          id         | e10b8be0-bfd6-4f19-87d6-936ecd5a8194 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@os-controller-01 ~(keystone_admin)]# lvs
  LV                                          VG             Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-e10b8be0-bfd6-4f19-87d6-936ecd5a8194 cinder-volumes -wi-a----- 10.00g                                                    
  root                                        rhel           -wi-ao---- 35.59g                                                    
  swap                                        rhel           -wi-ao----  3.88g                                                    
[root@os-controller-01 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| e10b8be0-bfd6-4f19-87d6-936ecd5a8194 | available | sbv_data_01  |  10  |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

As you can see, a new 10GB logical volume named volume-e10b8be0-bfd6-4f19-87d6-936ecd5a8194 has been created in the volume group we just created.

Finally, if there is nothing you still need on the old volume group, you can delete it by issuing the vgremove command.
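
For example, once you are sure nothing on the old VG is required anymore:

vgremove cinder-volumes-old
vgs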

Cinder: default configuration applied by PackStack

PackStack installs the cinder service on the controller node by default. Look at the answer file used to deploy the environment to see how it was configured:

[root@os-controller-01 ~]# grep -i cinder /root/os_deployment.txt 
# Storage (Cinder)
CONFIG_CINDER_INSTALL=y
# Cinder.
# The password to use for the Cinder to access DB
CONFIG_CINDER_DB_PW=fafc6b3729a94f48
# The password to use for the Cinder to authenticate with Keystone
CONFIG_CINDER_KS_PW=e1129d1a556e4326
# The Cinder backend to use, valid options are: lvm, gluster, nfs,
CONFIG_CINDER_BACKEND=lvm
# Create Cinder's volumes group. This should only be done for testing
# on a proof-of-concept installation of Cinder. This will create a
CONFIG_CINDER_VOLUMES_CREATE=y
# Cinder's volumes group size. Note that actual volume size will be
CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
# Cinder to use.  Format: ip-address:/export-name   Defaults to ''.
CONFIG_CINDER_NETAPP_NFS_SHARES=
# to '/etc/cinder/shares.conf'.
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
# (utilized by Cinder volume type extra_specs support). If this
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=

There are lots of parameters you can configure at installation time. The main ones are:

  • CONFIG_CINDER_INSTALL=y specifies that cinder has to be installed.
  • CONFIG_CINDER_BACKEND=lvm specifies that the block storage volumes created for the instances will be stored as LVM volumes on the controller node. The controller node will export these volumes to the instances.
  • CONFIG_CINDER_VOLUMES_CREATE=y specifies that a volume group has to be created, in which the volumes for the instances will be placed.
  • CONFIG_CINDER_VOLUMES_SIZE=20G specifies the size of that volume group.

So if you execute the vgs command on your controller node you will see the list of volume groups. There is a VG called cinder-volumes of 20GB; this is where the block storage volumes for the instances will be created.

[root@os-controller-01 ~]# vgs
  VG             #PV #LV #SN Attr   VSize  VFree 
  cinder-volumes   1   1   0 wz--n- 20.60g 19.60g
  rhel             1   2   0 wz--n- 39.51g 44.00m

This configuration is fine for small testing purposes. If you need more space for the block volumes and better performance, you will need to change it; the post Change the default volume group created for cinder volumes explains how.

Glance: image quotas

In glance it is not possible to configure a specific quota for a single tenant or user. The quota is configured as a global parameter and applies to all users, which means every user gets the same limits as the others. The quotas are defined in the file /etc/glance/glance-api.conf. To check which quotas can be configured, grep this file for the word quota:

[root@os-controller-01 glance(keystone_admin)]# grep quota glance-api.conf 
#image_member_quota=128
#image_property_quota=128
#image_tag_quota=128
#image_location_quota=10
# Set a system wide quota for every user.  This value is the total number
#user_storage_quota=0

The user_storage_quota is the one that limits the space a user can consume for images. By default it is set to 0, which means there is no limit. If you wish to set a limit of 10GB per user you will need to uncomment the line and change the value to 10737418240 (the value has to be specified in bytes).
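
For example, after editing /etc/glance/glance-api.conf the line would look like the one below, and the glance API service has to be restarted for the change to take effect (service name as on an RDO/CentOS installation):

user_storage_quota=10737418240

systemctl restart openstack-glance-api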