The command to launch instances is nova boot. Let’s check its help output to see how this command works:
[root@os-controller-01 .ssh(keystone_admin)]# nova help boot
usage: nova boot [--flavor <flavor>] [--image <image>]
[--image-with <key=value>] [--boot-volume <volume_id>]
[--snapshot <snapshot_id>] [--min-count <number>]
[--max-count <number>] [--meta <key=value>]
[--file <dst-path=src-path>] [--key-name <key-name>]
[--user-data <user-data>]
[--availability-zone <availability-zone>]
[--security-groups <security-groups>]
[--block-device-mapping <dev-name=mapping>]
[--block-device key1=value1[,key2=value2...]]
[--swap <swap_size>]
[--ephemeral size=<size>[,format=<format>]]
[--hint <key=value>]
[--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>]
[--config-drive <value>] [--poll]
<name>
Boot a new server.
Positional arguments:
<name> Name for the new server
Optional arguments:
--flavor <flavor> Name or ID of flavor (see 'nova flavor-list').
--image <image> Name or ID of image (see 'nova image-list').
--image-with <key=value> Image metadata property (see 'nova image-
show').
--boot-volume <volume_id> Volume ID to boot from.
--snapshot <snapshot_id> Snapshot ID to boot from (will create a
volume).
--min-count <number> Boot at least <number> servers (limited by
quota).
--max-count <number> Boot up to <number> servers (limited by
quota).
--meta <key=value> Record arbitrary key/value metadata to
/meta.js on the new server. Can be specified
multiple times.
--file <dst-path=src-path> Store arbitrary files from <src-path> locally
to <dst-path> on the new server. You may store
up to 5 files.
--key-name <key-name> Key name of keypair that should be created
earlier with the command keypair-add
--user-data <user-data> user data file to pass to be exposed by the
metadata server.
--availability-zone <availability-zone>
The availability zone for server placement.
--security-groups <security-groups>
Comma separated list of security group names.
--block-device-mapping <dev-name=mapping>
Block device mapping in the format <dev-
name>=<id>:<type>:<size(GB)>:<delete-on-
terminate>.
--block-device key1=value1[,key2=value2...]
Block device mapping with the keys: id=UUID
(image_id, snapshot_id or volume_id only if
using source image, snapshot or volume)
source=source type (image, snapshot, volume or
blank), dest=destination type of the block
device (volume or local), bus=device's bus
(e.g. uml, lxc, virtio, ...; if omitted,
hypervisor driver chooses a suitable default,
honoured only if device type is supplied)
type=device type (e.g. disk, cdrom, ...;
defaults to 'disk') device=name of the device
(e.g. vda, xda, ...; if omitted, hypervisor
driver chooses suitable device depending on
selected bus), size=size of the block device
in GB (if omitted, hypervisor driver
calculates size), format=device will be
formatted (e.g. swap, ntfs, ...; optional),
bootindex=integer used for ordering the boot
disks (for image backed instances it is equal
to 0, for others need to be specified) and
shutdown=shutdown behaviour (either preserve
or remove, for local destination set to
remove).
--swap <swap_size> Create and attach a local swap block device of
<swap_size> MB.
--ephemeral size=<size>[,format=<format>]
Create and attach a local ephemeral block
device of <size> GB and format it to <format>.
--hint <key=value> Send arbitrary key/value pairs to the
scheduler for custom use.
--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>
Create a NIC on the server. Specify option
multiple times to create multiple NICs. net-
id: attach NIC to network with this UUID
(either port-id or net-id must be provided),
v4-fixed-ip: IPv4 fixed address for NIC
(optional), v6-fixed-ip: IPv6 fixed address
for NIC (optional), port-id: attach NIC to
port with this UUID (either port-id or net-id
must be provided).
--config-drive <value> Enable config drive
--poll Report the new server boot progress until it
completes.
You will need to know which flavor, image, keypair, security group and network to use for the new instance, so let’s list this information before issuing the command that creates it.
[root@os-controller-01 .ssh(keystone_admin)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@os-controller-01 .ssh(keystone_admin)]# glance image-list
+--------------------------------------+----------------+-------------+------------------+-----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+----------------+-------------+------------------+-----------+--------+
| 64a194f4-6626-4e82-b656-48229004187e | centos7 | qcow2 | bare | 899874816 | active |
| 58aa158b-c1c0-4ca6-a210-bb8f4cd4307f | centos7_x86-64 | qcow2 | bare | 899874816 | active |
| daa2bd98-1af6-48a0-b334-98bc8578aaec | cirros | qcow2 | bare | 13200896 | active |
| 1b98d3fd-29fa-41a4-808e-9299693e50c5 | cirros2 | qcow2 | bare | 13287936 | active |
+--------------------------------------+----------------+-------------+------------------+-----------+--------+
[root@os-controller-01 .ssh(keystone_admin)]# nova keypair-list
+------------+-------------------------------------------------+
| Name | Fingerprint |
+------------+-------------------------------------------------+
| admin-key1 | 02:33:0b:f5:3b:de:d8:a4:2d:9d:4b:8b:25:a1:89:db |
| admin_key2 | 12:cc:5d:92:67:56:71:95:3b:28:7e:88:06:95:49:5e |
| admin_key3 | 2b:96:67:40:29:f0:aa:7b:f9:0c:ab:82:43:77:c6:b7 |
| admin-key4 | 54:0f:f4:2a:5b:f7:13:63:5a:9b:1f:86:82:57:59:ae |
+------------+-------------------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# neutron security-group-list
+--------------------------------------+-------------+--------------------+
| id | name | description |
+--------------------------------------+-------------+--------------------+
| 00a4c13e-1e19-4d92-990f-43c0985ae771 | t1-sg1 | icmp and ssh |
| 08e461c8-cc1e-4464-be88-4290b11c5cf4 | admin-sg1 | ssh and icmp |
| 0f479ff1-ed81-4319-a63b-e30fff46e7ed | ossecgroup1 | Allows SSH and WEB |
| 3d198411-ddf0-4eb8-a57d-366c0eafd393 | default | default |
| 4cb6846a-6eca-47c1-b1e4-10200ecc5559 | default | default |
| 720851b6-04c5-41f7-8dc6-a24c7ff2d94c | admin-sg2 | |
| 8ade4bc6-bcd4-4dc5-af01-5768845438d5 | default | default |
| ce6c007c-3c16-4ccb-a435-51c5ea26b5e3 | default | default |
| d934d790-bee4-4cd3-b036-85163f601ac1 | default | default |
| ea4a825e-0eb8-4ce7-b0a8-2bc5dfeadcc6 | default | default |
| ecb0a4a8-aa6e-49df-856e-c538f81683f7 | test1 | ping ssh |
| f19ec6d9-9de5-48c6-9d50-4884b44e273b | default | default |
| fc6d334d-5a58-4af1-a3dd-2fd6cc04c91e | default | default |
+--------------------------------------+-------------+--------------------+
[root@os-controller-01 .ssh(keystone_admin)]# neutron net-list
+--------------------------------------+-----------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-----------+-----------------------------------------------------+
| 6a598fa3-b0cd-4e6b-bec2-471f594dd5a9 | all-ext | aa1910f3-f615-41f0-acce-7d03273b7680 192.168.3.0/24 |
| 7e9df0d3-cd37-4018-a65e-2550f37827a7 | admin-int | b08a5adb-fbd5-4b04-8c83-eee52fb17609 172.16.1.0/24 |
| 290b74fe-5451-45af-a24c-820f3efd0b8c | t1-int | 71480213-f924-472b-8e28-b3c97ddc2af9 172.16.2.0/24 |
+--------------------------------------+-----------+-----------------------------------------------------+
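Rather than copy-pasting UUIDs from the listings above, you can resolve them by name in the shell. Here is a minimal sketch, assuming the same neutron CLI and table layout shown above; the `net_id` helper is my own, not part of OpenStack:

```shell
# Hypothetical helper: pull the id column for a named network out of the
# "neutron net-list" table output (pipe-separated, as shown above).
net_id() {
  neutron net-list | awk -F'|' -v n="$1" '$3 ~ n { gsub(/ /, "", $2); print $2 }'
}

# Only meaningful against a live cloud; resolves admin-int from the listing.
if command -v neutron >/dev/null 2>&1; then
  NET_ID=$(net_id admin-int)
  echo "$NET_ID"
fi
```

The resulting `$NET_ID` could then be passed to nova boot as `--nic net-id=$NET_ID` instead of a hard-coded UUID.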
Now we can proceed to launch the instance by issuing the following command:
[root@os-controller-01 .ssh(keystone_admin)]# nova boot --flavor m1.tiny --image cirros2 --key-name admin-key4 --security-groups admin-sg2 --nic net-id=7e9df0d3-cd37-4018-a65e-2550f37827a7 a-instance02
+--------------------------------------+------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-0000000b |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | hyjcNWq7AGio |
| config_drive | |
| created | 2016-05-08T17:27:10Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 8c13516c-0f47-4938-a943-9bc3847889e9 |
| image | cirros2 (1b98d3fd-29fa-41a4-808e-9299693e50c5) |
| key_name | admin-key4 |
| metadata | {} |
| name | a-instance02 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | admin-sg2 |
| status | BUILD |
| tenant_id | 861d3f9259414e2aaaf42a03a11691e0 |
| updated | 2016-05-08T17:27:10Z |
| user_id | 96702222956b4ca2bd1ffabfd5450d05 |
+--------------------------------------+------------------------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# nova list
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
| 8c13516c-0f47-4938-a943-9bc3847889e9 | a-instance02 | BUILD | spawning | NOSTATE | admin-int=172.16.1.4 |
| f57225b3-7e2f-4953-9991-35ccead4775e | a-instance1 | SHUTOFF | - | Shutdown | admin-int=172.16.1.2, 192.168.3.3 |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
[root@os-controller-01 .ssh(keystone_admin)]# nova list
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
| 8c13516c-0f47-4938-a943-9bc3847889e9 | a-instance02 | ACTIVE | - | Running | admin-int=172.16.1.4 |
| f57225b3-7e2f-4953-9991-35ccead4775e | a-instance1 | SHUTOFF | - | Shutdown | admin-int=172.16.1.2, 192.168.3.3 |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------+
Run nova list to check whether the deployment has finished; once it has, the status changes from BUILD to ACTIVE. Now let’s verify that we can connect to the new instance. Since it does not have a floating IP assigned yet, the only way to reach it is from another instance connected to the same network. I have another instance called a-instance1 on that network, so let’s power it on and try to ping and access the new instance from it.
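The polling can also be scripted. A rough sketch, assuming the nova list table layout shown above (the `status_of` helper is hypothetical, not an OpenStack command); alternatively, nova boot accepts `--poll` to report progress until the build completes:

```shell
# Hypothetical helper: extract the Status column for a named instance
# from the pipe-separated "nova list" table output.
status_of() {
  nova list | awk -F'|' -v n="$1" '$3 ~ n { gsub(/ /, "", $4); print $4 }'
}

# Poll every 5 seconds until the instance leaves BUILD (live cloud only).
if command -v nova >/dev/null 2>&1; then
  until [ "$(status_of a-instance02)" = "ACTIVE" ]; do sleep 5; done
fi
```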
[root@os-controller-01 .ssh(keystone_admin)]# nova start a-instance1
[root@os-controller-01 .ssh(keystone_admin)]# nova list
+--------------------------------------+--------------+--------+------------+-------------+-----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------+--------+------------+-------------+-----------------------------------+
| 8c13516c-0f47-4938-a943-9bc3847889e9 | a-instance02 | ACTIVE | - | Running | admin-int=172.16.1.4 |
| f57225b3-7e2f-4953-9991-35ccead4775e | a-instance1 | ACTIVE | - | Running | admin-int=172.16.1.2, 192.168.3.3 |
+--------------------------------------+--------------+--------+------------+-------------+-----------------------------------+
Since a-instance1 already has a floating IP assigned, you could connect to it directly over that floating IP. But suppose it didn’t: you could instead open the console attached to the instance. Log in to the Horizon portal and go to the Project section, then Compute/Instances, and in the Actions menu for a-instance1 click Console. Log in with the default user and password that cirros prints on the console, and try to ping the IP of the new instance.
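From that console, the connectivity check itself is an ordinary ping. A sketch, assuming a Linux/BusyBox ping as shipped with cirros (the `check_ping` wrapper is my own; 172.16.1.4 is a-instance02’s fixed IP from the nova list output above):

```shell
# Hypothetical wrapper around ping: three probes, 2-second reply timeout,
# output suppressed so only the exit status matters.
check_ping() { ping -c 3 -W 2 "$1" >/dev/null 2>&1; }

NEW_VM_IP=172.16.1.4   # fixed IP of a-instance02 from "nova list"
if check_ping "$NEW_VM_IP"; then
  echo "reachable"
else
  echo "not reachable"
fi
```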
The following post, Neutron: assign a floating IP to one instance, will show how to assign a floating IP to the instance you just created.