Tuesday, August 15, 2017

Ceph for Glance in Red Hat OpenStack

In the previous post we saw the deployment of OpenStack Platform 10 with Ceph.

Let's log into the Controller Node and look at all the OpenStack services registered in the service catalog.

[root@overcloud-controller-0 ~]# openstack service list
+----------------------------------+------------+----------------+
| ID                               | Name       | Type           |
+----------------------------------+------------+----------------+
| 316642206a194dc58cb4528adbc90f5d | heat-cfn   | cloudformation |
| 593f9fde36634401977d2806a9e6f6f5 | gnocchi    | metric         |
| 6fd62024660942ec8d6451429f03d336 | ceilometer | metering       |
| 762a53b0efd54947b3d6d2d6405440b0 | cinderv2   | volumev2       |
| 7783685d418845b9acc1bad50fbbfe7d | nova       | compute        |
| 93987d0d0fea4f02a73e7825c9b4adfe | glance     | image          |
| 9a9990c7780945aca10fbeba19cc4729 | aodh       | alarming       |
| a13c6596df864b17a33dc674a278c31c | cinderv3   | volumev3       |
| bdb018992e1146b499d5b56d8fdfce29 | heat       | orchestration  |
| d0e8d3c58784485888d5b36d71af39a4 | keystone   | identity       |
| d7a05ca55d2349d2b1f4dbc5fb79fb2d | cinder     | volume         |
| e238bbf05e8547e7b4b57ee4dba52a63 | neutron    | network        |
| fb32e87e977742899426a848077b09db | swift      | object-store   |
+----------------------------------+------------+----------------+
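To see where one of these services is actually reachable, the Keystone endpoints can be listed as well; output is omitted here since the URLs depend on the network layout of the deployment:

[root@overcloud-controller-0 ~]# openstack endpoint list | grep -i glance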

Let's now look at all the Ceph-related packages installed on the Controller Node.

[root@overcloud-controller-0 ~]# yum list installed | grep -i ceph
ceph-base.x86_64                    1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
ceph-common.x86_64                  1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
ceph-mon.x86_64                     1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
ceph-osd.x86_64                     1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-osd-signed
ceph-radosgw.x86_64                 1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-tools-signed
ceph-selinux.x86_64                 1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
fcgi.x86_64                         2.4.0-27.el7cp     @rhos-10.0-ceph-2.0-mon-signed
gperftools-libs.x86_64              2.4-8.el7          @rhos-10.0-ceph-2.0-mon-signed
leveldb.x86_64                      1.12.0-7.el7cp     @rhos-10.0-ceph-2.0-mon-signed
libbabeltrace.x86_64                1.2.4-4.el7cp      @rhos-10.0-ceph-2.0-mon-signed
libcephfs1.x86_64                   1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
librados2.x86_64                    1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
librbd1.x86_64                      1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
librgw2.x86_64                      1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
lttng-ust.x86_64                    2.4.1-4.el7cp      @rhos-10.0-ceph-2.0-mon-signed
puppet-ceph.noarch                  2.3.0-5.el7ost     @rhos-10.0-signed        
python-cephfs.x86_64                1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
python-flask.noarch                 1:0.10.1-5.el7     @rhos-10.0-ceph-2.0-mon-signed
python-jinja2.noarch                2.7.2-2.el7cp      @rhos-10.0-ceph-2.0-mon-signed
python-netifaces.x86_64             0.10.4-3.el7ost    @rhos-10.0-ceph-2.0-tools-signed
python-rados.x86_64                 1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
python-rbd.x86_64                   1:10.2.7-28.el7cp  @rhos-10.0-ceph-2.0-mon-signed
userspace-rcu.x86_64                0.7.16-1.el7cp     @rhos-10.0-ceph-2.0-mon-signed

Let's check the Ceph cluster status (as the admin user) before we begin doing anything; I'll ignore the HEALTH_WARN for the time being. The users referred to here are Ceph users, not Linux system users; Ceph users are created with the "ceph auth get-or-create <ceph-user-name>" command.
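For example, a user with capabilities similar to the ones we'll see below could be created by hand with something like the following (a sketch only; "client.demo" is a hypothetical user, not part of this deployment, where the director created the users for us):

[root@overcloud-controller-0 ~]# ceph auth get-or-create client.demo mon 'allow r' osd 'allow rwx pool=images' -o /etc/ceph/ceph.client.demo.keyring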

[root@overcloud-controller-0 ~]# ceph -s
    cluster 56a3d310-7ba4-11e7-a540-525400ccb563
     health HEALTH_WARN
            224 pgs degraded
            224 pgs stuck degraded
            224 pgs stuck unclean
            224 pgs stuck undersized
            224 pgs undersized
            recovery 56/84 objects degraded (66.667%)
     monmap e1: 3 mons at {overcloud-controller-0=172.17.3.21:6789/0,overcloud-controller-1=172.17.3.20:6789/0,overcloud-controller-2=172.17.3.15:6789/0}
            election epoch 10, quorum 0,1,2 overcloud-controller-2,overcloud-controller-1,overcloud-controller-0
     osdmap e20: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v7155: 224 pgs, 6 pools, 346 kB data, 28 objects
            86552 kB used, 30613 MB / 30697 MB avail
            56/84 objects degraded (66.667%)
                 224 active+undersized+degraded

Let's verify that the required Ceph configuration for the Glance integration is in place. We will see that a Ceph user "client.openstack" has been created with read permission on the monitors, and read, write, and execute permissions on the pools created for OpenStack.

[root@overcloud-controller-0 ~]# ceph osd pool ls
rbd
metrics
images
backups
volumes
vms

[root@overcloud-controller-0 ~]# ceph auth list
installed auth entries:

osd.0
key: AQAAWI5Z0ygRCBAA4p3emGMKLyvpUxuViBK28w==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQACWI5ZIFwIAxAA+59beKFKKmvdg6K6XiBGKg==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQCVu4hZAAAAABAAFCkRGc+Rx6MS8iJx1nrsoQ==
caps: [mds] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-osd
key: AQCVu4hZAAAAABAAFCkRGc+Rx6MS8iJx1nrsoQ==
caps: [mon] allow profile bootstrap-osd
client.openstack
key: AQCVu4hZAAAAABAAZIqVy2txgF4DfJUyU/6N6A==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics

The pool created for Glance is "images". Let's look at some details of the "images" pool.

[root@overcloud-controller-0 ~]# ceph osd pool stats images
pool images id 2
  nothing is going on
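Beyond the stats summary, the pool's replication size and placement-group count can be queried directly; the values returned will of course depend on your deployment:

[root@overcloud-controller-0 ~]# ceph osd pool get images size
[root@overcloud-controller-0 ~]# ceph osd pool get images pg_num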



Next, in the /etc/ceph directory we see the keyring for the "client.openstack" user and the ceph.conf file.

[root@overcloud-controller-0 ~]# ls -l /etc/ceph/
total 16
-rw-------. 1 root root 129 Aug 12 01:15 ceph.client.admin.keyring
-rw-r--r--. 1 root root 262 Aug 12 01:15 ceph.client.openstack.keyring
-rw-r--r--. 1 root root 561 Aug 12 01:15 ceph.conf
-rw-r--r--. 1 root root  92 Jul  5 20:22 rbdmap

Let's look at the keyring file for the "openstack" user.

[root@overcloud-controller-0 ~]# cat /etc/ceph/ceph.client.openstack.keyring 
[client.openstack]
key = AQCVu4hZAAAAABAAZIqVy2txgF4DfJUyU/6N6A==
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"
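If only the key itself is needed (for example to feed it into another tool), it can be pulled straight from the cluster rather than from the file:

[root@overcloud-controller-0 ~]# ceph auth get-key client.openstack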

We will now see whether we can check the Ceph cluster status as the "openstack" user.

[root@overcloud-controller-0 ~]# ceph --id openstack -s
    cluster 56a3d310-7ba4-11e7-a540-525400ccb563
     health HEALTH_WARN
            224 pgs degraded
            224 pgs stuck degraded
            224 pgs stuck unclean
            224 pgs stuck undersized
            224 pgs undersized
            recovery 56/84 objects degraded (66.667%)
     monmap e1: 3 mons at {overcloud-controller-0=172.17.3.21:6789/0,overcloud-controller-1=172.17.3.20:6789/0,overcloud-controller-2=172.17.3.15:6789/0}
            election epoch 10, quorum 0,1,2 overcloud-controller-2,overcloud-controller-1,overcloud-controller-0
     osdmap e20: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v7159: 224 pgs, 6 pools, 346 kB data, 28 objects
            86552 kB used, 30613 MB / 30697 MB avail
            56/84 objects degraded (66.667%)
                 224 active+undersized+degraded
  client io 68 B/s rd, 0 op/s rd, 0 op/s wr

Now, just for kicks, let's try getting the cluster status as user "foo", for which no keyring exists.

[root@overcloud-controller-0 ~]# ceph --id foo -s
2017-08-16 00:26:11.418444 7f5b36807700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.foo.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2017-08-16 00:26:11.418458 7f5b36807700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2017-08-16 00:26:11.418459 7f5b36807700  0 librados: client.foo initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound

Next, we'll see how Glance is configured to use "rbd" as its backend. We can see below that "rbd_store_pool" is "images" and "rbd_store_user" is "openstack".

[root@overcloud-controller-0 ~]# cat /etc/glance/glance-api.conf | grep rbd
#         * rbd
stores = glance.store.http.Store,glance.store.rbd.Store
#     * rbd
# Allowed values: file, filesystem, http, https, swift, swift+http, swift+https, swift+config, rbd, sheepdog, cinder, vsphere
default_store = rbd
#rbd_store_chunk_size = 8
#rbd_store_pool = images
rbd_store_pool = images
# section in rbd_store_ceph_conf.
#     * rbd_store_ceph_conf
#rbd_store_user = <None>
rbd_store_user = openstack
#     * rbd_store_user
#rbd_store_ceph_conf = /etc/ceph/ceph.conf
#         * rbd
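Stripped of the comments, the effective settings boil down to something like this (a condensed sketch; in the Newton release these options live under the [glance_store] section of glance-api.conf, and rbd_store_ceph_conf is left at its default of /etc/ceph/ceph.conf):

[glance_store]
stores = glance.store.http.Store,glance.store.rbd.Store
default_store = rbd
rbd_store_pool = images
rbd_store_user = openstack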

Next, we can see the status of the "openstack-glance-api" service.

[root@overcloud-controller-0 ~]# systemctl status openstack-glance-api
openstack-glance-api.service - OpenStack Image Service (code-named Glance) API server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-08-12 01:48:39 UTC; 3 days ago
 Main PID: 190781 (glance-api)
   CGroup: /system.slice/openstack-glance-api.service
           ├─190781 /usr/bin/python2 /usr/bin/glance-api
           ├─190894 /usr/bin/python2 /usr/bin/glance-api
           ├─190895 /usr/bin/python2 /usr/bin/glance-api
           ├─190896 /usr/bin/python2 /usr/bin/glance-api
           └─190897 /usr/bin/python2 /usr/bin/glance-api

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

Let's download a CirrOS image and then upload it to Glance.

[root@overcloud-controller-0 ~]# curl -o /tmp/cirros.qcow2 \
>  http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 12.6M  100 12.6M    0     0   297k      0  0:00:43  0:00:43 --:--:--  590k
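Before uploading, it doesn't hurt to record the image's md5 checksum; Glance reports the same value in the "checksum" field once the upload completes:

[root@overcloud-controller-0 ~]# md5sum /tmp/cirros.qcow2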


[root@overcloud-controller-0 ~]# openstack image create  --disk-format qcow2 --container-format bare --public  --file /tmp/cirros.qcow2 cirros
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                                      |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                                                                                                                                                                           |
| container_format | bare                                                                                                                                                                                                       |
| created_at       | 2017-08-16T01:38:46Z                                                                                                                                                                                       |
| disk_format      | qcow2                                                                                                                                                                                                      |
| file             | /v2/images/9b8972f9-6116-4bbe-97de-b416a74cdad2/file                                                                                                                                                       |
| id               | 9b8972f9-6116-4bbe-97de-b416a74cdad2                                                                                                                                                                       |
| min_disk         | 0                                                                                                                                                                                                          |
| min_ram          | 0                                                                                                                                                                                                          |
| name             | cirros                                                                                                                                                                                                     |
| owner            | f0ac0df6e1be446394f28ad66fb40f3c                                                                                                                                                                           |
| properties       | direct_url='rbd://56a3d310-7ba4-11e7-a540-525400ccb563/images/9b8972f9-6116-4bbe-97de-b416a74cdad2/snap', locations='[{u'url': u'rbd://56a3d310-7ba4-11e7-a540-525400ccb563/images/9b8972f9-6116-4bbe-     |
|                  | 97de-b416a74cdad2/snap', u'metadata': {}}]'                                                                                                                                                                |
| protected        | False                                                                                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                                                                                          |
| size             | 13287936                                                                                                                                                                                                   |
| status           | active                                                                                                                                                                                                     |
| tags             |                                                                                                                                                                                                            |
| updated_at       | 2017-08-16T01:38:53Z                                                                                                                                                                                       |
| virtual_size     | None                                                                                                                                                                                                       |
| visibility       | public                                                                                                                                                                                                     |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Using the rbd command, we can see that the RBD image name is the "id" of the Glance image.

[root@overcloud-controller-0 ~]# rbd --id openstack -p images ls
9b8972f9-6116-4bbe-97de-b416a74cdad2
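To dig a little deeper into that RBD image, its size, object layout, and the protected "snap" snapshot that Glance creates for cloning can be inspected as well; the UUID below is of course specific to this upload:

[root@overcloud-controller-0 ~]# rbd --id openstack -p images info 9b8972f9-6116-4bbe-97de-b416a74cdad2
[root@overcloud-controller-0 ~]# rbd --id openstack -p images snap ls 9b8972f9-6116-4bbe-97de-b416a74cdad2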

The "ceph df" command gives a good overview of all the Ceph pools and the usage of each of them.

[root@overcloud-cephstorage-0 ceph]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    30697M     30582M         115M          0.38 
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS 
    rbd         0           0         0        10194M           0 
    metrics     1        609k         0        10194M          47 
    images      2      12976k      0.04        10194M           7 
    backups     3           0         0        10194M           0 
    volumes     4           0         0        10194M           0 
    vms         5           0         0        10194M           0 

The "ceph osd df tree" command gives more details on the OSDs. The ceph-osd daemon did not start for some reason on overcloud-cephstorage-1; I still need to root-cause that.

[root@overcloud-cephstorage-0 ceph]# ceph osd df tree
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE VAR  PGS TYPE NAME                        
-1 0.02917        - 30697M   115M 30582M 0.38 1.00   0 root default                     
-2 0.01459        - 15348M 59464k 15290M 0.38 1.00   0     host overcloud-cephstorage-2 
 0 0.01459  1.00000 15348M 59464k 15290M 0.38 1.00 224         osd.0                    
-3 0.01459        - 15348M 59228k 15291M 0.38 1.00   0     host overcloud-cephstorage-0 
 1 0.01459  1.00000 15348M 59228k 15291M 0.38 1.00 224         osd.1                    
              TOTAL 30697M   115M 30582M 0.38                                           

MIN/MAX VAR: 1.00/1.00  STDDEV: 0
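When I get to root-causing the missing OSD, the first stops on overcloud-cephstorage-1 will be the systemd units and their journal (a glob is used here since the daemon never registered an OSD id):

[root@overcloud-cephstorage-1 ~]# systemctl status 'ceph-osd@*'
[root@overcloud-cephstorage-1 ~]# journalctl -u 'ceph-osd@*' --no-pager | tail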

I have 1 x HDD on each of the Ceph Storage nodes dedicated to Ceph, so both the data and the journal live on the same disk. The output of "fdisk -l" clearly shows that the first partition of vdb is used for data, while the second partition is used for the journal.

[root@overcloud-cephstorage-0 ~]# fdisk -l

Disk /dev/vda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000451ac

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1            2048        4095        1024   83  Linux
/dev/vda2   *        4096    41943006    20969455+  83  Linux
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: EF24FDED-84D2-4723-A855-092A9438F6F7


#         Start          End    Size  Type            Name
 1     10487808     41943006     15G  unknown         ceph data
 2         2048     10487807      5G  unknown         ceph journal
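Another way to see the same data/journal mapping is the ceph-disk utility that ships with Jewel; it reports each partition's role and, for data partitions, the OSD it belongs to (output omitted here):

[root@overcloud-cephstorage-0 ~]# ceph-disk list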



Red Hat OpenStack 10 deployment with Ceph Storage

I had deployed Red Hat OpenStack Platform 10 (Newton) on my Dell box using virtual machines.

The deployment had 3 x Controller Nodes, 1 x Compute Node, and 3 x Ceph Nodes.


[root@openstack ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 249   undercloud                     running
 278   overcloud-controller3          running
 279   overcloud-ceph2                running
 280   overcloud-compute1             running
 281   overcloud-ceph3                running
 282   overcloud-ceph1                running
 285   overcloud-controller2          running
 286   overcloud-controller1          running
 -     overcloud-compute2             shut off

I will not go through the entire deployment process, but I will show some of the YAML files needed for the deployment.

Here is the command/script that was used to deploy the Overcloud.

[stack@undercloud ~]$ cat run_deploy.sh.ceph 
#!/bin/bash

set -o verbose
source stackrc
pushd /home/stack

openstack overcloud deploy --templates ~/my_templates -e /home/stack/my_templates/advanced-networking.yaml -e /home/stack/my_templates/storage-environment.yaml --ntp-server 0.north-america.pool.ntp.org --control-flavor control --control-scale 3 --compute-flavor compute --compute-scale 1 --ceph-storage-flavor ceph-storage --ceph-storage-scale 3 --neutron-tunnel-types vxlan --neutron-network-type vxlan

echo DONE

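While the deploy command runs, the progress of the overcloud Heat stack can be followed from another shell on the undercloud, for example:

[stack@undercloud ~]$ openstack stack list
[stack@undercloud ~]$ openstack stack resource list overcloud | grep -iv complete
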
Next, we will look at the storage-environment.yaml file used in this deployment.

[stack@undercloud ~]$ cat my_templates/storage-environment.yaml
## A Heat environment file which can be used to set up storage
## backends. Defaults to Ceph used as a backend for Cinder, Glance and
## Nova ephemeral storage.
resource_registry:
  OS::TripleO::Services::CephMon: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-client.yaml

  OS::TripleO::NodeUserData: /home/stack/my_templates/firstboot/wipe-disks.yaml

parameter_defaults:

  #### BACKEND SELECTION ####

  ## Whether to enable iscsi backend for Cinder.
  CinderEnableIscsiBackend: false
  ## Whether to enable rbd (Ceph) backend for Cinder.
  CinderEnableRbdBackend: true
  ## Cinder Backup backend can be either 'ceph' or 'swift'.
  CinderBackupBackend: ceph
  ## Whether to enable NFS backend for Cinder.
  # CinderEnableNfsBackend: false
  ## Whether to enable rbd (Ceph) backend for Nova ephemeral storage.
  NovaEnableRbdBackend: true
  ## Glance backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GlanceBackend: rbd
  ## Gnocchi backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GnocchiBackend: rbd


  #### CINDER NFS SETTINGS ####

  ## NFS mount options
  # CinderNfsMountOptions: ''
  ## NFS mount point, e.g. '192.168.122.1:/export/cinder'
  # CinderNfsServers: ''


  #### GLANCE NFS SETTINGS ####

  ## Make sure to set `GlanceBackend: file` when enabling NFS
  ##
  ## Whether to make Glance 'file' backend a NFS mount
  # GlanceNfsEnabled: false
  ## NFS share for image storage, e.g. '192.168.122.1:/export/glance'
  ## (If using IPv6, use both double- and single-quotes,
  ## e.g. "'[fdd0::1]:/export/glance'")
  # GlanceNfsShare: ''
  ## Mount options for the NFS image storage mount point
  # GlanceNfsOptions: 'intr,context=system_u:object_r:glance_var_lib_t:s0'


  #### CEPH SETTINGS ####

  ## When deploying Ceph Nodes through the oscplugin CLI, the following
  ## parameters are set automatically by the CLI. When deploying via
  ## heat stack-create or ceph on the controller nodes only,
  ## they need to be provided manually.

  ## Number of Ceph storage nodes to deploy
  # CephStorageCount: 0
  ## Ceph FSID, e.g. '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  # CephClusterFSID: ''
  ## Ceph monitor key, e.g. 'AQC+Ox1VmEr3BxAALZejqeHj50Nj6wJDvs96OQ=='
  # CephMonKey: ''
  ## Ceph admin key, e.g. 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
  # CephAdminKey: ''
  ## Ceph client key, e.g 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
  # CephClientKey: ''

  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vdb': {}

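The ExtraConfig block above is what defines the OSD layout: a single OSD on /dev/vdb with its journal co-located on the same disk. With more disks per node, the same hash could describe them; a hypothetical sketch (with /dev/vdc as a second data disk and /dev/vdd holding the journals, neither of which exists in my setup) would look like:

  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vdb':
        journal: '/dev/vdd'
      '/dev/vdc':
        journal: '/dev/vdd'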

As you can see above, rbd (Ceph) is used as the backend for Cinder. Ceph is also used for Cinder backups, Nova ephemeral storage, the Glance image store, and the Gnocchi backend.

There are two other files that I'd like to share: the first is advanced-networking.yaml and the other is the nic-config file for the Ceph Node.

[stack@undercloud ~]$ cat my_templates/advanced-networking.yaml
# Enable the creation of Neutron networks for isolated Overcloud
# traffic and configure each role to assign ports (related
# to that role) on these networks.
resource_registry:
  OS::TripleO::Network::External: /home/stack/my_templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /home/stack/my_templates/network/internal_api.yaml
  OS::TripleO::Network::StorageMgmt: /home/stack/my_templates/network/storage_mgmt.yaml
  OS::TripleO::Network::Storage: /home/stack/my_templates/network/storage.yaml
  OS::TripleO::Network::Tenant: /home/stack/my_templates/network/tenant.yaml
  # Management network is optional and disabled by default.
  # To enable it, include environments/network-management.yaml
  #OS::TripleO::Network::Management: /home/stack/my_templates/network/management.yaml

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /home/stack/my_templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /home/stack/my_templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /home/stack/my_templates/network/ports/storage.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /home/stack/my_templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /home/stack/my_templates/network/ports/vip.yaml

  # Port assignments for service virtual IPs for the controller role
  OS::TripleO::Controller::Ports::RedisVipPort: /home/stack/my_templates/network/ports/vip.yaml
  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /home/stack/my_templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /home/stack/my_templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort: /home/stack/my_templates/network/ports/storage.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /home/stack/my_templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Controller::Ports::TenantPort: /home/stack/my_templates/network/ports/tenant.yaml
  #OS::TripleO::Controller::Ports::ManagementPort: /home/stack/my_templates/network/ports/management.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::ExternalPort: /home/stack/my_templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::InternalApiPort: /home/stack/my_templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort: /home/stack/my_templates/network/ports/storage.yaml
  OS::TripleO::Compute::Ports::StorageMgmtPort: /home/stack/my_templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::TenantPort: /home/stack/my_templates/network/ports/tenant.yaml
  #OS::TripleO::Compute::Ports::ManagementPort: /home/stack/my_templates/network/ports/management.yaml

  # Port assignments for the ceph storage role
  OS::TripleO::CephStorage::Ports::ExternalPort: /home/stack/my_templates/network/ports/noop.yaml
  OS::TripleO::CephStorage::Ports::InternalApiPort: /home/stack/my_templates/network/ports/noop.yaml
  OS::TripleO::CephStorage::Ports::StoragePort: /home/stack/my_templates/network/ports/storage.yaml
  OS::TripleO::CephStorage::Ports::StorageMgmtPort: /home/stack/my_templates/network/ports/storage_mgmt.yaml
  OS::TripleO::CephStorage::Ports::TenantPort: /home/stack/my_templates/network/ports/noop.yaml
  #OS::TripleO::CephStorage::Ports::ManagementPort: /home/stack/my_templates/network/ports/management.yaml


# NIC Configs for our roles
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/my_templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/my_templates/nic-configs/controller.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/my_templates/nic-configs/ceph-storage.yaml

# Enable the creation of Neutron networks for isolated Overcloud
# traffic and configure each role to assign ports (related
# to that role) on these networks.

parameter_defaults:
  # Internal API used for private OpenStack Traffic
  InternalApiNetCidr: 172.17.1.0/24
  InternalApiAllocationPools: [{'start': '172.17.1.10', 'end': '172.17.1.200'}]
  InternalApiNetworkVlanID: 101

  # Tenant Network Traffic - will be used for VXLAN over VLAN
  TenantNetCidr: 172.17.2.0/24
  TenantAllocationPools: [{'start': '172.17.2.10', 'end': '172.17.2.200'}]
  TenantNetworkVlanID: 201

  StorageNetCidr: 172.17.3.0/24
  StorageAllocationPools: [{'start': '172.17.3.10', 'end': '172.17.3.200'}]
  StorageNetworkVlanID: 301

  StorageMgmtNetCidr: 172.17.4.0/24
  StorageMgmtAllocationPools: [{'start': '172.17.4.10', 'end': '172.17.4.200'}]
  StorageMgmtNetworkVlanID: 401

  # External Networking Access - Public API Access
  ExternalNetCidr: 192.168.122.0/24

  # Leave room for floating IPs in the External allocation pool (if required)
  ExternalAllocationPools: [{'start': '192.168.122.100', 'end': '192.168.122.129'}]

  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 192.168.122.1

  # Add in configuration for the Control Plane
  ControlPlaneSubnetCidr: "24"
  ControlPlaneDefaultRoute: 172.16.0.1
  EC2MetadataIp: 172.16.0.1
  DnsServers: ['192.168.122.1', '8.8.8.8']


Next, we will look at the NIC config for the Ceph Storage Node.

[stack@undercloud ~]$ cat /home/stack/my_templates/nic-configs/ceph-storage.yaml
heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config to configure multiple interfaces
  for the ceph storage role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  ExternalNetworkVlanID:
    default: 10
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ExternalInterfaceDefaultRoute:
    default: '10.0.0.1'
    description: default route for the external network
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The subnet CIDR of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: json
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 0.0.0.0/0
                  next_hop: {get_param: ControlPlaneDefaultRoute}
                  # Optionally have this interface as default route
                  default: true
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
            -
              type: ovs_bridge
              name: br-isolated
              use_dhcp: false
              members:
                -
                  type: interface
                  name: nic2
                  # force the MAC address of the bridge to this interface
                  primary: true
                -
                  type: vlan
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  vlan_id: {get_param: StorageMgmtNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: StorageMgmtIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

NIC 1 of the Ceph node is used for provisioning, while NIC 2 carries the VLANs for the Storage and Storage Management networks.
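
On the deployed Ceph node this can be sanity-checked after the fact; os-net-config should have created the br-isolated OVS bridge with VLAN devices for the two storage networks (a quick check rather than an exact transcript, and it assumes the vlan<ID> naming convention that os-net-config uses):

[root@overcloud-cephstorage-0 ~]# ovs-vsctl show
[root@overcloud-cephstorage-0 ~]# ip addr | grep -E 'vlan|br-isolated'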

After the deployment completes successfully, here is what you will see.

[stack@undercloud ~]$ neutron net-list
+--------------------------------------+--------------+-------------------------------------------------------+
| id                                   | name         | subnets                                               |
+--------------------------------------+--------------+-------------------------------------------------------+
| 542b5f6d-d0a6-4fd2-a905-996208a77525 | storage      | 4449344f-089f-426f-bd5f-37591e9e71ea 172.17.3.0/24    |
| 9e2ab19a-3b29-47a7-8db8-995883e1511b | internal_api | 1901412a-d54b-4274-8042-7daacfe07bfc 172.17.1.0/24    |
| a9fedf2d-537c-461c-b15c-1f3fe2467172 | ctlplane     | 869c6ad4-d133-4076-875f-347026dfee88 172.16.0.0/24    |
| b7bc5c80-a57b-40b7-ac5c-9beac1da84d7 | storage_mgmt | 2b724ccd-a9eb-4aa8-907a-14bcebd34e27 172.17.4.0/24    |
| c5392029-9438-4521-8202-f9a89648d8a5 | tenant       | 3b4b55a3-97d1-40dc-b95c-51bd2012e2ae 172.17.2.0/24    |
| cef77887-476d-47ba-8d06-4fa65acbe5ff | external     | 0fca9fa0-b3eb-4b25-a987-cc936a28d659 192.168.122.0/24 |
+--------------------------------------+--------------+-------------------------------------------------------+
[stack@undercloud ~]$ ironic node-list
+--------------------------------------+-----------------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name                  | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+-----------------------+--------------------------------------+-------------+--------------------+-------------+
| 945a30f3-dc7f-4ca4-946d-18f38a352e1e | overcloud-ceph1       | bdc05172-4e5f-4aaf-9758-bbe0d2cb2012 | power on    | active             | False       |
| d78b8cac-4379-4825-bb22-4e9e9f259fcd | overcloud-ceph2       | 1208a491-157f-4f43-9267-017aba21b331 | power on    | active             | False       |
| 4b50450a-eaee-4828-ab43-a96f36d52789 | overcloud-ceph3       | 2a658d35-64f7-40d4-9d54-142640279a86 | power on    | active             | False       |
| 682b6813-5df2-4829-927a-a1b95e081119 | overcloud-compute1    | 8b626a49-ad7b-4e1e-9dec-8ca1abb5073e | power on    | active             | False       |
| 2645e0ed-c403-4e16-afdd-b293726fd0eb | overcloud-compute2    | None                                 | power off   | available          | False       |
| af20af15-7d13-41fa-9862-2a191108be4c | overcloud-controller1 | 5ee03387-25bc-4368-9da4-e6a73cecd455 | power on    | active             | False       |
| 4272e8f0-7409-4858-b17a-323ff0fbd43a | overcloud-controller2 | f55665f3-8926-4bc2-8068-74d9001600e6 | power on    | active             | False       |
| a9325cf0-9b0f-405f-8cbe-697e4146ffbd | overcloud-controller3 | 40fc318e-22fe-4418-a063-eae58719da38 | power on    | active             | False       |
+--------------------------------------+-----------------------+--------------------------------------+-------------+--------------------+-------------+
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks             |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| bdc05172-4e5f-4aaf-9758-bbe0d2cb2012 | overcloud-cephstorage-0 | ACTIVE | -          | Running     | ctlplane=172.16.0.32 |
| 1208a491-157f-4f43-9267-017aba21b331 | overcloud-cephstorage-1 | ACTIVE | -          | Running     | ctlplane=172.16.0.26 |
| 2a658d35-64f7-40d4-9d54-142640279a86 | overcloud-cephstorage-2 | ACTIVE | -          | Running     | ctlplane=172.16.0.34 |
| 8b626a49-ad7b-4e1e-9dec-8ca1abb5073e | overcloud-compute-0     | ACTIVE | -          | Running     | ctlplane=172.16.0.27 |
| 40fc318e-22fe-4418-a063-eae58719da38 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=172.16.0.31 |
| 5ee03387-25bc-4368-9da4-e6a73cecd455 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=172.16.0.23 |
| f55665f3-8926-4bc2-8068-74d9001600e6 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=172.16.0.36 |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
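From the undercloud, each overcloud node can then be reached over the ctlplane network as the heat-admin user (the standard TripleO access path), for example:

[stack@undercloud ~]$ ssh heat-admin@172.16.0.31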

In the next post we will look into the details of Ceph.