In the previous posts of this series, I covered the installation of the Cisco ACI Opflex plugin in our OpenStack lab and the basics of the plugin. Specifically, we went through the core benefits of the Opflex OVS agent, such as virtualization visibility on ACI, distributed routing, and optimized DHCP and metadata functions.
In this part 3 of the series, I will cover the setup of external networks that allow OpenStack instances in the created tenants to communicate with the outside world through ACI.
OpenStack external networks with ACI Opflex plugin
The plugin allows the creation of Neutron external networks. An external network can be either shared among tenants or dedicated per tenant.
- Shared external networks: In this scenario, an L3Out is defined in ACI tenant common and shared among several projects (tenants). This scenario is often used when there is a limited number of L3Outs in the ACI fabric, or when there is little concern about sharing the same L3Out among different projects.
- Dedicated external networks: In this scenario, a dedicated L3Out needs to be defined in the corresponding Cisco ACI tenant.
In either scenario, the L3Out needs to be defined on ACI. We can use any tool we are familiar with for this task: navigating the GUI, the CLI, the REST API, or simply posting an .xml configuration file through the APIC GUI.
We can have SNAT or Floating IP (FIP) enabled for these types of external networks.
- If we want SNAT or FIP to be enabled:
  - The L3Out VRF should be separate from the default routed VRF (DefaultVRF) that the ACI plugin creates on ACI for OpenStack.
  - This condition is met by default in the shared external network scenario, as the tenant VRFs and the L3Out VRF in tenant common are separate.
- If we do not want SNAT or FIP:
  - The L3Out should be defined in the tenant VRF that the integration plugin creates.
  - For shared external networks, an address scope has to be defined, as the tenant VRFs and the common L3Out VRF are separate by default (not covered in this post).
Lab scenario
We will go through these scenarios in our lab, where we now have instances spun up and working connectivity among instances within each tenant.
Our ultimate goal will be to provide external connectivity for the instances in projects Red and Blue, covering the different scenarios briefly described above.
- Scenario 1: Shared external network – Instances in project Red will use a shared external network with L3Out defined in ACI tenant common
- Scenario 1.1: Using SNAT
- Scenario 1.2: Using Floating IP
- Scenario 2: Dedicated external network with NAT – Instances in project Blue will use a dedicated external network with L3Out defined in ACI tenant blue.
- Scenario 2.1: Using SNAT
- Scenario 2.2: Using Floating IP
- Scenario 3: Dedicated external network without NAT – Instances in project Blue will use a dedicated external network with L3Out defined in ACI tenant blue.
Now let’s get into the details.
Scenario 1: Shared external network
Creating L3Out in tenant common
First, we need to create the L3Out in tenant common, using the APIC GUI or any other supported method (REST API, XML config, CLI, etc.). Below is a sample of the XML we can post to APIC with parent dn uni/tn-common to configure the L3Out over a vPC to the external ASR 1000 router:
<?xml version="1.0" encoding="UTF-8"?>
<imdata totalCount="1">
  <l3extOut annotation="" descr="" dn="uni/tn-common/out-dc-out" enforceRtctrl="export" name="dc-out" nameAlias="" ownerKey="" ownerTag="" targetDscp="unspecified">
    <ospfExtP annotation="" areaCost="1" areaCtrl="redistribute,summary" areaId="0.0.0.1" areaType="regular" descr="" multipodInternal="no" nameAlias=""/>
    <l3extRsL3DomAtt annotation="" tDn="uni/l3dom-TO-A1K-L3-DOM"/>
    <l3extRsEctx annotation="" tnFvCtxName="spdc-vrf"/>
    <l3extLNodeP annotation="" configIssues="" descr="" name="dc-out_nodeProfile" nameAlias="" ownerKey="" ownerTag="" tag="yellow-green" targetDscp="unspecified">
      <l3extRsNodeL3OutAtt annotation="" configIssues="" rtrId="100.100.100.1" rtrIdLoopBack="no" tDn="topology/pod-1/node-101"/>
      <l3extRsNodeL3OutAtt annotation="" configIssues="" rtrId="100.100.100.2" rtrIdLoopBack="no" tDn="topology/pod-1/node-102"/>
      <l3extLIfP annotation="" descr="" name="dc-out_vpcIpv4" nameAlias="" ownerKey="" ownerTag="" prio="unspecified" tag="yellow-green">
        <ospfIfP annotation="" authKeyId="1" authType="none" descr="" name="" nameAlias="">
          <ospfRsIfPol annotation="" tnOspfIfPolName="common-ospf-broadcast"/>
        </ospfIfP>
        <l3extRsPathL3OutAtt addr="0.0.0.0" annotation="" autostate="disabled" descr="" encap="vlan-1011" encapScope="local" ifInstT="ext-svi" ipv6Dad="enabled" llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="9000" tDn="topology/pod-1/protpaths-101-102/pathep-[Switch101-102_1-ports-33_PolGrp]" targetDscp="unspecified">
          <l3extMember addr="172.16.11.3/24" annotation="" descr="" ipv6Dad="enabled" llAddr="::" name="" nameAlias="" side="B">
            <l3extIp addr="172.16.11.1/24" annotation="" descr="" ipv6Dad="enabled" name="" nameAlias=""/>
          </l3extMember>
          <l3extMember addr="172.16.11.2/24" annotation="" descr="" ipv6Dad="enabled" llAddr="::" name="" nameAlias="" side="A">
            <l3extIp addr="172.16.11.1/24" annotation="" descr="" ipv6Dad="enabled" name="" nameAlias=""/>
          </l3extMember>
        </l3extRsPathL3OutAtt>
        <l3extRsNdIfPol annotation="" tnNdIfPolName=""/>
        <l3extRsLIfPCustQosPol annotation="" tnQosCustomPolName=""/>
        <l3extRsIngressQosDppPol annotation="" tnQosDppPolName=""/>
        <l3extRsEgressQosDppPol annotation="" tnQosDppPolName=""/>
        <l3extRsArpIfPol annotation="" tnArpIfPolName=""/>
      </l3extLIfP>
    </l3extLNodeP>
    <l3extInstP annotation="" descr="" exceptionTag="" floodOnEncap="disabled" matchT="AtleastOne" name="dc-out-ext-epg" nameAlias="" prefGrMemb="exclude" prio="unspecified" targetDscp="unspecified">
      <l3extSubnet aggregate="" annotation="" descr="" ip="0.0.0.0/0" name="" nameAlias="" scope="import-security"/>
      <fvRsCustQosPol annotation="" tnQosCustomPolName=""/>
    </l3extInstP>
  </l3extOut>
</imdata>
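If you prefer to script this rather than click through the GUI, here is a minimal sketch of posting the XML through the APIC REST API with curl. The APIC address, the credentials, and the dc-out.xml file name (the XML above saved to a file) are placeholders for your own values:

APIC=https://apic.example.com

# Authenticate and save the session cookie
curl -sk -X POST "$APIC/api/aaaLogin.json" \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}' \
  -c cookie.txt

# Post the L3Out XML under the parent dn uni/tn-common
curl -sk -X POST "$APIC/api/mo/uni/tn-common.xml" \
  -b cookie.txt -d @dc-out.xml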
On the external ASR 1000 router, the OSPF peering is done in a separate VRF, spdc-vrf, with matching OSPF parameters. The ASR 1000 also provides external connectivity from VRF spdc-vrf to our internal IT network and the Internet in the global VRF, using NAT overload. Below is the configuration on the ASR 1000 running IOS XE:
vrf definition spdc-vrf
 !
 address-family ipv4
 exit-address-family
!
interface Port-channel10.1011
 encapsulation dot1Q 1011
 vrf forwarding spdc-vrf
 ip address 172.16.11.4 255.255.255.0
 ip nat inside
 ip ospf network broadcast
 ip ospf mtu-ignore
 ip ospf 1011 area 1
!
router ospf 1011 vrf spdc-vrf
 default-information originate
!
ip nat inside source list DC-OUT interface GigabitEthernet0/0/1.360 vrf spdc-vrf overload
!
ip access-list standard DC-OUT
 permit any
!
ip route vrf spdc-vrf 0.0.0.0 0.0.0.0 10.138.157.129 global
After the L3Out and external router configuration, the OSPF neighbor relationship should be established:
hni05-lab-a1002-2#show ip ospf 1011 neigh

Neighbor ID     Pri   State           Dead Time   Address         Interface
100.100.100.1     1   FULL/DROTHER    00:00:33    172.16.11.2     Port-channel10.1011
100.100.100.2     1   FULL/BDR        00:00:31    172.16.11.3     Port-channel10.1011
Creating a shared external network on OpenStack to use the L3Out
In the next step, we will configure the OpenStack plugin to use the created L3Out dc-out for a shared external network named external-common-net.
On the controller node, using the aimctl command, we can query Cisco ACI for the created L3Out:
ubuntu@os-controller:~$ aimctl manager external-network-find
+---------------+--------------+----------------+
| tenant_name   | l3out_name   | name           |
|---------------+--------------+----------------|
| common        | dc-out       | dc-out-ext-epg |
+---------------+--------------+----------------+
Then, using the neutron command, we create an external network that consumes the created L3Out (dc-out-ext-epg):
$ neutron net-create external-common-net --router:external --apic:distinguished_names type=dict ExternalNetwork=uni/tn-common/out-dc-out/instP-dc-out-ext-epg
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Failed to discover available identity versions when contacting http://172.16.9.217:5000/v3. Attempting to parse version from URL.
Created a new network:
+--------------------------------------+--------------------------------------+
| Field                                | Value                                |
+--------------------------------------+--------------------------------------+
| admin_state_up                       | True                                 |
| apic:bgp_asn                         | 0                                    |
| apic:bgp_enable                      | False                                |
| apic:bgp_type                        | default_export                       |
| apic:distinguished_names             | {"EndpointGroup": "uni/tn-common/ap-juju2-ostack_OpenStack/epg-EXT-dc-out", "ExternalNetwork": "uni/tn-common/out-dc-out/instP-dc-out-ext-epg", "VRF": "uni/tn-common/ctx-spdc-vrf", "BridgeDomain": "uni/tn-common/BD-juju2-ostack_EXT-dc-out"} |
| apic:external_cidrs                  | 0.0.0.0/0                            |
| apic:nat_type                        | distributed                          |
| apic:nested_domain_allowed_vlans     |                                      |
| apic:nested_domain_infra_vlan        |                                      |
| apic:nested_domain_name              |                                      |
| apic:nested_domain_node_network_vlan |                                      |
| apic:nested_domain_service_vlan      |                                      |
| apic:nested_domain_type              |                                      |
| apic:svi                             | False                                |
| apic:synchronization_state           | build                                |
| availability_zone_hints              |                                      |
| availability_zones                   |                                      |
| created_at                           | 2020-06-29T16:05:26Z                 |
| description                          |                                      |
| id                                   | f20f6e8a-16bf-4821-b4b2-e524c6eeb08f |
| ipv4_address_scope                   |                                      |
| ipv6_address_scope                   |                                      |
| is_default                           | False                                |
| mtu                                  | 1500                                 |
| name                                 | external-common-net                  |
| port_security_enabled                | True                                 |
| project_id                           | 75ff76b727fc4a5eb754c4393c619e3f     |
| provider:network_type                | opflex                               |
| provider:physical_network            | physnet1                             |
| provider:segmentation_id             |                                      |
| revision_number                      | 6                                    |
| router:external                      | True                                 |
| shared                               | False                                |
| status                               | ACTIVE                               |
| subnets                              |                                      |
| tags                                 |                                      |
| tenant_id                            | 75ff76b727fc4a5eb754c4393c619e3f     |
| updated_at                           | 2020-06-29T16:05:28Z                 |
+--------------------------------------+--------------------------------------+
By default with the Opflex plugin, the external network is created with NAT enabled (apic:nat_type is 'distributed').
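To double-check the NAT type on an existing network, a quick (if inelegant) filter on the net-show output works:

neutron net-show external-common-net | grep apic:nat_type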
Next, we will create a SNAT pool and a Floating IP pool, attach them to the external network (external-common-net), and set red-router01 as the external gateway:

neutron subnet-create external-common-net 172.17.0.0/24 --name ext-subnet-common --disable-dhcp --gateway 172.17.0.1 --apic:snat_host_pool True
neutron subnet-create external-common-net 172.18.0.0/24 --name ext-subnet-FIP --allocation-pool start=172.18.0.10,end=172.18.0.100 --disable-dhcp --gateway 172.18.0.1
openstack router set --external-gateway external-common-net red-router01
A subnet for the SNAT pool has been created:
Created a new subnet:
+----------------------------+------------------------------------------------+
| Field                      | Value                                          |
+----------------------------+------------------------------------------------+
| allocation_pools           | {"start": "172.17.0.2", "end": "172.17.0.254"} |
| apic:distinguished_names   | {}                                             |
| apic:snat_host_pool        | True                                           |
| apic:synchronization_state | N/A                                            |
| cidr                       | 172.17.0.0/24                                  |
| created_at                 | 2020-07-11T15:19:13Z                           |
| description                |                                                |
| dns_nameservers            |                                                |
| enable_dhcp                | False                                          |
| gateway_ip                 | 172.17.0.1                                     |
| host_routes                |                                                |
| id                         | 19974370-8451-4711-aa98-ab3897ebc6d7           |
| ip_version                 | 4                                              |
| ipv6_address_mode          |                                                |
| ipv6_ra_mode               |                                                |
| name                       | ext-subnet-common                              |
| network_id                 | 9d73e2cf-50d1-4da5-8d6b-fb252eb49d58           |
| project_id                 | 07d74314cbc24118888cf622976ad116               |
| revision_number            | 0                                              |
| service_types              |                                                |
| subnetpool_id              |                                                |
| tags                       |                                                |
| tenant_id                  | 07d74314cbc24118888cf622976ad116               |
| updated_at                 | 2020-07-11T15:19:13Z                           |
+----------------------------+------------------------------------------------+
Likewise, a new subnet for the Floating IP pool has been created:
Created a new subnet:
+----------------------------+-------------------------------------------------+
| Field                      | Value                                           |
+----------------------------+-------------------------------------------------+
| allocation_pools           | {"start": "172.18.0.10", "end": "172.18.0.100"} |
| apic:distinguished_names   | {}                                              |
| apic:snat_host_pool        | False                                           |
| apic:synchronization_state | N/A                                             |
| cidr                       | 172.18.0.0/24                                   |
| created_at                 | 2020-07-11T15:21:19Z                            |
| description                |                                                 |
| dns_nameservers            |                                                 |
| enable_dhcp                | False                                           |
| gateway_ip                 | 172.18.0.1                                      |
| host_routes                |                                                 |
| id                         | 82a315c8-b0af-4704-b7d8-74c54a838b16            |
| ip_version                 | 4                                               |
| ipv6_address_mode          |                                                 |
| ipv6_ra_mode               |                                                 |
| name                       | ext-subnet-FIP                                  |
| network_id                 | 9d73e2cf-50d1-4da5-8d6b-fb252eb49d58            |
| project_id                 | 07d74314cbc24118888cf622976ad116                |
| revision_number            | 0                                               |
| service_types              |                                                 |
| subnetpool_id              |                                                 |
| tags                       |                                                 |
| tenant_id                  | 07d74314cbc24118888cf622976ad116                |
| updated_at                 | 2020-07-11T15:21:19Z                            |
+----------------------------+-------------------------------------------------+
Scenario 1.1: External network with SNAT
Each compute node will be assigned one IP from the SNAT pool. On ACI, the assigned SNAT IPs appear in tenant common:
The IP pools are created on ACI as bridge domain (BD) subnets, and the subnets are advertised externally through the configured L3Out:
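If you prefer the APIC CLI to the GUI for this check, a small moquery sketch can locate the SNAT endpoint. The apic1 prompt is a placeholder hostname, and the SNAT IP 172.17.0.2 is an assumption; use one of the addresses actually allocated from the pool above:

apic1# moquery -c fvCEp -f 'fv.CEp.ip=="172.17.0.2"'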
On VM instances without an assigned floating IP, traffic in the routed subnets (e.g. red-net01, red-net02) will be NAT'ed by OVS using the SNAT IP of the compute node.
We can verify the external connectivity from red-vm1 (SNAT traffic), as sketched below.
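For example, a quick check from inside the VM, followed by a look at the SNAT flow on the hosting compute node. The destination 8.8.8.8 and the SNAT IP 172.17.0.2 are illustrative assumptions:

# From red-vm1: any outside destination should now be reachable
ping -c 3 8.8.8.8

# On the compute node: the OVS flow rewriting the source to the
# node's SNAT IP should show increasing packet counts
sudo ovs-ofctl dump-flows br-fabric -O OpenFlow13 | grep "172.17.0.2"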
Scenario 1.2: External network with Floating IP
Now let’s examine the case of Floating IP. We will assign a floating IP from the pool to the VM red-vm2-net01-az1.
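A sketch of the floating IP assignment with the openstack CLI (the exact floating IP you receive from the pool may differ; 172.18.0.10 matches the flow rule shown further below):

# Allocate a floating IP from the FIP subnet of the shared external network
openstack floating ip create --subnet ext-subnet-FIP external-common-net

# Attach it to the instance
openstack server add floating ip red-vm2-net01-az1 172.18.0.10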
On ACI, the endpoint with the floating IP is visible in tenant common:
For VM instances with an assigned floating IP, OVS will source-NAT the VM IP to the floating IP.
We can see the OpenFlow rule programmed on OVS for this NAT function (nw_src=192.168.1.6 actions=set_field:fa:16:3e:ab:b2:19->eth_src,set_field:00:22:bd:f8:19:ff->eth_dst,set_field:172.18.0.10->ip_src):
ubuntu@os-compute-01:~$ sudo ovs-ofctl dump-flows br-fabric -O OpenFlow13 | grep "172.18.0.10"
<snip>
cookie=0x0, duration=45085.207s, table=12, n_packets=12, n_bytes=1051, priority=10,ip,reg6=0x1,reg7=0x740002,metadata=0x2/0xff,nw_src=192.168.1.6 actions=set_field:fa:16:3e:ab:b2:19->eth_src,set_field:00:22:bd:f8:19:ff->eth_dst,set_field:172.18.0.10->ip_src,dec_ttl,load:0x740002->NXM_NX_REG0[],load:0x7->NXM_NX_REG4[],load:0x7->NXM_NX_REG5[],load:0x4->NXM_NX_REG6[],load:0->NXM_NX_REG7[],load:0x400->OXM_OF_METADATA[],resubmit(,4)
Now let's check connectivity to the outside. The ping should succeed, and the VM communicates externally using its floating IP.
Scenario 2: Dedicated external network with NAT
Creating the L3Out in the OpenStack-created tenant
The only difference between this scenario and the shared external network scenario described above is that we need to create the L3Out in the OpenStack-created tenant.
As we use NAT, we must create the L3Out in a VRF separate from the default routed VRF that OpenStack creates on ACI. In our setup, we create a new VRF called l3out-vrf in tenant blue and use this VRF for the new L3Out, as sketched below.
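Creating that VRF is a one-object change. Here is a minimal sketch posting it over the REST API, reusing the APIC variable and session cookie from the earlier curl example (fvCtx is the ACI object class for a VRF):

curl -sk -X POST "$APIC/api/mo/uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05.xml" \
  -b cookie.txt -d '<fvCtx name="l3out-vrf"/>'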
Below is a sample of the XML we can post to APIC with parent dn uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05 (tenant Blue) to configure the L3Out over the vPC to the external ASR 1000 router. Notice that we are using l3out-vrf, OSPF area 21, and a dedicated VLAN encapsulation (VLAN 1021) for the L3Out connection:
<?xml version="1.0" encoding="UTF-8"?>
<imdata totalCount="1">
  <l3extOut annotation="" descr="" dn="uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/out-blue-out" enforceRtctrl="export" name="blue-out" nameAlias="" ownerKey="" ownerTag="" targetDscp="unspecified">
    <ospfExtP annotation="" areaCost="1" areaCtrl="redistribute,summary" areaId="0.0.0.21" areaType="regular" descr="" multipodInternal="no" nameAlias=""/>
    <l3extRsL3DomAtt annotation="" tDn="uni/l3dom-TO-A1K-L3-DOM"/>
    <l3extRsEctx annotation="" tnFvCtxName="l3out-vrf"/>
    <l3extLNodeP annotation="" configIssues="" descr="" name="blue-out_nodeProfile" nameAlias="" ownerKey="" ownerTag="" tag="yellow-green" targetDscp="unspecified">
      <l3extRsNodeL3OutAtt annotation="" configIssues="" rtrId="100.100.21.1" rtrIdLoopBack="no" tDn="topology/pod-1/node-101"/>
      <l3extRsNodeL3OutAtt annotation="" configIssues="" rtrId="100.100.21.2" rtrIdLoopBack="no" tDn="topology/pod-1/node-102"/>
      <l3extLIfP annotation="" descr="" name="blue-out_vpcIpv4" nameAlias="" ownerKey="" ownerTag="" prio="unspecified" tag="yellow-green">
        <ospfIfP annotation="" authKeyId="1" authType="none" descr="" name="" nameAlias="">
          <ospfRsIfPol annotation="" tnOspfIfPolName="common-ospf-broadcast"/>
        </ospfIfP>
        <l3extRsPathL3OutAtt addr="0.0.0.0" annotation="" autostate="disabled" descr="" encap="vlan-1021" encapScope="local" ifInstT="ext-svi" ipv6Dad="enabled" llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="9000" tDn="topology/pod-1/protpaths-101-102/pathep-[Switch101-102_1-ports-33_PolGrp]" targetDscp="unspecified">
          <l3extMember addr="172.16.21.3/24" annotation="" descr="" ipv6Dad="enabled" llAddr="::" name="" nameAlias="" side="B">
            <l3extIp addr="172.16.21.1/24" annotation="" descr="" ipv6Dad="enabled" name="" nameAlias=""/>
          </l3extMember>
          <l3extMember addr="172.16.21.2/24" annotation="" descr="" ipv6Dad="enabled" llAddr="::" name="" nameAlias="" side="A">
            <l3extIp addr="172.16.21.1/24" annotation="" descr="" ipv6Dad="enabled" name="" nameAlias=""/>
          </l3extMember>
        </l3extRsPathL3OutAtt>
        <l3extRsNdIfPol annotation="" tnNdIfPolName=""/>
        <l3extRsLIfPCustQosPol annotation="" tnQosCustomPolName=""/>
        <l3extRsIngressQosDppPol annotation="" tnQosDppPolName=""/>
        <l3extRsEgressQosDppPol annotation="" tnQosDppPolName=""/>
        <l3extRsArpIfPol annotation="" tnArpIfPolName=""/>
      </l3extLIfP>
    </l3extLNodeP>
    <l3extInstP annotation="" descr="" exceptionTag="" floodOnEncap="disabled" matchT="AtleastOne" name="blue-out-ext-epg" nameAlias="" prefGrMemb="exclude" prio="unspecified" targetDscp="unspecified">
      <l3extSubnet aggregate="" annotation="" descr="" ip="0.0.0.0/0" name="" nameAlias="" scope="import-security"/>
      <fvRsCustQosPol annotation="" tnQosCustomPolName=""/>
    </l3extInstP>
  </l3extOut>
</imdata>
On the external ASR 1000 side, a separate VRF (blue-vrf) is defined, and the external router does the routing with NAT overload. Below is the tenant blue configuration on the ASR 1000 running IOS XE:
vrf definition blue-vrf
 !
 address-family ipv4
 exit-address-family
!
interface Port-channel10.1021
 encapsulation dot1Q 1021
 vrf forwarding blue-vrf
 ip address 172.16.21.4 255.255.255.0
 ip nat inside
 ip ospf mtu-ignore
 ip ospf 21 area 21
!
router ospf 21 vrf blue-vrf
 default-information originate
!
ip nat inside source list BLUE-OUT interface GigabitEthernet0/0/1.360 vrf blue-vrf overload
!
ip route vrf blue-vrf 0.0.0.0 0.0.0.0 10.138.157.129 global
Let’s verify that the OSPF neighbor relationship has been established:
hni05-lab-a1002-2#show ip ospf 21 nei

Neighbor ID     Pri   State           Dead Time   Address         Interface
100.100.21.1      1   FULL/BDR        00:00:33    172.16.21.2     Port-channel10.1021
100.100.21.2      1   FULL/DR         00:00:36    172.16.21.3     Port-channel10.1021
Creating an external Neutron network to use the L3Out
Now, following similar steps as in the shared external network scenario, we will consume the created L3Out blue-out for a dedicated external network named external-net-blue:
administrator@ubuntu-lab:~/aci-openstack$ neutron net-create external-net-blue --router:external --apic:distinguished_names type=dict ExternalNetwork=uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/out-blue-out/instP-blue-out-ext-epg
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new network:
+--------------------------------------+--------------------------------------+
| Field                                | Value                                |
+--------------------------------------+--------------------------------------+
| admin_state_up                       | True                                 |
| apic:bgp_asn                         | 0                                    |
| apic:bgp_enable                      | False                                |
| apic:bgp_type                        | default_export                       |
| apic:distinguished_names             | {"EndpointGroup": "uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/ap-OpenStack/epg-EXT-blue-out", "ExternalNetwork": "uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/out-blue-out/instP-blue-out-ext-epg", "VRF": "uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/ctx-l3out-vrf", "BridgeDomain": "uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/BD-EXT-blue-out"} |
| apic:external_cidrs                  | 0.0.0.0/0                            |
| apic:nat_type                        | distributed                          |
| apic:nested_domain_allowed_vlans     |                                      |
| apic:nested_domain_infra_vlan        |                                      |
| apic:nested_domain_name              |                                      |
| apic:nested_domain_node_network_vlan |                                      |
| apic:nested_domain_service_vlan      |                                      |
| apic:nested_domain_type              |                                      |
| apic:svi                             | False                                |
| apic:synchronization_state           | build                                |
| availability_zone_hints              |                                      |
| availability_zones                   |                                      |
| created_at                           | 2020-07-12T10:17:42Z                 |
| description                          |                                      |
| id                                   | f63fd70d-2248-4bb2-ac74-5b6d68d886fa |
| ipv4_address_scope                   |                                      |
| ipv6_address_scope                   |                                      |
| is_default                           | False                                |
| mtu                                  | 1500                                 |
| name                                 | external-net-blue                    |
| port_security_enabled                | True                                 |
| project_id                           | 07d74314cbc24118888cf622976ad116     |
| provider:network_type                | opflex                               |
| provider:physical_network            | physnet1                             |
| provider:segmentation_id             |                                      |
| revision_number                      | 6                                    |
| router:external                      | True                                 |
| shared                               | False                                |
| status                               | ACTIVE                               |
| subnets                              |                                      |
| tags                                 |                                      |
| tenant_id                            | 07d74314cbc24118888cf622976ad116     |
| updated_at                           | 2020-07-12T10:17:43Z                 |
+--------------------------------------+--------------------------------------+
In the next step, we create a SNAT pool and a Floating IP pool, attach them to the external network, and set the router blue-router01 as the external gateway:
neutron subnet-create external-net-blue 172.17.21.0/24 --name ext-subnet-blue --disable-dhcp --gateway 172.17.21.1 --apic:snat_host_pool True
neutron subnet-create external-net-blue 172.18.21.0/24 --name ext-subnet-FIP --allocation-pool start=172.18.21.10,end=172.18.21.100 --disable-dhcp --gateway 172.18.21.1
openstack router set --external-gateway external-net-blue blue-router01
A subnet for the SNAT pool has been created:
neutron subnet-create external-net-blue 172.17.21.0/24 --name ext-subnet-blue --disable-dhcp --gateway 172.17.21.1 --apic:snat_host_pool True
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new subnet:
+----------------------------+--------------------------------------------------+
| Field                      | Value                                            |
+----------------------------+--------------------------------------------------+
| allocation_pools           | {"start": "172.17.21.2", "end": "172.17.21.254"} |
| apic:distinguished_names   | {}                                               |
| apic:snat_host_pool        | True                                             |
| apic:synchronization_state | N/A                                              |
| cidr                       | 172.17.21.0/24                                   |
| created_at                 | 2020-07-12T10:22:41Z                             |
| description                |                                                  |
| dns_nameservers            |                                                  |
| enable_dhcp                | False                                            |
| gateway_ip                 | 172.17.21.1                                      |
| host_routes                |                                                  |
| id                         | 58ba19dc-e2ed-4ad3-b311-c6a6aac9c079             |
| ip_version                 | 4                                                |
| ipv6_address_mode          |                                                  |
| ipv6_ra_mode               |                                                  |
| name                       | ext-subnet-blue                                  |
| network_id                 | f63fd70d-2248-4bb2-ac74-5b6d68d886fa             |
| project_id                 | 07d74314cbc24118888cf622976ad116                 |
| revision_number            | 0                                                |
| service_types              |                                                  |
| subnetpool_id              |                                                  |
| tags                       |                                                  |
| tenant_id                  | 07d74314cbc24118888cf622976ad116                 |
| updated_at                 | 2020-07-12T10:22:41Z                             |
+----------------------------+--------------------------------------------------+
Likewise, a new subnet for the Floating IP pool has been created:
administrator@ubuntu-lab:~/aci-openstack$ neutron subnet-create external-net-blue 172.18.21.0/24 --name ext-subnet-FIP --allocation-pool start=172.18.21.10,end=172.18.21.100 --disable-dhcp --gateway 172.18.21.1
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new subnet:
+----------------------------+---------------------------------------------------+
| Field                      | Value                                             |
+----------------------------+---------------------------------------------------+
| allocation_pools           | {"start": "172.18.21.10", "end": "172.18.21.100"} |
| apic:distinguished_names   | {}                                                |
| apic:snat_host_pool        | False                                             |
| apic:synchronization_state | N/A                                               |
| cidr                       | 172.18.21.0/24                                    |
| created_at                 | 2020-07-12T10:22:46Z                              |
| description                |                                                   |
| dns_nameservers            |                                                   |
| enable_dhcp                | False                                             |
| gateway_ip                 | 172.18.21.1                                       |
| host_routes                |                                                   |
| id                         | 2741fb33-1615-47ba-b27e-761c2b6c57d3              |
| ip_version                 | 4                                                 |
| ipv6_address_mode          |                                                   |
| ipv6_ra_mode               |                                                   |
| name                       | ext-subnet-FIP                                    |
| network_id                 | f63fd70d-2248-4bb2-ac74-5b6d68d886fa              |
| project_id                 | 07d74314cbc24118888cf622976ad116                  |
| revision_number            | 0                                                 |
| service_types              |                                                   |
| subnetpool_id              |                                                   |
| tags                       |                                                   |
| tenant_id                  | 07d74314cbc24118888cf622976ad116                  |
| updated_at                 | 2020-07-12T10:22:46Z                              |
+----------------------------+---------------------------------------------------+
Scenario 2.1: Dedicated external network with SNAT
In ACI tenant blue, we can see that os-compute-01 (hosting instance blue-vm1) is assigned an IP address from the SNAT pool. The created subnets are also visible on ACI.
Let’s verify connectivity on blue-vm1.
Scenario 2.2: Dedicated external network with Floating IP
Again, we will assign a floating IP from the pool to the VM blue-vm1.
On ACI, the endpoint with the floating IP is visible in tenant blue:
As before, for VM instances with an assigned floating IP, OVS will source-NAT the VM IP to the floating IP.
Verify the external connectivity on blue-vm1:
Scenario 3: Dedicated external networks without NAT
The difference between this scenario and scenario 2 is that we are not going to use NAT. As described earlier, this requires that we create the dedicated L3Out in the same VRF as the default routed one (DefaultVRF).
The L3Out configuration is similar to scenario 2. We will create an L3Out in uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05 (tenant Blue), but this time using DefaultVRF, OSPF area 22, and a different VLAN encapsulation (VLAN 1022) for the L3Out connection:
<?xml version="1.0" encoding="UTF-8"?>
<imdata totalCount="1">
  <l3extOut annotation="" descr="" dn="uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/out-blue-nonat-out" enforceRtctrl="export" name="blue-nonat-out" nameAlias="" ownerKey="" ownerTag="" targetDscp="unspecified">
    <ospfExtP annotation="" areaCost="1" areaCtrl="redistribute,summary" areaId="0.0.0.22" areaType="regular" descr="" multipodInternal="no" nameAlias=""/>
    <l3extRsL3DomAtt annotation="" tDn="uni/l3dom-TO-A1K-L3-DOM"/>
    <l3extRsEctx annotation="" tnFvCtxName="DefaultVRF"/>
    <l3extLNodeP annotation="" configIssues="" descr="" name="blue-nonat-out_nodeProfile" nameAlias="" ownerKey="" ownerTag="" tag="yellow-green" targetDscp="unspecified">
      <l3extRsNodeL3OutAtt annotation="" configIssues="" rtrId="100.100.22.1" rtrIdLoopBack="no" tDn="topology/pod-1/node-101"/>
      <l3extRsNodeL3OutAtt annotation="" configIssues="" rtrId="100.100.22.2" rtrIdLoopBack="no" tDn="topology/pod-1/node-102"/>
      <l3extLIfP annotation="" descr="" name="blue-nonat-out_vpcIpv4" nameAlias="" ownerKey="" ownerTag="" prio="unspecified" tag="yellow-green">
        <ospfIfP annotation="" authKeyId="1" authType="none" descr="" name="" nameAlias="">
          <ospfRsIfPol annotation="" tnOspfIfPolName="common-ospf-broadcast"/>
        </ospfIfP>
        <l3extRsPathL3OutAtt addr="0.0.0.0" annotation="" autostate="disabled" descr="" encap="vlan-1022" encapScope="local" ifInstT="ext-svi" ipv6Dad="enabled" llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="9000" tDn="topology/pod-1/protpaths-101-102/pathep-[Switch101-102_1-ports-33_PolGrp]" targetDscp="unspecified">
          <l3extMember addr="172.16.22.3/24" annotation="" descr="" ipv6Dad="enabled" llAddr="::" name="" nameAlias="" side="B">
            <l3extIp addr="172.16.22.1/24" annotation="" descr="" ipv6Dad="enabled" name="" nameAlias=""/>
          </l3extMember>
          <l3extMember addr="172.16.22.2/24" annotation="" descr="" ipv6Dad="enabled" llAddr="::" name="" nameAlias="" side="A">
            <l3extIp addr="172.16.22.1/24" annotation="" descr="" ipv6Dad="enabled" name="" nameAlias=""/>
          </l3extMember>
        </l3extRsPathL3OutAtt>
        <l3extRsNdIfPol annotation="" tnNdIfPolName=""/>
        <l3extRsLIfPCustQosPol annotation="" tnQosCustomPolName=""/>
        <l3extRsIngressQosDppPol annotation="" tnQosDppPolName=""/>
        <l3extRsEgressQosDppPol annotation="" tnQosDppPolName=""/>
        <l3extRsArpIfPol annotation="" tnArpIfPolName=""/>
      </l3extLIfP>
    </l3extLNodeP>
    <l3extInstP annotation="" descr="" exceptionTag="" floodOnEncap="disabled" matchT="AtleastOne" name="blue-nonat-out-ext-epg" nameAlias="" prefGrMemb="exclude" prio="unspecified" targetDscp="unspecified">
      <l3extSubnet aggregate="" annotation="" descr="" ip="0.0.0.0/0" name="" nameAlias="" scope="import-security"/>
      <fvRsCustQosPol annotation="" tnQosCustomPolName=""/>
    </l3extInstP>
  </l3extOut>
</imdata>
On the ASR 1000 side, we will still use blue-vrf, but with a corresponding OSPF area that matches the new ACI L3Out:
interface Port-channel10.1022
 encapsulation dot1Q 1022
 vrf forwarding blue-vrf
 ip address 172.16.22.4 255.255.255.0
 ip nat inside
 ip ospf mtu-ignore
 ip ospf 21 area 22
!
Verify that the new OSPF peering sessions are up:
hni05-lab-a1002-2#show ip ospf 21 nei

Neighbor ID     Pri   State           Dead Time   Address         Interface
100.100.21.1      1   FULL/BDR        00:00:38    172.16.21.2     Port-channel10.1021
100.100.21.2      1   FULL/DR         00:00:36    172.16.21.3     Port-channel10.1021
100.100.22.1      1   FULL/BDR        00:00:37    172.16.22.2     Port-channel10.1022
100.100.22.2      1   FULL/DR         00:00:39    172.16.22.3     Port-channel10.1022
Creating an external Neutron network to use the L3Out
We will consume the created L3Out blue-nonat-out for a dedicated external network named external-net-blue-nonat. As NAT is not used, we disable it for the external network with --apic:nat_type "":
administrator@ubuntu-lab:~/aci-openstack$ neutron net-create external-net-blue-nonat --router:external --apic:distinguished_names type=dict ExternalNetwork=uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/out-blue-nonat-out/instP-blue-nonat-out-ext-epg --apic:nat_type ""
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new network:
+--------------------------------------+--------------------------------------+
| Field                                | Value                                |
+--------------------------------------+--------------------------------------+
| admin_state_up                       | True                                 |
| apic:bgp_asn                         | 0                                    |
| apic:bgp_enable                      | False                                |
| apic:bgp_type                        | default_export                       |
| apic:distinguished_names             | {"EndpointGroup": "uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/ap-OpenStack/epg-EXT-blue-nonat-out", "ExternalNetwork": "uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/out-blue-nonat-out/instP-blue-nonat-out-ext-epg", "VRF": "uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/ctx-DefaultVRF", "BridgeDomain": "uni/tn-prj_88956796ea2e4a78a3cb01ade05d9d05/BD-EXT-blue-nonat-out"} |
| apic:external_cidrs                  | 0.0.0.0/0                            |
| apic:nat_type                        |                                      |
| apic:nested_domain_allowed_vlans     |                                      |
| apic:nested_domain_infra_vlan        |                                      |
| apic:nested_domain_name              |                                      |
| apic:nested_domain_node_network_vlan |                                      |
| apic:nested_domain_service_vlan      |                                      |
| apic:nested_domain_type              |                                      |
| apic:svi                             | False                                |
| apic:synchronization_state           | build                                |
| availability_zone_hints              |                                      |
| availability_zones                   |                                      |
| created_at                           | 2020-07-12T14:36:47Z                 |
| description                          |                                      |
| id                                   | 4f987d19-b1f7-40ee-8599-3a5081037013 |
| ipv4_address_scope                   |                                      |
| ipv6_address_scope                   |                                      |
| is_default                           | False                                |
| mtu                                  | 1500                                 |
| name                                 | external-net-blue-nonat              |
| port_security_enabled                | True                                 |
| project_id                           | 07d74314cbc24118888cf622976ad116     |
| provider:network_type                | opflex                               |
| provider:physical_network            | physnet1                             |
| provider:segmentation_id             |                                      |
| revision_number                      | 6                                    |
| router:external                      | True                                 |
| shared                               | False                                |
| status                               | ACTIVE                               |
| subnets                              |                                      |
| tags                                 |                                      |
| tenant_id                            | 07d74314cbc24118888cf622976ad116     |
| updated_at                           | 2020-07-12T14:36:47Z                 |
+--------------------------------------+--------------------------------------+
Now set blue-router02 as the external gateway for this network:
openstack router set --external-gateway external-net-blue-nonat blue-router02
With NAT disabled, OVS routes the packets from the tenant network through ACI to the L3Out without any address translation.
We expect to see the tenant subnet prefix (192.168.3.0/24) on the external router:
hni05-lab-a1002-2#show ip route vrf blue-vrf

Routing Table: blue-vrf
<snip>
E2       192.168.3.0/24 [110/20] via 172.16.22.3, 00:29:05, Port-channel10.1022
                        [110/20] via 172.16.22.2, 00:29:08, Port-channel10.1022
Traffic from blue-vm3 will be routed without NAT, as the quick check below shows:
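A quick way to convince yourself that no translation happens, assuming blue-vm3 sits on 192.168.3.0/24 (the instance address and interface name are illustrative):

# From blue-vm3: note the instance's real tenant IP (e.g. 192.168.3.10)
ip addr show eth0

# The ASR SVI in blue-vrf is reachable with no NAT in the path
ping -c 3 172.16.22.4

# The next hop beyond the ASR, per its default route
traceroute -n 10.138.157.129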
Conclusion
With the ACI Opflex integration, external connectivity is provided through Cisco ACI L3Outs, eliminating the requirement for a Neutron node to be Layer 2 adjacent to the SNAT or floating IP network. OVS handles the distributed SNAT and floating IP functions on each compute node. Together, ACI and OVS largely offload the networking functions of traditional Neutron nodes.
This blog post concludes the series on OpenStack networking with Cisco ACI Opflex integration. I hope the step-by-step tutorials are easy to follow and that you get the most out of them. Cheers!