In this post, I will share my experience in setting up an HA Kubernetes cluster. The cluster consists of 3 external etcd nodes, 3 master nodes, and 3 worker nodes. The nodes are VMs hosted on VMware ESXi (integrated with ACI using a VMM domain) on a HyperFlex cluster; however, you can perform the same steps on any standard ESXi deployment with VMM domain integration. In my setup I also use the Cisco ACI CNI plugin to provide network connectivity among pods within the Kubernetes cluster. Ansible is used to automate the installation:
spinning up the VMs on ESXi
installing an HA etcd cluster
creating an active/standby HAProxy load balancer in front of the master nodes
configuring the Kubernetes cluster using kubeadm
applying the ACI CNI plugin to get a fully working K8s cluster (a simplified sketch of this automation is shown below).
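To give a feel for how such an automation flow could be structured, here is a minimal sketch of a top-level Ansible playbook following the steps above. The file name, host group names, and role names are hypothetical placeholders for illustration only and are not taken from my repo.

```yaml
# site.yml - hypothetical top-level playbook mirroring the steps above.
# Group names (etcd, loadbalancers, masters, workers) and role names are
# illustrative; the actual repo layout may differ.

- name: Provision the VMs on ESXi
  hosts: localhost
  gather_facts: false
  roles:
    - provision_esxi_vms        # e.g. using VMware/vSphere modules

- name: Install the external HA etcd cluster
  hosts: etcd
  become: true
  roles:
    - etcd_cluster

- name: Set up active/standby HAProxy in front of the master nodes
  hosts: loadbalancers
  become: true
  roles:
    - haproxy_keepalived        # shared VIP between the two LB nodes

- name: Bootstrap the control plane and workers with kubeadm
  hosts: masters:workers
  become: true
  roles:
    - kubeadm_cluster           # kubeadm init on the first master, join on the rest

- name: Apply the ACI CNI plugin manifests
  hosts: masters[0]
  become: true
  roles:
    - aci_cni
```

With an inventory file that defines those groups, the whole flow would then be kicked off with something like `ansible-playbook -i inventory site.yml`.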
[Edit] I have put the Ansible playbook code and the Jinja2 template files on my GitHub repo.
Many people have asked me how HyperFlex and ACI can actually be integrated, as they have not found such a guide. So in this blog post, I’ll demonstrate the integration with step-by-step instructions using my local lab kit.
In previous posts of the series, I covered the installation of the Cisco ACI OpFlex plugin in our OpenStack lab and the basics of the plugin. Specifically, we went through the core benefits of the OpFlex OVS agent, such as virtualization visibility on ACI, distributed routing, and optimized DHCP and metadata functions.
In part 3 of the series, I will cover the setup of external networks to allow OpenStack instances in the created tenants to communicate with the outside world through ACI.
This is part 2 of the deep-dive series on OpenStack networking with Cisco ACI OpFlex integration. In the previous post, I described the different integration modes between ACI and OpenStack (ML2 vs. GBP, OpFlex vs. non-OpFlex). We also looked at the constructs (tenants/VRF/BD/EPG/contracts) that the plugin creates automatically on ACI when we create OpenStack objects, as well as the distributed routing feature of the OpFlex OVS agent.
In this part, we will take a deep dive into the distributed DHCP and Neutron metadata optimizations of the OpFlex agent, i.e. how OpenStack instances get their IP addresses and metadata with the integration plugin.
This is the follow-up to my previous post on the installation of the ACI OpenStack integration plugin (OpFlex mode) in my lab. In this blog post, we will take a step back and discuss why we would want to integrate ACI with OpenStack in the first place, the benefits of the integration (especially in OpFlex mode), the different integration modes (ML2 vs. GBP, OpFlex vs. non-OpFlex), and how to choose between them.
You may also find very good detail on this topic in the Cisco ACI Unified Plug-in for OpenStack Architectural Overview document. I am not trying to make a full clone of the whitepaper here, but I will summarize some key points and provide demonstrations specific to our previous lab setup, with packet captures to illustrate the networking features we have discussed.
One of the ACI features that has always impressed me is its capability to integrate with popular virtualization domains: VMware, OpenStack, Microsoft SCVMM, Kubernetes, etc. There are quite a lot of resources covering ACI integration with VMware in great detail; you can find lab guides in Cisco DC specialization courses such as DCVAI/DCAC9K, and many other sources you can google for yourself. However, there are not as many for OpenStack. This tutorial is based on Cisco’s official installation guide, adapted to a newer OpenStack release (Queens) on Ubuntu 18.04 Bionic. I hope it will be helpful for those seeking a quick and easy way to set up an OpenStack environment with ACI integration. In future posts, I may write about some key benefits, features, and mechanisms of this ACI-OpenStack integration.
A new year marks a fresh start. At the start of the year, we often take time to reflect on the previous year and plan our goals for the coming months. In this blog post, I will do the same: take a quick look back at 2019 and sketch a plan for developing my technical skills over the rest of 2020.
It has been a few months since I passed the CCIE DC (v2.1) lab exam, and my plaque has finally arrived. Earning this second CCIE flavor means a great deal, considering I have been working on the certification track for years, with failed attempts in both the written and lab exams along the way.
In this very first blog post, I’ll share some of my experience with the CCIE DC – covering the what and the how – questions that many of my colleagues have often asked.