In this post, I will share my experience setting up an HA Kubernetes cluster. The cluster consists of 3 external etcd nodes, 3 master nodes, and 3 worker nodes. The nodes are VMs hosted on VMware ESXi (integrated with ACI using a VMM domain) on a HyperFlex cluster; however, you can perform the same steps on any standard ESXi deployment with VMM domain integration. In my setup I also use the Cisco ACI CNI plugin to provide network connectivity among pods within the Kubernetes cluster. Ansible is used to automate the installation:
spinning up the VMs on ESXi
installing HA etcd cluster
creating an active/standby haproxy as LB for multiple master nodes
configuring a Kubernetes cluster using kubeadm
applying ACI CNI plugin to have a fully working K8S cluster.
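The kubeadm step above can be sketched as a ClusterConfiguration that points the API servers at the haproxy VIP and at the external etcd cluster. The hostnames, pod subnet, and Kubernetes version below are hypothetical placeholders, not the values from my lab:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
# VIP fronted by the active/standby haproxy pair (hypothetical hostname)
controlPlaneEndpoint: "k8s-vip.lab.local:6443"
etcd:
  external:
    # The three external etcd nodes (hypothetical hostnames)
    endpoints:
      - https://etcd1.lab.local:2379
      - https://etcd2.lab.local:2379
      - https://etcd3.lab.local:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  # Must match the pod subnet configured for the ACI CNI plugin
  podSubnet: 10.2.0.0/16
```

The first master runs `kubeadm init --config kubeadm-config.yaml --upload-certs`; the remaining masters then join with `kubeadm join --control-plane`, supplying the token and certificate key printed by the init command.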
[Edit] I have put the Ansible playbook code and the Jinja2 template files on my GitHub repo.
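For flavor, here is a minimal sketch of how an Ansible play might drive the kubeadm init step; the inventory group name and config path are hypothetical, and the full playbook lives in the repo:

```yaml
# Hypothetical inventory group "masters"; only the first host initializes the cluster
- hosts: masters[0]
  become: yes
  tasks:
    - name: Initialize the first control-plane node via kubeadm
      command: kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --upload-certs
      args:
        # Make the task idempotent: skip if the cluster is already initialized
        creates: /etc/kubernetes/admin.conf
```

The `creates` guard is what lets the playbook be re-run safely: the init command only fires when the admin kubeconfig does not yet exist.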
In previous posts of the series, I covered the installation of the Cisco ACI OpFlex plugin in our OpenStack lab and the basics of the plugin. Specifically, we went through the core benefits of the OpFlex OVS agent, such as virtualization visibility on ACI, distributed routing, and optimized DHCP and metadata functions.
In this third part of the series, I will cover the setup of the external networks that allow OpenStack instances in the created tenants to communicate with the outside world through ACI.
This is part 2 of the deep-dive series on OpenStack networking with Cisco ACI OpFlex integration. In the previous post, I described the different integration modes between ACI and OpenStack (ML2 vs. GBP, OpFlex vs. non-OpFlex). We also saw the constructs (tenants/VRFs/BDs/EPGs/contracts) that the plugin automatically creates on ACI when we create OpenStack objects, as well as the distributed routing feature of the OpFlex OVS agent.
In this part, we will take a deep dive into the distributed DHCP and Neutron metadata optimizations of the OpFlex agent, i.e., how OpenStack instances get their IP addresses and metadata with the integration plugin.
This is the follow-up to my previous post on the installation of the ACI OpenStack integration plugin (OpFlex mode) in my lab. In this blog post, we will take a step back and discuss why we would want to integrate ACI with OpenStack in the first place, the benefits of the integration (especially in OpFlex mode), the different integration modes (ML2 vs. GBP, OpFlex vs. non-OpFlex), and how to choose between them.
You can also find very good detail on this topic in the Cisco ACI Unified Plug-in for OpenStack Architectural Overview document. I am not trying to clone the whitepaper here; rather, I will summarize some key points and provide demonstrations specific to our previous lab setup, with packet captures to illustrate the networking features we have discussed.
This post is the second part of a series on MAAS/Juju and their usage in my lab. In this part, I will cover the MAAS machine enlistment and commissioning process, along with workarounds/fixes for some of the caveats you might encounter.
Let’s revisit the lab setup topology below and describe the process.
In this multi-part tutorial series, I will demonstrate how MAAS and Juju have helped me quickly set up and consume both bare-metal servers and virtual machines in our lab in an automated, cloud-like fashion.
The first part will cover the high level introduction to MAAS and Juju, my MAAS installation, and MAAS network service configuration.