This post is the second part of a series on MAAS/Juju and their use in my lab. In this part I cover the MAAS machine enlistment and commissioning process, along with workarounds/fixes for some caveats you might encounter.
Let’s revisit the lab setup topology below and describe the process.
MAAS machine enlisting
MAAS enlistment is the unattended process of adding machines to MAAS, in which MAAS automatically handles DHCP, TFTP, and PXE for the booting machines.
At this point in our setup, the physical servers (C200-1, C200-2, and C200-4) and VMs (VM-1 and VM-2) can successfully reach the MAAS controller (with its configured network services) on the PXE/MAAS network subnet 172.16.9.0/24.
With network boot enabled on these machines, they automatically discover MAAS at boot time to complete their PXE boot process. The screenshot below illustrates the network boot process of VM-1: it boots from the ens160 NIC, discovers the MAAS controller (172.16.9.2), gets an IP address assigned by MAAS (172.16.9.10), fetches its boot image, and continues with its PXE boot process.
We can observe the same process on the physical servers through KVM (Keyboard Video Mouse) console sessions to the machines.
After the enlistment process finishes, the server powers off automatically. We can see that MAAS automatically assigns random, funky names to the servers, which appear with the New status.
MAAS automatically tags the virtual machines with the ‘virtual‘ tag. We can add our own tags (arbitrary strings) to machines to control machine selection in later tasks, e.g. choosing a specific machine or a set of machines with a particular tag for an OS deployment. In this setup, we tag VM-1 with the ‘juju‘ tag.
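Tags can also be managed from the MAAS CLI rather than the GUI. A minimal sketch, assuming you have already logged in a CLI profile named `admin` and substituting the machine's real system ID (shown in the machine's URL on the MAAS web UI) for the placeholder:

```shell
# Create the tag, then attach it to a machine by system ID.
# `admin` is an assumed CLI profile name; <SYSTEM_ID> is a placeholder.
maas admin tags create name=juju comment='machine reserved for the Juju controller'
maas admin tag update-nodes juju add=<SYSTEM_ID>
```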
We might want to change a randomly assigned machine name to something more meaningful. For example, I plan to deploy the Juju controller on VM-1, so I change its hostname from the current ‘usable-bull‘ to ‘juju-controller’.
Power configuration
Notice that the Power status of the new machines is all ‘Unknown’. By default MAAS cannot discover how to communicate with the Cisco UCS rack servers and VMware VMs. We address that by configuring IPMI parameters for the physical servers, and vCenter/ESXi parameters for the virtual machines, so MAAS can control their power (on/off/power-cycle). The Power configuration settings are in the Configuration section when you click on the machine name.
Physical server power configuration
For the physical UCS servers, we set the Power type to IPMI and provide MAAS with the parameters:
- IP address: the IP address of the IPMI. For Cisco UCS rack servers, provide the CIMC IP address.
- Power user: the CIMC admin username
- Power password: the CIMC admin password
- Power MAC: the MAC address of the CIMC
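Before relying on MAAS to drive the BMC, it can be worth confirming IPMI-over-LAN works at all from the MAAS node. A quick sanity check with `ipmitool` (the CIMC address and credentials below are placeholders for your own values):

```shell
# Query the chassis power state directly over IPMI-over-LAN.
# <CIMC_IP> and the credentials are placeholders; ipmitool must be installed.
ipmitool -I lanplus -H <CIMC_IP> -U admin -P '<CIMC_PASSWORD>' power status
```

A reachable BMC replies with a line such as "Chassis Power is off"; a timeout here usually means IPMI-over-LAN is disabled on the CIMC or the credentials are wrong.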
Virtual machine power configuration
For the VMware virtual machines, we set the Power type to VMware and provide MAAS with the parameters:
- VM Name or VM UUID
- VMware hostname: IP address or URL of the vCenter or ESXi host on which the VM resides
- VMware username: the vCenter/ESXi username
- VMware password: the vCenter/ESXi password
- VMware API protocol: if we use HTTPS with a self-signed certificate, we can import the certificate on the MAAS controller, or ignore the certificate error by providing https+unverified as the API protocol.
After we finish the power configuration for all the machines, we can refresh the power status; they should all show as Off with no power type errors on MAAS.
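The same check can be done from the MAAS CLI, which returns the machine list as JSON. A sketch (again assuming a logged-in `admin` profile, and `jq` installed for filtering):

```shell
# Print each machine's hostname and current power state.
# `admin` is an assumed CLI profile; jq extracts the relevant fields.
maas admin machines read | jq -r '.[] | "\(.hostname): \(.power_state)"'
```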
MAAS machine commissioning
After a machine has been enlisted and its power configuration is in place, the next step is to commission it (i.e. put it into the MAAS resource pool, making it available for future deployment).
On the dashboard we can select all 5 of our machines, navigate to Take action on the top right, and commission them all at once.
MAAS will power on the machines (provided the power configuration is correct). They then undergo the commissioning process, in which they contact the DHCP server, fetch the kernel and initrd over TFTP, boot up, and run the cloud-init scripts. We can watch the status change to Commissioning and follow the process on the machines via KVM / remote console.
After the commissioning process finishes, the machine status will change to Ready.
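For completeness, commissioning can also be triggered per machine from the CLI instead of the dashboard. A sketch, with the same assumed `admin` profile and a placeholder system ID:

```shell
# Commission one machine by system ID, then check its current status.
# <SYSTEM_ID> is a placeholder for the machine's MAAS system ID.
maas admin machine commission <SYSTEM_ID>
maas admin machine read <SYSTEM_ID> | jq -r .status_name
```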
Have you noticed that the memory/RAM for the virtual machines is listed on MAAS as 0 GB? This is a known bug with MAAS and VMware 6.5 in which MAAS fails to discover the amount of memory configured for the virtual machines during commissioning. One workaround is to change the MAAS database manually to reflect the correct amount of memory:
- Get the current database credentials:
sudo cat /etc/maas/regiond.conf
- Log in to PostgreSQL using the credentials:
psql -U maas -h localhost maasdb
- Update the memory of the machine:
UPDATE maasserver_node SET memory = '<NUMBEROFMEGABYTES>' WHERE hostname = '<NAMEOFCOMMISSIONEDHOST>' \g
e.g. in my case:
UPDATE maasserver_node SET memory = '4096' WHERE hostname = 'juju-controller' \g
- Quit the database with
\q
This step is needed; otherwise, deployment on the virtual machines will fail when the resource-availability check on the machine reports insufficient resources.
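To confirm the update took effect without an interactive session, the same query can be run in one shot (using the credentials from /etc/maas/regiond.conf as above; the hostname is the one from my setup):

```shell
# One-shot query: show the stored memory (in MB) for the renamed VM.
psql -U maas -h localhost maasdb \
  -c "SELECT hostname, memory FROM maasserver_node WHERE hostname = 'juju-controller';"
```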
Refresh the MAAS dashboard and we can see the memory information updated.
After this step, we have all 5 machines in the resource pool, ready to be consumed. We can deploy a machine either directly from MAAS (install the OS on the machine and then use it manually) or through a deployment agent such as Juju. With Juju, not only is the OS installed on the machine, but services are also installed and configured on top of the OS.
Using MAAS with Juju
In this section I will walk through the process of adding a MAAS cloud to be consumed by Juju. We will also use the Juju client to bootstrap a Juju controller in the MAAS cloud.
The Juju controller is the heart of Juju; it manages all the machines in a model (an environment to manage and operate a set of software applications). We can use a Juju client to bootstrap a Juju controller on any cloud platform that Juju supports, such as MAAS.
In my setup, the Juju client packages are installed on the same node as the MAAS controller (172.16.9.2/24):
sudo apt-get install juju
Create a MAAS cloud
The next step is to create a MAAS cloud and point Juju at the MAAS controller:
juju add-cloud
Cloud Types
lxd
maas
manual
openstack
vsphere
Select cloud type: maas
Enter a name for your maas cloud: maas-cloud
Enter the API endpoint url: http://172.16.9.2:5240/MAAS
Cloud "maas-cloud" successfully added
You will need to add credentials for this cloud (`juju add-credential maas-cloud`)
before creating a controller (`juju bootstrap maas-cloud`).
Create MAAS cloud credentials
Go ahead and add credentials for our maas-cloud; we name the credential maas-cloud-creds. The maas-oauth value is the MAAS API key, which we can get from the MAAS GUI on the User Preferences page, or by issuing the command sudo maas-region apikey --username=<MAAS_username>.
juju add-credential maas-cloud
Enter credential name: maas-cloud-creds
Using auth-type "oauth1".
Enter maas-oauth: <paste the MAAS API key when prompted>
Credentials added for cloud maas-cloud.
Bootstrap a controller
We are now ready to create a controller for our MAAS cloud.
juju bootstrap --config bootstrap-timeout=1800 --bootstrap-series=xenial --constraints tags=juju maas-cloud maas-controller
There are some parameters we can use to instruct Juju to bootstrap the way we want. Earlier I mentioned tagging the MAAS machines; here we use the ‘juju‘ tag that we put on VM-1 to request that Juju bootstrap the controller on VM-1. We name the controller ‘maas-controller‘, and the OS to be installed on the machine is xenial (the codename for the Ubuntu 16.04 series).
After issuing this command, we can observe that Juju will communicate with MAAS, deploy the requested 16.04 Ubuntu OS on VM-1, and install the Juju controller automatically.
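Once bootstrap completes, the result can be checked from the Juju client; the new controller should be listed, and the built-in controller model should show the machine as running:

```shell
# List known controllers, then show the status of the controller model.
juju controllers
juju status -m controller
```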
There we go. We now have a full MAAS cloud setup with a Juju controller. In future posts I will demonstrate how to create models and use Juju to quickly set up an OpenStack or Kubernetes cluster environment. Stay tuned!
I don't see the role of SDN / Cisco ACI here yet, do you?
Next post will be about automating the installation of OpenStack on those machines with ACI integrations. I will write when I can find some free time. Stay tuned 😉
Hehe, thank you!
brilliant!