
Wednesday, 14 October 2015

One-Click upgrade to Acropolis 4.5.

I am very excited about the release of Acropolis 4.5. It has some great features I am eager to test, one of them being Azure Cloud Connect. When I was about to upgrade, I noticed that the new NOS version did not appear in the list of available upgrades.



I did notice that NCC 2.1.1, which was released at the same time, was available to upgrade.
Assuming it was an issue with PRISM, I restarted the PRISM service:
  • $ allssh genesis stop prism
  • $ cluster start
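To confirm the Prism service came back up after the restart, a quick check from any CVM (cluster status is a standard NOS command):

cluster status | grep -i prism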
This did not fix the issue. I also noticed the same problem on another cluster and in my PRISM Central install. Time to check with the knowledgeable SREs at Nutanix :-)

I was told that the One-Click upgrade to 4.5 is not yet available. Apparently this is deliberate: 4.5 introduces many new features, and Nutanix would like customers to be fully aware of them before upgrading. Until it becomes available you can do a manual upgrade: just upload the binary and metadata file. PRISM Central can be manually upgraded with the same binary/metadata files that you use for upgrading your clusters.
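If you go the manual route, it is worth verifying the download before uploading it, since the metadata file carries a checksum for the binary. A minimal sketch, where the file names are assumptions based on what the support portal served me:

md5sum nutanix_installer_package-release-*.tar.gz
grep -i md5 nutanix_installer_package-release-*-metadata.json

The two values should match before you feed the files to PRISM.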



Sunday, 13 September 2015

Re-image Nutanix block with ESXi


Your Nutanix block has made it all the way to the central North Island of New Zealand, and you have unpacked it, taken the box for a ride around town, and installed the block into the rack. Your new Nutanix block is factory shipped with the KVM hypervisor, but your environment's hypervisor of choice is VMware ESXi. This is where Nutanix Foundation comes in. In a previous post I explained how to install the Nutanix Foundation VM. This post will cover how to re-image your nodes.



Once you have logged into your Foundation VM, you need to click the Nutanix Foundation shortcut, which makes use of the Firefox icon.


This will take you to the Foundation GUI. This interface will walk you through the following 4 steps:
  1. Global Config
  2. Block and Node Config
  3. Node Imaging 
  4. Clusters




Global Config


The global configuration screen allows you to specify settings that are common to all the nodes you are about to re-image. At the bottom you see a multi-home checkbox. This only needs to be checked when your IPMI, hypervisor and CVM interfaces are in different subnets. Since it is best practice to keep these in the same subnet, there should normally be no reason to tick the box.
In that case you enter the same netmask and gateway for all three. For IPMI, enter the default ADMIN/ADMIN credentials; you can change these later. Under hypervisor, enter your organisational DNS server. If you make use of another DNS server as well, you can add that one later too.
The CVM memory can be adjusted as required. In my case this is 32 GB.
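Since you will want to replace the default ADMIN/ADMIN credentials at some point, here is a rough sketch of doing so remotely with ipmitool from any machine that has it installed. The user ID of 2 for ADMIN is an assumption on my part, although it tends to hold on the Supermicro boards Nutanix ships:

ipmitool -I lanplus -H <ipmi_ip> -U ADMIN -P ADMIN user list 1
ipmitool -I lanplus -H <ipmi_ip> -U ADMIN -P ADMIN user set password 2 '<new_password>'

The first command lists the users on channel 1 so you can confirm which ID actually maps to ADMIN before setting the password.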


Block and Node Config


The block and node config screen allows you to discover the new nodes. Remember that your Foundation VM needs to have an IP in the same subnet as the new nodes and that IPv6 needs to be supported on that network. The new block and its nodes should be detected automatically if all prerequisites have been met. If not, you can try the discovery again by clicking the retry discovery button.
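If discovery keeps failing after a retry, it is worth confirming that IPv6 link-local traffic actually flows on the segment. A quick test from the Foundation VM, assuming its interface is eth0:

ping6 -c 3 -I eth0 ff02::1

This pings the all-nodes link-local multicast address; if nothing on the segment answers, discovery has no chance either.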

Enter the IP addresses you want to use for the IPMI, hypervisor and CVM interfaces of each discovered node. You can also set the desired name for your hypervisor host.


Node Imaging


The node imaging screen allows you to specify the NOS package and hypervisor you want to use for re-imaging your nodes. You should ensure that the NOS and hypervisor versions you specify are the same as the ones already in use on your cluster; this is not strictly necessary, but it makes life a bit easier. You will need to upload your NOS and hypervisor installer files to the Nutanix Foundation VM. By default, there is enough disk space available on the Foundation VM to hold at least one of each. It is important that the files are stored in the correct location on the VM. You can upload them from your laptop to the VM with the following SCP commands:

scp -r <NOS_filename> nutanix@foundationvm:foundation/nos
scp -r <hypervisor_filename> nutanix@foundationvm:foundation/isos/hypervisor/<hypervisorname>
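Before you kick off the imaging you can check that the files landed where Foundation expects them, and that there was enough room to begin with; the paths match the scp commands above:

df -h /home/nutanix
ls -lh ~/foundation/nos
ls -lh ~/foundation/isos/hypervisor/<hypervisorname>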



Clusters


As I am not actually creating a new cluster but planning to expand an existing one, I do not specify a cluster here. I click the run installation button and get a message informing me that the imaging will take around 30 minutes.


The installer will kick off once you click the proceed button.



Just sit back and wait for the process to complete.
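If you would rather watch than wait, you can follow the imaging from a shell on the Foundation VM. The log location below is the one my Foundation version used; yours may differ:

tail -f ~/foundation/log/service.log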








Thursday, 3 September 2015

Upgrade NOS via PRISM

One of the things I like most about Nutanix is its simplicity, and it does not get much simpler than the one-click upgrade that allows you to keep your NOS up to date. The process of upgrading NOS is very similar to upgrading PRISM Central, which I wrote about in a previous post.

Before upgrading NOS you should always run an NCC health check to see if there are any issues; if you do come across one, you can fix it or contact Nutanix support. You can run NCC by logging into a CVM and running the following command: ncc health_checks run_all
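For reference, the whole sequence from your workstation looks like this; the CVM IP is yours to fill in, and piping the long report through egrep is just a habit of mine for spotting trouble quickly:

ssh nutanix@<cvm_ip>
ncc health_checks run_all | egrep -i 'fail|warn'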

NOS is upgraded via your PRISM interface. Once you have logged in, click the gear icon in the top right-hand corner and select upgrade software. Here you can not only upgrade NOS but also your hypervisor, firmware and NCC.

Under NOS, click the download button. This could take a while to complete.


Once downloaded, a blue upgrade button appears. This also gives you the option to do a pre-upgrade.


We will run the pre-upgrade first. This checks whether all the prerequisites have been met.
The same checks are run as part of the upgrade itself, so feel free to skip this step.


At the bottom, click the back to versions button and select upgrade > upgrade now. The pre-upgrade will run again, and the upgrade will commence once it has completed.
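While the rolling upgrade is underway you can also keep an eye on it from a CVM; the upgrade_status command has been present on the NOS versions I have worked with:

upgrade_status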


The process will upgrade each CVM in turn, and you should see this when complete.


If you return to the upgrade software window, you will see it reflects the latest version.


And that is it. All in all, it took about 25 minutes to do a 4-node cluster. Time to upgrade the next one...