UPDATED: If you are running ESXi 6.x, please look at the updated section below.
Cisco announced the CCIE Routing & Switching Version 5 exam update back on December 3, 2013. Even before the announcement, there was already speculation about what would or would not be on the exam. There were two things we knew for sure would not be in the v5 exam, Frame Relay and IOS 12, and we all turned out to be right. That being said, my old CCIE R&S v4 home lab needed an upgrade.
Before the CCIE R&S v5 exam was released, I studied for the written exam using two very well-known books for Cisco routing and switching. It does not matter whether you are studying for CCNA R&S, CCNP R&S, or CCIE R&S; these are excellent resources to have on your bookshelf at work or home. The two books are Routing TCP/IP, Vol 1 and Routing TCP/IP, Vol 2. While these two books are old, they are still very helpful!
Jeff Doyle has said that he is working on updating at least one of the books. You can now order Routing TCP/IP, Volume II, Second Edition!
The other two books that a CCIE aspirant must have are the following: CCIE R&S v5.0 Official Cert Guide, Vol 1 and CCIE R&S v5.0 Official Cert Guide, Vol 2. These two books are also available as a bundle, the CCIE R&S v5.0 Official Cert Guide Library, which costs less than buying them separately. I do want to point out that these books are not a replacement for Routing TCP/IP Vol 1 and 2.
Cisco’s Recommended Gear
The recommended gear to prepare for the exam is the ISR G2 2900 and the Catalyst 3560X, as stated in this document. I can’t afford all of that gear, especially with INE’s hardware topology. As you can see, the topology has 20 routers for the full-scale labs, while the advanced technology labs require only 10 routers. Even with 10 routers, I still can’t afford to buy them. On top of all the routers, you still need the 3560Xs, which cost an arm and a leg just like the 2900s. Fortunately, there are other options for building a CCIE home lab that will mimic INE’s workbook topology. I opted for the CSR 1000v, which requires a hypervisor, such as VMware’s ESXi, Microsoft’s Hyper-V, Xen, or KVM.
The CSR 1000v takes care of the routing section of the lab, but I still needed something for the switching section. Fortunately, I could reuse my two Catalyst 3560s loaded with IOS 15. Yes, you read that right. Some versions of the Catalyst 3560v1 are capable of loading IOS 15, as long as you have the 32MB flash version. I have two flavors of the 3560: WS-3560-24TS-S and WS-3560-48TS-S. If you log into Cisco’s IOS download page with your CCO account and start looking for an IOS 15 image for these models, you won’t find one; the last version for these models is 12.x. However, if you look under the 3560G, you have the option to download IOS 15, and the images do work on these two models.
With two on hand, I needed to add two more to complete my CCIE home lab. Fortunately, we have tons of them at work that aren’t being used at all. I wanted to buy them from my employer but was told that I didn’t have to. I was given permission to borrow two of them for an extended period, so I took that opportunity and borrowed two more WS-3560-24TS-S switches. With four 3560s, I now have a complete CCIE R&S v5 home lab! While it won’t cover 100% of the topics on the lab exam, I can still do the majority of them with my home lab. For the topics that I won’t be able to do with my home lab, I can always use rack rentals or perhaps VIRL (Virtual Internet Routing Lab).
Since I already had a VMware ESXi home lab, it was an easy decision which hypervisor to use; it’s also what INE uses for their rack rentals. The ESXi server build that I have is a couple of years old, so if you’re looking for a new one, check out my new build to get some ideas. My new build is underpowered compared to my first ESXi host, but the Intel NUC Skull Canyon is a great candidate for a home lab. If that build is still pricey for you, there’s always eBay for old servers. I almost bought the Dell T5500 but am glad I decided to hold off on the purchase. I also didn’t like the fact that it was too big, since I do not have space for it unless I clear out my rack. Anyone want to buy my old routers and switches from my CCIE R&S v4 home lab? Holding off turned out to be a great decision, since I was able to squeeze in all 20 CSR 1000v VMs with 2.5GB of RAM configured on each; more on that later.
I am not going to do a tutorial on how to install it, since INE has one already. However, I do want to share some tips that I’ve learned from INE’s forum members in both the CCIE R&S v5 Equipment Build and Building INE’s RSv5 topology on CSR1000v threads.
INE’s blog post about how to install the CSR 1000v is a bit old, but you can still follow the wizard since it’s pretty self-explanatory. One of the differences comes after the Name and Location section: the OVA file that I downloaded adds a Deployment Configuration section, which asks what type of deployment you would like to use: Small, Medium, Large, or Large + DRAM Upgrade. For a lab environment, the Small hardware profile is sufficient.
If you decide to use ESXi 5.5, the Deploy OVF Template install is different from ESXi 5.1 in terms of how it creates the VM: ESXi 5.1 uses hardware version 9, while ESXi 5.5 uses hardware version 10. You’re probably asking: so what if ESXi 5.5 uses hardware version 10? Well, you cannot edit a version 10 VM using the legacy vSphere Client; you need the vSphere Web Client, as seen below. That means running a vCenter Server instance just to get the vSphere Web Client, which complicates your home lab. Then again, if you are studying for the VCP, or plan to, it doesn’t really matter. For people who just want to get their CCIE home lab running, though, building a vCenter Server instance just to edit a VM is an annoyance.
If you still decide to use ESXi 5.5, there are ways around the error you get when trying to edit a virtual machine that is version 10 or higher. Some people copy the VMX file from the ESXi server to their desktop, edit it, and re-upload it. I decided to use the CLI of the ESXi server since I am familiar with it. To access the CLI, you’ll need to enable SSH, which is covered in one of my blog posts. Next, you need to find the VMX file of your CSR 1000v. In my case, my baseline VM is called CSR1000V, and normally the name of your VM is also the name of its folder within your datastore.
~ # vi /vmfs/volumes/nfs01/CSR1000V/CSR1000V.vmx
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "10"
nvram = "CSR1000V.nvram"
! Output omitted for brevity !
The line that we’re looking for here is virtualHW.version = “10”. We need to change it from 10 to 9 and save the file. We’re still not quite done, though, even after changing the hardware version: the vSphere Client will still give you an error if you try to edit the VM, since it still thinks the VM is version 10. We need to remove the VM from the inventory and add it back in. Once added, the VM can be edited. My suggestion is not to edit this one; keep it as your baseline VM for future CSR 1000v needs. Since we need 20 CSR 1000vs for INE’s full-scale labs, we need to start cloning this VM. Once you’ve created all 20 VMs, you’re ready to edit them.
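For reference, here is a minimal sketch of the same edit done non-interactively from the ESXi SSH shell instead of vi. The demo file path is purely illustrative; on the host you would point it at the real VMX path from above. The vim-cmd steps at the end are the CLI equivalent of removing the VM from inventory and adding it back.

```shell
# Sketch only. On the ESXi host, set VMX to the real path, e.g.
# /vmfs/volumes/nfs01/CSR1000V/CSR1000V.vmx; this demo file just
# reproduces the relevant lines so the edit can be shown safely.
VMX=/tmp/CSR1000V.vmx
printf 'config.version = "8"\nvirtualHW.version = "10"\n' > "$VMX"

# Downgrade the hardware version from 10 to 9 in place:
sed -i 's/^virtualHW.version = "10"$/virtualHW.version = "9"/' "$VMX"
grep 'virtualHW.version' "$VMX"

# Then make the host forget the old version by removing and re-adding the VM:
# vim-cmd vmsvc/getallvms                    # note the VM's Vmid
# vim-cmd vmsvc/unregister <Vmid>
# vim-cmd solo/registervm /vmfs/volumes/nfs01/CSR1000V/CSR1000V.vmx
```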
In INE’s blog post, it tells you to issue the command platform hardware throughput level 50000, but on a newer CSR 1000v image this command no longer prompts you to activate the evaluation license; it simply fails, at least on the one I downloaded (csr1000v-universalk9.03.12.00.S.154-2.S-std.ova).
R1(config)#platform hardware throughput level 50000
R1(config)#
*Aug 6 04:36:20.067: %VXE_THROUGHPUT-3-CONF_FAILED: Configuration failed. Installed license does not support the throughput level. Please install the valid license
To activate the evaluation premium license, you need to issue the command below and accept the EULA. Before activating the license, you may want to take a snapshot or configure the VM’s virtual disk as non-persistent. This way, you can still take advantage of the higher throughput level, compared to a measly 2.5Mbps, once the evaluation license expires. For a lab environment, though, the basic throughput is enough, so this is really optional. Another option is to activate the license, put in all the initial configs, and then convert the VM’s virtual disk to non-persistent. This gives you a fresh evaluation license every time you shut down and power up the VM.
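If you go the non-persistent route, the disk mode can be set from the VM’s Edit Settings dialog, or directly in the VMX file. As a sketch, and assuming the CSR’s disk is at scsi0:0 (check your own VM), the line would look like:

```
scsi0:0.mode = "independent-nonpersistent"
```

With an independent non-persistent disk, writes are discarded at power-off, so each boot should come back up with your saved baseline config and a fresh evaluation license countdown.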
R1(config)#license boot level premium
Feature Name:prem_eval
! Output omitted for brevity !
ACCEPT? (yes/[no]):yes
*Aug 6 04:41:05.560: %LICENSE-6-EULA_ACCEPTED: EULA for feature prem_eval 1.0 has been accepted. UDI=CSR1000V:9L5WSSXMWKP; StoreIndex=0:Built-In License Storage
% use 'write' command to make license boot config take effect on next boot
*Aug 6 04:41:09.344: %IOS_LICENSE_IMAGE_APPLICATION-6-LICENSE_LEVEL: Module name = csr1000v Next reboot level = premium and License = prem_eval

R1#show license
! Output omitted for brevity !
Index 30 Feature: prem_eval
        Period left: 8 weeks 3 days
        Period Used: 0 minute 3 seconds
        License Type: Evaluation
        License State: Active, Not in Use, EULA accepted
        License Count: Non-Counted
        License Priority: Low
! Output omitted for brevity !
Once the license is activated, you can change the throughput to 50Mbps.
R1(config)#platform hardware throughput level 50000
R1(config)#
*Aug 21 01:34:36.573: %VXE_THROUGHPUT-6-LEVEL: Throughput level has been set to 50000 kbps
What happens when the evaluation license expires? As stated earlier, the throughput drops back to 2.5Mbps, but you still get all the premium license features. You also get a recurring message about the license being expired, which gets annoying when you’re running debugs and one of the lines is always about the license. That’s where the non-persistent disk or snapshot comes in handy.
R1#show platform hardware throughput level
The current throughput level is 2500 kb/s
R1#
%LICENSE-1-EXPIRED: License for feature prem_eval 1.0 has expired 7 hours and 30 minutes ago. UDI=CSR1000V:905F9PPAKYB
Since my ESXi server only has 32GB of RAM (maxed out), the number of CSR 1000v VMs I can run is probably around 12, excluding my other VMs. That leaves me 8 routers short for the full-scale labs. My original plan was to overcommit memory and take advantage of SSD performance, so I bought a 128GB Samsung SSD to use as host SSD cache; in theory, swapping to SSD shouldn’t be noticeable in a lab environment. I was ready to install the SSD until I saw a post in an IEOC thread about disabling the Transparent Page Sharing (PDF) large page support feature. By default, ESXi backs guest memory with large pages where it can, and TPS cannot effectively share large pages. Disabling large page support forces the VMkernel to back guest memory with small pages, which greatly increases the chances that TPS can keep just one copy of an identical page and share it among the VMs. Since we’re running the same OS twenty times over, much of the memory content is identical across the VMs, so it makes sense to keep one copy and share it. This reduces the physical memory used by the VMs and allows a higher level of memory overcommitment. That’s how I was able to run 23 VMs concurrently while using only about 16GB of physical memory.
To disable large pages and force small pages instead, go to the Configuration tab > Advanced Settings (under Software) > Mem > Mem.AllocGuestLargePage and change the value from 1 to 0.
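If you prefer the ESXi SSH shell, the same advanced setting can be read and written with esxcfg-advcfg. This is a sketch of the equivalent commands; the option path mirrors the Mem.AllocGuestLargePage name shown above.

```
# Run on the ESXi host over SSH; changes the same setting as the GUI steps above.
esxcfg-advcfg -g /Mem/AllocGuestLargePage   # get the current value (default 1)
esxcfg-advcfg -s 0 /Mem/AllocGuestLargePage # set it to 0 to force small pages
```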
Once disabled, it takes some time for ESXi to scan the VMs and reduce the amount of RAM used. So go ahead and run 10 or so CSR VMs, then wait several minutes for the ESXi server to scan them for duplicate memory pages. Once the scan completes, you should see your memory consumption drop significantly.
UPDATE: For those running certain versions of ESXi 5.x and now 6.0, TPS is disabled by default. My ESXi hosts are now on 6.5 (previously 6.0U2), and I noticed that the Mem.AllocGuestLargePage setting had changed back to a value of 1. Please check your settings and change it back to 0. In addition, there is a new setting in certain versions of 5.x and in 6.0 that needs to be changed to revert to the traditional behavior of TPS: Mem.ShareForceSalting needs to be set to 0. Please be aware that there is a VMware KB article about the security concerns around TPS. In a CCIE home lab environment, I believe it is perfectly OK to revert to the traditional behavior.
Previously, once Mem.AllocGuestLargePage was changed to 0, inter-VM TPS would kick in after several minutes. However, I noticed that ESXi 6.0U2 behaves quite differently: inter-VM TPS did not kick in until the VMs were powered down or migrated (vMotion) to a different host. Once that’s done, RAM consumption starts to decrease, but it may take 15 to 30 minutes before it goes down to around 14GB. Ignore the other VMs on the left of the screenshot, since those are on a different host; the RAM consumption of 20 CSR 1000v VMs should be around the same as mine.
If you haven’t been living under a rock, you definitely know there are several options out there. Some people use Web IOU or Unified Networking Lab by @adainese, GNS3 v1.x, the IOSv image extracted from onePK, or a combination of hardware and virtual. If you are not a Cisco employee, you shouldn’t be running IOU/IOL, since it is against Cisco’s EULA. That said, run it at your own risk.
The GNS3, Web IOU, and Unified Networking Lab options are great because you can take them on the go; the system requirements are not very high, so any decent notebook can run them. The problem with IOU/IOL is that it wasn’t designed to be used for learning, so some features may not work properly, which can be frustrating at times. IOSv, the software that will be used in VIRL, also has modest system requirements, so if you have a less powerful desktop or notebook, it is definitely something to consider.
Since I happened to have an ESXi server, I decided to utilize it; I was only using it to play with various operating systems and a few VMs for everyday use like FTP, proxy, Plex, etc. Another reason I went with the CSR 1000v route is that INE’s labs were written with their rack rentals in mind, and the convenience you get from following INE’s setup is priceless. I am quite happy with my setup, and my ESXi server can handle the load I am currently throwing at it. Then again, I haven’t finished all of the advanced technology labs, so take that with a grain of salt.
NetworkJutsu.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com.