Author: steveinit

NetScaler 11.1 to 12 Migration Review

I recently had the opportunity to upgrade a Citrix NetScaler from v11.1 to v12 for a client. It was a relatively simple load-balancer-on-a-stick architecture with a high availability (HA) active/passive pair, so it seems super easy, right? It was…mostly. I had two bumps along the way, so I wanted to put this out there. Oh yeah, this is also an appliance pair, but I’m withholding the model. The Citrix recommendations for the process are below.

I’m a new kid on the networking block, so I wanted to do this via the web GUI. This couldn’t be much simpler. After logging in on a NetScaler, click Configuration on the bar, which puts you in System. Herein lies the System Upgrade button; however, we’re not ready for that. This is an HA pair and we want to control our fail-overs.

Prep for Upgrading the Secondary Device

So you should just be able to upgrade your secondary device, right? Yes. The primary will see the secondary down and just keep on trucking…assuming you don’t run into a bug. Well, my boss will light me up if I trip over a bug and take down production, so let’s control this process.

First, we log into our primary box, navigate into System/High Availability, select the primary load-balancer, and click Edit. This is also a good time to save any changes to the running config, which is noted as an orange dot on a blue file icon (seen in the top right of the snippet below; mine’s grayed out since there are no pending changes).


In Configure HA Node, pull down the High Availability Status dropdown, select “Stay Primary” and hit the Okay button at the bottom of the form.


In the High Availability page, the node state should now say STAYPRIMARY.

Now I log in to the secondary NetScaler and repeat this process, but this time I’m putting the secondary box into “STAYSECONDARY.” If you have HA Synchronization and HA Propagation checked, as I do in the screenshot above, you can technically set the secondary to stay secondary from the primary, but I don’t. I don’t like to wait for the config to propagate, and I need to verify on the secondary that it’s correct anyway.

Once Primary is in stay primary and secondary is in stay secondary, it’s time to upgrade.
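If you’d rather pin the HA states from the CLI over SSH, the equivalent commands look roughly like the sketch below. This is from memory against the 11.x/12.x CLI, so double-check the syntax against your build’s command reference.

```
(on the primary)
> save ns config
> set ha node -haStatus STAYPRIMARY
> show ha node

(on the secondary)
> set ha node -haStatus STAYSECONDARY
> show ha node
```

The show ha node output should echo the node state you just set, which is the same verification we did in the GUI.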

Upgrade Secondary Device

From the GUI on the secondary node, open the main System page and click System Upgrade.


Seen below, the GUI allows you to select the build from either your local machine or the appliance. Last time I updated one of these, the local file upload did not work; it would just spin after the upload. I tried it anyway, and it failed miserably in IE, Chrome, and Firefox. So I opened the FileZilla client and transferred the build file to /var/nsinstall. Now I can select the file from the appliance in the Select Firmware drop-down. Put a check mark in Reboot after successful installation and click Upgrade. A black progress box will pop up. In my experience, it’s not terribly trustworthy. Both of the ones I upgraded took almost exactly seven minutes each from clicking Upgrade to logging back into the GUI. Go get some food and hit the bathroom. Maybe not in that order.
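If FileZilla isn’t handy, the same transfer and install can be done from a terminal. A rough sketch only: the hostname and build filename below are placeholders (the exact .tgz name depends on the build you downloaded), and installns will prompt you before it reboots.

```
# from your workstation: copy the build to the appliance
scp build-12.0-xx.x_nc.tgz nsroot@<secondary-ns-ip>:/var/nsinstall/

# on the appliance: drop from the NetScaler CLI to the BSD shell and install
ssh nsroot@<secondary-ns-ip>
> shell
# cd /var/nsinstall
# tar xzvf build-12.0-xx.x_nc.tgz
# ./installns
```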


Once the GUI comes back up, it should look slick, indicating a successful upgrade. Need more proof? I do, but I somehow haven’t been able to find the build number in the GUI since version 11, even though Citrix swears it’s at the top of the screen. So I SSH into the box and run > show version. It should reply with something like NetScaler NS12.0: Build, Date: Sep 22 2017, 09:11:54. Verify the back-end applications are functioning and your active users are happy. We’ve won half the battle. Now it’s time to break the network.

Prep for Upgrading the Primary Device

This shouldn’t break the network, but any sessions will need to renegotiate. If you’ve been following along and haven’t told your change board (shame on you), go tell someone because it’s fail-over time.

Log into the primary and secondary nodes, go back into System/High Availability, and set them both to ENABLED, changing the primary first. Let it bake for five minutes. Now, since we don’t want to send commands from v12 to v11 (because that would be begging to hit a bug), from the unupgraded v11 primary node put a check in the primary’s checkbox, click the Action drop-down, and select Force Failover. There may be a pop-up or there may not; I don’t remember, and this screenshot is in production so I’m not really gonna do it.

Confirm on the upgraded node that it’s now showing primary and the v11 is showing secondary. If that’s fine, go back into System/High Availability on both nodes and set the v12 node to STAYPRIMARY and v11 node to STAYSECONDARY. Verify this change and we’re ready to upgrade the remaining device.
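The same re-enable and fail-over sequence from the CLI would look something like this (again from memory; verify the exact syntax against your version’s docs before running it in production):

```
(on both nodes, primary first)
> set ha node -haStatus ENABLED

(from the v11 primary, after letting it bake)
> force ha failover

(on either node, confirm the v12 box now shows Primary)
> show ha node
```

force ha failover should ask you to confirm before it actually flips the pair.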

Upgrading the Remaining v11 Device

I’m not going to rewrite this part, so in short: transfer the build file to the v11 node, upgrade it, and make sure it upgraded successfully. Now go back into System/High Availability on both devices and put them both back into ENABLED. I like to force one more fail-over to make sure both devices handle traffic well. That’s it. As for the other bump I mentioned, it was my fault and I don’t want to talk about it. Hope this guide can help some of y’all out.


CCNA and Network+ Study Plan

I’m now building a page of networking study plans. My intent isn’t to teach here on the site, but to show which books, video series, sites, and labs I’m using for my own studies or that people I trust recommend. If you think I’m missing any great resources for Net+ or CCNA studies, please comment with your recommendation.

Parallel to the study plans I’m also building a YouTube series called Network Speed Guides. These videos will address the topics of Net+ and CCNA R&S in the most condensed way possible. I’m aiming for a 5-10 minute video per topic designed to run through the terms, values, algorithms, etc. that have a bad tendency of falling out of your head.

These are both lengthy undertakings, so I’m giving myself six months to complete both. I appreciate your patience and look forward to your feedback!

CCNA R&S is Impossible 

Well, it has been for me.

I failed Cisco’s ICND2. Actually, I’ve now failed it for the fourth time, sort of: twice on version 2 and now twice on version 3. I am going to retake it. Each failure is demoralizing, but I’ll take it again anyway.

I have to admit I’m getting pretty sick of having my teeth kicked in by Cisco. While there’s a little bit of pity party sprinkled throughout this post, I’m also legitimately quite irritated with the test writers.

I feel like my last failure (my first attempt at ICND2v3) was an honest loss. The revision was only a couple of months old. I could tell Cisco hadn’t yet made the questions terribly complex or obscure. I hear Cisco adds fluff to the questions to keep cheaters from memorizing them for test banks, kind of like salting passwords in a database. I could see the fluff in the old test, but the revision was concise and well thought out. The questions were tough, but felt fair. That time I left the test center feeling like I had failed because I didn’t know the material to the level they were asking of me. That’s the point of the exam, right?

That is not the case at all this time. I feel like Cisco sold me a lemon. The questions were once again obscure and cerebral. Worse, five or six of the questions literally left me wondering what they were even asking. Not like “Ah, they are trying to see if I understand the difference between STP State and Role! Cisco, that old fox,” but instead “Is this grammatically correct, or do I need to go back to 2nd grade?”

Making matters worse, Frame Relay is back. Not all of it, just light theory and topology stuff, but I don’t know Frame Relay. I learned it for the previous version of the test, but I poured that right out of my head after the revision. Frame Relay has been a war story of bygone days for longer than my IT career, but I guess I’d better read that Frame Relay section in the appendix of the cert library.

IPv6 is much heavier this time. Now, that doesn’t bother me. I love IPv6. It feels so much more intuitive, and I even lab more in v6 than v4. While I know v6 threw me some gimme questions, only a minority of my real-world networking has been v6. I don’t need to use math to subnet because I’ve worked with IPv4 almost every day for the last four years; I have the masks, CIDRs, wildcards, and binary in my head. I love IPv6, but I don’t have that level of confidence with v6 yet.
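If, unlike me, you don’t have the masks and wildcards memorized yet, Python’s standard ipaddress module is a quick way to check your subnet math for both v4 and v6. A small sketch (the example prefixes are arbitrary):

```python
import ipaddress

# IPv4: a /26 carved out of 192.168.1.0/24
net = ipaddress.ip_network("192.168.1.64/26")
print(net.netmask)        # 255.255.255.192
print(net.hostmask)       # 0.0.0.63  -- the ACL wildcard mask
print(net.num_addresses)  # 64

# IPv6: same idea, no binary headaches
v6 = ipaddress.ip_network("2001:db8::/64")
print(v6.num_addresses)   # 2**64 addresses
```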

What really upsets me most is that I feel more than ready. I’ve read the Lammle and Odom big books cover to cover. I read a chapter from “31 Days Until Your Routing & Switching Exam” every night, then I reread it the next day at lunch. I’ve watched the entire CBT Nuggets series twice, the O’Reilly series once, all of the condensed and full ITProTV videos, and even the LiveLessons in my Safari Books Online. I’ve done the Cisco, Transcender, and Boson practice tests, including rocking a Boson exam I’d never seen before with a 993/850 two weeks ago; so close to perfect. Oh yeah, I also do this every day at work. Somehow I still scored a 766/811. I don’t know what’s left to do.

So, I’m going to take it again in June. I’m going to keep hammering my labs, building my Quizlet (which you may use), and taking the practice tests whose answers I’m now memorizing. I just got back from Barnes & Noble with the hardback cert library for the revision (I already have v3 in electronic and v2 in hardback). I’m going to read every single word in that book again, maybe a little faster this time, and compare it to my notes. I’m also going to write down every command I come across and make sure I use it in my lab five times for each command, switch, and variable.

I’m so sick of taking this test. My employee evaluation, the way my coworkers see me, even my assessment of myself as a young network engineer and aspiring infrastructure architect take a hit each time I see that sub-par score on the Pearson screen.

That’s why I’m taking it again. I’m better than an exam. I’m going to beat Cisco.

Don’t Believe the Programming Hype

I had the privilege of joining the Packet Pushers again recently to discuss the hype-train surrounding the alleged future death of the Network Engineer and rise of the omnipotent Network Programmer. We recorded this a few weeks back and I’ve been pondering on what to add to the conversation ever since. I’ve now decided that I have nothing to add. This was a great discussion. Have a listen!

A Very Tough Decision

A few months ago I had a very tough choice to make. I was happy with my job at Apogee Telecom. Great employer, good benefits, awesome environment, and a dependable partner at my site. Work was fun. I had been getting recruiter calls, but I had been with Apogee for about three months and I couldn’t see myself going anywhere else for quite a while. Apogee is a great company. As a matter of fact, go check them out; they have constant job postings and you will love them.

One normal day I picked up a call from Kristin Miller at Corus 360. I normally would let recruiting calls go to voicemail, but I like the folks at Corus 360, so I picked up the call thinking it would be just a casual “no thank you, work is great. How are things on your side?” kind of chat. That was the plan. “Hey, Steve, how’s it going?” “Good, how about yourself?” “Oh, I’m doing well. Hey, I know you aren’t really looking, but I have a network engineer position at Dingleboppits Hospital I thought I could tell you about.” “No, I’m happy where I…wait…did you say Dingleboppits?” “Yep, that’s the one.” Long story short, I told her she could send my resume over. And yes, *spoiler* Dingleboppits is a stand-in for NAME WITHHELD FOR SECURITY, even though it wouldn’t be hard to figure out who I work for.

Sidestory: I was laid off in the summer of 2016 in a massive workforce cut. The very first place I applied to was Dingleboppits Hospital. Some people have Google, Microsoft, the NSA; they can have them. I wanted to work for Dingleboppits. For a couple of years I had been struggling with whether I wanted to go into big data, ISPs, or healthcare, but I had firmed up towards the latter over time. So why did I want to be at that specific hospital? Well, my daughter was born at Dingleboppits, and both of my grandparents and even my wife had surgery there. During all of that, every nurse, doctor, tech, and maintenance worker I saw seemed to have a smile on their face. Further, I had worked, outside of IT, for one of their competitors during college. During that time I watched nurse after nurse leave for the promised land that is Dingleboppits. Part personal pride and part grass-is-greener, but that’s why I wanted to be part of their team. Unfortunately, this time around, I didn’t hear back from them, so a couple of months later I was working for Apogee.

The next time around, when Kristin reached out to them, they must have liked what they saw, because two months later I stepped into my new cube at Dingleboppits.

But the decision to stay or go was hard; one of the most stressful choices I have ever made. I was already happy with my job, and I really hate leaving an employer right after they spun down their employment search. So I did my best to weigh the options. Apogee: great company, good team, enjoyed my work, and room to grow. Dingleboppits: great company, really enjoyed meeting the team, should enjoy my work, and room to grow. What is the right choice? Well, I can do this here, but the other lets me do this, and they both do that, but this one does that better, although in a year the other may do that better. I was so stressed about it that I got sick and irritable for about a week. Then I stopped for a split second to breathe, and I realized that I was trying to be unemotional about a choice that would affect the course of my life. I was using near-mathematical methods to make a decision that would affect the fulfillment I would feel; that can’t be the right way. One moment my head was spinning, and the next I was calm, collected, and the decision had already made itself. I was leaving. Not because of anything Apogee had done to make me want to leave, but because I simply wanted to be where I wanted to be.

I wasn’t unhappy before, but I am happier now. If you are struggling with a decision like my own, or just a decision in general, don’t forget to stop and breathe.

If you are just looking for work in IT in Atlanta or North GA, definitely reach out to Kristin Miller or anyone else at Corus 360.

If you are looking for work in networking, definitely reach out to Apogee Telecom.

Pure Processor Power!

My wife let me build a new desktop as my Christmas present this year. My intent was to build a sweet rig for things like GNS3, Plex, Handbrake, and some A/V editing by dropping in a hot CPU and high memory capacity, RAIDing a few NAS HDDs together for storage, and mirroring two smaller economy SSDs for the OS and critical programs. Tie them all together with a strong but economical motherboard, and I could build a pretty sweet rig for under $800, right? Well, this isn’t Linus Tech Tips, so I didn’t get to push together everything at once, but I think I did alright.

The Build

So, notes on the build. I budgeted $100 on the motherboard then I chose based on 3 filters:

  1. Integrated USB 3.1 Type-A so I can do things like quickly transfer files on a 3.1 thumb drive, stand up good-quality, heavier Linux and Windows VMs in VMware Player, or rapidly charge my phone.
  2. DDR3 dual channel. I love the speeds of DDR4, but I don’t need them. I can max this motherboard (64GB) for much cheaper than I could if I picked a DDR4 board. I need memory so I can have Wireshark, GNS3, and a browser all pumping at the same time. This meets that mark without breaking the wallet.
  3. AM3+ socket. I didn’t want to go on-board for graphics and I definitely didn’t want Intel prices. The AMD FX-8370 was perfect and I caught it when the difference between the chip alone vs with the Wraith Cooler was only three dollars.

The Gigabyte 970-Gaming-SLI fit my needs, was 35 bucks under budget, has M.2, plus it’s lightly ruggedized. Other quick notes: the SSD was a cheap solution to get a fast boot (less than a sip of coffee on the Windows 10 logo); the vid card was just a cheap video output, but it actually plays New Vegas at around 40 FPS on high, not bad for the price; the PSU was cheap and has enough spare potential to keep the fan quiet even at the max the PC can draw; ADATA RAM, because cheap and effective; Corsair AF series fans are super quiet and look great; HGST Deskstar NAS 3TB because it’s cheap for the high MTTF and RAIDing in the additions won’t hurt my feelings/wallet; Enermax Ostrog case, cheap and looks nice (though not the best cable management; my CPU bundle sticks out like a sore thumb). I also had three 160GB WD Blue SATA III 5400s in a drawer, so I striped them together for kicks.


I’m impressed. For less than $600 I’m getting this result out of Handbrake.


Wow. I was doing this before on a Dell I15 ultrabook with the i7-4500U running Ubuntu 14.04. Now you cannot, in any world, compare the performance of an older ultrabook processor to an 8370. Still, this cut my encoding time to a tenth of the burden. And sure, there are other factors further boosting that number too, so let’s go to the Prime95 results.


BAM! So, when the test queued the workers, the CPU jumped into Turbo on each core before they settled into per-core throughput. This capture is thirty minutes into the test, and I’m writing this as it continues to run. The mobo/CPU/Wraith are in perfect harmony. It took about ten minutes to climb to 49C, and then the board drew a line in the sand. The Wraith picked up speed, taking the CPU back to a steady 46C, and the heat spread through the case for another ten minutes before the two other board fans took more speed, but we’re still at 46.5C. Even better, it’s nearly silent. Fan config is Top/out = 2x AF140 (molex-to-3pin); Rear/out = 1x Ostrog 120mm; Front/in = unknown spare 120mm; Bottom/in = 2x AF120 (molex-to-3pin). I just killed Prime95, and the Wraith took the chip to under 30C in ten seconds; at 20 seconds all temp sensors were back to a normal 25C (my office is cold). Not bad for AMD.

What’s Next?

I’m not done with this guy. I’m going to swap out the vid card for something in the $200 range eventually for an HTPC setup, add two more of those ADATA sticks so I can put a ridiculous number of routers in GNS3 or even open two tabs in Chrome (if you don’t get it, check your RAM usage), pull the three WD Blue 160s and RAID in three more of the 3TB HGST HDDs, add a fan controller and temp probes for the Corsairs, and add an M.2 stick to put my Steam games on. Speaking of which, I put Skyrim on the three striped WD Blues. Skyrim moves between loading screens so quickly that I can’t read the tips. First world problems.

Packet Pushers: A Recap

A couple of weeks ago I had the honor of joining the Packet Pushers Podcast for a discussion on networking careers and the more general IT field. First of all, that was awesome! I’ve been listening to Ethan Banks and Greg Ferro for going on two years, and it was a blast to have the ear of these gentlemen when they’ve had mine for so long. Michael Sweikata and Ryan Booth of Moving Ones and Zeroes also joined in the fun. Together, those four engineers have on the order of 70-ish years in IT, and I have four, six if I include tech support. It felt great to have these four guys express interest and concern in my opinions. I came prepared with my thoughts and the input of many of my colleagues, who share my lack of tenure and wealth of questions about the future. These four packet pros in turn assuaged many fears and reinforced my love of all things IT. On to the recap!

What’s a young network pro to do?

When I asked these engineers for advice aimed at someone in my position considering a networking career, Greg summed it up beautifully: “don’t.” He wasn’t being short or fatalist; instead, he and the other three addressed my fears that networking is dying. In short, yes, it’s dying. Not the career of building the paths on which data travels; data will always need mobility. Instead, the network is evolving from a sum of a million parts just barely glued together into essentially a giant mainframe, with some exceptions. With Ethernet speeds shaming the rest of the system, one would first think “wow, I’ll be able to transfer this file so fast,” but in reality, why would you move the data to a weak host when those time-sensitive bits are already sitting in a bare-metal beast? This mentality has driven progress in computing for about the last ten years. Networking, on the other hand, is finally tuning into the speed of IT. Most of the networking field is likely to move into automation. We all wanted a better way to provision a VLAN on 200 switches in a moment, right? Well, it’s finally coming. Add in cloud integrations, server-centric thin-client workstations, and wireless everywhere, and there simply are not as many cables being run at businesses.

But like I said, data still needs to move. Be it in a hypervisor or a controller, networking pros will still have to lay down paths between hosts, instances, and containers. The rise of automation is simplifying that process, but we still have to make all of the interconnections. Are the jobs going to go away? They say, and I say, no. But the amount of time networking pros spend in front of an SSH terminal is going to steadily decrease in exchange for time spent banging out Python, tuning your Puppet/Chef/Ansible deployment, or learning the new SDN solution designed to do it all in a distributed manner.
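To make the “VLAN on 200 switches” point concrete, here is a toy sketch of the kind of templating an automation tool does under the hood. The hostnames and helper function are made up for illustration; a real deployment would hand the rendered commands to something like Ansible or Netmiko to push to the devices.

```python
def vlan_config(vlan_id: int, name: str) -> list[str]:
    """Render IOS-style commands to create one VLAN."""
    return [
        "configure terminal",
        f"vlan {vlan_id}",
        f" name {name}",
        "end",
        "write memory",
    ]

# Generate the same change for a whole fleet instead of typing it 200 times.
switches = [f"sw-{n:03d}.example.net" for n in range(1, 201)]
jobs = {host: vlan_config(210, "VOICE") for host in switches}

print(len(jobs))                      # 200
print(jobs["sw-001.example.net"][1])  # vlan 210
```

The point isn’t the ten lines of Python; it’s that one change definition fans out to the whole fleet, which is exactly the drudgery the podcast says is going away.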

Cisco, certs, and the future of IT learning.

So what does this do to beloved Cisco? Well, they are going to have to get flexible. Cisco has spent many years conforming the industry to itself, but now we have options. Not just Juniper or HPE, but options to make a network from scratch or from the opensource compilations of many other network pros who want to break the vendor bonds.

So if companies are branching out, what does that do to the cert environment? Nothing, as Ethan Banks explained in the podcast. Cisco’s training arm is a profit model which has adapted over the years to changes in technology. Quite simply, Cisco is likely to continue revamping their cert system to reflect the industry while also seeding a Cisco preference into those it certifies.

But obviously Cisco, Juniper, HPE, etc., are not going to write an “OpenStack Associate” certification. The open and whitebox communities are instead going to pull in a greater variety of skills for network support. Soon, more job listings will include “Python/Java preferred” or “security experience” alongside “CCNA certified.” Computer science and IT security are re-entering the network, and the jobs will follow that trend.

Let there be Gripes!

If you haven’t read my posts about my attempts at ICND2, check them out. I was incredibly relieved to hear Ethan Banks say he would have trouble passing the CCNA. I’m not going to beat that dead horse, as my mind is geared towards passing the revised ICND2 by Valentine’s Day.

There’s a great bit of conversation on the lack of vendor accountability, insight, and integrity and how that’s driving customers away from the traditional vendors. I don’t have the experience to speak on that topic, but the conversation was still fascinating.

That’s it!

This is just a bit of what was pertinent to me, as a lot of the conversation was above my experience and understanding. All in all, this was a great conversation that addressed many of my aspirations and worries. The link is below; check it out. I recommend the Packet Pushers to any IT pros who want a view beyond their own data center.

Fixed it!

Fixed the website. Too bad I didn’t get it back up until 3 weeks after the chat with the guys at the Packet Pushers Podcast!

I found the backup file right before I was about to wipe a drive and put it in a RAID array on my new desktop. Good thing I looked before I leapt!

An Unfortunate Opportunity

I have been given the chance to compare the retired CCNA 200-120 test with the new CCNA 200-125. Did Steve score a sweet deal with Cisco? Nope. Then, you must be retaking the test with the purpose of writing a compare and contrast post. Wrong again. Actually, I don’t have a choice…at least if I want my CCNA.

I failed ICND2. Actually I failed it twice and chose not to write about it until a month later. I needed some time to let the bitter fade.

So why was I bitter? That test was hard. Really hard. I’m an excellent test taker and I enjoy the challenge of a good test, but both attempts were utterly draining. I know Cisco is hard-charging to foil the test-question banks, but this is getting ridiculous. I noticed when taking the CCENT that a lot of questions left me staring at the screen with a “huh?” in my brain, but I made it through nonetheless. My CCNA attempts were ten times worse. On my first attempt, I almost ran out of time and scored a 740. I wasted a ton of time just trying to wrap my head around questions. The second time around, I made up for lost time by piling a month of studying on topics where I knew I had trouble, but I still only hit 784. Forty points short. I’m still a tad bitter, both at myself for not studying better and at Cisco. I’m not perfect by any means, but I can ace any practice test on the first attempt; that’s actually the bar I use to determine when to take an exam. Further, I’ve been working in networking for a minute, so I’m comfortable with most topics, and I’ve supplemented the technologies where I’m short on experience with labs.

So why didn’t I pass? I simply couldn’t figure out the best answers. There was not a single gimme question on the exam. Fortunately, it wasn’t multiple-answer heavy, but I was left doubting almost every choice I made. It was like every question had three good answers and I could have chosen one with just a little more information. Instead, I went with my gut reaction on nearly half of the test.

I could have done better, period. The test was passable and I blame myself for not knowing the material well enough. I do not, however, think I could have had enough preparation and experience to have done well.

But I am still hopeful. I’m hoping the revision of the CCNA exam will have taken a bit of esoteric head-scratching out of the test. I plan to take the revised exam in about six months, but first I want to knock out the CWTS. I need a change of study material for a minute.

I’m way too lazy for this

I broke my website….

As you can tell, I host my website on WordPress. While many of you cringe at the very name, I like WordPress. WordPress may have only recently dropped from the security chatter, mostly due to shouts of “YAHOO” raising the noise floor, and WordPress can be very clunky, but I like it. So, as a bit of a personal challenge, I wanted to host my site myself.


I really only have a few, and I mean a few, regular readers, all of whom I interact with regularly, so I took the site down at my lease lapse to save a little coin while I rebuilt. I exported the config and saved it to my daily PC, Dropbox, and my home Samba server. Safe and sound.

Now it’s time to bring SteveInIT back up.

But, as you can see, there are not 20 posts behind this one. I broke (fixed) it. Well, I lost it. The hard drive in my PC crashed, so the local copy is gone. I can’t find it in my Dropbox at all, which is weird because I don’t ever delete anything from Dropbox. But I still have the Samba server, right? Wrong! Well, maybe wrong. I can’t find it there either. What I can find is the notepad I used to log where I put the exported config copies. I have the filename of the config in my log, so I can search for it, but no luck yet. My Dropbox is even installed on the Samba server, so I should be able to search the whole array and find two copies of the config. Again, I haven’t yet, but I’ll upload it when I find it.

I have a few things to talk about, so I’m doing that first.

*Edited, since I fixed it.