vPC.uconn.edu version 2.0 is done. And it has launched. Which you know because you are reading this blog posting!
Huge, huge thanks to everyone involved with pushing this updated site out the door. Dave Hicking, Marie Leblanc and Doug Neary put in a huge amount of effort over the past two weeks building the new structure, with Ed, Tony and myself creating gobs of new content, all on the new web server built by David Ruiz. A big thanks to Chris Larosa for creating the awesome new logo that has become our de facto branding image for this fall. And last but not least, thanks to our fearless project manager, Catherine Rhodes, for pushing every aspect of the project and keeping us on target.
Thanks again to everyone, and we hope you, the reader, continue to find the content we (sporadically find time to) post here interesting.
Well, it’s been nearly a month without an update. Did we forget about the project? Not at all; in fact, just the opposite. We’ve been so focused on bringing the project home that we haven’t found time for blogging. So instead of five easy-to-read posts in nice friendly chunks, you get this wall-o-text!
Over the past month, we have gone through quite a bit. In late July, we brought a dedicated project manager on board. Catherine has been amazing and has helped us a great deal.
We completed the first 90% of the infrastructure by our 7/21 target date, but the second 90% has taken a while. We struggled with some major issues with our Dell 8024-K switches in the M1000e blade chassis, caused primarily, we believe, by bugs in the web GUI. Yesterday (8/18/2011) we came in, nuked the configurations, and rebuilt them completely via CLI. We actually printed out the old configuration files first and decided that we needed to CREATE a wall of shame just so we’d have somewhere to post them. Ports configured as both access AND trunk, mismatched VLANs, mismatched MTU sizes… well, you can imagine that we had some fairly inexplicable issues. After reconfiguration, everything is running rock solid.
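For the curious, here is roughly what a sane port configuration looks like when done via CLI. This is an illustrative sketch, not our actual config: the port names and VLAN IDs are made up, and the syntax is the Cisco-style CLI these PowerConnect switches approximate, so check your firmware's CLI reference before copying anything. The point is that each port is either access or trunk (never both), and MTU is set consistently across ports that carry the same traffic.

```
! Uplink to the chassis fabric: trunk, one consistent jumbo MTU
interface Te1/0/1
  switchport mode trunk
  switchport trunk allowed vlan add 10,20
  mtu 9216
  exit

! Host-facing port: plain access port on a single VLAN, same MTU
interface Te1/0/2
  switchport mode access
  switchport access vlan 10
  mtu 9216
  exit
```

Compare that with a port carrying both access and trunk statements, or two ends of a link with different MTUs, and the "fairly inexplicable issues" above start to explain themselves.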
Setting up a VDI environment requires testing and verifying multiple components before rolling the implementation into a production environment. An important part of this testing phase is identifying the number of virtual guest operating systems each host can handle. Getting this number right matters: an overloaded host saturates its resources, and our users would suffer the resulting slowdowns.
An administrator trying to find out how many guests a host can handle might naively create multiple instances of vPCs and monitor CPU and memory usage until the server reaches capacity. This is not a good measure of real capacity, because it overlooks a key factor: the guests are sitting idle and are not simulating the workload that users in our enterprise actually generate.
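To make the idle-guest problem concrete, here is a minimal sketch of the kind of synthetic workload a capacity test needs inside each guest: alternating CPU bursts with idle "think time," the way a real user works. This is an illustrative toy, not our actual test harness; the function name and timing parameters are our own invention, and real VDI load testing would also exercise disk, network, and memory.

```python
import random
import time


def simulate_user_workload(duration_s=5.0, burst_s=0.1, max_think_s=0.5):
    """Mimic a user session: short CPU bursts separated by idle pauses.

    Returns the number of work bursts completed, so a test harness can
    compare throughput as more guests are packed onto a host.
    """
    bursts = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        # CPU burst: busy-loop doing arithmetic for roughly burst_s seconds
        burst_end = time.monotonic() + burst_s
        x = 0
        while time.monotonic() < burst_end:
            x = (x * 31 + 7) % 1_000_003
        bursts += 1
        # Idle "think time," like a user reading the screen
        time.sleep(random.uniform(0.0, max_think_s))
    return bursts
```

Run one of these per guest and watch how per-guest throughput degrades as the host fills up; an idle guest, by contrast, tells you almost nothing about where that knee in the curve is.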