Category: Virtualization

Office 365 and the backup/data loss conundrum

With GDPR on the horizon and many organisations rapidly moving to Office 365, Azure services, Skype for Business and SharePoint Online, it seems many are not 100% clear on the split of responsibilities between their organisation and Microsoft.

The plain bare fact is that YOU and your organisation are responsible for your data. All of it. Not Microsoft. Sure, they provide the service and there are SLAs associated with those services – but those SLAs can still be met even if all your data was maliciously or accidentally erased, i.e. the service is still running (even though all your data is gone!).

Microsoft are not responsible for backup or restore of your data.

Again, you might say there is 30 days of backup for Office 365 and 14 days for SharePoint Online – but this only provides a limited amount of protection against data loss. Believe it or not, any restore requirement is handled by Microsoft on a best-effort basis as opposed to being tied to a distinct SLA. As with all cloud services, functionality and features continually change and evolve – generally a good thing, BUT when talking about backup/restore and data loss, this uncertainty around continual change represents a significant risk to your critical data.

Granular restore of a specific document in SharePoint Online? Forget it – it's either the whole Site Collection (yes, everything!) or nothing.



Did Microsoft Decode the Future in 2016? Not really

Microsoft Future Encoded

Billed as an event covering the direction Microsoft (and its partner network) would head over the next 3-5 years, I thought it worth travelling to London for the 'Technical' day. It was a 2-day event, but day 1 on Tue 1st November 2016 was billed as 'Business Day', so of course I booked in for the geek chic on 2nd Nov.

It was a well organised event at the ExCeL centre – smooth check-in and badge print-out, and well staffed. It was a little crowded around the single escalator everyone was trying to use straight after lunch to get up to Levels 1 through 3, where the breakout sessions were held in various meeting/seminar rooms. You are supposed to use the 'Future Encoded' app to set a schedule and browse the timetable/sessions – the app was pretty rubbish; it kept showing me day 1 (with no way to change it) and didn't work properly until day 2 actually arrived. Without the app you are stuck – no printed copies, just dashboard screens outside each meeting/seminar room showing the schedule for the remainder of that day, for that room only.


Hyperconvergence – what (the heck) is it?

The latest buzzword in virtualization, yet for me the technology it describes is old hat (in the I.T. world, old hat isn't all that long ago). Let me explain… Traditionally, a 'converged' system is simply a combination of 2 (or more) great bits of technology with very different roles, combined into one. An example of a converged system is VCE, which I still think of as the 'V'Mware, 'C'isco and 'E'MC alliance:

  • VMware – provides the virtualization function
  • Cisco – provides the network and server layer (with a little help from Intel!)
  • EMC – experts in storage, so you can guess what they provide!

[With Intel's contribution it should really be called 'ViCE' 😉 ]

Together that means a joined up system, a VBlock, that you simply deploy then use as a converged compute system. Want more performance? Then add more CPU or RAM or Storage…

…and that is where hyperconvergence differs. Instead of isolated blocks of converged compute, you have 'blocks' that can work together and scale out. Want more performance? Add another block to an existing one via a network cable and BOOM! You have more power. Add 10 blocks. Or 50!

Why did I say it was 'old hat', I hear you ask? Well, that's exactly the way MongoDB works – it scales out in pretty much the same way. When your databases reach a certain size and you need more oomph, traditionally you would need to migrate the workload to a beefier machine. What if there was a better way – one that could make use of some of the spare CPU cycles available in an existing machine, or one that allowed a redundant piece of kit to become useful again? I'll explain with pictures:

single MongoDB box

Poor chap – a lowly P75 system crunching away at that data. Need to urgently crunch the number of stars in the universe and the percentage probability of habitable planets? Well, you need more oomph, so scale out like this – which MongoDB has been doing for years (since 2007, while VMware's bitter rival Nutanix first released their Virtual Compute Platform in Q4 2011):

scale out baby

Oh, look at that – my Xeon buddies have joined in the game. Now, with all that quad-core Hyper-Threading and a bit of clever sharding in the MongoDB config, you'll be finished calculating in no time.
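The scale-out idea can be sketched in a few lines of Python. This is purely an illustration of hash-based sharding – not how MongoDB's balancer actually works internally (that uses chunk ranges and config servers) – but it shows why adding shards spreads the same workload across more boxes:

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a shard key to a shard using a stable hash (illustrative only --
    real MongoDB uses ranged or hashed shard keys with chunk balancing)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

def distribute(keys, num_shards):
    """Count how many documents land on each shard."""
    counts = [0] * num_shards
    for key in keys:
        counts[shard_for(key, num_shards)] += 1
    return counts

keys = [f"star-{i}" for i in range(10_000)]

# With 1 box, one node does all the work; add shards and the load spreads.
print(distribute(keys, 1))   # all 10,000 docs on the single node
print(distribute(keys, 4))   # roughly 2,500 docs per shard
```

Add a fifth shard and the same hash function simply spreads the documents over five counters – which is exactly the 'plug in another block' idea of hyperconvergence.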

So that's pretty much what hyperconvergence is: the ability to add more by simply using Ethernet. No need for messy transitions, complicated integration paths or reams of consulting days. Buy it, plug it in, switch it on, use it.

Of course hyperconvergence is a little more than my simplistic analogy – it's changing the landscape for virtualization and storage. Previously you would need to integrate 4 or 5 vendor offerings to get your virtual compute platform running. Now you don't have to. Buy just one (very expensive) hyperconverged box and spin up hundreds of workload VMs to do your grunt work. Potentially you can significantly reduce the number of racks of servers you have, and cut power/storage costs by anywhere between 20 and 80%. Impressive stuff.

The following are ones to watch:

OpenStack – cheap cloud (supposedly), particularly Red Hat's, based upon KVM

Nutanix – possibly more famous for rowing with VMware

SimpliVity – simple isn’t it! Get a free ‘For Dummies’ book here

PernixData – just like The Flash, these guys are fast

I wonder what NetApp are thinking right now…?

Probably enjoying the ever-growing spat between VMware and Nutanix – my buddy Chuck started it all with this > 10 reasons why vmware is leading hyperconvergence







Free MS Books for Kindle or Kindle App

Whether you have a Kindle, or a smartphone/tablet with the Kindle app installed, you can download the following Microsoft-related technical books for free, right now, from Amazon. They are:

Introducing Windows 8.1 for IT Professionals – Ed Bott DOWNLOAD

Introducing Windows Server 2012 R2 – Mitch Tulloch DOWNLOAD

Introducing Microsoft System Center 2012 R2 – Mitch Tulloch, Symon Perriman, Microsoft System Center Team DOWNLOAD

Office 365: Migrating and Managing Your Business in the Cloud – Matt Katzer, Don Crawford DOWNLOAD

Introducing Windows Azure for IT Professionals – Mitch Tulloch DOWNLOAD

Microsoft System Center Troubleshooting Configuration Manager – Rushi Faldu, Manoj Pal, Andre Monica, Kaushal Pandey, Mitch Tulloch DOWNLOAD

Microsoft System Center Building a Virtualized Network Solution – Nigel Cain, Alvin Morales, Michel Luescher, Damian Flynn DOWNLOAD

Microsoft System Center Integrated Cloud Platform (Introducing) – David Ziembicki, Mitch Tulloch DOWNLOAD

Remember, you DO NOT need a Kindle to access these books. Use the app. Enjoy!


Generation 8 HP Microservers released

HP have released updated versions of their Microserver range – the 8th generation. It now comes in 2 flavours, a G1610 or a G2020, with either a 2.2 or 2.5GHz dual-core CPU.

The major differences are a move from AMD to Intel for the CPU architecture, iLO capability as per the standard HP server range, and an updated storage controller!

There is also official support for 16GB RAM (still only 2 DIMM slots), though it still ships with only 2GB ECC RAM and a 150W PSU, and presumably a 250GB HDD (tbc!). UK costs will be added here once known – click for the link to the USA site.

Oh, and of course it looks way cooler – the styling has been updated as per other HP servers!

HP Microserver G8



HP Microserver N40L bargain & the 16GB RAM confusion

Over the last few months the HP Microserver offering, the N40L, has been hitting sales highs due to a £100 cashback offer from HP. This effectively made the whole unit available for around £100, give or take. The basic spec is 2GB RAM, a single 250GB SATA HDD and an AMD Turion II (dual-core) CPU.

These are ideal, cheap boxes for testing with; however, 2GB RAM isn't enough, and neither is 250GB of disk space. First, the RAM, since DDR3 is relatively cheap at the moment. The N40L CAN and does support 16GB of RAM, contrary to HP's own claim of an 8GB max; however, it only has 2 DIMM slots, so to get it to max capacity you need 2x8GB DIMMs.

List of RAM modules known to be compatible with the N40L at 16GB total (sold either as 2x8GB kits or as single 8GB modules):


Manufacturer Model Details ECC URL
Patriot PGD316G1333ELK 2 x 8 GB DDR3-1333 PC3-10666 CL9 G2 Series N Link
Crucial CT2KIT102464BA1339 2 x 8 GB DDR3-1333 PC3-10600 CL9 1.5v N Link
Gskill F3-1333C9D-16GAO 2 x 8 GB DDR3 1333 9-9-9-24 1.5v N Link
Corsair CMX8GX3M1A1333C9 8 GB XMS3 DDR3-1333 CL9 9-9-9-24 1.65v N Link
Kingston KVR1333D3E9S/8G 2x 8 GB DDR3-1333 ECC CL9 1.5v Y Link
Super Talent W1333EB8GS 8 GB DDR3-1333 PC3-10600 CL9 1.5v Y No Link
Transcend JM1333KLH-8G 8 GB DDR3-1333 PC3-10600 CL9 1.5v N Link

Doesn’t work with 16GB? Things to try…

You may need to set parity checking in the BIOS to 'enabled' for both 8GB modules to be recognised; once done, set it back to 'auto' unless you are using ECC memory!

Some people report that the Corsair module, CMX8GX3M1A1333C9, is a bit fiddly – sometimes it will recognise the full 16GB and sometimes just 8GB!


Exam 70-659 Microsoft Server Virtualization

I took and passed this exam today – not as straightforward as I thought it would be.

I pretty much used 'Mastering Microsoft Virtualization' by Tim Cerling et al., and a VMware Workstation based lab for the hands-on. It's a pretty solid book, and I now have an extremely good understanding of the underlying Hyper-V architecture, especially CSV.

It has actually made me look forward to Hyper-V v3.0, to be released with Server 8/2012 later this year 🙂

Anyway, back to the exam: aside from the usual multiple choice there were a number of other formats. The one that was new to me was a batch of questions where you were presented with the same (or a very similar) long list of possible answers ranging from A to M, except the answers were rearranged and some questions had a few less possible answers to choose from. There were about 6 of these questions and they all came one after the other.

There was also the 'pick the click' format, where you select the right answer from a graphic. And another format where you had to pick, say, 3 correct blocks of answers out of 5 or 6 and then arrange them in the correct sequence. I found these questions consumed a lot of my time; subtle differences between similar answers had me scratching my head at times!

Lots of questions on RDS – about 10. A couple of questions involved DPM 2010 (don't worry, nothing deep) and a batch of questions around using/understanding SCVMM 2008 R2. Know the difference between 2008 and 2008 R2 failover clusters (i.e. no CSV/Live Migration on 2008). Make sure you have some hands-on with SCVMM, know your P2V best practice, and understand disks as they relate to Hyper-V.

Right. So next move is to obtain the full MCSE:Cloud,

which means one more exam, 70-246 – Monitoring and Operating a Private Cloud with System Center 2012:

The other pre-reqs for MCSE:Cloud are Exam 70-247 (though until December 2013 you have the option of completing 70-659 instead) and having an MCSA in Server 2008 (I already have the MCITP Enterprise Admin!).

All good stuff – I'll be sitting 70-246 in July and will report back then, as this is a totally new exam released in June and there is not much material available to study.


PlateSpin Migrate 8 and P2V of Cluster

Using PlateSpin version 8 to P2V an Active/Active cluster?

If you're looking at virtualizing your infrastructure, P2V of standalone systems is straightforward; it's usually a good idea to defrag drives prior to a P2V, but not always essential. Just keep in mind that each of the major vendors' P2V tool-sets has different supported operating systems and scenarios (ever tried P2V of NT 4.0?).

On to a simple 2-node cluster. Whether it's for file, print or SQL, clustered servers present a challenge to those in the middle of a wider P2V project. PlateSpin Migrate 8 officially only supports P2V of Active/Passive 2-node clusters.

But what about Active/Active? Here are the steps we followed to ensure it worked in a production environment, using VMware vSphere 5 as the target virtual infrastructure:

  1. Move all clustered resources to one single node and make a note of the drive letters for shared disks (e.g. spooler service) – effectively creating an Active/Passive cluster.
  2. Ensure the now-empty node is offline in the cluster; it will be shut down later
  3. P2V the active node by its name, including ALL disks
  4. Add the prod NIC to the P2V image, ensuring the correct IP is entered if static, but DO NOT connect the NIC!!!
  5. Shut down the active physical node
  6. Shut down the inactive physical node
  7. Now you can activate the production NIC on the P2V
  8. If the cluster disks are missing within Disk Management, add in the additional disks with their previous drive letters. You should find the additional disks as VMDKs
  9. Start the cluster-specific services from Services.msc
  10. Browse to Cluster Manager from Computer Management; it should advise on activating the cluster. Do so.
  11. All cluster resources should come online, showing one dead node

Hyper-V on VMware Workstation

Hyper-V has 2 core hardware requirements – DEP and CPU virtualization. Of course it has other requirements too for it to work optimally, but that is not the concern here. How do you run the Hyper-V microkernelized hypervisor within VMware Workstation?

If you run the highly useful SecurAble tool [download] from any virtual machine, you get this result.
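SecurAble is a Windows-only tool. As a side note, on a Linux host you can do a rough equivalent check by looking for the 'vmx' (Intel VT-x) or 'svm' (AMD-V) CPU flags in /proc/cpuinfo – a hedged sketch, with a function name of my own invention:

```python
from pathlib import Path

def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo") -> bool:
    """Return True if the CPU advertises Intel VT-x ('vmx') or AMD-V ('svm').
    Note: these flags are often hidden from code running inside a VM, which
    is exactly the situation described in this post."""
    try:
        text = Path(cpuinfo_path).read_text()
    except OSError:
        return False  # not a Linux host, or /proc unavailable
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return "vmx" in flags or "svm" in flags

print("HW virtualization visible:", has_hw_virtualization())
```

Run it inside a guest and on the physical host to see the difference the hypervisor makes.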

Now, as far as I am aware, VMware Workstation 7 is not able to provide the relevant hardware virtualization required (someone correct me if I am wrong, please!). You need Workstation v8 – and you also need to remember that any virtual machine you already have will probably need to be upgraded to Workstation 8 hardware to work. You should see the option below in the settings window when you open an older VM in Workstation 8:

Click ‘Upgrade vm’

click next>

change it to the latest & greatest Workstation 8

then click next and choose whichever option you prefer (clone or amend existing) – I prefer to amend, and it is very quick

Finally confirm and accept by clicking Finish

Right, now go to the Settings of this virtual machine and choose the options as below for CPU:

Once you make sure the virtual machine you will use for Hyper-V also has at least 2GB of RAM, you are good to go – install 2008 R2, then add the Hyper-V role and you won't get this complaint:

Make sure you have multiple NICs added – one for Hyper-V to take over, plus your standard LAN management IP as a minimum. Add another if you plan to use iSCSI storage, though in a lab it is probably easier to use the management IP for iSCSI too – never in production, mind. It is ideal if you use WSS 2008 R2 to present storage so you can test the various host/guest scenarios for failover/CSV in this environment.

After you successfully add the Hyper-V role you may get Event Log errors for VMBus (not installed) and Virtualization Infrastructure Driver (not installed) – both are event ID 14098. Also, looking at Device Manager (within 'System devices') you will see a yellow warning against 'Virtual Machine Bus' and 'Virtualization Infrastructure Driver'. That is because you forgot something…

IMPORTANT: What you forgot is to modify the .VMX file for your virtual machine to allow the addition of guests once the parent partition/host is set up. To edit the .VMX, the Hyper-V VM must be in a shutdown state; then edit the .VMX (in Notepad++) by adding this line:

hypervisor.cpuid.v0 = "FALSE"

Do that for all VMs to which you add the Hyper-V role.
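If you have several Hyper-V lab VMs, the edit can be scripted. Here is a minimal Python sketch (my own hypothetical helper, not a VMware tool) that appends the setting to a .vmx file only if it is not already present – run it only while the VM is powered off:

```python
from pathlib import Path

LINE = 'hypervisor.cpuid.v0 = "FALSE"'

def patch_vmx(vmx_path: str) -> bool:
    """Append the nested-Hyper-V setting to a .vmx file unless it is
    already present. Returns True if the file was modified. Only run
    this while the VM is shut down."""
    path = Path(vmx_path)
    lines = path.read_text().splitlines()
    if any(l.strip().startswith("hypervisor.cpuid.v0") for l in lines):
        return False  # already patched, leave the file alone
    lines.append(LINE)
    path.write_text("\n".join(lines) + "\n")
    return True
```

Calling `patch_vmx` a second time on the same file is a no-op, so it is safe to run across a whole folder of lab VMs.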
