Microsoft’s long-awaited Windows 8 was officially released just last week, but Windows Server 2012 has been up for public analysis and use for almost two months. The server update – the first in three years for Microsoft – may not appear to have received the same major makeover that the new operating system did, but a glance under the hood reveals a slew of additions and improvements. Here is an overview of the most significant upgrades from Windows Server 2008 that you may want to consider for your organisation’s next technology renovation.
1. Windows Server 2012 is simply more powerful than Windows Server 2008
Microsoft set out to build a new server that would both embrace virtualisation and complement a new breed of scalable, workable data centres. Whereas Windows Server 2008 R2 allowed users to support up to 64 logical processors on hardware, Windows Server 2012 upped the number to 320. Overall memory jumped as well, with 4 terabytes of physical memory available now, versus 1 in Windows Server 2008, and up to 1 terabyte of memory on a virtual machine, up from 64 gigabytes previously. These increases can translate to higher-quality platform performance and a longer overall lifespan for your enterprise devices.
2. Windows Server 2012 is more secure than Windows Server 2008
With the global movement toward cloud and virtualisation, security is the top concern for individuals on their personal phones and tablets, but even more so for the IT department and the infrastructure it oversees. Microsoft recognised this concern and applied it to Windows Server 2012, which now offers a fully isolated data centre network layer for better separation between virtual machines, as opposed to the more limited privacy of Windows Server 2008. A new security feature in Windows Server 2012, unseen in Windows Server 2008, is support for private virtual local area networks (PVLANs), which prevent interaction between virtual machines on the same server without sacrificing network connectivity for each.
3. Windows Server 2012 is more flexible than Windows Server 2008
Technology, like your business, is always transforming and growing. Flexibility is critical to ensure that a server can adapt to your organisation’s fluctuating needs; Windows Server 2012 seems to have a better grasp on this expectation than Windows Server 2008 did. Server management is even more centralised and customisable with Windows Server 2012 Server Manager, which lets administrators control both local and cloud-hosted servers from one sleek dashboard. The Windows Server 2008 version used virtual LANs to virtualise networks, albeit in a complicated fashion when done on a large scale.
With Windows Server 2012, Microsoft’s Hyper-V Network Virtualisation isolates network traffic without VLANs, letting users switch virtual machines at will within the infrastructure, with no extra servers, switches or maintenance necessary. Performing live migrations or storage migrations is much simpler in Windows Server 2012, which supports multiple, simultaneous VM migrations across cluster boundaries and allows storage migration even while the machines are running.
Deciding to overhaul your organisation’s entire data centre won’t happen overnight. Now, however, is the time to evaluate your business needs and goals, and determine if the switch from Windows Server 2008 to Windows Server 2012 is justified. Feel free to reference this spec sheet for more information on how Windows Server 2012 measures up to Windows Server 2008.
Are you ready to let your employees use mobile apps? You have more to consider than just what apps you’re going to let them access – you have a few delivery options as well. Picking the right one depends on what you need in terms of security, IT management and device compatibility. Here are a few considerations to get you started.
You need: No data stored on the device
Choose: Virtualisation or cloud
Enterprise users will make up about 75 per cent of the market for cloud-based mobile apps by 2014, according to Juniper Research. Both virtualising your mobile apps and delivering them through the cloud keep data off the device. Both approaches can also put more control in the hands of the IT department, which can oversee access to applications and manage how they are used.
Security buffs are likely fiercely nodding their heads, but keep in mind that if you virtualise, there could be usability issues surrounding the need for constant network connectivity and how a mobile app looks and performs on a device. Delivering mobile apps through the cloud can ease these usability issues, but your security concerns will only be abated if you know where your data is sitting.
You need: To take the app migration burden off IT
Choose: An enterprise app store
By 2014, 60 per cent of IT organisations will have private app stores, according to Gartner. These stores work similarly to Apple’s app store by allowing employees to quickly and securely download certain applications they are authorised to use. This takes a lot of the burden off IT, which no longer needs to provision apps to different users and devices.
However, building an app store does come at a cost, and users might grumble if your app store doesn’t resemble Apple’s. That means you need to consider the ability to rate apps, search for apps, recommend similar apps and allow user feedback.
You need: Compatibility across a wide range of devices
Choose: Web apps
Their ability to run in browsers means web apps don’t require a distribution system, so users can access them from any number of devices. Plus, IT doesn’t need to create several incarnations of one app, which leads to easier delivery and management. Internet connectivity will always be a concern for running web apps, however; if it’s poor, even refreshing the screen can cause problems.
These are just a few reasons for considering the different mobile app delivery methods. If your organisation needs additional help with taking its enterprise applications into the mobile world, Datacom’s Enterprise Mobility Applications practice can help. We handle application development, integrate apps with devices and offer field service support to ensure your applications run smoothly.
How are you delivering mobile apps in your organisation?
When it comes to new IT investments, the benefits – including cost savings – often depend on how well you use and take care of the technology or infrastructure. Akin to a new car, organisations could easily wind up spending more money if they quickly run the technology purchase into the ground. If your organisation is wondering why it isn’t seeing as great an ROI on server virtualisation as expected, look to how well the IT staff is managing the physical servers and virtual machines.
1. Use more of your physical servers
Technology and business research firm Forrester reports that one of the chief reasons organisations don’t maximise their virtual infrastructure investment is because they fail to put enough virtual machines on their physical servers. It is tough to strike a balance between too few VMs and too many, as the latter can inevitably lead to poor performance. But setting a strict utilisation percentage is not the answer either, as the organisation might get stuck in a cycle of buying more servers to host the VMs earlier than should be needed.
In Forrester’s research, many companies reported stopping at three to five VMs per server when some of these servers could actually host up to 15 VMs. While how many VMs IT runs will depend on factors such as resource-heavy apps, organisations can typically run three to five virtual machines for each core on a new Intel or AMD processor, according to a CIO report. One Sydney council was able to reduce the number of physical servers in its data centre from 24 to four and save more than $100,000 with a server virtualisation project implemented by Datacom.
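The consolidation arithmetic above is easy to sketch out for your own hardware. The snippet below applies the three-to-five-VMs-per-core rule of thumb cited in the CIO report; the core count and current VM tally are illustrative assumptions, not measured figures, so substitute your own environment’s numbers.

```python
# Rough consolidation arithmetic, assuming the three-to-five-VMs-per-core
# rule of thumb. Core counts and VM tallies here are illustrative only.

def vm_capacity(cores, low=3, high=5):
    """Return the (conservative, aggressive) VM capacity for a host."""
    return cores * low, cores * high

# Example: a two-socket host with 8 cores per socket (hypothetical).
low, high = vm_capacity(cores=16)
print(f"A 16-core host could plausibly run {low}-{high} VMs")

# Stopping at 5 VMs on such a host leaves most of that capacity idle.
utilisation = 5 / low * 100
print(f"Running 5 VMs uses roughly {utilisation:.0f}% of even the "
      f"conservative estimate")
```

Even against the conservative end of the range, a host capped at three to five VMs is running at a fraction of its potential, which is the gap Forrester’s research points to.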
2. Avoid virtual sprawl
Of course, the danger of creating more VMs on your servers is that you’ll spawn virtual server sprawl. It’s now so easy to create VMs that everyone wants one when they want it. This may not seem as dangerous as physical server sprawl, but it is. Too many VMs lead to an over-allocation of resources, which drives up costs; there’s also the risk the organisation might have to purchase another physical server when it shouldn’t need to. VM sprawl also drains the IT department’s management abilities.
VM sprawl is a sneaky beast – it happens quietly and slowly, so the best defence against it is regularly monitoring how resources are being used and how many VMs are in the data centre one month compared to the next. VMs should have a lifecycle, and careful reporting will help IT departments determine when VMs are no longer needed. Going forward, IT should demand justification for VMs when they are requested to avoid creating them just because it's easy. IT could also turn off an unused VM every time someone asks for a new one.
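That month-over-month comparison can be as simple as diffing two inventory snapshots. The sketch below is a minimal illustration with made-up VM names and owners; in practice the snapshots would come from your hypervisor’s management tooling rather than hand-typed dictionaries.

```python
# A minimal month-over-month sprawl check. The inventory snapshots are
# hypothetical; real ones would be exported from your hypervisor.

def sprawl_report(last_month, this_month):
    """Compare two {vm_name: owner} snapshots and flag net growth."""
    created = sorted(set(this_month) - set(last_month))
    retired = sorted(set(last_month) - set(this_month))
    return {
        "created": created,
        "retired": retired,
        "net_growth": len(this_month) - len(last_month),
    }

october = {"crm-01": "sales", "build-01": "dev"}
november = {"crm-01": "sales", "build-01": "dev",
            "build-02": "dev", "test-07": "qa"}

report = sprawl_report(october, november)
print(f"Net growth: {report['net_growth']} VMs "
      f"(new: {', '.join(report['created'])})")
```

A steadily positive net-growth figure with no retirements is the quiet signal that VMs are being created without a lifecycle, which is exactly when IT should start demanding justification for new requests.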
3. Replace your physical servers on time
Yes, it can be a drag when you have to potentially fork over thousands or tens of thousands of dollars to replace your physical servers. IDC figures have shown that organisations that upgrade their servers within three and a half years make back their investment within 12 months and receive an ROI of more than 150 per cent over three years. Sticking to this refresh cycle can unearth some of the virtualisation benefits your organisation initially sought out, such as improved maintenance and better energy efficiency.
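As a back-of-the-envelope version of those IDC figures, consider what payback within 12 months implies over a three-year horizon. The dollar amounts below are purely illustrative assumptions, not IDC data.

```python
# Back-of-the-envelope refresh ROI. The IDC figures above say payback
# within 12 months and >150% ROI over three years; the dollar amounts
# here are hypothetical placeholders to show the arithmetic.

def roi_percent(total_benefit, cost):
    """Return ROI as a percentage: (benefit - cost) / cost."""
    return (total_benefit - cost) / cost * 100

refresh_cost = 40_000    # hypothetical server refresh spend
annual_benefit = 40_000  # payback within 12 months implies benefit ~= cost

three_year_roi = roi_percent(annual_benefit * 3, refresh_cost)
print(f"Three-year ROI in this sketch: {three_year_roi:.0f}%")
```

Under these assumptions, simply sustaining the first-year saving for three years clears the 150 per cent mark comfortably, which is why delaying a refresh past the recommended cycle tends to cost more than it saves.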
How does your organisation plan to fully realise the cost-saving benefits of server virtualisation?
There isn’t a single approach to managing Bring Your Own Device at organisations. A slew of different mobile device management and mobile application management tools exists for allowing access to apps and data on employee-owned devices, and organisations can choose one or several of these tools to work in tandem to cover all the different devices and platforms.
Other organisations are managing Bring Your Own Device through virtual desktop infrastructure. There’s some debate in the IT industry and the media over whether using VDI for this purpose is smart. You can decide for yourself by reviewing the benefits and disadvantages to this approach.
VDI delivers desktops through the data centre to any mobile device. This means all devices can essentially be managed from one location, providing easier administration and deployment and a more streamlined way to enforce compliance for all users. It also gives the end users better ability to connect to their virtual desktops from any device whenever they want.
Security is also strengthened because no corporate data is actually sitting on the employee’s phone or tablet—it’s all in the data centre. If an employee device falls into the wrong hands, the thief won’t be able to access work information. IT retains complete control over both the operating system and the apps on the device.
With VDI, users have to connect to the data centre to access the corporate desktop. This means network connectivity and bandwidth become factors the IT team needs to worry about for anyone in the company trying to do work from a personal device. Network performance can affect even the most basic of tasks if the network is sluggish. VDI also presents issues with running rich media on virtual desktops, which can prevent users from accessing certain functions, such as video, and can cause screen resolution problems.
There are also issues related to the lack of desktop customisation that crop up when you’re provisioning an image to a user’s device. In this way, VDI runs the risk of defeating the purpose of BYOD: allowing employees to have the user experience they want on their device of choice. The act of turning mobile devices into desktops via a VDI image means users won’t get the native experience of the device. There could also be problems when certain users need access to different apps, and extra licensing costs for accessing the desktop through VDI on personal devices.
How to decide if it’s right for you
Deciding how to manage BYOD is a big decision for your organisation. Doing it in a way that allows your IT department to retain the right level of control while also letting users work the way they want on their personal devices is crucial. The right IT outsourcer will be able to assess your current infrastructure and systems to determine if your environment is right for both VDI and BYOD. If VDI isn’t the way to go, your IT provider will be able to make recommendations on the right approach and guide the design and implementation process.
How do you manage BYOD at your organisation?
In 2011, Australia only hit 63.3 per cent of its cloud growth expectations, according to a Computerworld story. This means many Aussie organisations aren’t competing at their full potential — especially in the international market where cloud computing continues its growth at a much higher rate. But it also means many national organisations have found solutions that provide all the agility needed without opting for the cloud, usually thanks to server virtualisation.
With server virtualisation – the process of running multiple individual computing environments from a single server – many organisations wonder why opting for a cloud solution is worth it. After all, do they need a cloud solution’s large data centre when, through server virtualisation, they have more horsepower under the hood than they’ll likely ever need?
In the end, the best solution is a tailored solution. The key isn’t asking if it’s one or the other — it’s asking which functions are best suited to the cloud and which flourish in virtualised server environments. For example, you might ask:
- Will this project be customer-facing and focused on aggressive growth, or is this project dedicated to a static internal audience?
- If for a current function, is it meeting the needs of your organisation and/or customers? If not, what needs are not currently addressed?
- What are the applications or data that you need to support?
- Do you have the core competency, resources and technical expertise to manage scaling large amounts of data in-house?
- What are your demands for server workload?
- What are your requirements for disaster recovery?
You can probably guess which answers will lead you to a virtualised server and which to a cloud solution. Before you buy, it’s worthwhile tapping into IT consultants who can conduct an in-depth discovery to learn which option — or which combination — suits your organisation. As a one-stop solution, a professional services team should not only determine the right solution, but implement it, test it and support it.
As you begin researching your options, remember to think about the future. Organisations must anticipate their storage and computing power needs two to three years down the line to ensure their investment makes sense.
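That two-to-three-year projection is a simple compound-growth calculation. The sketch below shows the arithmetic; the starting storage footprint and the 30 per cent annual growth rate are hypothetical placeholders, so plug in your own measured growth trend.

```python
# Compound-growth sketch for the two-to-three-year planning horizon.
# Starting capacity and growth rate are assumed figures for illustration.

def projected_need(current, annual_growth, years):
    """Compound current demand forward by an annual growth rate."""
    return current * (1 + annual_growth) ** years

storage_tb = 20  # current storage footprint (assumed)
growth = 0.30    # 30 per cent annual data growth (assumed)

for years in (2, 3):
    need = projected_need(storage_tb, growth, years)
    print(f"In {years} years: {need:.1f} TB")
```

Even a modest growth rate compounds quickly, which is why sizing an investment only for today’s workload tends to force a second purchase well before the refresh cycle ends.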
The bulk of IT costs goes toward maintaining creaky infrastructure and complicated, often archaic systems. Three quarters of the average IT budget is spent on legacy systems alone, according to Microsoft. IT departments pay a steeper price still. IT staff bogged down in keeping legacy systems a few steps from death’s door and tuning up old infrastructure have no time to innovate or be strategic. This creates a vicious cycle in which the department keeps the status quo without ever having a chance to prove both its technical and business power.
Organisations that can simplify their IT environments – standardise hardware, better manage applications and keep the data centre uncluttered – can reduce costs, better pool resources and help IT leverage technology to drive business results.
One issue: Years ago, it was server sprawl causing a headache in the data centre. Now, it’s virtual machine sprawl. The latter might sound innocuous, but taking server virtualisation too far – basically, creating VM after VM just because you can – might result in a mess of too many VMs that overtax your infrastructure and cause the cost of licenses to soar.
One solution: Virtualisation management technology and services can help IT better manage and contain the virtual environment. Through our managed services team, Datacom can oversee server management and monitoring so you keep virtual sprawl in check. The process begins through automation, which helps streamline the virtual environment and more quickly provision virtual machines. The end result is a self-service method that allows IT to routinely provision, secure and manage VMs so they don’t spiral out of control.
One issue: Legacy systems maintenance can keep IT from developing innovative new applications that could boost system efficiency and end-user satisfaction. There’s also research demonstrating that the more apps a business runs, the less efficient it is. The organisations with the best performance run an average of 20 applications, according to IT research by The Hackett Group; lesser-performing companies run 39 apps on average.
One solution: Many organisations keep legacy systems going because they fear the cost and challenge of updating them. Leveraging the help of an IT outsourcer to plot and execute your legacy system modernisation or transformation can take the headache out of revamping your business-critical applications so you can get better use out of them. The first part of the process involves conducting an audit of all the apps running in these systems. Identifying which ones are critical to the business will set the wheels in motion to clean up the clutter.
How have you planned to reduce IT complexity in your organisation?
Virtual desktop infrastructure can benefit organisations by allowing more centralised, standardised deployment and management of virtual desktops. Because everything is stored and managed in data centres, this desktop virtualisation delivery method allows IT to complete all troubleshooting, security patching and adding of applications without affecting the end user’s computer. VDI also allows easier security and backup of data through the data centres and faster deployment of desktops to remote users.
That doesn’t mean transitioning to VDI is a cakewalk. To avoid having your VDI solution turn into a hassle, designate the right people to lead the build-out and piloting of the implementation, get user input and conduct an audit of the applications you want to include in the VDI image.
Carefully decide who will manage the project
VDI projects will bring together the IT staff managing servers in the data centres and the staff managing the desktops. Some elements of the project can get lost in translation between these two groups. Both teams have very different roles involving management of different resources, and sometimes these employees aren’t even based out of the same office.
One of the ways around this potential struggle is to designate one person to oversee both groups – such as the CIO directly – to ensure each is doing its bit to get the project done. The designated manager can make sure desktop staff is given access to the data centre and its resources to complete the project, for instance. Organisations may also be able to avoid these issues altogether by choosing an IT outsourcer to design, build and implement their VDI solution.
Keep the users happy
In a recent ZDNet story, research firm Longhaus cautions against IT departments designing their VDI plan and pilot without their end users in mind. Any transition to virtual desktops will bring about pain points and some disgruntled users who were happier with the previous setup. If accessing critical applications becomes a major problem during the pilot phase, and if users can no longer customise their desktops, the IT department will risk losing employee buy-in for VDI.
To help keep most of the end users happy, organisations should choose the most enthusiastic VDI supporters to try the pilot first. These users can then be VDI evangelists for the rest of the company and show, from a user perspective, how the technology works and how it can benefit the organisation. Allowing staff to move between the pilot system and the production system will also cut down on user downtime and allow users to keep working as the organisation makes the transition to VDI.
Put the right applications on the image
Some organisations falter with VDI when it comes time to build the image. IT must choose which applications to put on the image, but this is easier said than done if the team leading the project doesn’t know which apps are the most relevant and used. The worst-case scenario is that IT mistakenly leaves critical apps off the image, resulting in users being unable to access the systems necessary to do their jobs.
A way to work around this is by having the IT department take a thorough inventory of all the applications in use prior to building the VDI image. This task may involve assessing if any apps were installed locally on certain users’ computers to ensure every person in the organisation will have access to the apps they need. Taking the time to complete this task now can help avoid project delays and user issues when you get into the pilot phase.
How have you ensured a smooth transition to VDI at your organisation?
Julian Buckley is the Business Manager of Professional Services for Datacom in QLD. Julian leads a team of solution architects, project managers and consulting engineers that evangelise, design, scope, deliver and implement purpose-built, client-focused infrastructure and virtualisation solutions for our customers. His team in QLD focuses on long-term relationships with clients, building end-to-end enterprise ICT architecture for corporate, education and government clients across Microsoft, Citrix and VMware technology sets. A local leader in virtualisation in the QLD market, Julian's team can help all clients achieve greater return on investment, reliability and performance through best practice, industry-leading solutions.
Who told you desktop virtualisation only comes by way of virtual desktop infrastructure (VDI)? There is more than one flavour for delivering virtual desktops to your workforce, such as app streaming, operating system provisioning and Remote Desktop Services (RDS). While any delivery model can improve IT management, the method you choose has a lot to do with the main problem you’re trying to solve. Here are a few example scenarios to guide you.
You need: To deliver entire desktop images to remote employees
Then choose: VDI
VDI enables IT to deploy an entire desktop image to workers no matter where they are. For companies with multiple offices or a lot of workers scattered across locations, VDI provides a more secure way to access data and applications through the data centre. A key benefit for the IT department is it no longer has to worry about repairing a computing device or troubleshooting a glitch for an employee far offsite. Users must be able to connect to the network – keep in mind this means the end user’s experience is affected by network latency and available bandwidth.
You need: Business-critical applications available across operating systems and devices
Then choose: RDS
For this hosted type of virtualisation, users just need to connect to a network on any device to access their applications. The setup makes this delivery model ideal for enterprises embracing the Bring Your Own Device trend, as it enables users to access specific apps on different devices. IT management is improved, as the department can oversee the applications and switch user settings if need be without worrying about apps being compatible with the operating system.
You need: Tight control over app management
Then choose: Application streaming
With this model, the IT department retains ultimate control over who can access which applications, and when. If you hire a lot of contract workers, this client-based virtualisation model also allows you to schedule when the license expires on certain applications. Users can access applications, which are delivered on-demand, when off the network, and IT gets the benefit of being able to run legacy applications on newer operating systems.
You need: Employee mobility
Then choose: Virtual containers
You can lock down certain applications and use them offline with this virtualisation delivery model, which adds big convenience for mobile employees who regularly travel, such as sales associates. There’s a security win here as threats can be contained and the OS and apps are delivered through the IT department.
You need: Reliable uptime
Then choose: Operating system provisioning
This client-based delivery model solves the uptime issue because only the OS and the applications are downloaded, which avoids burdening the network. Users can very easily access an operating system image by restarting their computer or moving to another desktop. This delivery method also, in effect, wipes the data from the device when the user powers it off, as everything is stored in the data centre. This delivery model works well for organisations with the same desktop image on a lot of different devices, such as contact centres and schools.
Have more questions on which desktop virtualisation delivery model to choose? A Datacom expert can discuss the options that best suit your business.
When it comes to desktop virtualisation, you can’t just pick a product or delivery method at random and hope it sticks. Like taking on any major IT transformation, you should first see how desktop virtualisation fits in with your current information and communications technology environment. Additionally, you’ll need to ensure you have the network infrastructure to support whichever virtualisation delivery model you choose. And finally, you’ll need to take a look at the different kinds of users at your organisation to determine the best method of virtualising.
1. Determine how desktop virtualisation fits with other IT plans
A 2011 Forrester survey reported 38 per cent of IT managers planned desktop and application virtualisation with Windows 7 migration. If you have a similar project in the works, it might make sense to execute desktop virtualisation at the same time. But there’s more to consider in the realm of mobility, collaboration or remote work plans. For instance, are you envisioning a Bring Your Own Device program? Then virtualising through an application-level streaming approach is probably your best bet because these users will only need certain applications to run on their personal device, not an entire operating system image.
2. Assess network requirements
Any type of desktop virtualisation affects your network. For server-based virtual desktop infrastructure (VDI) and terminal services, the end user’s experience is affected by network latency and bandwidth availability. To determine bandwidth needs, your organisation will need to categorise its workforce – task or power users, for instance – and determine which types of functions they will need to perform and how often they will perform them. If you’re choosing VDI, you might need to consider a WAN optimisation solution to decrease latency. Datacom can work with your organisation to design and implement a VDI solution that takes your network needs into consideration.
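Once the workforce is categorised, a first-pass bandwidth estimate is just per-category arithmetic. The per-user figures and headcounts below are assumptions for illustration only; real numbers should come from pilot measurements of your actual display protocol and workloads.

```python
# A hedged VDI bandwidth-sizing sketch. Per-user kbps figures and
# headcounts are assumptions; measure real sessions before buying links.

def vdi_bandwidth_mbps(categories):
    """Sum peak bandwidth across user categories.

    categories: list of (concurrent_users, kbps_per_user) tuples.
    """
    total_kbps = sum(users * kbps for users, kbps in categories)
    return total_kbps / 1000

workforce = [
    (120, 100),  # task workers: light, text-heavy sessions (assumed)
    (30, 400),   # power users: graphics-heavy sessions (assumed)
]
print(f"Peak estimate: {vdi_bandwidth_mbps(workforce):.0f} Mbps")
```

An estimate like this also makes the case for WAN optimisation concrete: if the peak figure approaches your branch link capacity, latency and contention will shape the end-user experience long before the servers do.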
3. Plan for profile management
How will you let your workforce enjoy a consistent end-user experience with their virtualised desktop? If you don’t properly plan profile management, users could wind up with lost desktop settings and an inconsistent look and feel when accessing the corporate desktop from different devices. Depending on the method of desktop virtualisation and the product you use, there are several potential ways to execute a profile strategy.
For example, products from Citrix and Cisco – virtualisation solutions Datacom has experience implementing and integrating – optimise the desktop so settings and applications are carried over to whichever device the user accesses. Having a profile management tool in place can also prevent “logon storms” when every member of your workforce attempts to access the desktop at the same time each morning.
Your mobile employees can now access sensitive company information on their smartphones. And so can the cybercriminal sitting in the next building over or on the other side of the world.
The trend of workforce mobility has led to increasingly savvy cybercriminals shifting their attention from attacking traditional corporate network environments to syphoning company data off poorly secured employee-owned devices. The security gap has emerged for a number of reasons. Many enterprises don’t have the IT manpower to oversee a host of different devices after an organisation has hastily agreed to implement a Bring Your Own Device programme. There are also issues related to employees losing their mobile devices or using them to access corporate data on unsecure Wi-Fi networks. Downloading suspicious files or applications further increases the cyber threat.
These risks aren’t cause to shut the door on mobility at your enterprise, however. Using the right security solutions and educating all staff about ongoing cyber threats are key ways to protect your corporate data and brand.
Mobile virtualisation, for instance, essentially allows IT departments to keep corporate data separate from the rest of the device. Staff can then remotely manage the device, issuing security patches and wiping sensitive information if a phone or tablet is lost or compromised. End-point security solutions and mobile device management allow IT staff to encrypt and password-protect data, limit the number of users with administrative privileges and select which employees gain access to which applications.
While the IT department will manage these solutions, you should ask mobile employees to commit to doing their part to keep company systems and information secure. Educate the entire enterprise on how the latest cyber security risks could affect the business. Require all staff interested in participating in a Bring Your Own Device programme to sign a user policy that covers requirements for setting and updating passwords, reporting procedures for when a device is lost, stolen or attacked, and decommissioning guidelines.
If you’re concerned about the risks inherent in enabling greater mobility at your enterprise, Datacom’s Technical Security Services (TSS) can assess your current environment and develop a customised security solution. Staffed with security experts experienced in protecting corporations and government agencies, TSS can conduct application and network vulnerability assessments, intrusion simulation and research to ensure you are protected against the latest cyber security risks from all sides.
Datacom will be presenting on cyber security at the Trend Micro EVOLVE.Cloud event in Sydney and Melbourne this week. For details on the event, click here.
Mark McWilliams has 24 years’ experience in the technology sector and is a Director of Datacom Investments. Datacom’s specialist technical security practice, Technical Security Services (TSS), which provides in-depth advice and technical consulting to both public and private sector enterprises across Australia, reports to Mark. He has detailed knowledge across the IT spectrum from data centres through to governance, with everything in between. He has also worked with organisations that have varying needs from a security standpoint, including those with advanced requirements such as banks and government agencies. He has seen both good and bad security deployments and has strong views on how organisations should protect themselves in this interconnected world.