
Evolving remote access challenges

To increase employee productivity, every company wants to give its employees access to their applications from anywhere. Users are no longer tied to a single work location: they need access to their data from any location, from any device they have, and regardless of where the application is hosted. Allowing this remote connectivity is in constant conflict with keeping the edge secure. As more applications are exposed, the edge becomes more porous, and keeping it secure is a constant battle for administrators. Network administrators also have to ensure that remote access is always available and that remote users can reach their applications in the same way they would while in the office; otherwise, users need to be trained on how to access an application while remote, which is bound to increase the support cost of maintaining the infrastructure. Another important challenge for the network administrator is being able to manage the remote connections and ensure they are secure.

Migration to dynamic cloud

In a modern enterprise, there is a constant need to optimize the infrastructure based on workload. We would rather plan for the correct capacity than take a bet on the number of servers needed for a given workload. If the business needs are seasonal, we are forced to bet on a certain level of infrastructure expense.

If the expected traffic does not arrive, the investment sits underutilized; if the incoming traffic volume is too high, the organization loses the opportunity to generate additional revenue. To reduce both the risk of lost revenue and large capital expenses, organizations may deploy virtualized solutions, but this still requires betting on the initial infrastructure.

What if the organization could deploy its infrastructure based on need and expand on demand? This is where moving to the cloud helps, turning capital expense (CapEx) into operational expense (OpEx). If you tell your finance department that you are moving to an OpEx model for your infrastructure needs, you will definitely be greeted by cheers and offered cake (or at least, a fancy calculator).

The needs of modern data centers

As we said, reducing capital expense is on everyone’s to-do list these days, and being able to invest in your infrastructure based on business needs is key to achieving that goal. If your company expects a seasonal workload, you probably want to be able to expand your infrastructure dynamically. Moving your workloads to the cloud allows you to do this. If you are dealing with sensitive customer data or intellectual property, you also need to maintain secure connectivity between your premises and the cloud. You might need to move workloads between your premises and the cloud as business demands change, so establishing secure connectivity between corporate and the cloud must be dynamic and transparent to your users. That means the gateway you use at the edge of your on-premises network and the gateway your cloud provider uses must be compatible.

Another consideration is that you must be able to establish or tear down the connection quickly, and it needs to recover from outages just as quickly. In addition, today’s users are mobile and the data they access is also dynamic (the data itself may move from your on-premises servers to the cloud and back). Ideally, users should not need to know where the data resides or where they are accessing it from, and they should not have to change their behavior based on either. These are the needs of the modern data center, and things get even more complex if you have multiple branch offices and multiple cloud locations.

Dynamic cloud access with URA

Let’s see how these goals can be met with Windows Server 2012. Mobile users can connect to the organizational network using either DirectAccess or VPN. When you move resources to the cloud, you need to preserve their address space so that your users are impacted as little as possible, and when you move a server or an entire network to the cloud, you can establish a Site-to-Site (S2S) connection through an edge gateway. Imagine you have a global deployment with many remote sites, a couple of public cloud data centers, and some of your own private cloud. As the number of sites grows, the number of Site-to-Site links needed grows rapidly (a full mesh of n sites requires n(n-1)/2 links). If you also have to maintain one gateway server or device for the Site-to-Site connections and another gateway for remote access such as VPN or DirectAccess, the associated maintenance cost can increase dramatically.

One of the most significant new capabilities of Windows Server 2012 Unified Remote Access is the combination of DirectAccess and the traditional Routing and Remote Access Server (RRAS) in the same Remote Access role, so you can now manage all your remote access needs from one unified console. As we’ve seen, only certain versions of Windows (Windows 7 Enterprise and Ultimate, Windows 8 Enterprise) can be DirectAccess clients, but what if you have to accommodate some Vista or XP clients, or third-party clients that need CorpNet connectivity? With Windows Server 2012, you can enable the traditional VPN from the Remote Access console and allow down-level and third-party clients to connect via VPN. The console also lets you monitor connected remote access clients, which is very useful: you can now configure, manage, monitor, and troubleshoot all remote access needs from the same place.
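
As a rough illustration (not the exact deployment steps), the combined role can be put in place from PowerShell; this is a minimal sketch assuming the RemoteAccess module cmdlets that ship with Windows Server 2012:

    # Add the Remote Access role and its management tools
    Install-WindowsFeature RemoteAccess -IncludeManagementTools

    # Enable the traditional VPN alongside an existing DirectAccess deployment
    # so that down-level and third-party clients can connect
    Install-RemoteAccess -VpnType Vpn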

In the past, you might have used Site-to-Site demand-dial connections to connect and route to your remote offices, but until now those demand-dial connections used either the Point-to-Point Tunneling Protocol (PPTP) or the Layer Two Tunneling Protocol (L2TP). These involved manual steps performed from the console, presented interoperability challenges with gateway devices from other vendors, and, because every action had to go through the console, did not scale well once the number of Site-to-Site connections grew beyond a certain point.

Some products attempted to overcome the limits of the built-in Site-to-Site options in Windows. For example, Microsoft’s Forefront Threat Management Gateway 2010 used the Internet Key Exchange (IKE) protocol, which allowed it to work with gateways from vendors such as Cisco and Juniper. The limitation of that solution was that if one end of the IPsec connection failed, Dead Peer Detection (DPD) took some time to notice the failure. The time spent recovering or falling back to an alternate path caused applications communicating over the tunnel to fail, and that service disruption could cause significant losses.

Because Windows Server 2012 can combine VPN, DirectAccess, and Site-to-Site IPsec connections in the same box, it lets you reduce the number of unique gateway servers needed at each site. The Site-to-Site connections can also be established and torn down with a simple PowerShell command, making multiple connections easier to manage. The S2S tunnel mode IPsec link uses the industry-standard IKEv2 protocol for IPsec negotiation between the endpoints, which is the current interoperability standard for almost any VPN gateway. That means you don’t have to worry about what the remote gateway is; as long as it supports IKEv2, you can confidently create the S2S IPsec tunnel to it and establish connectivity easily, with much faster recovery if the connection drops.
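
To give an idea of how simple this is, here is a hedged sketch of the lifecycle of a single S2S link using RemoteAccess module cmdlets; the interface name, destination address, shared secret, and routed subnet are hypothetical placeholders:

    # Enable the Site-to-Site VPN functionality (if the role was installed without it)
    Install-RemoteAccess -VpnType VpnS2S

    # One command creates an IKEv2 tunnel interface to the remote gateway
    Add-VpnS2SInterface -Name "Branch1" -Protocol IKEv2 -Destination 203.0.113.10 `
        -AuthenticationMethod PSKOnly -SharedSecret "PlaceholderSecret" `
        -IPv4Subnet "10.2.0.0/16:100" -Persistent

    # Dial it, and tear it down again just as easily
    Connect-VpnS2SInterface -Name "Branch1"
    Disconnect-VpnS2SInterface -Name "Branch1"
    Remove-VpnS2SInterface -Name "Branch1" -Force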

Now let’s look at the options and see how we can quickly and effectively establish this connectivity using URA. We’ll start with a headquarters location and a branch office, and look at the high-level needs and the steps to achieve the desired connectivity. With just two locations, the typical requirements are that clients in either location can reach the other site, the connection is secure, and the link exists only when there is traffic to carry between the two locations. We don’t want dedicated links such as T1 or fractional T1 lines because of their high cost; instead, we can use our pre-existing Internet connections and establish Site-to-Site IPsec tunnels that securely connect the two locations. We also want users in public Internet locations to be able to access any resource in either location.

We have already seen how DirectAccess provides seamless connectivity to the organizational network for domain-joined Windows 7 and Windows 8 clients, and how to set up a multisite deployment, where Windows 8 clients connect to the nearest site and Windows 7 clients connect to the site they are configured for. Because the same URA server can also be configured as an S2S gateway, and the IPsec tunnel carries both IPv4 and IPv6 traffic, DirectAccess clients in public Internet locations can connect to any one of the sites and also reach the remote site through the Site-to-Site tunnel.

Adding a site in the cloud, whether your private cloud or a public cloud, is very similar to adding a branch office location. Typically, the cloud service provider supplies its own gateway and lets you build your infrastructure behind it. The provider usually gives you an IP address to use as the remote endpoint and connects you to your resources by NATting the traffic to them in the cloud.

Adding a cloud location using Site-to-Site

In the following diagram, we have a site called Headquarters with a URA server (URA1) at the edge. Clients on the public Internet can access resources in the corporate network through DirectAccess or the traditional VPN via URA1. We also have a cloud infrastructure provider, where we need to build our CloudNet and provide connectivity between the corporate network at Headquarters and CloudNet in the cloud. Clients on the Internet should be able to access resources in either the corporate network or CloudNet, and the connection should be transparent to them.

CloudGW is the edge device in the cloud that your cloud provider owns; it is used to control and monitor the traffic flow to each tenant.

Basic setup of cross-premise connectivity

The following steps outline the various options and scenarios you might want to configure:

  1. Ask your cloud provider for the public IP address of the cloud gateway they provide.

  2. Build a virtual machine running Windows Server 2012 with the Remote Access role and place it in your cloud location. We will refer to this server as URA2.

  3. Configure URA2 as an S2S gateway with two interfaces:

    • The interface towards the CloudGW will be the IPsec tunnel endpoint for the S2S connection. The IP address for this interface could be a public IPv4 address assigned by your cloud provider or a private IPv4 address of your choice.

      If it is a private IPv4 address, the provider should send all the IPsec traffic for the S2S connection from the CloudGW to the Internet-facing interface of URA2. The remote tunnel endpoint configuration in URA1 for the remote site will be the public address that you got in step 1.

      If the Internet-facing interface of URA2 is also a routable public IPv4 address, the remote tunnel endpoint configuration in URA1 for the remote site will be this public address of URA2.

    • The second interface on URA2 will have a private address from the range you are going to use in CloudNet, facing the servers you host there.

  4. Configure the cloud gateway to allow the S2S connections to your gateway (URA2).

  5. Establish S2S connectivity between URA2 and URA1. This will allow you to route all traffic between CloudNet and CorpNet.

The preceding steps provide full access between CloudNet and CorpNet, and also allow your DirectAccess and VPN clients on the Internet to access any resource without having to worry about whether it lives in CorpNet or in CloudNet.
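
As a minimal sketch of step 5, assuming both gateways run the RemoteAccess module and that PSK authentication is used to start with, the commands might look like the following; all names, addresses, and subnets are placeholders, with 10.1.0.0/16 standing in for CorpNet and 10.2.0.0/16 for CloudNet:

    # On URA2 in the cloud: point the tunnel at URA1's public address and route CorpNet through it
    Add-VpnS2SInterface -Name "ToCorpNet" -Protocol IKEv2 -Destination 198.51.100.1 `
        -AuthenticationMethod PSKOnly -SharedSecret "PlaceholderSecret" `
        -IPv4Subnet "10.1.0.0/16:100" -Persistent

    # On URA1 at Headquarters: point the tunnel at the public address obtained in step 1
    # and route CloudNet through it
    Add-VpnS2SInterface -Name "ToCloudNet" -Protocol IKEv2 -Destination 203.0.113.20 `
        -AuthenticationMethod PSKOnly -SharedSecret "PlaceholderSecret" `
        -IPv4Subnet "10.2.0.0/16:100" -Persistent

    # Dial from either end; traffic between CorpNet and CloudNet now flows over the tunnel
    Connect-VpnS2SInterface -Name "ToCloudNet"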

DirectAccess entry point in the cloud

Building on the basic setup, you can also let clients on the Internet reach CloudNet directly, without having to go through CorpNet. To achieve this, we can add a URA server in CloudNet (URA3). Here is an overview of the steps (assuming URA3 already has the Remote Access role installed):

  1. Place a domain controller in CloudNet. It can communicate with your domain through the Site-to-Site connection to do Active Directory replication and perform just like any other domain controller.

  2. Enable the multisite configuration on your primary URA server (URA1).

  3. Add URA3 as an additional entry point. It will be configured as a URA server with the single NIC topology.

  4. Register the IP-HTTPS site name in DNS for URA3.

  5. Configure your cloud gateway to forward HTTPS traffic to URA2, and from there to URA3, so that clients can establish IP-HTTPS connections.

Using this setup, clients on the Internet can connect to either the entry point URA1 or URA3. No matter what they choose, they can access all resources either directly or via the Site-to-Site tunnel.
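
A rough PowerShell sketch of steps 2 and 3 might look like the following; the deployment name, entry point names, and the public IP-HTTPS name are hypothetical, and the Enable-DAMultiSite and Add-DAEntryPoint cmdlets are assumed from the RemoteAccess module (run against the appropriate server, or targeted with the module’s server parameters):

    # Step 2, on URA1: convert the existing single-server deployment to multisite
    Enable-DAMultiSite -Name "Contoso Remote Access" -EntryPointName "Headquarters"

    # Step 3: add URA3 in CloudNet as an additional entry point, using the public name
    # that Internet clients will resolve for IP-HTTPS (register it in DNS as in step 4)
    Add-DAEntryPoint -Name "CloudNet" -ConnectToAddress "cloudda.contoso.com"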

Authentication

The Site-to-Site connection between the two endpoints (URA1 and URA2) can be configured with a pre-shared key (PSK) for authentication, or you can further secure the IPsec tunnel with certificate authentication. The certificates you need for certificate authentication are computer certificates that match the names of the endpoints, issued either by a third-party provider or by your internal Certificate Authority (CA). As with any certificate authentication, the two endpoints need to trust the certificates used at either end, so make sure the root CA certificate is installed on both servers. To keep things simple, you can start with a PSK-based tunnel and, once the basic scenario works, change the authentication to computer certificates. We will see the steps for both PSK and certificates in the following section.
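
As a hedged sketch of that progression, assuming the Set-VpnS2SInterface cmdlet from the RemoteAccess module and the hypothetical interface named ToCloudNet from the earlier sketch:

    # The interface was created with -AuthenticationMethod PSKOnly (see the earlier sketch)

    # Once the PSK tunnel works, switch the same interface to computer certificates.
    # Both servers must already hold computer certificates matching their endpoint names
    # and trust the issuing root CA.
    Set-VpnS2SInterface -Name "ToCloudNet" -AuthenticationMethod MachineCertificates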

Configuration steps

Even though the Site-to-Site IPsec tunnel can be configured via the console, we highly recommend that you get familiar with the PowerShell commands for this configuration, as they make life much easier when you manage multiple configurations. If you have multiple remote sites, setting up and tearing down each site on demand simply does not scale through the console.
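
For example, a small script along these lines could dial or hang up a whole list of site links on demand; the interface names are hypothetical and assume the links were already created with Add-VpnS2SInterface:

    # Hypothetical list of S2S interfaces created earlier
    $sites = "Branch1", "Branch2", "CloudNet"

    # Bring all site links up at the start of the busy season...
    foreach ($site in $sites) { Connect-VpnS2SInterface -Name $site }

    # ...and tear them down again when the extra capacity is no longer needed
    foreach ($site in $sites) { Disconnect-VpnS2SInterface -Name $site }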

Summary

We have seen how, by combining DirectAccess and Site-to-Site VPN functionality, we are now able to use a single box to provide all remote access features. With virtual machine live migration, you can move any workload from your corporate network to the cloud network and back over the S2S connection while keeping the same server names. This way, clients from any location can access your applications in the same way as they would if they were on the corporate network.
