This is the third and final instalment in a series, Here's What a Network Needs After a Cloud Migration. Part 1 looked at how to redesign the LAN. Part 2 outlined strategies for the Internet connection.

Managing user access and authentication becomes far more important in a cloud-based application environment. When a client's users can reach sensitive corporate information over the Internet from anywhere in the world, you need to keep careful track of who has access and what they're doing.

If someone leaves the organization or has their password stolen, that account must be immediately suspended while you figure out what the exposure is.

Implement authentication protocols

The most common system for handling corporate authentication and authorization information is Microsoft Active Directory. Many cloud services allow clients to use their own internal Active Directory to provide authentication services for cloud applications. But you need to be careful about how you do it.

The obvious approach of making an Active Directory domain controller remotely accessible over the Internet is a bad idea: it exposes the credential store for the entire organization directly to attackers. The right solution is Active Directory Federation Services (AD FS), which lets the cloud application trust security tokens issued by the internal directory without exposing the domain controller itself. Microsoft has detailed documentation on setting up an AD FS server.
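The core idea of federation is that the cloud application never talks to the domain controller; it only verifies a signed security token issued by the identity provider. As a purely illustrative sketch (AD FS actually issues SAML or RS256-signed JWT tokens backed by certificates, not the shared-secret HS256 scheme shown here), token validation looks roughly like this:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url_decode(s: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def issue_token(claims: dict, shared_secret: bytes) -> str:
    # Stand-in for the identity provider: sign header.payload with the secret.
    header_b64 = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload_b64 = b64url_encode(json.dumps(claims).encode())
    sig = hmac.new(shared_secret, f"{header_b64}.{payload_b64}".encode(),
                   hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{b64url_encode(sig)}"

def validate_token(token: str, shared_secret: bytes) -> dict:
    """Check the signature, then the expiry; return the claims or raise."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(shared_secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature: token not issued by the trusted IdP")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired: user must re-authenticate")
    return claims
```

The application only ever needs the verification material (here a shared secret; in real AD FS, the token-signing certificate), so credentials and the directory itself stay inside the corporate network.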

Stay current with patching and anti-malware

You also need to be very careful about things like securing and patching user workstations. The same web browser your client is using to access sensitive corporate files is being used to read the news, browse used car ads, visit social media sites, and so forth. An attacker won’t go after files by directly attacking the cloud service provider. That’s too hard. But attacking the insecure computer of an end user with access to sensitive data is comparatively easy.

So your client’s workstations need to be regularly patched, and you need to be diligent about maintaining modern endpoint security and anti-malware software. Note that I didn’t say anti-virus.

Anti-virus is a term that generally refers to a signature-based system for catching malicious files. But that type of system doesn’t do much to protect your client against malicious software that sits in memory or that continuously modifies itself to avoid checksum signature detection. The current generation of malware is very sneaky.
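A toy example makes the weakness concrete: a hash-based signature matches only an exact byte sequence, so a malware sample that mutates even a single byte between victims produces a different hash and sails past the signature database. (The "signature database" and samples below are entirely hypothetical.)

```python
import hashlib

# Toy signature database: SHA-256 hashes of known-bad files.
KNOWN_BAD = {hashlib.sha256(b"malicious payload v1").hexdigest()}

def signature_match(file_bytes: bytes) -> bool:
    # Classic anti-virus check: is this exact file already known to be bad?
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD

original = b"malicious payload v1"
mutated = b"malicious payload v2"  # one byte changed, behavior unchanged

print(signature_match(original))  # True: the known sample is caught
print(signature_match(mutated))   # False: a trivial mutation evades detection
```

This is why behavioral and memory-based endpoint security matters: it judges what code does, not what its bytes hash to.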

A useful piece of security monitoring equipment for an outsourced environment is a next-generation firewall or Intrusion Detection System (IDS) capable of watching for command-and-control data.


Modern malware often works by installing a very general piece of code called a dropper. The dropper’s primary job is to get itself running on the target system. Then it calls home to a command-and-control device for further instructions. Those instructions could include additional downloads. Your defensive goal is to detect and stop the malware at this early stage, before the real damage is done.

Obviously, the number of ways that a command-and-control protocol could work is constrained only by the imagination of the malware author. Fortunately, digital saboteurs tend not to be very imaginative and the same tricks are used repeatedly, which makes them easier to detect.
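One of those repeated tricks is beaconing: the compromised host phones home at a fixed interval, with only slight jitter. A monitoring system can flag connection series whose gaps are suspiciously regular. The heuristic below is a simplified sketch of that idea (threshold and timestamps are invented for illustration):

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, cv_threshold=0.1, min_events=6):
    """Flag a series of outbound connections as beacon-like when the gaps
    between events are unusually regular (low coefficient of variation)."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return False
    return pstdev(gaps) / avg < cv_threshold

# A bot checking in every ~60 seconds with jitter vs. a human browsing.
bot_times = [0, 60, 119, 181, 240, 299, 361]
human_times = [0, 5, 180, 190, 700, 705, 1400]

print(looks_like_beaconing(bot_times))    # True: near-constant 60 s interval
print(looks_like_beaconing(human_times))  # False: bursty, irregular traffic
```

Real next-generation firewalls combine interval analysis like this with destination reputation, payload heuristics, and known C2 protocol fingerprints.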

In truth, the best approach to malware is defense in depth: try to cover every one of these steps. Try to catch the bad website before the client loads the page. Then try to catch the dropper on the way down. Then try to stop the dropper from running. Then try to stop it from contacting the command-and-control server. Then try to stop it from downloading the ultimate malicious payload.

If all of that fails, try to spot the malicious payload when it tries to execute. The attackers are giving you many opportunities to notice and stop their actions. Take advantage of as many of them as possible.
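The layered strategy above can be sketched as a chain of checks, where each layer gets a chance to stop the attack before it reaches the next stage. The layer names and event fields here are hypothetical placeholders for whatever controls your client actually deploys:

```python
# Hypothetical defense-in-depth pipeline: each layer inspects an event
# (a download, an execution attempt, an outbound connection) and either
# blocks it or passes it along to the next layer.
def url_filter(event):       return event.get("url_reputation") == "bad"
def download_scanner(event): return event.get("file_flagged", False)
def exec_control(event):     return event.get("unsigned_exec", False)
def egress_monitor(event):   return event.get("c2_beacon", False)

LAYERS = [
    ("URL filter", url_filter),
    ("download scanner", download_scanner),
    ("execution control", exec_control),
    ("egress monitor", egress_monitor),
]

def first_blocking_layer(event):
    """Return the name of the first layer that stops the event, or None."""
    for name, check in LAYERS:
        if check(event):
            return name
    return None
```

The point of the structure is exactly the one made above: an attack that slips past the URL filter and the download scanner can still be caught when the dropper beacons out, for example `first_blocking_layer({"url_reputation": "ok", "c2_beacon": True})` returns `"egress monitor"`.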

Final thoughts

Even when your client’s infrastructure is running in the cloud of an application service provider, standard IT best practices still apply. These are the action items we’ve reviewed: implement strong authentication with AD FS rather than exposing a domain controller to the Internet, keep workstations patched and protected with modern endpoint security, and monitor for command-and-control traffic with a next-generation firewall or IDS.