A datacenter is a collection of many different elements, all working together to provide a platform for our digital needs.
It is actually a mix of different elements, some logical and some physical; it is not a mere collection of parts but a complex system with many interactions.
Inside the datacenter we can easily see cables, racks, servers, network equipment, storage units and so on, but each is there (or should be there) for a purpose, and all are interconnected.
A big part of a datacenter is not even visible; it is the software and data running in it, flowing in and out through its connections, disks, memory and CPUs.
So alongside the physical infrastructure we also have services: processing power, storage, connectivity and so on. All of this makes the datacenter what it is.
Security in the datacenter
If we consider our datacenter valuable, we should think about how to protect it.
The protection of a datacenter should take into account all of its components, physical and virtual.
In modern datacenters the two dimensions are interconnected, and one does not exist without the other.
I usually place in the physical domain all the issues that come from planning a correct disaster recovery and backup solution, since generally speaking they require a hardware approach. Virtualization has moved these needs to the software level, so this is, I know, quite an arbitrary assumption. But, as an example of my point of view: is there any disaster recovery in place if the DR datacenters are in the same building? Likewise, is any backup policy sound if backup units are kept in a physically unsecured environment, rather than with the same level of precaution and redundancy that should be given to DR?
It is clear that to secure a datacenter it is mandatory to provide a safe physical environment, so power lines, cooling and physical access control are all terms of the equation.
If the floor cannot bear the weight of the racks, that is a clear security issue, as is a datacenter that overheats, or power lines that cannot provide the needed energy or enough flexibility (demand varies with usage). The same can be said of UPS units, which are critical to maintaining stable power. Of course, the implications of many of these aspects go beyond the strictly physical environment, since almost all of them require control software to be monitored.
What is usually underestimated is that the entire physical environment nowadays has sensors that talk to the logical one; that data can be really useful for understanding the general safety of the system, even from a cybersecurity perspective. A surge in power demand, or a spike in heat or CPU/disk load, can signal an impending failure as well as a symptom of an ongoing attack.
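To make the idea concrete, here is a minimal sketch of how such telemetry could be screened for anomalous spikes. The readings, window size and threshold are invented for the example; a real monitoring stack would do far more, but the principle is the same:

```python
from statistics import mean, stdev

def spike_alerts(samples, window=10, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the rolling window that precedes them."""
    alerts = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            alerts.append((i, samples[i]))
    return alerts

# Stable power draw (kW, invented values) followed by a sudden surge
readings = [4.1, 4.0, 4.2, 4.1, 4.0, 4.1, 4.2, 4.0, 4.1, 4.2, 9.5]
print(spike_alerts(readings))  # the surge at index 10 is flagged
```

Whether the flagged spike is a failing PSU or a cryptominer is then a question for the security team, but without correlating the physical sensors with the logical side, the question never even gets asked.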
Going deeper into physical security for datacenters is out of scope here; I would like to look at the logical aspect of datacenter security, since it collects almost all (if not all) of the classical cyber and IT security requirements.
So let's take a quick look at server security. The term "server" belongs to the logical domain: when we talk about servers we are not talking about a physical machine but about a logical entity providing a service.
If we look for a definition:
- In information technology, a server is a computer program that provides services to other computer programs (and their users) in the same or other computers.
- The computer that a server program runs in is also frequently referred to as a server (though it may be used for other purposes as well).
- In the client/server programming model, a server is a program that awaits and fulfills requests from client programs in the same or other computers. A given application in a computer may function as a client with requests for services from other programs and also as a server of requests from other programs.
So a server is definitely a logical entity.
To secure a server it is usually necessary to perform, as a first step, a hardening analysis and to set up a correct patch management procedure. This is true, actually, for any device and software in the IT realm. Alas, much of the time this is one of the first neglected areas.
Due to the importance of these elements I will go into hardening and patching later, but I would like to stress that they are mandatory requirements for a minimum security level.
There are many different OS platforms that can be used as a server; classical OSes like Windows, UNIX or Linux all require careful management.
Since most OSes are multipurpose (they offer more than one service, acting as file, application and web server in the same infrastructure), it is essential to extend security considerations to all the resources in place: protocols, access management, users and so on.
Of course, since a datacenter lives on its cross-relationships, we should carefully understand what is related to what, in order to avoid, for example, changing some parameters and affecting something else.
Alas, all the control and configuration needed can require deep knowledge of the platform and more; for example, implementing anti-malware, antivirus or encryption may require third-party software.
This is also true for virtual hosting environments such as VMware, Hyper-V, VirtualBox and so on.
The good news is that the market offers many solutions to help us put correct security plans in place, but a correct vision and a sound process implementation are unavoidable.
If “OS” servers have their requirements, it is obvious that all other servers have the same needs. Whether we talk about a database or a web server, it needs security. Since these application servers live on top of the OS, the security they require is “on top” of the OS's needs: securing just one (application server or OS) will leave the system unsecured.
A classical example is the implementation of AV/AM (antivirus/antimalware) solutions. These should be applied not only to the guest OS but also, when available, to the application server running on it, since the security issues they have to cover can be dramatically different.
Among all these security considerations, the component dedicated to network security cannot be overlooked.
Network security is key in the datacenter, since all communication ultimately flows inside it, and every component needs to communicate with the others.
Even in a virtual environment, the hypervisor has to manage network communication between the various internal virtual environments, and between them and the external world. Speed is one of the key elements inside a datacenter, due to the great amount of data flowing; but, depending on the nature of the services the datacenter provides, other considerations may apply. Since the datacenter is, for example, a collection of running servers, an IPS/IDS can make absolute sense, as can implementing correct monitoring (perhaps using SIEM and big data analytical tools).
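As a toy illustration of the kind of correlation a SIEM performs, the sketch below counts failed logins per source and flags likely brute-force attempts. The event fields and threshold are invented for the example; a real SIEM correlates far more signals:

```python
from collections import Counter

def brute_force_suspects(events, max_failures=5):
    """Count failed logins per source IP and return the sources that
    exceed the threshold -- a minimal SIEM-style correlation rule."""
    failures = Counter(e["src"] for e in events if e["action"] == "login_failed")
    return {src: n for src, n in failures.items() if n > max_failures}

# Simulated event stream: one noisy source, one benign user
log = ([{"src": "10.0.0.7", "action": "login_failed"}] * 8
       + [{"src": "10.0.0.9", "action": "login_failed"},
          {"src": "10.0.0.9", "action": "login_ok"}])
print(brute_force_suspects(log))  # {'10.0.0.7': 8}
```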
Access Management and IAM
Along with network security, another neglected area is user management. Identity, user access and rights management should be big concerns for a datacenter manager, because poor management of these can affect the entire datacenter.
Too often little importance is given to, for example, the use of administrative rights across contiguous platforms, or to “frozen” users (from cancelled accounts, to never-used ones, to service accounts), all with sets of rights that are usually not correctly monitored.
This is true not only for datacenters; even end-user devices, such as laptops or company mobile phones, usually suffer from such lapses of memory.
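Spotting those “frozen” accounts is mostly a matter of looking: a periodic sweep comparing last-login dates against a policy cutoff already catches the worst cases. The account names, dates and 90-day policy below are invented for the sketch:

```python
from datetime import date, timedelta

def stale_accounts(accounts, today, max_idle_days=90):
    """Return accounts never used, or idle longer than the policy allows."""
    cutoff = today - timedelta(days=max_idle_days)
    return [a["name"] for a in accounts
            if a["last_login"] is None or a["last_login"] < cutoff]

users = [
    {"name": "alice", "last_login": date(2016, 1, 4)},    # active
    {"name": "svc_backup", "last_login": None},           # never used
    {"name": "bob", "last_login": date(2015, 6, 1)},      # long gone
]
print(stale_accounts(users, today=date(2016, 1, 10)))  # ['svc_backup', 'bob']
```

The output is a review list, not a kill list: service accounts in particular may be legitimately dormant, which is exactly why a human should look at them.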
And if we have concerns about users, we should have concerns about protocols too. At every level they are the medium through which our systems communicate, so their management should be in our best interest. Alas, protocol management is usually delegated to network devices (e.g. implementing firewall rules), neglecting one of the basic rules of security: what is unmanaged is prone to risk.
There are tools to manage and secure DNS (besides DNSSEC, which is a security extension of DNS services), and DHCP should be managed too. Why should we lease an internal address (at least while we are in the IPv4 realm; this is even more important with IPv6) without any check or control on who made the request, leaving all the controls to the network device level afterwards? This is a bad security approach. DDI solutions on the market try to address these problems.
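In its simplest form, the check argued for above is just a gate in front of the lease decision: grant addresses only to registered devices, and log everything else for review. The MAC addresses and hostnames here are invented, and real DDI products do far more (fingerprinting, IPAM integration, audit trails), but this is the core idea:

```python
# Hypothetical device registry: MACs and hostnames invented for the example
KNOWN_DEVICES = {
    "00:1a:2b:3c:4d:5e": "web-frontend-01",
    "00:1a:2b:3c:4d:5f": "db-node-02",
}

def should_lease(mac):
    """Grant a DHCP lease only to registered devices, instead of
    leasing to anyone and deferring all control to the firewall."""
    return mac.lower() in KNOWN_DEVICES

print(should_lease("00:1A:2B:3C:4D:5E"))  # True: registered device
print(should_lease("de:ad:be:ef:00:01"))  # False: unknown requester
```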
Patching and Hardening
Whether we are dealing with a logical server, a physical server, software, a device or an appliance of any kind, everything should be under patching and hardening management.
I will first look at the recurring nightmare that is patching. Years ago there was the idea that “if the system runs, do not touch it”. It is still a mantra for many datacenter and IT managers; alas, this approach grows more dangerous every day and should be a real concern.
Patching is a requirement nowadays, mainly for security reasons. The naïve era in which software was assumed to be almost perfect is over; we are now in the more realistic era of flaw-ridden software.
Every day new vulnerabilities are disclosed, affecting everything, even our cars, so patching is no longer an option we can skip.
What does patching mean?
Patching means making configuration or code changes designed to secure the software against existing and potential zero-day vulnerabilities.
Patches are all around us, but what actually is a patch? A patch is usually software used to fix a problem in code or configuration. Patches are usually provided by vendors in the form of recurring or emergency updates.
Of course, updates contain not only security patches; they can also address compatibility problems, fix bugs, or add new features.
These updates, at least the security ones, are usually provided within a service agreement tied to the license duration of the software or device.
From time to time vendors close support for older releases, for several reasons: not only commercial ones, but also because the level of patching would become unsustainable. That is what happened to Windows XP and Windows 2003/2000. And, to stay in the Windows realm, it is also what happened to older versions of Internet Explorer, or to the embedded browser distributed with Android before Chrome.
Microsoft is finally moving on from its aging Web browsers as Internet Explorer 8, 9, and 10 will receive their last security updates and enter end-of-life on January 12. Users will then see a tab with a download link to the most current Internet Explorer available for the operating system.
End-of-life doesn’t mean older versions of Internet Explorer suddenly stop working, and there are ways to turn off Microsoft’s nagging reminder to update. But not switching to a supported browser is a colossal security mistake considering that attackers frequently target unpatched vulnerabilities in Internet Explorer. A regularly updated browser is still a critical line of defense against Web-based attacks.
Things in the datacenter follow the same rules, so we have to keep an eye on support lifecycles.
What should I patch?
Since patching means solving problems related to code or configuration, everything is subject to patching:
– All that is subject to configuration and is software/firmware based:
• Server/client operating systems (Linux, Microsoft Windows, Unix, Android, iOS, MacOS…)
• Appliance OSes and networking device firmware/OS
• Applications (SQL databases, web servers, CRM, mail, videoconferencing…)
• Virtualization platforms (VMware, Hyper-V, VirtualBox…)
• Drivers, middleware and managed components (SCADA, ICS…)
The reason for all this patching is always the same: to address
• Critical and non-critical vulnerabilities
• SW/HW compatibility
• New services
• Bug corrections
The Patching Cycle
Patching a system is not a one-time activity; it requires a cyclical approach.
The real need for patching stems from the discovery of new bugs, problems and vulnerabilities, so it is mandatory to check whether the vendor has a serious patching system, and a serious vulnerability disclosure process, in place. Vendors that never provide patches are vendors that:
• Created a perfect piece of software
• Dismissed the product/service
• Are not trustworthy
So, leaving aside the first point, which is unrealistic even for the Linux and Apple lovers, the rest is clear: every vendor has to release patches from time to time, generally in the form of updates.
So the first question should be: do I need to patch the system?
This can be done simply by checking the vendor's security bulletins or the vendor's patch/update release system. Most of these systems allow automatic updates, which is good for ALMOST all situations.
Of course, if a patch is available, I should consider whether or not I can apply it. This is a tough question if my system contains legacy components that could be affected by the patch itself.
A good practice is to have a test environment to verify that everything works, and only after those tests apply the patch to the production environment, keeping in mind that test and production environments are seldom 100% identical, so something could still be missed.
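The first question ("do I need to patch?") is essentially a comparison of installed versions against the fixed versions a vendor bulletin declares. The package names, versions and bulletin format below are invented for the sketch; real tooling (package managers, vulnerability scanners) handles vendor-specific version schemes:

```python
def parse_version(v):
    """Turn '2.4.10' into (2, 4, 10) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def pending_patches(installed, bulletin):
    """Return the packages whose installed version is older than the
    fixed version declared in a (hypothetical) vendor bulletin."""
    return [pkg for pkg, fixed in bulletin.items()
            if pkg in installed
            and parse_version(installed[pkg]) < parse_version(fixed)]

installed = {"httpd": "2.4.10", "openssl": "1.0.2"}
bulletin = {"httpd": "2.4.12", "openssl": "1.0.2"}  # versions that fix the flaws
print(pending_patches(installed, bulletin))  # ['httpd'] needs patching
```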
To patch or not to patch
Given the intrinsic risks of patching, therefore, it is better to apply a patch when there is a real need.
It is clear that, in the absence of reported issues, there is no need to patch.
But if a serious vulnerability or compatibility issue is reported, patching is in our best interest. At this point we can face two situations:
- the vendor provides a patch
- no patch exists.
In the first case we should start the patching cycle; in the second we have a big problem: a security issue that is not addressed by the vendor.
In this case, luckily, we can opt for third-party solutions that apply “virtual” patching to the system.
Virtual patching cannot address compatibility issues, but it can save us from immediate security risks.
In any situation it is advisable to test, and to save your work before applying any patch. How many system engineers have I seen crying because they had not taken a snapshot…
– It is always a good precaution to test patches before putting them into production, to avoid unpleasant issues caused by unforeseen and unexpected incompatibilities
– In virtual environments, take snapshots of the machines to be updated before and after applying the patch:
• Fingers crossed
• If all goes well, take a new snapshot; otherwise, restore the latest saved copy
– In physical environments do the same, ideally with software for bare-metal backup or similar
– For appliances or networking equipment, it is safer to isolate the device to avoid repercussions on the entire network in the event of problems
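The snapshot-patch-verify-rollback cycle above can be sketched as a small driver. The four callables are placeholders for platform-specific tooling (hypervisor snapshots, package manager, service health probes); the simulated run below is invented to show the rollback path:

```python
def apply_patch_safely(snapshot, apply_patch, health_check, rollback):
    """Snapshot, patch, verify, and roll back on failure. The four
    callables stand in for platform-specific tooling."""
    checkpoint = snapshot()       # save the job before touching anything
    apply_patch()
    if health_check():
        snapshot()                # keep a post-patch snapshot too
        return "patched"
    rollback(checkpoint)          # pull up the latest saved copy
    return "rolled_back"

# Simulated run where the patch breaks the service
state = {"healthy": True}
result = apply_patch_safely(
    snapshot=lambda: dict(state),
    apply_patch=lambda: state.update(healthy=False),
    health_check=lambda: state["healthy"],
    rollback=lambda cp: state.update(cp),
)
print(result, state)  # rolled_back {'healthy': True}
```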
Note: Virtual Patching
Virtual patching refers to the introduction of third-party software able to “close” flaws in the absence of an update/patch.
- Virtual patching can be applied in both physical and virtual environments, and can be either agent-based or agentless
- Virtual patching is not an alternative to patching, but it can be a viable solution when vendor support reaches end of life (e.g. Windows XP and Windows 2000/2003)
- Virtual patching covers security needs, not compatibility ones
- Virtual patching can be used as a supporting technology to cover complex vulnerabilities that depend on OS/SW component dependencies
Hardening, my dear
A less frequent but even more important activity is hardening.
Hardening, again, is an activity that tries to address the possible vulnerabilities of a system. The basic idea is that what you do not have cannot harm you, so hardening essentially means turning off, deleting, erasing, uninstalling, blocking or wiping out everything that is not essential to the service provided by a specific server or device.
What does hardening mean?
– Hardening means operating on the configuration parameters of a system to “close” all services that are non-essential to its assigned task, decreasing the attack surface.
In other words, it means checking every service and activity of the system and closing the ones that are not useful for its intended purpose.
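That check can be phrased as a diff against an allow-list: anything running that is not essential should be disabled, and anything essential that is missing deserves attention too. The service names and allow-list below are invented for the sketch:

```python
def hardening_report(running_services, allowed_services):
    """Compare what is running against what the server's purpose
    requires: report services to disable and essentials that are missing."""
    running, allowed = set(running_services), set(allowed_services)
    return {
        "disable": sorted(running - allowed),   # shrink the attack surface
        "missing": sorted(allowed - running),   # essential but not running
    }

report = hardening_report(
    running_services=["sshd", "httpd", "telnetd", "ftpd"],
    allowed_services=["sshd", "httpd"],  # this box is only a web server
)
print(report)  # {'disable': ['ftpd', 'telnetd'], 'missing': []}
```

The hard part, as discussed below, is not the diff itself but building a correct allow-list in the first place.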
What should I work on?
Hardening is something we should do on everything. If a service, a right or a tool is not essential, it could harm the system and therefore should be removed.
This holds for everything, from router protocols to OS services.
Basically everything that is subject to patching is subject to hardening.
All that is subject to configuration and is software/firmware based:
- Server/client operating systems (Linux, Microsoft Windows, Unix, Android, iOS, MacOS…)
- Appliance OSes and networking device firmware/OS
- Applications (SQL databases, web servers, CRM, mail, videoconferencing…)
- Virtualization platforms (VMware, Hyper-V, VirtualBox…)
- Drivers, middleware and managed components (SCADA, ICS…)
The reasons behind hardening are actually related to two aspects: security and performance. For once, they go together: getting rid of useless services gives the system a wider set of resources for its intended purpose.
So hardening is good for security and also good for performance.
Hardening is essentially a configuration activity that must be performed to allow the optimal functioning of a platform; it has repercussions in terms of both performance and security.
• What does hardening mean in practice:
– Allow operating systems, software and services to carry out only the intended operations/services
– Close all non-essential services and operations
– Ensure that correct access procedures are followed by users, applications and services on critical resources
– Restrict or monitor non-essential activities of applications and services
Hardening, then, is a great idea, but it comes with a big problem: it is really hard to do.
Hardening a system requires a methodical and thorough analysis:
- Every aspect of the operating system/application in question should be taken into account, determining:
- which services are essential and which are not
- which characteristics of the essential services are allowed and which should be blocked
- which configuration parameters must be modified
- what the chain of objects/services/applications called by the service/application is
Determining which services are essential and which are not is no easy task since, sometimes, a service that seems useless is actually used for hidden background tasks. Sometimes the mere presence of a service is checked for even if it is not used, and sometimes software engineers and developers simply got it wrong, making useless calls which, when stopped, make the system unpredictable.
Chains of services are even harder to determine since, in the absence of documentation, a service may be called only on specific occasions, without a direct, clear cause-and-effect link.
Moreover, in the case of an OS:
All applications (standard, legacy, running, installed but not running) that belong to the operating system in question must be considered:
- For each application it is necessary to define whether or not it is allowed to run
- For each application that is allowed to run, it is necessary to define the list of allowed and disallowed characteristics, as well as its scope and execution environment
It is then necessary to change the configuration parameters of every application to obtain the required results.
Although it may seem odd, even applications that are not running can pose risks. Dormant services can be woken up by malicious software after an infection; typical examples are SMTP and web services that can be partially or totally activated by botnet malware. The same stands for encryption services used by some ransomware, and so on.
Alas, all this requires a great deal of knowledge.
• Hardening is difficult to do because:
– It requires detailed knowledge of the application software environment
– It requires a thorough knowledge of all the connections between the various components
– It requires extremely precise control of all the installed software, accesses and utilities
– Not all configurations can be reached through a GUI or via CLI commands
– Sometimes writing code is required
But there are solutions on the market that help you get there with an easier approach. What is required, as usual, is to know what you want and why; that is something no third-party software can provide.