Cloud Management 2: Vulnerability Scanning or How to Prevent Security Threats in the Public Cloud

Vulnerability scanning or how to prevent security threats in the public cloud | ORBIT Cloud Encyclopedia

In the previous episode we compared patch management in an on-premise environment with a cloud environment. In this article, we'll look at security in the cloud, focusing on vulnerability scanning, patching and updating software and libraries, and continuously monitoring the health of our solution.

Cybersecurity is a hot topic and will continue to be. Just as we know the principles of occupational health and safety (OHS) and fire protection (FP), we should also know the principles of cybersecurity.

Cybersecurity today is no longer just about firewalls and antivirus, but also about setting up processes and rules and responding to current threats. It is not just an IT matter; it also concerns the behaviour of employees and suppliers. We all share responsibility for data security, not only at work but also in private life. So what threats are lurking?

1) Incorrectly configured resources

Misconfiguration of cloud services can lead to many vulnerabilities. An example is poorly set up access to cloud services or misconfiguration of firewalls.

Incorrect configuration of access rules in an S3 bucket

Quite regularly, the media report leaks of sensitive data caused by misconfigured access rules on an S3 bucket (a data store in AWS). Sometimes it is outright negligence, with the bucket left publicly accessible. More often, however, the access rules are simply too broad (e.g., any AWS account can access the bucket).

Examples of known incidents:

  1. Capital One (July 2019): An attacker stole the personal and financial information of more than 100 million customers. The data was stored in S3; the attacker reached it through a misconfigured firewall that exposed credentials with access to the buckets.
  2. Uber (2016): A misconfigured Amazon S3 bucket caused a data breach that affected more than 57 million customers and drivers and led to the disclosure of personal data.
  3. Verizon (July 2017): A misconfigured S3 bucket operated by a third party caused the personal data of 6 million customers to be leaked. The incident involved the exposure of data such as customer names, addresses and identification numbers.
  4. Dow Jones (July 2017): Due to a misconfigured S3 bucket, the company accidentally exposed the personal information of more than 2.2 million of its customers.
  5. Accenture (September 2017): The company accidentally left four S3 buckets publicly accessible, resulting in the exposure of sensitive data, including company passwords and system access credentials.

You can see other known cases in these articles.
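
Misconfigurations like these can be caught by automated checks. A minimal sketch in Python, assuming a bucket policy in the standard IAM policy JSON shape; the function and the sample policy are illustrative:

```python
# Illustrative check: find bucket-policy statements that grant access
# to everyone (Principal "*"), the classic public-bucket misconfiguration.

def public_statements(policy: dict) -> list:
    """Return policy statements that allow access to any principal."""
    risky = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_everyone = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_everyone:
            risky.append(stmt)
    return risky

# Example: a policy that makes the whole bucket world-readable.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::my-bucket/*"}
    ],
}
print(len(public_statements(policy)))  # 1 risky statement found
```

In practice you would feed this kind of check the real policy documents pulled via the cloud API, or simply rely on the platform's own tools (e.g. S3 Block Public Access).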

Firewall misconfiguration


Although cloud environments enable network microsegmentation using AWS Security Groups or Azure Network Security Groups, firewalls are often left completely open to the entire internet (or "just" on important ports such as 22 or 3389).

Comrades from not-so-friendly countries are just waiting to connect to your server. Go ahead: create a publicly accessible server and let it run for an hour, then check the log for failed connection attempts. My guess is you will see thousands of them. Bots lurk on the internet, ready to crack any accessible system.

As a general rule, no system should be publicly accessible (i.e. it should not have a public IP address) unless absolutely necessary. And if it is necessary, access should be restricted to known IP addresses.
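
This rule can be enforced by regularly scanning firewall configurations. A minimal sketch in Python, assuming ingress rules shaped like the output of EC2 DescribeSecurityGroups; the function name and sample rules are illustrative:

```python
# Illustrative check: flag ingress rules that expose sensitive ports
# (SSH 22, RDP 3389) to the whole internet (0.0.0.0/0).

SENSITIVE_PORTS = {22, 3389}

def exposed_rules(ip_permissions: list) -> list:
    """Return ingress rules open to 0.0.0.0/0 on a sensitive port."""
    flagged = []
    for rule in ip_permissions:
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        from_port = rule.get("FromPort")
        to_port = rule.get("ToPort", from_port)
        # Rules with no port range (all traffic) count as sensitive too.
        hits_sensitive = from_port is None or any(
            from_port <= p <= to_port for p in SENSITIVE_PORTS
        )
        if open_to_world and hits_sensitive:
            flagged.append(rule)
    return flagged

rules = [
    {"FromPort": 22, "ToPort": 22, "IpProtocol": "tcp",
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # SSH open to the world
    {"FromPort": 443, "ToPort": 443, "IpProtocol": "tcp",
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # HTTPS, not flagged here
]
print(len(exposed_rules(rules)))  # 1 rule flagged
```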

Cloud platforms offer us (often for free) tools and recommendations on how to secure our systems in the cloud. But it is up to us to know about them and to implement them.

All the big clouds define some form of shared responsibility model, in which they take responsibility for the security of the cloud itself, while the customer is responsible for the configuration of the cloud and the applications that run in it (see 8 principles to ensure security in the cloud).

AWS shared responsibility model (source: https://aws.amazon.com/compliance/shared-responsibility-model/)

We describe how to verify your cloud settings in the section Cloud Configuration Scanning below.

2) Bad authentication and authorization settings

Once an attacker gains access to a user's account, they can also gain access to sensitive data and resources. Weak passwords, insufficient authentication and authorization, and other factors are usually to blame.

As an absolute baseline, multifactor authentication should be required for logging into the cloud: in addition to your name and password (or access keys), you need a one-time password (OTP) generated by a mobile app or sent to you by email or SMS.
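
The app-generated OTP is typically a TOTP (RFC 6238). For illustration, a complete implementation fits in a few lines of standard-library Python:

```python
# Minimal TOTP (RFC 6238, SHA-1) using only the standard library,
# to show how the one-time password in MFA is derived.
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """One-time password from a shared secret for the current time window."""
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)                      # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 reference secret; at t = 59 s the 6-digit code is 287082
print(totp(b"12345678901234567890", at=59))
```

The server and the authenticator app share the secret and compute the same code independently, so nothing secret travels over the network at login time.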

You can also allow access to the cloud only from certain IP addresses.

3) Bugs in software and libraries

Attackers can also exploit known vulnerabilities in outdated software and libraries. These vulnerabilities can stem from poor code implementation, flaws in software libraries, or misconfiguration.

We cover how to approach known software bugs in the separate section Scanning for vulnerabilities in applications.

4) Insufficient data security

Inadequate data security can lead to the theft of sensitive information. Poor quality encryption, inadequate data storage or inadequate access controls are often to blame.

Theoretically, I should convince you that the only correct concept for better security of stored data is to minimize users' access rights. Personally, however, I prefer a different approach: let's give users the maximum possible rights so they can use the cloud meaningfully and independently, provided, of course, that all users are properly trained and aware of the risks associated with the cloud environment.

This can be elegantly addressed by creating multiple cloud environments for individual applications/teams. Everyone then "plays on their own turf" and if they "break something" it doesn't affect the others.

If even the cloud administrator should not have access to some sensitive data, we must encrypt the data on the client side (i.e. in the application) and store the encryption keys outside the cloud itself (e.g. in an external HSM).

5) DDoS attacks

Distributed Denial of Service (DDoS) attacks are also common in the public cloud. Attackers use many devices to send a large number of legitimate requests to a target server, which can cause application unavailability and service outage.

Using the cloud to minimize the impact of a DDoS attack is still the way to go for several reasons:

  • Cloud platforms have "rich" experience with DDoS and offer services to help protect (AWS Shield, Azure DDoS Protection, Google Cloud Armor).
  • Cloud platforms have massive internet connectivity that cannot be overloaded as easily as the internet connection of an on-premise datacentre.
  • Automatic application scaling (autoscaling) can be configured in such a way that it can absorb an increased number of requests until the attack stops.

6) Bad API management

APIs (Application Programming Interfaces) are an increasingly important part of modern applications and systems, so it is important to ensure their security.

Various security issues can arise when managing APIs - for example, improper authentication and authorization can allow an attacker to access API functions to which they are not authorized. Unauthorized access can also occur if an attacker obtains access data from an authorised user or finds a vulnerability in the API that can be exploited.

To create APIs in the cloud, we should use dedicated services (AWS API Gateway, Azure API Management, GCP API Gateway). These should be integrated with other services for strong authentication and authorization (AWS Cognito, Azure AD). We should set proper rate limits (the number of requests in a certain period of time so that users can't bombard our API). Alternatively, we can also use a Web Application Firewall to protect against application layer attacks.
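
To illustrate the rate-limiting idea, here is a minimal token-bucket limiter in Python. This is a sketch of the mechanism, not what the managed gateways actually run; the rate and capacity values are arbitrary:

```python
# Token-bucket rate limiter: allow short bursts, cap the average rate.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=0.1, capacity=5)     # burst of 5, slow refill
results = [bucket.allow() for _ in range(8)]
print(results.count(True))  # the first 5 requests pass, the rest are rejected
```

API gateways apply the same principle per API key or per client IP, which is what stops a single caller from bombarding the API.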

Security scanning

Vulnerabilities in the cloud can be divided into two main categories: cloud configuration issues and vulnerabilities in the software we run in the cloud. It is important to stress that proper cloud protection requires a combination of preventive measures in both areas.

Cloud Configuration Scanning

Public clouds have mechanisms (AWS Service Control Policies, Azure Policy) that can completely prohibit certain activities. For example, they can prevent a user from creating a subnet accessible from the internet, and therefore from creating a server accessible from the internet.
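
As an illustration, a minimal Service Control Policy that prevents anyone in a member account from switching off CloudTrail audit logging could look like this (a sketch following the standard IAM policy JSON format):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail"
      ],
      "Resource": "*"
    }
  ]
}
```

Attached at the organization level, this denial cannot be overridden by administrators inside the individual accounts.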

There are even pre-prepared sets of policies that help you comply with, for example, ISO standards or security benchmarks.

However, there may be cases where we do not (or cannot) explicitly disable something, but still need the configuration to meet certain requirements, such as PCI DSS. In AWS we use the AWS Config service, which allows us to monitor the configuration of our AWS environment and its changes.

AWS Config & Change Analysis

AWS Config identifies configuration changes that might indicate a security risk or a rule violation, so we can quickly spot and respond to problems. It can raise alerts or trigger actions that automatically remediate the issue.

In addition, AWS Config helps with auditing: it stores the configuration history of your environment, so you can review past configuration changes and check who made them and when.

Integrated with other AWS services, AWS Config dramatically improves monitoring of your AWS environment and identifies potential security risks and configuration issues.

In the Azure world, the analogous tool is Change Analysis, which tracks configuration changes to supported resources.

CVE (Common Vulnerabilities and Exposures)

CVE is a program to identify, describe and record publicly known cyber vulnerabilities. Each discovered vulnerability is classified according to its severity (critical, high, medium, low, none) and stored in the CVE database (at the time of writing, it had 203,653 entries).
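
The severity labels map to CVSS v3 base scores. A simple lookup, using the qualitative rating scale from the CVSS v3 specification:

```python
# Map a CVSS v3 base score (0.0-10.0) to its qualitative severity rating.

def cvss_severity(score):
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS score must be between 0.0 and 10.0")
    if score == 0.0:
        return "none"
    if score <= 3.9:
        return "low"
    if score <= 6.9:
        return "medium"
    if score <= 8.9:
        return "high"
    return "critical"       # 9.0-10.0

print(cvss_severity(9.8))   # critical
```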

CVE database: https://www.cve.org/

The CVE database was created to standardize vulnerability reporting and provide users with an easy way to identify potential security threats and minimise risks.

In recent years, alternatives to the CVE database have emerged that attempt to address some of its shortcomings (e.g., the problem with the speed of vulnerability disclosure). However, the CVE database still remains a key tool for security experts and organisations worldwide.

Scanning for vulnerabilities in applications

There are a number of tools and solutions for vulnerability scanning. It's just a matter of picking one and starting to use it (Vulnerability Scanning Tools, Orca Cloud Security, Amazon Inspector, Azure Defender for Cloud and others).

Typically, security scanning is done at the beginning, when the application code is written. Then perhaps the container is scanned when it is uploaded to the repository, but that's about it. Who would bother with regular security scans? After all, we have a perimeter firewall, so no one can get to us (besides, we are busy enough already).

Here I would like to point out that hackers are making billions of dollars worldwide. So they are very motivated to continue to improve. We, on the other hand, should be equally motivated to use all available means to reduce the attack surface.

It is not enough to update the OS once every three months because some standard requires it. We should know, on an ongoing basis, the security state of our systems, the platforms we use, and our own applications.

Sample AWS vulnerability scan output

Cloud environments can easily be configured to run vulnerability scans at regular intervals (once a day, once a week). If a new vulnerability is discovered that meets our defined level (e.g. high/critical), we receive an automatic notification via email, Slack or another channel.
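
The notification step can be as simple as filtering scan results by severity. A hypothetical sketch in Python; the findings structure and CVE identifiers are made up, as real scanners each have their own output format:

```python
# Hypothetical sketch: keep only findings at our defined alert levels
# and format a notification message for email/Slack.

ALERT_LEVELS = {"high", "critical"}

def build_alert(findings):
    """Return a notification message, or None if nothing reaches our level."""
    hits = [f for f in findings if f["severity"] in ALERT_LEVELS]
    if not hits:
        return None
    lines = [f"{f['severity'].upper()}: {f['id']} in {f['package']}"
             for f in hits]
    return "New vulnerabilities found:\n" + "\n".join(lines)

# Made-up findings for illustration only
findings = [
    {"id": "CVE-2023-0001", "package": "openssl", "severity": "critical"},
    {"id": "CVE-2023-0002", "package": "libxml2", "severity": "low"},
]
print(build_alert(findings))
```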

We need to remove a serious vulnerability as quickly, safely and easily as possible. Usually this requires updating the OS, platform or application and redeploying the application.

Example of removing vulnerabilities by upgrading a Docker base image

CI/CD + Infra as Code

If we have a correctly set up CI/CD pipeline and can deploy new versions without downtime, patching the application is not a challenge.

If there is:

  • a vulnerability in the cloud configuration, we modify the IaC scripts,
  • a vulnerability in the operating system, we simply update the VM or Docker base image in the IaC scripts to an OS version in which the vulnerability is fixed,
  • a vulnerability in application libraries, we update the libraries to new versions and rebuild the application,
  • a vulnerability in the application code itself, we make the appropriate code changes and likewise rebuild the whole application.
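
For the base-image case, the fix can be a one-line change in the Dockerfile, after which the pipeline rebuilds and redeploys the application (image tags and file paths here are illustrative):

```dockerfile
# Before: base image containing a known vulnerability (illustrative tag)
# FROM python:3.11.2-slim

# After: bump to a patched base image; CI/CD rebuilds and redeploys
FROM python:3.11.9-slim
COPY . /app
RUN pip install -r /app/requirements.txt
CMD ["python", "/app/main.py"]
```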

After each such intervention, we need to be sure that our application still works and that we haven't introduced any new bugs.

The diagram below shows the minimum steps a CI/CD pipeline should perform to successfully deploy a new version of the application. Manual approval before deploying to production is optional; however, I have personally never seen a project where deployment was completely automated.

Example of a CI/CD pipeline for deploying containerized applications, triggered by a commit to a Git repository or by defined Git tags.
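
Such a pipeline could be sketched, for example, as a GitHub Actions workflow; the job names, make targets and registry URL are illustrative assumptions:

```yaml
name: build-and-deploy
on:
  push:
    tags: ["v*"]                # triggered by defined Git tags

jobs:
  pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.ref_name }} .
      - name: Scan image for vulnerabilities   # fail the build on critical findings
        run: make scan
      - name: Push image
        run: docker push registry.example.com/myapp:${{ github.ref_name }}
      - name: Deploy to production             # optionally gated by manual approval
        run: make deploy
```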

Conclusion on vulnerability scanning

We can basically repeat the process described above over and over, as new vulnerabilities keep appearing in our solution. The effort invested in the CI/CD pipeline will therefore pay off sooner or later.

Without automation, new security processes would mean significantly more work for administrators. In our case, however, it's just a matter of updating a few scripts and committing to Git; the CI/CD pipeline does the rest for us. The important thing is knowing when to do these updates.

Vulnerability scanning is an essential step to ensure the security of your systems and data in the public cloud. This process identifies potential vulnerabilities in your systems and allows you to take measures to eliminate them.

About the author
Petros Georgiadis

Cloud Consultant & Architect | LinkedIn

Petros is stirring up the backwaters in IT infrastructure management. His goal is to show that adopting and implementing DevOps and automation principles makes IT management easier. 

Technical knowledge: AWS, Infrastructure as Code, DevOps