Five Key Steps for Database Security in the Cloud Age
As business has become more digital, data has become the most valuable asset of many organizations. But protecting that data has also become much more complicated as organizations increasingly migrate it to a mix of public and private cloud infrastructures, such as Microsoft Azure, Amazon Web Services, and Google Cloud. With most businesses today operating in a multi-cloud environment, it’s no longer possible to simply lock up precious data in the proverbial vault and guard the perimeter.
Mitigating Security Risks in Complex Cloud Environments
To protect their valuable assets amid this new reality, organizations must take a data-centric approach that focuses on protecting data no matter where it resides. Here are five key approaches to make that happen:
- Define standards, security, and compliance policies. Cloud database vendors rarely address more than the most obvious weaknesses in the out-of-the-box installations of their platforms. When vendors patch vulnerabilities or ship new versions of software, an organization needs to review its policies to ensure they account for new and updated configurations and settings. Organizations should ask themselves: How often are policies updated, and what should trigger a policy change? How will exceptions be handled? Which teams need to be involved in reviewing suggested policy changes, and how will the process be communicated?
- Run vulnerability assessments. Since databases are often an organization’s largest repository of sensitive information, they should be evaluated not only to search for potential vulnerabilities but also to ensure they fulfill any relevant regulatory compliance requirements. To demonstrate effective controls surrounding sensitive data, organizations should run a baseline assessment and establish a practice of continuous assessment to ensure issues are remediated in a timely manner. The U.S. Department of Homeland Security’s Continuous Diagnostics and Mitigation standards for database security are a great model to follow for this process.
- Understand user privilege and access. As people change roles or leave an organization, user privileges are often not kept up-to-date, and, as a result, organizations lack a full understanding of who has access to sensitive data. Fortunately, many database-scanning technologies today can identify not only vulnerabilities and misconfigurations but also users, roles, and privileges. The only way to establish meaningful controls that track how users interact with the data, or to capture an audit trail for use in a breach investigation, is to know who has access to what data and why they’ve been granted that access.
- Use data analytics to mitigate risks. Remediating high-risk vulnerabilities and misconfigurations within your databases not only reduces your risk of compromise, but it also narrows the scope of any required compensating controls you might need, such as exploit monitoring. Using data analytics to associate risk scores with the findings from your vulnerability assessment can help identify your most exposed systems or groups so you can focus your efforts where you can make the most impact (i.e., reduce the most risk).
- Respond to policy violations in real time. For vulnerabilities that cannot be remediated or patched in a timely manner, real-time database activity monitoring (DAM) can be an appropriate compensating control. DAM solutions can alert operations center personnel when a security violation is identified so they can take corrective action. When suspicious activity is detected, many organizations also feed these alerts into a security information and event management (SIEM) or network management tool for further investigation and remediation.
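The baseline-plus-continuous assessment practice described in step two can be sketched as a simple diff between two scans. This is an illustrative sketch only; the finding identifiers and the `diff_assessments` helper are hypothetical, not part of any particular scanning product:

```python
# Sketch of the continuous-assessment step: compare a baseline vulnerability
# scan against the latest scan to show what was remediated, what is newly
# introduced, and what remains open. Finding IDs below are illustrative.

def diff_assessments(baseline: set, current: set) -> dict:
    """Compare two sets of finding identifiers from successive scans."""
    return {
        "remediated": baseline - current,   # fixed since the baseline scan
        "new": current - baseline,          # introduced since the baseline
        "outstanding": baseline & current,  # still open; track time-to-remediate
    }

baseline = {"CVE-2023-0001", "weak-password-policy", "public-listener"}
current = {"CVE-2023-0001", "unencrypted-backup"}

report = diff_assessments(baseline, current)
print(report["new"])  # findings introduced since the baseline
```

Tracking the "outstanding" set over successive runs is what lets an organization demonstrate that issues are being remediated in a timely manner.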
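The risk-scoring approach in step four can be sketched in a few lines: weight each assessment finding by severity, total the weights per system, and rank the systems so remediation effort goes where it removes the most risk. The severity weights and findings below are illustrative assumptions, not a standard scoring scheme:

```python
# Minimal risk-scoring sketch: aggregate severity-weighted findings per
# system and sort descending, so the most exposed systems surface first.

SEVERITY_WEIGHT = {"critical": 10, "high": 7, "medium": 4, "low": 1}

def rank_systems(findings):
    """findings: list of (system, severity) pairs -> systems sorted by total risk."""
    scores = {}
    for system, severity in findings:
        scores[system] = scores.get(system, 0) + SEVERITY_WEIGHT[severity]
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

findings = [
    ("orders-db", "critical"),
    ("orders-db", "high"),
    ("hr-db", "medium"),
    ("analytics-db", "low"),
    ("hr-db", "high"),
]

print(rank_systems(findings))  # orders-db (17) ranks above hr-db (11)
```

In practice the weights might also factor in data sensitivity or exposure, but even this simple ranking shows where fixing one system reduces more risk than fixing three others.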
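The real-time monitoring in step five boils down to matching activity events against policy rules and emitting alerts that can be forwarded to operations staff or a SIEM. A hedged sketch, in which the event fields, rule structure, and `evaluate` function are all hypothetical rather than any DAM product's actual API:

```python
# Sketch of a database-activity-monitoring check: compare each activity
# event against simple policy rules and produce alert records suitable for
# forwarding to a SIEM. Rules and events below are invented for illustration.

def evaluate(event, rules):
    """Return one alert record for every rule the event violates."""
    return [
        {"rule": r["name"], "user": event["user"], "action": event["action"]}
        for r in rules
        if event["action"] in r["blocked_actions"]
        and event["object"] in r["protected_objects"]
    ]

rules = [{
    "name": "no-bulk-export-of-pii",
    "blocked_actions": {"SELECT *", "EXPORT"},
    "protected_objects": {"customers", "payments"},
}]

event = {"user": "svc_report", "action": "EXPORT", "object": "customers"}
print(evaluate(event, rules))  # one alert for 'no-bulk-export-of-pii'
```

A real DAM deployment evaluates rules like this against a live stream of database traffic, which is what makes it a workable compensating control while a vulnerability awaits patching.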
Changing the Way We Think About Security
Data is an organization’s most precious asset but, with more of it residing in public and private clouds, we can no longer think of a database as something on-premises that we can protect with perimeter and network security measures. By establishing the right policies, scanning for vulnerabilities, controlling user privilege, and implementing risk mitigation and real-time monitoring, organizations can create a data-centric security practice that protects their valuable data no matter where it is.
This article originally appeared on dbta.com.
Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure, delivering tools for middleware management, monitoring, tracking, and analytics that detect anomalies, accelerate decisions, answer business-centric questions, and provide actionable guidance for decision-makers. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS, and ACE/IIB, and also supports RabbitMQ, ActiveMQ, Blockchain, IoT, DataPower, MFT, IBM Cloud Pak for Integration, and many more.
The Nastel i2M Platform provides:
- Secure self-service configuration management with auditing for governance & compliance
- Message management for Application Development, Test, & Support
- Real-time performance monitoring, alerting, and remediation
- Business transaction tracking and IT message tracing
- AIOps and APM
- Automation for CI/CD DevOps
- Analytics for root cause analysis & Management Information (MI)
- Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics