Securing Data on the Cloud Requires Focused Privileged Access Strategies
Misuse of privileged access has become one of the primary culprits behind cloud data breaches. Despite significant efforts by leading cloud providers to create awareness of the shared responsibility model, leaks continue to grow in both number and size. This warrants revisiting and crafting a focused strategy for privileged access management (PAM) on cloud data repositories, cloud databases as a service, Elasticsearch databases and cloud file systems.
This article aims to explain the design principles and germane solutions for an effective and scalable PAM strategy to secure data on cloud data stores.
Design principle No. 1: Identity is the new threat vector
Employees, contractors and collaborators represent human identities, while VMs (virtual machines), database services, Kubernetes clusters and serverless functions represent silicon identities. Privileged roles and permissions can be assigned to any and all of these identities. Therefore, the principles of PAM and governance apply to both. Discovering identities with privileged access on cloud data stores is paramount to an effective PAM strategy for cloud data stores.
Solution: Human identities can readily be determined from directories, databases or HR systems. Continuous integration (CI) and continuous delivery (CD) systems, DevOps tools, cloud workloads, and cloud data stores are the conduits/interfaces to which silicon identities are assigned. Continuously scanning and parsing role or permission assignment objects provides detailed information on silicon identities.
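As a rough illustration, the scanning step above could separate human from silicon identities by classifying the principal type on each role assignment. This is a minimal sketch; the record shape and type names are hypothetical, not any provider's actual API response:

```python
# Sketch: classify role-assignment records into human vs. silicon identities.
# The principal types below are illustrative assumptions.
SILICON_PRINCIPAL_TYPES = {"vm", "function", "k8s_service_account", "db_service"}

def classify_identities(assignments):
    """Split role assignments into human and silicon identity sets."""
    humans, silicon = set(), set()
    for a in assignments:
        target = silicon if a["principal_type"] in SILICON_PRINCIPAL_TYPES else humans
        target.add(a["principal_id"])
    return humans, silicon

assignments = [
    {"principal_id": "alice@example.com", "principal_type": "user", "role": "DataReader"},
    {"principal_id": "vm-analytics-01", "principal_type": "vm", "role": "DataAdmin"},
    {"principal_id": "etl-lambda", "principal_type": "function", "role": "DataWriter"},
]
humans, silicon = classify_identities(assignments)
```

In practice the assignment records would be pulled continuously from the cloud provider's IAM APIs rather than a static list.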
Design principle No. 2: Identify all viable privileged access patterns
Assignment of privileged access via policies, roles and access control lists (ACLs) can be done in various ways:
• Native identity and access management (IAM) assignments: Roles and permissions offered by native cloud security frameworks (AWS IAM, Azure RBAC, etc.) can be attached/assigned to multiple cloud services, including VMs (a pattern exploited in a large-scale breach), serverless functions, and local and federated user accounts (a major cause of failed audits due to missing visibility).
Solution: Continuously scan these security objects (roles, policies, ACLs, etc.) and deduce the permissions they grant to human and service accounts as well as to cloud workloads. The scans provide insight into all possible privileged paths to cloud data stores, both at a point in time and continuously. Further, real-time alerting/remediation should be added to prevent privileged access elevations on extremely sensitive data stores.
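A simplified sketch of such a scan, parsing an AWS-IAM-style policy document and flagging statements that grant privileged data-store actions. The action names follow AWS conventions, but the set of actions treated as privileged is an assumption for the example:

```python
# Assumed privileged-action set for illustration only; a real scanner
# would maintain a far larger, per-service catalog.
PRIVILEGED_ACTIONS = {"s3:PutBucketPolicy", "s3:DeleteBucket", "s3:*", "*"}

def privileged_statements(policy):
    """Return Allow statements that grant any privileged action."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):  # IAM allows a single action as a string
            actions = [actions]
        if any(a in PRIVILEGED_ACTIONS for a in actions):
            flagged.append(stmt)
    return flagged

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
        {"Effect": "Allow", "Action": ["s3:*"], "Resource": "arn:aws:s3:::pii-bucket"},
    ],
}
```

Running such a check over every attached policy, continuously, is what surfaces the privileged paths described above.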
• Resource policies: Resource policies, as the name suggests, are defined at individual data stores. These policy assignments can enable access across cloud subscriptions or tenants, for both anonymous and authenticated users.
Solution: Continuously scan the resource policies, which entails parsing large sets of policy documents and creating a permissions matrix. The matrix provides deep access visibility on the data stores for any and all types of identities. The matrix should further be augmented with real-time monitoring.
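The permissions matrix described above could be folded together from scanned resource policies roughly as follows. The policy shape is deliberately simplified and hypothetical; the point is the (identity, data store) → permissions structure:

```python
# Sketch: build a permissions matrix keyed by (identity, data store)
# from simplified resource-policy documents.
from collections import defaultdict

def build_matrix(resource_policies):
    matrix = defaultdict(set)
    for store, policy in resource_policies.items():
        for stmt in policy.get("Statement", []):
            if stmt.get("Effect") != "Allow":
                continue
            principals = stmt.get("Principal", [])
            if principals == "*":  # wildcard principal = anonymous access
                principals = ["<anonymous>"]
            for p in principals:
                matrix[(p, store)].update(stmt.get("Action", []))
    return dict(matrix)

policies = {
    "orders-db": {"Statement": [
        {"Effect": "Allow", "Principal": ["svc-etl"], "Action": ["read", "write"]},
    ]},
    "public-bucket": {"Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": ["read"]},
    ]},
}
matrix = build_matrix(policies)
```

Note how the wildcard principal immediately surfaces anonymous access to a data store, which is exactly the kind of entry the real-time monitoring should alert on.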
• Access control lists (ACLs): Legacy security models on a few data store types (e.g., S3 buckets) support the use of ACLs. Misconfigured ACLs can expose the data in such buckets and often make them susceptible to data leaks/breaches.
Solution: It is strongly recommended to avoid using ACLs; where they are needed, they should be continuously monitored and automatically remediated for misconfigurations.
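The core of such an ACL check can be small. This sketch uses the real AWS S3 group-grantee URIs for "AllUsers" and "AuthenticatedUsers"; the surrounding scan/remediate wiring is left out:

```python
# The two S3 group URIs that make a grant public or near-public.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_publicly_exposed(acl_grants):
    """True if any ACL grant exposes the bucket to all or any authenticated users."""
    return any(
        g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
        and g.get("Permission") in {"READ", "WRITE", "FULL_CONTROL"}
        for g in acl_grants
    )

grants = [{"Grantee": {"URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
           "Permission": "READ"}]
```

A remediation step would then rewrite or remove the offending grant automatically rather than wait for a human review.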
Design principle No. 3: Discard rudimentary access assignment methods — they won’t scale
Static access assignments that grant humans elevated permissions lead to residual, long-term privileged access on cloud data stores. The alternative approach of maintaining distinct nonprivileged and privileged user accounts still results in long-term access, along with account sprawl and increased risk exposure.
Solution: An effective measure is to elevate access just in time, for a predefined duration. Privileged roles and just-in-time user accounts are two approaches that solve this. Privileged roles can be built by collating the fine-grained policies/permissions offered by native cloud providers.
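The just-in-time model above can be sketched as a grant that carries its own expiry, so no identity retains standing privileged access. All class and role names here are illustrative:

```python
# Sketch: just-in-time elevation with a predefined duration.
import time

class JitElevator:
    def __init__(self):
        self._grants = {}  # identity -> (role, expiry as epoch seconds)

    def elevate(self, identity, role, duration_s):
        """Grant a privileged role that lapses after duration_s seconds."""
        self._grants[identity] = (role, time.time() + duration_s)

    def active_role(self, identity):
        """Return the live privileged role, or None; expired grants are purged."""
        grant = self._grants.get(identity)
        if grant is None or time.time() >= grant[1]:
            self._grants.pop(identity, None)  # residual access is revoked
            return None
        return grant[0]

jit = JitElevator()
jit.elevate("alice@example.com", "DataAdmin", duration_s=900)
```

In a real deployment the grant and revocation would call the cloud provider's IAM APIs; the essential property is that expiry, not a human, removes the access.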
This article originally appeared on forbes.com.