GCP IAM & Resource Manager: Practical Scenarios
Let's be honest: Identity and Access Management (IAM) can feel like a dry, complex topic. However, it's also the source of many common technical problems and a critical area of knowledge for any Google Cloud certification. Instead of focusing on abstract theory, this guide takes a different approach. We will explore IAM and Resource Manager through a series of practical, real-world scenarios. This hands-on method is designed to build an intuitive understanding of how to solve specific business and security challenges, progressing from foundational concepts to advanced, expert-level use cases.
Core Concept: What is an Identity?
Before we look at what someone can do (permissions), we must first understand who or what is asking for access. In GCP, this "who" or "what" is called an Identity. An identity is just a principal that can be authenticated and authorized to use Google Cloud resources.
Think of it like a company building. You can't just walk in; you need an ID badge that the security system recognizes. GCP has two main types of ID badges:
Identities for People (Users and Groups):
User Account: This represents a single person, like sara.rossi@example.com.
Google Group: This represents a collection of users, like data-scientists@example.com. Using groups is the recommended best practice for managing permissions for multiple users.
Identity for Software (Service Account):
Service Account: This is a special identity for an application or a VM, not a person. It's like a robotic keycard. The VM running a nightly script doesn't use a developer's personal password; it uses its own dedicated "Service Account" ID badge.
Types of Recognized Identities
To be used in an IAM policy, an identity must be "known" to Google's authentication system. It doesn't have to belong to your organization, but Google needs a way to verify who it is. Here are the main types:
Google Accounts: This is the most direct way. It includes any personal @gmail.com account or any corporate account managed through Google Workspace (e.g., user@your-company.com).
Google Groups: A collection of the Google Accounts mentioned above. This is the preferred way to manage users.
Service Accounts: These are native GCP identities for applications.
Federated Identities: This is the "external" part. Through services like Cloud Identity or Workload Identity Federation, you tell Google to trust an external Identity Provider (like Azure AD, Okta, or AWS). When a user from that external system tries to access GCP, Google effectively asks the external provider, "Do you vouch for this person?" If the provider says yes, Google accepts their identity.
In summary, every IAM policy is about connecting one of these identities with a role (a set of permissions) on a specific resource.
The GCP IAM Golden Rule: Resources Hold the Permissions
This is the single most important concept to understand. In GCP, permissions (roles) are never attached directly to an identity when it is created. A role is not a property of a user or service account; it exists only as a binding inside some resource's IAM policy.
Instead, the process is always:
You create an identity (e.g., my-service-account@...). At this point, it is just a name with no permissions.
You go to a resource (e.g., a Storage Bucket).
You modify that resource's IAM policy to grant a role to the identity.
Analogy: The permission is on the guest list at the door of the house (the resource), not printed on the guest's ID card (the identity). This model simplifies permission management by ensuring you only have to look at a resource and its parents (project, folder) to understand who can access it.
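The golden rule can be sketched as a tiny conceptual model in Python (invented names; this is a mental model of IAM, not the real API). The point to notice is that the bindings live on the resource object, and the identity is nothing but a string:

```python
# Conceptual model of the GCP IAM golden rule (not the real API):
# the policy (role bindings) is stored on the resource, never on the identity.

class Resource:
    def __init__(self, name):
        self.name = name
        self.bindings = {}  # role -> set of identities: this IS the resource's IAM policy

    def add_iam_policy_binding(self, member, role):
        # Granting a role = modifying THIS resource's policy
        self.bindings.setdefault(role, set()).add(member)

    def has_role(self, member, role):
        return member in self.bindings.get(role, set())

# An identity is just a name; it carries no permissions of its own.
sa = "serviceAccount:my-service-account@project.iam.gserviceaccount.com"

bucket = Resource("my-bucket")
print(bucket.has_role(sa, "roles/storage.objectViewer"))  # False: a new identity has nothing

bucket.add_iam_policy_binding(sa, "roles/storage.objectViewer")
print(bucket.has_role(sa, "roles/storage.objectViewer"))  # True: the grant lives on the bucket
```

To answer "who can access this bucket?", you only ever inspect the bucket's own policy (and, as later scenarios show, the policies of its parents in the hierarchy).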
Scenario 1: The "Hands-Off" Stakeholder
Need/Requirement: A project manager needs to monitor the budget and spending for a new mobile app project. They must be able to view all billing reports and cost breakdowns, but for compliance and security reasons, they must have absolutely no ability to change, stop, or delete any technical resources (like VMs, databases, or storage buckets).
GCP Solution: Grant the project manager the IAM role of Billing Account Viewer (roles/billing.viewer) on the specific Billing Account. This gives them read-only access to billing information without any permissions to view or alter the technical resources themselves.
Key Concepts Demonstrated:
Principle of Least Privilege: Granting the absolute minimum set of permissions required for a user to perform their job.
Separation of Roles: Differentiating between technical roles (like Project Editor or Owner) and financial/administrative roles. A user's job function dictates their permissions.
Predefined Roles: Using a specific, out-of-the-box role tailored to a common business need, ensuring security and simplicity.
Example CLI Command:
gcloud billing accounts add-iam-policy-binding 012345-67890A-BCDEF0 \
  --member="user:pm@example.com" \
  --role="roles/billing.viewer"
Scenario 2: The Departmental Sandbox
Need/Requirement: A company wants to foster innovation. The Data Science team needs a "sandbox" environment where they can freely create, modify, and delete any resources for their experiments. However, their activities must be completely contained, and they should not be able to see or touch the resources of the main "Production" environment. A central IT team must retain ultimate control over all environments.
GCP Solution: Use the Resource Manager to create two separate Folders, one named "Production" and one named "Data Science Sandbox," under the Organization node.
The Data Science team (via a Google Group) is granted the Editor role (roles/editor) on the "Data Science Sandbox" folder.
The Production team gets relevant permissions only on the "Production" folder.
The central IT team is granted the Organization Admin role (roles/resourcemanager.organizationAdmin) at the Organization level.
Key Concepts Demonstrated:
Resource Hierarchy: The Organization > Folders > Projects structure is the key to enterprise-level control.
Permission Inheritance: Permissions granted at a higher level (like a Folder) automatically flow down to all the projects within it, simplifying management.
Isolation and Containment: Using the hierarchy to build strong security boundaries between teams and environments.
Example CLI Commands:
# Create the folder under your organization
gcloud resource-manager folders create \
  --display-name="Data Science Sandbox" \
  --organization=123456789012

# Grant the Editor role to a group on the new folder (ID from previous command)
gcloud resource-manager folders add-iam-policy-binding 987654321098 \
  --member="group:data-scientists@example.com" \
  --role="roles/editor"
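Permission inheritance can be sketched conceptually in Python (invented project and folder names; a mental model, not the real Resource Manager API). The effective policy on a project is the union of the bindings on the project and on every ancestor, so checking access means walking up the tree:

```python
# Conceptual sketch of permission inheritance (not the real API):
# a grant on a folder flows down to every project inside it, and nowhere else.

hierarchy = {  # child -> parent
    "projects/experiment-1": "folders/data-science-sandbox",
    "projects/web-frontend": "folders/production",
    "folders/data-science-sandbox": "organizations/123456789012",
    "folders/production": "organizations/123456789012",
}

bindings = {  # resource -> {role: {members}}, i.e. each resource's own IAM policy
    "folders/data-science-sandbox": {"roles/editor": {"group:data-scientists@example.com"}},
    "organizations/123456789012": {
        "roles/resourcemanager.organizationAdmin": {"group:central-it@example.com"}
    },
}

def has_role(member, role, resource):
    node = resource
    while node:
        if member in bindings.get(node, {}).get(role, set()):
            return True
        node = hierarchy.get(node)  # walk up to the parent
    return False

# The sandbox grant flows down into sandbox projects only:
print(has_role("group:data-scientists@example.com", "roles/editor", "projects/experiment-1"))  # True
print(has_role("group:data-scientists@example.com", "roles/editor", "projects/web-frontend"))  # False
```

This is why one folder-level binding is enough for the whole sandbox, while Production stays untouchable for the data scientists.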
Scenario 3: The Automated Nightly Job
Need/Requirement: A developer has created a script that runs every night on a Compute Engine VM. This script needs to read data from a specific Cloud Storage bucket and write a summary into a specific BigQuery table. The script must run automatically without human interaction, and its credentials must be secure and limited only to the exact resources it needs.
GCP Solution (Following the Golden Rule):
Create the Identity: First, create a dedicated Service Account for the script. At this point, it has no permissions.
Grant Permissions on Resource #1: Go to the specific source bucket and edit its IAM policy. Add the new service account as a principal and grant it the Storage Object Viewer role (roles/storage.objectViewer).
Grant Permissions on Resource #2: Go to the specific destination dataset in BigQuery and edit its IAM policy. Add the same service account as a principal and grant it the BigQuery Data Editor role (roles/bigquery.dataEditor).
Attach the Identity: Finally, attach this service account to the Compute Engine VM. The VM now uses this identity to run the script.
Key Concepts Demonstrated:
Service Accounts: A non-human identity for applications, scripts, and VMs, allowing for secure, automated authentication.
Resource-level Permissions: The power of GCP IAM is applying permissions not just at the project level, but to individual resources (one bucket, one dataset, etc.).
Secure Automation: Eliminating the need to embed user credentials or keys in scripts, which is a major security risk.
Example CLI Commands:
# 1. Create the identity (the service account)
gcloud iam service-accounts create nightly-job-sa --display-name="Nightly Job SA"

# 2. Grant permission on Resource #1 (the bucket)
gcloud storage buckets add-iam-policy-binding gs://my-source-bucket \
  --member="serviceAccount:nightly-job-sa@<project-id>.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# 3. Grant permission on Resource #2 (the BigQuery dataset)
# Note: the bq tool is used for dataset-level permissions
bq add-iam-policy-binding \
  --member="serviceAccount:nightly-job-sa@<project-id>.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataEditor" \
  my_project:my_dataset

# 4. Attach the identity to a new VM (the cloud-platform scope lets IAM,
# not legacy access scopes, control what the VM can reach)
gcloud compute instances create my-vm \
  --service-account=nightly-job-sa@<project-id>.iam.gserviceaccount.com \
  --scopes=cloud-platform
Scenario 4: The Secure Developer Workflow
Need/Requirement: A developer needs to test an application on her local laptop. The application is designed to run in a Cloud Function with very specific, limited permissions. To avoid bugs, she needs to ensure her local test environment has the exact same permissions as the production Cloud Function, not the broader permissions of her own user account (e.g., Project Editor).
GCP Solution:
A dedicated Service Account is created for the application, e.g., my-app-sa@....
This service account is granted the minimal required role, e.g., Pub/Sub Publisher on a specific topic. It has no other permissions.
The developer's user account (developer@company.com) is granted the Service Account Token Creator role (roles/iam.serviceAccountTokenCreator) only on that specific service account.
The developer configures her local gcloud SDK to impersonate my-app-sa. When she runs her code locally, it authenticates to GCP as the service account, inheriting its tightly restricted permissions.
Key Concepts Demonstrated:
Service Account Impersonation: A user temporarily "borrowing" the identity of a service account. The user needs the iam.serviceAccounts.getAccessToken permission (included in the Token Creator role) to do this.
High-Fidelity Local Testing: Ensures that the development environment perfectly mirrors the production IAM environment, catching permission-related bugs before deployment.
Privilege Reduction: Even if the developer is a Project Editor, the script she is running is not. This drastically reduces the "blast radius" of a potential bug in the code.
Example CLI Commands:
# Allow a user to impersonate a service account
gcloud iam service-accounts add-iam-policy-binding my-app-sa@<project-id>.iam.gserviceaccount.com \
  --member="user:developer@example.com" \
  --role="roles/iam.serviceAccountTokenCreator"

# Developer runs this on their local machine to assume the identity
gcloud auth application-default login \
  --impersonate-service-account="my-app-sa@<project-id>.iam.gserviceaccount.com"
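The effect of impersonation can be sketched conceptually in Python (invented names; this models the outcome of the token flow, not the real credentials API). Once the developer acts as the service account, her own broad roles are simply not in play:

```python
# Conceptual sketch of service account impersonation (not the real token flow):
# the caller's own roles are irrelevant once they act as the service account;
# only the service account's grants apply.

sa_policies = {  # identity -> roles it has been granted somewhere
    "user:developer@company.com": {"roles/editor"},
    "serviceAccount:my-app-sa": {"roles/pubsub.publisher"},
}
token_creators = {  # service account -> users allowed to impersonate it
    "serviceAccount:my-app-sa": {"user:developer@company.com"},
}

def impersonate(user, service_account):
    if user not in token_creators.get(service_account, set()):
        raise PermissionError("needs roles/iam.serviceAccountTokenCreator on the SA")
    # The short-lived credentials carry ONLY the service account's permissions.
    return sa_policies[service_account]

effective = impersonate("user:developer@company.com", "serviceAccount:my-app-sa")
print("roles/pubsub.publisher" in effective)  # True: the SA's narrow grant
print("roles/editor" in effective)            # False: her broad role is left behind
```

This is exactly why a locally run test fails with the same permission errors the production Cloud Function would hit.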
Scenario 5: The Centralized CI/CD Pipeline
Need/Requirement: An organization maintains a central GCP project (project-cicd-tools) for its CI/CD pipeline (e.g., Jenkins, GitLab Runner, or Cloud Build). This pipeline needs to automatically deploy applications to multiple, separate production projects (project-webapp, project-backend-api, etc.). The pipeline must have permission to manage resources in those projects without having overly broad permissions.
GCP Solution:
A dedicated Service Account is created in the central tools project, e.g., deployer-sa@project-cicd-tools.iam.gserviceaccount.com.
In the target project-webapp, the administrator grants the Cloud Run Admin role to the full email address of that service account.
In the target project-backend-api, the administrator grants the Kubernetes Engine Developer role to the same service account.
The CI/CD pipeline is configured to use this service account to authenticate its deployment jobs.
Key Concepts Demonstrated:
Cross-Project Permissions: An identity (like a Service Account) is global and can be granted roles on resources in any project. The IAM policy on the resource simply needs to reference the identity's unique email.
Centralized Tooling Architecture: A highly common and recommended pattern for managing shared services like CI/CD, monitoring, or security scanning, which avoids duplication and simplifies management.
Scalable Permissions: Prevents the need to create and manage separate credentials for each project, making the entire system more secure and easier to maintain.
Example CLI Command:
# In the TARGET project, grant deploy permissions to the SA from the CI/CD project
gcloud projects add-iam-policy-binding target-project-id \
  --member="serviceAccount:deployer-sa@cicd-project-id.iam.gserviceaccount.com" \
  --role="roles/run.admin"
Key Concept: Using Google Groups with External Users
A common and powerful question is: "Can I add users from outside my company to a Google Group?"
The answer is yes, absolutely. This feature is a cornerstone of secure collaboration in GCP.
Who can you add? You can add any valid Google Account to a group you manage, regardless of their email domain. This includes personal @gmail.com accounts and users from other Google Workspace organizations (e.g., consultant@external-firm.com).
The Prerequisite: The administrator of the Google Group must have the setting "Allow members outside your organization" enabled. This is usually on by default but is a critical check.
Why is this important for GCP? This allows you to grant permissions to a group that you control, but populate it with external members. You manage the permissions in GCP; the external partner manages their own people. This is the foundation for the delegated administration model shown in the next scenario.
Scenario 6: The External Partner
Need/Requirement: Your company (company-a.com) hires an external consulting firm (consulting-b.com) to manage your production Kubernetes clusters. You need to grant their engineering team administrative access to your GKE resources without creating and managing user accounts for them in your own organization.
GCP Solution:
The consulting firm creates and manages a Google Group within their own domain, e.g., gke-admins@consulting-b.com.
In your company's production GCP project, you add gke-admins@consulting-b.com as a new principal.
You grant this group the Kubernetes Engine Admin role (roles/container.admin).
Key Concepts Demonstrated:
Federated Identity Management: Granting permissions to identities that exist entirely outside of your own GCP Organization. The IAM system trusts Google's global identity system to authenticate the user.
Delegated Administration: The consulting firm is now responsible for managing who is in that group. If an employee leaves their firm, they are removed from the group, and their access to your projects is instantly and automatically revoked. This is a massive security and operational benefit.
Business-to-Business Collaboration: This is the standard, secure pattern for enabling collaboration between different companies on GCP.
Example CLI Command:
gcloud projects add-iam-policy-binding your-project-id \
  --member="group:gke-admins@consulting-b.com" \
  --role="roles/container.admin"
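The delegated-administration benefit can be sketched conceptually in Python (invented names; this models IAM's group-membership check, not a real API). Access follows the group, so the partner removing a departing employee revokes access with no change on your side:

```python
# Conceptual sketch of delegated administration (not the real Groups API):
# your project grants a role to the partner's group; the partner controls
# who is in that group.

group_members = {
    "group:gke-admins@consulting-b.com": {
        "user:alice@consulting-b.com",
        "user:bob@consulting-b.com",
    }
}
project_bindings = {"roles/container.admin": {"group:gke-admins@consulting-b.com"}}

def can_administer_gke(user):
    for principal in project_bindings["roles/container.admin"]:
        # A principal matches if it is the user directly or a group containing them.
        if principal == user or user in group_members.get(principal, set()):
            return True
    return False

print(can_administer_gke("user:alice@consulting-b.com"))  # True

# Alice leaves the firm; THEIR admin removes her from THEIR group:
group_members["group:gke-admins@consulting-b.com"].discard("user:alice@consulting-b.com")
print(can_administer_gke("user:alice@consulting-b.com"))  # False, no change in your project
```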
Scenario 7: The Default Service Account Trap
Need/Requirement: A developer deploys a new VM or a 1st Gen Cloud Function. For convenience, they don't specify a service account, accepting the default. The application works, but the security team is concerned. What is the risk and what should be done?
What is a Default Service Account? To reduce initial friction, GCP automatically creates a service account when certain services (most notably Compute Engine and App Engine) are enabled for the first time in a project. This allows a new user to immediately launch a VM or deploy a function that "just works" without needing to manually create an identity first.
The Hidden Problem: The convenience comes at a high security cost. This default service account ([PROJECT_NUMBER]-compute@... or [PROJECT_ID]@appspot.gserviceaccount.com) is granted the highly privileged Editor role on the project. This means any code running on that VM or Function can read, modify, and delete nearly any other resource in the same project.
GCP Solution and Guidelines:
Guideline: Avoid Defaults in Production. The number one rule is to never use default service accounts for production workloads. The Editor role presents too large a security risk if the service is compromised.
Best Practice: Create Dedicated Service Accounts. For any real application, create a new, dedicated service account with a clear name (e.g., invoice-processing-func-sa).
Apply Least Privilege. Grant this new account the absolute minimum permissions it needs to function. If a Cloud Run service only needs to read from one Pub/Sub subscription, grant it the Pub/Sub Subscriber role on that specific subscription, not on the whole project.
Explicitly Attach. Attach this new, limited-privilege service account to your VM, Cloud Function, or Cloud Run service during deployment.
Audit and Remediate. Security-conscious organizations should regularly audit projects to find resources using default service accounts and replace them. You can also safely remove the Editor role from the default service accounts if they are not being used, effectively disabling them.
Key Concepts Demonstrated:
Default Service Accounts: Automatically created, highly privileged identities designed for convenience but risky for production.
Privilege Escalation Risk: A vulnerability in a single application can be escalated to a full project compromise if it uses an overly-permissive default service account.
Security Best Practices: The importance of creating minimal, dedicated identities for every workload (VMs, Functions, etc.) to contain the "blast radius" of a potential security breach.
Example CLI Command (The Right Way):
# Create a dedicated, minimal-privilege SA first
gcloud iam service-accounts create my-webapp-sa

# Grant it ONLY the permissions it needs (not shown here)

# Create the VM explicitly using the new SA
gcloud compute instances create my-secure-vm \
  --service-account=my-webapp-sa@<project-id>.iam.gserviceaccount.com
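The "Audit and Remediate" guideline can be sketched as a small script. The instance records below are invented sample data; in practice they would come from the Compute Engine API or from machine-readable gcloud output. The detection itself is just matching the well-known default service account naming patterns:

```python
# Hypothetical audit sketch: flag workloads still attached to a default
# service account (sample data, not a real API call).
import re

# Default SA patterns: <project-number>-compute@developer.gserviceaccount.com
# (Compute Engine) and <project-id>@appspot.gserviceaccount.com (App Engine).
DEFAULT_SA = re.compile(
    r"(-compute@developer\.gserviceaccount\.com|@appspot\.gserviceaccount\.com)$"
)

instances = [
    {"name": "legacy-vm", "serviceAccount": "123456789012-compute@developer.gserviceaccount.com"},
    {"name": "secure-vm", "serviceAccount": "invoice-processing-func-sa@my-project.iam.gserviceaccount.com"},
]

flagged = [i["name"] for i in instances if DEFAULT_SA.search(i["serviceAccount"])]
print(flagged)  # ['legacy-vm']
```

Each flagged workload is then redeployed with a dedicated, least-privilege service account as described above.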
Scenario 8: The Time-Bound Contractor Access
Need/Requirement: A contractor needs emergency access to debug a production issue. They need the powerful Project Editor role, but for strict security compliance, their access must automatically expire at 5 PM today and must only be usable from the corporate office's IP address.
GCP Solution: Grant the contractor's user account the Project Editor role, but attach an IAM Condition to this specific role binding. The condition is written in Common Expression Language (CEL) and contains two clauses:
A time-based clause to set an expiration: request.time < timestamp("2025-09-28T17:00:00+00:00")
An IP-based clause for location: origin.ip == "203.0.113.50"
Key Concepts Demonstrated:
IAM Conditions: The ability to add dynamic, attribute-based logic to an IAM policy. The permission is only granted if the condition evaluates to true at the moment of access.
Temporary Access (Just-in-Time): This is a core principle of modern security. Permissions are granted for a limited duration, eliminating the risk of forgotten, lingering accounts. Access automatically revokes without any manual cleanup.
Context-Aware Access: Restricting access based on contextual attributes like IP address, time of day, or the type of resource being accessed. This is a foundational element of a Zero Trust security model.
Example CLI Command:
gcloud projects add-iam-policy-binding your-project-id \
  --member="user:contractor@external.com" \
  --role="roles/editor" \
  --condition='expression=request.time < timestamp("2025-09-28T17:00:00Z") && origin.ip == "203.0.113.50",title=temp_access,description=Expires at 5PM UTC on Sep 28 2025'
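How a conditional binding behaves can be sketched in Python (a stand-in for the CEL evaluation, not the real engine), with the same two clauses as the scenario. The binding simply stops applying the moment either clause is false:

```python
# Conceptual sketch of an IAM Condition: the role binding is only effective
# while the CEL expression evaluates to true at access time.
from datetime import datetime, timezone

EXPIRY = datetime(2025, 9, 28, 17, 0, tzinfo=timezone.utc)  # 5 PM UTC
OFFICE_IP = "203.0.113.50"

def condition_holds(request_time, origin_ip):
    # request.time < timestamp(...) && origin.ip == "203.0.113.50"
    return request_time < EXPIRY and origin_ip == OFFICE_IP

# From the office, before 5 PM: access works.
print(condition_holds(datetime(2025, 9, 28, 14, 0, tzinfo=timezone.utc), OFFICE_IP))       # True
# Same user at 5:01 PM: the binding silently stops applying, no cleanup needed.
print(condition_holds(datetime(2025, 9, 28, 17, 1, tzinfo=timezone.utc), OFFICE_IP))       # False
# Before 5 PM but from home: also denied.
print(condition_holds(datetime(2025, 9, 28, 14, 0, tzinfo=timezone.utc), "198.51.100.7"))  # False
```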
Scenario 9: Protecting Sensitive Data in a Pipeline
Need/Requirement: A company has an automated data pipeline. Raw data containing PII (Personally Identifiable Information) is uploaded to a "landing zone" Cloud Storage bucket. A Dataflow job processes this data, anonymizes it, and writes the clean, safe results to a BigQuery dataset for business analysts. The security requirements are strict:
Only the ingestion service can write new data to the raw PII bucket.
The Dataflow processing job can read from the PII bucket but cannot modify or delete the raw data.
Business analysts must be able to query the final BigQuery data but must be explicitly blocked from ever accessing the raw PII bucket, even accidentally.
GCP Solution: This requires a multi-layered approach using resource-level permissions and a Deny Policy.
Create Service Accounts: Create two dedicated service accounts: ingestion-sa and dataflow-sa.
Lock Down the PII Bucket: On the raw-pii-data bucket's IAM policy:
Grant ingestion-sa the Storage Object Creator role (roles/storage.objectCreator).
Grant dataflow-sa the Storage Object Viewer role (roles/storage.objectViewer).
Do not grant any other permissions to this bucket.
Secure the BigQuery Dataset: On the clean_analytics_data dataset's IAM policy:
Grant dataflow-sa the BigQuery Data Editor role (roles/bigquery.dataEditor).
Grant the Google Group analysts@company.com the BigQuery Data Viewer role (roles/bigquery.dataViewer).
Create a Deny Policy: At the Project level, create an IAM Deny Policy. This is a separate policy that overrides any allow permissions.
Denied Principal: group:analysts@company.com
Denied Permissions: storage.googleapis.com/objects.get, storage.googleapis.com/objects.list (deny policies use service-prefixed permission names)
Target Resource (via Tag): The policy applies to any resource with the tag data-sensitivity=pii. Apply this tag to the raw-pii-data bucket.
Key Concepts Demonstrated:
Data Pipeline Security: A practical example of securing an end-to-end data flow by giving each component the minimum necessary permissions on the data resources.
Defense in Depth: Using multiple security controls (least privilege on the bucket, least privilege on the dataset, and a deny policy) to protect sensitive data.
IAM Deny Policies: An advanced feature where deny always overrides allow. This is the ultimate safety net to prevent a group of users from ever accessing certain resources, regardless of any other roles they might have (like Project Viewer).
Policy Enforcement with Tags: Using resource tags to apply security policies (like a Deny Policy) at scale. You can ensure that any future storage bucket tagged with pii automatically gets this protection.
Example CLI Command (Deny Policy):
# (Deny policies are complex to create via CLI; this is a conceptual example using a YAML file)
# 1. Define the deny rule in a file, e.g., deny-rule.yaml
# 2. Apply the policy to the project (note the URL-encoded attachment point)
gcloud iam policies create deny-analyst-access \
  --attachment-point=cloudresourcemanager.googleapis.com%2Fprojects%2Fyour-project-id \
  --kind=denypolicies \
  --policy-file=deny-rule.yaml
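The deny-rule.yaml file referenced above is not shown; here is a hedged sketch of what it might contain, assuming the v2 deny-policy schema (note the service-prefixed permission names and the principalSet:// format for groups; verify the exact field names against the current reference before using):

```yaml
# Hypothetical deny-rule.yaml for this scenario (field names per the v2
# deny-policy schema; double-check against current documentation).
displayName: Block analysts from PII-tagged storage
rules:
- denyRule:
    deniedPrincipals:
    - principalSet://goog/group/analysts@company.com
    deniedPermissions:
    - storage.googleapis.com/objects.get
    - storage.googleapis.com/objects.list
    denialCondition:
      title: pii-tagged-resources-only
      expression: resource.matchTag('123456789012/data-sensitivity', 'pii')
```

The denialCondition scopes the rule so the deny only fires on resources carrying the pii tag, which is what makes the protection follow the tag rather than a hard-coded bucket name.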
Scenario 10: ABAC for Data Residency
Need/Requirement: A multinational company must enforce strict data residency rules due to regulations like GDPR. The finance team is split between the EU and the US. The policy must be:
Members of the EU finance team can only access data in storage buckets physically located in the EU.
Members of the US finance team can only access data in storage buckets physically located in the US.
The policy must scale automatically to new buckets and new team members without needing manual IAM changes on every bucket.
GCP Solution: This is a perfect use case for Attribute-Based Access Control (ABAC), implemented with Tags and IAM Conditions.
Define a Resource Attribute (Tag): At the Organization level, create a Tag Key called data-location with two possible Tag Values: eu and us.
Apply Tags to Resources: Tag the relevant Cloud Storage buckets. The europe-financial-reports bucket gets the tag data-location: eu. The usa-quarterly-earnings bucket gets the tag data-location: us.
Define User Attributes (Groups): Use Google Groups to represent the user attributes. Create two groups: gcp-finance-eu@company.com and gcp-finance-us@company.com.
Create Conditional IAM Bindings: At the Project level, create two conditional IAM bindings, one per group, each granting the Storage Object Admin role (roles/storage.objectAdmin) with a tag-based condition.
For the gcp-finance-eu group, the condition is: resource.matchTag('YOUR_ORG_ID/data-location', 'eu')
For the gcp-finance-us group, the condition is: resource.matchTag('YOUR_ORG_ID/data-location', 'us')
Key Concepts Demonstrated:
Attribute-Based Access Control (ABAC): The access decision is not based on a static role alone, but on a dynamic evaluation of attributes: the user's group membership (their attribute) and the resource's tag (its attribute).
Scalable Governance: This is the key benefit. When a new EU finance team member joins, you add them to the Google Group, and the policy automatically applies. When a new EU bucket is created, you simply tag it data-location: eu, and it is instantly protected by the same project-level policy. You don't need to update IAM policies on hundreds of buckets.
Enforcing Data Residency: This pattern provides a robust and auditable way to enforce data sovereignty and compliance rules across an entire organization.
Example CLI Command (Conditional Binding):
gcloud projects add-iam-policy-binding your-project-id \
  --member="group:gcp-finance-eu@example.com" \
  --role="roles/storage.objectAdmin" \
  --condition='expression=resource.matchTag("123456789012/data-location", "eu"),title=eu_data_only'
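The ABAC decision can be sketched conceptually in Python (invented user names; a stand-in for the CEL matchTag evaluation, not the real engine). Access requires the user's attribute (group membership) and the resource's attribute (tag) to line up:

```python
# Conceptual ABAC sketch: the decision combines a user attribute (group)
# with a resource attribute (tag), mirroring resource.matchTag(...).

bucket_tags = {
    "europe-financial-reports": {"data-location": "eu"},
    "usa-quarterly-earnings": {"data-location": "us"},
}
groups = {  # user -> finance group (hypothetical members)
    "user:pierre@company.com": "gcp-finance-eu@company.com",
    "user:dana@company.com": "gcp-finance-us@company.com",
}
# One conditional binding per group: group -> required tag value
conditional_bindings = {
    "gcp-finance-eu@company.com": "eu",
    "gcp-finance-us@company.com": "us",
}

def can_admin_objects(user, bucket):
    required = conditional_bindings.get(groups.get(user))
    return required is not None and bucket_tags[bucket].get("data-location") == required

print(can_admin_objects("user:pierre@company.com", "europe-financial-reports"))  # True
print(can_admin_objects("user:pierre@company.com", "usa-quarterly-earnings"))   # False
```

Adding a new EU bucket means adding one tagged entry; adding a new EU employee means one group membership. Neither change touches the bindings themselves, which is the scalability argument made above.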
Scenario 11: The Zero Trust Application
Need/Requirement: A company needs to host an internal web application (e.g., an HR portal) on Compute Engine. The security requirements are stringent:
The application must not be exposed to the public internet.
Only authenticated employees of the company can access it.
The web server VMs must be able to connect to a backend Cloud SQL database, but the database must be completely isolated from all other network traffic.
Data from the database must not be exportable to an unauthorized location.
GCP Solution: This multi-layered solution combines identity and network security.
User Access (Identity-Aware Proxy): Place the application's load balancer behind Identity-Aware Proxy (IAP). In the IAP settings, grant the IAP-secured Web App User role to the Google Group employees@company.com. This ensures only authenticated employees can even reach the application's login page.
Network Segmentation (Firewall Rules with Service Accounts): Create two dedicated service accounts: webapp-sa and database-sa. Attach webapp-sa to the web server VMs. In the VPC firewall rules, create a rule that allows TCP traffic on port 3306 with a source of serviceAccount:webapp-sa and a target of serviceAccount:database-sa. This locks down database access to only the web application VMs, regardless of their IP addresses.
Data Exfiltration Prevention (VPC Service Controls): Place the project containing the application and database inside a VPC Service Controls perimeter. This perimeter blocks GCP APIs by default. It prevents a user or compromised service from, for example, running a gcloud sql export command and saving the data to a public Cloud Storage bucket in another project.
Key Concepts Demonstrated:
Zero Trust Architecture: The principle of "never trust, always verify." Access is granted based on verified identity (IAP), strict network segmentation (firewall rules), and API-level controls (VPC SC), not just network location.
Identity-Aware Proxy (IAP): A powerful tool for securing web applications by wrapping them with Google's identity and access management layer.
Micro-segmentation: Using service accounts in firewall rules to create fine-grained network controls based on workload identity, not just IP addresses.
VPC Service Controls: A critical defense against data exfiltration, creating a secure "walled garden" for your most sensitive projects.
Example CLI Commands:
# Allow a group to access an IAP-secured application
# (Requires getting the existing policy, adding the member, and writing it back)
gcloud iap web set-iam-policy policy.json

# Create a firewall rule based on service account identity
# (port 3306 matches the MySQL database in the scenario)
gcloud compute firewall-rules create allow-app-to-db \
  --allow=tcp:3306 \
  --source-service-accounts=webapp-sa@<project-id>.iam.gserviceaccount.com \
  --target-service-accounts=database-sa@<project-id>.iam.gserviceaccount.com
Scenario 12: Enforcing Governance and Compliance
Need/Requirement: A company's central security team needs to ensure that all GCP projects continuously adhere to corporate security policies. They need to prevent certain risky configurations, detect any misconfigurations that slip through, and get proactive advice on improving their IAM posture over time.
GCP Solution: This is a layered governance strategy using several integrated services.
Prevention (Organization Policies): The team sets up Organization Policies at the root Organization node to enforce non-negotiable rules. For example, they enforce the iam.allowedPolicyMemberDomains constraint to ensure only identities from their own corporate domain can be added to IAM policies, preventing accidental sharing with external accounts. They also enforce the compute.vmExternalIpAccess constraint to block the creation of VMs with public IPs.
Detection (Security Command Center): The team activates the Premium tier of Security Command Center (SCC) at the organization level. This provides a central dashboard for all security findings. SCC's built-in Security Health Analytics automatically scans for hundreds of potential misconfigurations. When a developer accidentally leaves a sensitive port open in a firewall rule, SCC generates a high-priority finding.
Investigation (Cloud Audit Logs): Upon seeing the SCC finding, the security team needs to know "how did this happen?" They use the Logs Explorer to query Cloud Audit Logs for the project in question, filtering for the specific firewall-related API calls around the time of the incident to identify the exact user and action.
Optimization (Active Assist - IAM Recommender): To reduce risk proactively, the team regularly reviews the recommendations from the IAM Recommender. They discover several service accounts with the over-privileged Editor role. The recommender, having analyzed 90 days of usage data, suggests specific, more restrictive roles. The team applies these recommendations, hardening their IAM posture without breaking applications.
Key Concepts Demonstrated:
Continuous Compliance: Security is not a one-time setup. This demonstrates the lifecycle of preventing, detecting, and remediating issues.
Organization Policies: The primary tool for establishing preventative "guardrails" that enforce corporate policy across the entire cloud environment.
Security Command Center (SCC): The single pane of glass for security monitoring, threat detection, and compliance reporting.
IAM Recommender (Active Assist): A powerful tool for rightsizing permissions and continuously applying the principle of least privilege based on actual usage data.
Example CLI Command (Org Policy):
# Create a policy.yaml file that enforces a constraint
# ---
# constraint: constraints/compute.vmExternalIpAccess
# listPolicy:
#   allValues: DENY
# ---

# Set the organization policy from the file
gcloud resource-manager org-policies set-policy policy.yaml --organization=123456789012
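The iam.allowedPolicyMemberDomains guardrail can be sketched conceptually in Python (invented domain list; this models the validation the org policy performs, not a real API). Any attempt to bind an identity from an unapproved domain is rejected before the policy change lands:

```python
# Conceptual sketch of the iam.allowedPolicyMemberDomains guardrail:
# reject IAM bindings for identities outside the approved domains.

ALLOWED_DOMAINS = {"company.com"}  # hypothetical corporate domain

def validate_binding(member):
    # member looks like "user:alice@company.com" or "group:team@gmail.com"
    domain = member.split("@", 1)[1]
    if domain not in ALLOWED_DOMAINS:
        raise ValueError(f"org policy violation: {domain} is not an allowed domain")
    return True

print(validate_binding("user:alice@company.com"))  # True: corporate identity accepted
try:
    validate_binding("user:stranger@gmail.com")    # blocked before the grant is saved
except ValueError as err:
    print(err)
```

Note that this same constraint would also block the external-partner pattern from Scenario 6, which is why such guardrails are usually scoped carefully rather than applied blindly everywhere.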
Expert-Level Scenarios
Scenario 13: The Ultra-Specific Auditor (Custom Roles)
Need/Requirement: A company hires a third-party auditing firm to run a compliance tool. The tool only needs to perform two very specific actions: list the Compute Engine instances in a project and get the details of the firewall rules. Granting a broad role like
Compute Vieweris a security risk, as it includes many unnecessary permissions (e.g., viewing disk data). A new, minimal role is needed.GCP Solution: Create a Custom IAM Role at the project or organization level.
Define a new role, for example, firewallAndVmAuditor. Instead of choosing from predefined roles, you add individual permissions to this new role. In this case, you would add only two permissions:
compute.instances.list
compute.firewalls.get
Grant this new, custom role to the auditor's service account. The tool now has exactly the permissions it needs and nothing more.
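The final grant can be sketched as a project-level policy binding. This is an illustrative command, assuming the custom role has already been created; the project ID and the auditor's service account email are placeholders:

```shell
# Bind the custom role (created earlier in this project) to the
# auditing tool's service account. It receives exactly the two
# permissions in the role and nothing more.
gcloud projects add-iam-policy-binding your-project-id \
    --member="serviceAccount:auditor-tool@your-project-id.iam.gserviceaccount.com" \
    --role="projects/your-project-id/roles/firewallAndVmAuditor"
```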
Key Concepts Demonstrated:
Custom Roles: The ability to create your own roles when none of GCP's hundreds of predefined roles meet your specific, minimal requirements. This is the ultimate application of the Principle of Least Privilege.
Granular Permissions: Understanding that roles are simply collections of individual permissions (e.g., service.resource.verb). Custom roles give you direct control over these permissions.
Secure Third-Party Integration: A critical pattern for safely integrating external tools and vendors into your environment by ensuring they cannot access anything beyond their specific mandate.
Example CLI Command:
```shell
# 1. Create a role definition file, e.g., role-definition.yaml
# ---
# title: "Firewall And VM Auditor"
# description: "Minimal permissions for the compliance tool"
# stage: "GA"
# includedPermissions:
# - compute.instances.list
# - compute.firewalls.get
# ---
# 2. Create the custom role in your project from the file
gcloud iam roles create firewallAndVmAuditor --project=your-project-id \
    --file=role-definition.yaml
```
Scenario 14: The Multi-Cloud Pipeline (Workload Identity Federation)
Need/Requirement: An organization's primary CI/CD pipeline runs in AWS CodePipeline. This pipeline needs to deploy a container image to Google Cloud Run and Artifact Registry. The CISO has forbidden the use of long-lived service account keys, as downloading and managing a JSON key file is a major security risk. A secure, keyless authentication method is required.
Key Concept: What is a Service Account Key?
A service account key is a permanent, downloadable password for an application. It's a JSON file containing a private key. Any application that possesses this file can authenticate to GCP as that service account, inheriting all its permissions. The risk is that this key is long-lived (valid forever until revoked) and, being a file, can be easily leaked (e.g., committed to Git, stolen from a laptop).
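For contrast, this is the key-based pattern the CISO has forbidden, sketched with placeholder names. It shows why keys are risky: the command writes a permanent private key to local disk, from where it can be committed to Git or stolen:

```shell
# Create a long-lived JSON key for a service account (the anti-pattern).
# The resulting file is valid until explicitly revoked.
gcloud iam service-accounts keys create key.json \
    --iam-account=my-app-sa@your-project-id.iam.gserviceaccount.com

# Anyone who obtains key.json can now authenticate as that service
# account and inherit all of its permissions:
gcloud auth activate-service-account --key-file=key.json
```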
Key Concept: What is a Workload Identity Pool?
A Workload Identity Pool is an entity in GCP that allows you to manage external identities. Think of it as creating a directory or "phonebook" of external systems you trust. Instead of creating a permanent key for an external application, you tell GCP, "I trust identities coming from this specific AWS account" or "I trust identities with a specific token from my on-premise Active Directory."
GCP Solution: Use Workload Identity Federation. This provides a universal, "360-degree" solution for any external workload.
Establish Trust in GCP: In GCP IAM, create a Workload Identity Pool and a Provider. Configure the provider to trust the AWS account where the pipeline runs. You can add attribute conditions, for example, to only trust actions originating from a specific AWS role.
Grant Permissions in GCP: Grant a GCP service account (e.g., aws-deployer-sa) the necessary roles (Cloud Run Admin, Artifact Registry Writer). Then, allow the trusted identities from the AWS provider to impersonate this GCP service account.
Exchange Credentials in AWS: In the AWS CodePipeline script, use the native AWS identity to get temporary AWS credentials. The script then calls the GCP Security Token Service, presenting its AWS credentials.
Receive GCP Token: GCP STS verifies the AWS credentials against the trusted provider. If valid, it returns a short-lived GCP access token for the aws-deployer-sa service account. The pipeline now uses this token to deploy to Cloud Run, all without ever seeing a GCP key.
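The impersonation step in the flow above can be sketched with two commands. This is an illustrative setup, assuming the pool and provider from this scenario already exist; PROJECT_NUMBER, the project ID, and the service account email are placeholders:

```shell
# Allow any identity in the AWS pool to impersonate the deployer
# service account (a narrower principalSet with attribute conditions
# is preferable in production).
gcloud iam service-accounts add-iam-policy-binding \
    aws-deployer-sa@your-project-id.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/my-aws-pool/*"

# Generate a credential configuration file the pipeline can pass to
# Google client libraries. No private key is ever downloaded: the
# library exchanges the native AWS credentials for a short-lived token.
gcloud iam workload-identity-pools create-cred-config \
    projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/my-aws-pool/providers/my-aws-provider \
    --service-account=aws-deployer-sa@your-project-id.iam.gserviceaccount.com \
    --aws \
    --output-file=gcp-credentials.json
```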
Key Concepts Demonstrated:
Workload Identity Federation: The modern, secure standard for allowing workloads outside GCP (in AWS, Azure, on-prem) to access GCP resources.
Keyless Authentication: This is the primary benefit. It completely eliminates the need for service account keys, which are a common source of security breaches if leaked.
Identity Federation: The core principle is establishing a trust relationship between GCP and an external Identity Provider (IdP). The external workload authenticates with its native credentials, which are then exchanged for temporary, short-lived GCP credentials.
Universal Applicability: This pattern works for virtually any external system that can provide a verifiable identity token (OIDC, SAML, AWS), making it a true 360-degree solution for multi-cloud and hybrid environments.
Example CLI Commands:
```shell
# 1. Create the identity pool
gcloud iam workload-identity-pools create my-aws-pool \
    --location="global" --display-name="AWS Federation Pool"

# 2. Create the AWS provider within the pool
gcloud iam workload-identity-pools providers create-aws my-aws-provider \
    --location="global" \
    --workload-identity-pool="my-aws-pool" \
    --account-id="123456789012"  # The AWS Account ID
```