
Commit faaade6

committed
Custom Credentials now read from secrets.json as opposed to env variables.
1 parent a275976 commit faaade6

14 files changed

Lines changed: 635 additions & 198 deletions

auth/README.md

Lines changed: 0 additions & 71 deletions
@@ -60,77 +60,6 @@ information](https://developers.google.com/identity/protocols/application-defaul
 
 $ npm run test:downscoping
 
-## Custom Credential Suppliers
-
-If you want to use external credentials (like AWS or Okta) that require custom retrieval logic not supported natively by the library, you can provide a custom supplier implementation.
-
-### Custom AWS Credential Supplier
-
-This sample demonstrates how to use the AWS SDK for Node.js as a custom `AwsSecurityCredentialsSupplier` to bridge AWS credentials—from sources like EKS IRSA, ECS, or local profiles—to Google Cloud Workload Identity.
-
-#### 1. Set Environment Variables
-
-```bash
-# AWS Credentials (or use ~/.aws/credentials)
-export AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY_ID"
-export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_ACCESS_KEY"
-export AWS_REGION="us-east-1"
-
-# Google Cloud Config
-# Format: //iam.googleapis.com/projects/<PROJECT_NUMBER>/locations/global/workloadIdentityPools/<POOL_ID>/providers/<PROVIDER_ID>
-export GCP_WORKLOAD_AUDIENCE="//iam.googleapis.com/projects/123456/locations/global/workloadIdentityPools/my-pool/providers/my-aws-provider"
-export GCS_BUCKET_NAME="your-bucket-name"
-
-# Optional: Service Account Impersonation
-# export GCP_SERVICE_ACCOUNT_IMPERSONATION_URL="https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/my-sa@my-project.iam.gserviceaccount.com:generateAccessToken"
-```
-
-#### 2. Run the Sample
-
-```bash
-node custom-credential-supplier-aws.js
-```
-
-#### Running in Kubernetes (EKS)
-
-To run this in an EKS cluster using IAM Roles for Service Accounts (IRSA):
-
-1. **Configure IRSA:** Associate an AWS IAM Role with your Kubernetes Service Account.
-2. **Configure GCP:** Allow the AWS IAM Role ARN to impersonate your Workload Identity Pool.
-3. **Deploy:** When deploying your Node.js application, ensure the Pod uses the annotated Service Account. The AWS SDK in the sample will automatically detect the credentials injected by the EKS OIDC webhook.
-
----
-
-### Custom Okta Credential Supplier
-
-This sample demonstrates how to use a custom `SubjectTokenSupplier` to fetch an OIDC token from **Okta** using the Client Credentials flow and exchange it for Google Cloud credentials via Workload Identity Federation.
-
-#### 1. Okta Configuration
-
-Ensure you have an Okta Machine-to-Machine (M2M) application set up with "Client Credentials" grant type enabled. You will need the Domain, Client ID, and Client Secret.
-
-#### 2. Set Environment Variables
-
-```bash
-# Okta Configuration
-export OKTA_DOMAIN="https://your-okta-domain.okta.com"
-export OKTA_CLIENT_ID="your-okta-client-id"
-export OKTA_CLIENT_SECRET="your-okta-client-secret"
-
-# Google Cloud Config
-export GCP_WORKLOAD_AUDIENCE="//iam.googleapis.com/projects/123456/locations/global/workloadIdentityPools/my-pool/providers/my-oidc-provider"
-export GCS_BUCKET_NAME="your-bucket-name"
-
-# Optional: Service Account Impersonation
-# export GCP_SERVICE_ACCOUNT_IMPERSONATION_URL="https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/my-sa@my-project.iam.gserviceaccount.com:generateAccessToken"
-```
-
-#### 3. Run the Sample
-
-```bash
-node custom-credential-supplier-okta.js
-```
-
 ### Additional resources
 
 For more information on downscoped credentials you can visit:
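For context on the Okta sample whose README section this commit removes: google-auth-library lets you plug a custom `SubjectTokenSupplier` into Workload Identity Federation, and the Client Credentials flow described above maps onto it roughly as follows. This is an illustrative sketch, not the sample's actual code; the endpoint path (Okta's default custom authorization server), the `gcp.workload` scope, and the helper names are all assumptions.

```javascript
// Sketch of a SubjectTokenSupplier for Okta's Client Credentials flow.
// Hypothetical names; the endpoint path and scope are assumptions.

// Builds the form-encoded token request for Okta's /v1/token endpoint
// (assuming the default custom authorization server).
function buildOktaTokenRequest(domain, clientId, clientSecret) {
  const body = new URLSearchParams({
    grant_type: 'client_credentials',
    scope: 'gcp.workload', // assumed custom scope configured in Okta
  });
  return {
    url: `${domain}/oauth2/default/v1/token`,
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      Authorization:
        'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64'),
    },
    body: body.toString(),
  };
}

// The supplier returns the OIDC access token that the auth library then
// exchanges for Google Cloud credentials via Workload Identity Federation.
class OktaSubjectTokenSupplier {
  constructor(domain, clientId, clientSecret) {
    this.tokenRequest = buildOktaTokenRequest(domain, clientId, clientSecret);
  }

  async getSubjectToken() {
    const res = await fetch(this.tokenRequest.url, {
      method: this.tokenRequest.method,
      headers: this.tokenRequest.headers,
      body: this.tokenRequest.body,
    });
    if (!res.ok) {
      throw new Error(`Okta token request failed: ${res.status}`);
    }
    const data = await res.json();
    return data.access_token;
  }
}
```

An instance of such a class would be passed as the `subject_token_supplier` option when constructing the library's identity pool client, in place of a credential source file.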
Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
+# Use the official Node.js image.
+# https://hub.docker.com/_/node
+FROM node:20-slim
+
+# Create and change to the app directory.
+WORKDIR /app
+
+# Copy application dependency manifests to the container image.
+# A wildcard is used to ensure both package.json AND package-lock.json are copied.
+COPY package*.json ./
+
+# Install production dependencies.
+RUN npm install --omit=dev
+
+# Create a non-root user for security
+RUN useradd -m appuser
+
+# Copy local code to the container image.
+COPY --chown=appuser:appuser . .
+
+# Switch to non-root user
+USER appuser
+
+# Run the web service on container startup.
+CMD [ "node", "customCredentialSupplierAws.js" ]
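Because the Dockerfile above copies the entire build context (`COPY . .`), it is worth pairing it with a `.dockerignore` so the local secrets file is never baked into the image. This fragment is a suggestion, not part of the commit:

```
# Suggested .dockerignore (not in this commit): keep secrets and
# local artifacts out of the image build context.
node_modules
custom-credentials-aws-secrets.json
.git
```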
Lines changed: 122 additions & 0 deletions
@@ -0,0 +1,122 @@
+# Running the Custom AWS Credential Supplier Sample (Node.js)
+
+This sample demonstrates how to use a custom AWS security credential supplier to authenticate with Google Cloud using AWS as an external identity provider. It uses the **AWS SDK for JavaScript (v3)** to fetch credentials from sources like Amazon Elastic Kubernetes Service (EKS) with IAM Roles for Service Accounts (IRSA), Elastic Container Service (ECS), or Fargate.
+
+## Prerequisites
+
+* An AWS account.
+* A Google Cloud project with the IAM API enabled.
+* A GCS bucket.
+* **Node.js 16** or later installed.
+* **npm** installed.
+
+If you want to use AWS security credentials that cannot be retrieved using methods supported natively by the Google Auth library, a custom `AwsSecurityCredentialsSupplier` implementation may be specified. The supplier must return valid, unexpired AWS security credentials when called by the Google Cloud Auth library.
+
+## Running Locally
+
+For local development, you can provide credentials and configuration in a JSON file.
+
+### Install Dependencies
+
+Ensure you have Node.js installed, then install the required libraries:
+
+```bash
+npm install
+```
+
+### Configure Credentials for Local Development
+
+1. Copy the example secrets file to a new file named `custom-credentials-aws-secrets.json` in the project root:
+   ```bash
+   cp custom-credentials-aws-secrets.json.example custom-credentials-aws-secrets.json
+   ```
+2. Open `custom-credentials-aws-secrets.json` and fill in the required values for your AWS and Google Cloud configuration. Do not check your `custom-credentials-aws-secrets.json` file into version control.
+
+### Run the Application
+
+Execute the script using Node:
+
+```bash
+node customCredentialSupplierAws.js
+```
+
+When run locally, the application will detect the `custom-credentials-aws-secrets.json` file and use it to configure the necessary environment variables for the AWS SDK.
+
+## Running in a Containerized Environment (EKS)
+
+This section provides a brief overview of how to run the sample in an Amazon EKS cluster.
+
+### EKS Cluster Setup
+
+First, you need an EKS cluster. You can create one using `eksctl` or the AWS Management Console. For detailed instructions, refer to the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html).
+
+### Configure IAM Roles for Service Accounts (IRSA)
+
+IRSA enables you to associate an IAM role with a Kubernetes service account. This provides a secure way for your pods to access AWS services without hardcoding long-lived credentials.
+
+Run the following command to create the IAM role and bind it to a Kubernetes Service Account:
+
+```bash
+eksctl create iamserviceaccount \
+  --name your-k8s-service-account \
+  --namespace default \
+  --cluster your-cluster-name \
+  --region your-aws-region \
+  --role-name your-role-name \
+  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
+  --approve
+```
+
+> **Note**: The `--attach-policy-arn` flag is used here to demonstrate attaching permissions. Update this with the specific AWS policy ARN your application requires.
+
+For a deep dive into how this works without using `eksctl`, refer to the [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) documentation.
+
+### Configure Google Cloud to Trust the AWS Role
+
+To allow your AWS role to authenticate as a Google Cloud service account, you need to configure Workload Identity Federation. This process involves these key steps:
+
+1. **Create a Workload Identity Pool and an AWS Provider:** The pool holds the configuration, and the provider is set up to trust your AWS account.
+
+2. **Create or select a Google Cloud Service Account:** This service account will be impersonated by your AWS role.
+
+3. **Bind the AWS Role to the Google Cloud Service Account:** Create an IAM policy binding that gives your AWS role the `Workload Identity User` (`roles/iam.workloadIdentityUser`) role on the Google Cloud service account.
+
+For more detailed information, see the documentation on [Configuring Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds).
+
+### Containerize and Package the Application
+
+Create a `Dockerfile` for the Node.js application and push the image to a container registry (for example, Amazon ECR) that your EKS cluster can access.
+
+**Note:** The provided [`Dockerfile`](Dockerfile) is an example and may need modification for your specific needs.
+
+Build and push the image:
+
+```bash
+docker build -t your-container-image:latest .
+docker push your-container-image:latest
+```
+
+### Deploy to EKS
+
+Create a Kubernetes deployment manifest to deploy your application to the EKS cluster. See the [`pod.yaml`](pod.yaml) file for an example.
+
+**Note:** The provided [`pod.yaml`](pod.yaml) is an example and may need to be modified for your specific needs.
+
+Deploy the pod:
+
+```bash
+kubectl apply -f pod.yaml
+```
+
+### Clean Up
+
+To clean up the resources, delete the EKS cluster and any other AWS and Google Cloud resources you created.
+
+```bash
+eksctl delete cluster --name your-cluster-name
+```
+
+## Testing
+
+This sample is not continuously tested. It is provided for instructional purposes and may require modifications to work in your environment.
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+{
+  "aws_access_key_id": "YOUR_AWS_ACCESS_KEY_ID",
+  "aws_secret_access_key": "YOUR_AWS_SECRET_ACCESS_KEY",
+  "aws_region": "YOUR_AWS_REGION",
+  "gcp_workload_audience": "YOUR_GCP_WORKLOAD_AUDIENCE",
+  "gcs_bucket_name": "YOUR_GCS_BUCKET_NAME",
+  "gcp_service_account_impersonation_url": "YOUR_GCP_SERVICE_ACCOUNT_IMPERSONATION_URL"
+}
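The sample script reads this file and copies its values into environment variables. A small validation helper (hypothetical; not part of this commit) can fail fast when a required key is missing or a `YOUR_*` placeholder from the example file was left unchanged:

```javascript
// Hypothetical helper: checks a parsed secrets object for required keys and
// for placeholder values left over from the .example file. Returns a list of
// problems; an empty list means the configuration looks usable.
function validateSecrets(secrets) {
  const required = ['gcp_workload_audience', 'gcs_bucket_name'];
  const problems = [];
  for (const key of required) {
    const value = secrets[key];
    if (!value) {
      problems.push(`missing: ${key}`);
    } else if (value.startsWith('YOUR_')) {
      problems.push(`placeholder not replaced: ${key}`);
    }
  }
  return problems;
}
```

Such a check could run right after the secrets JSON is parsed, turning a confusing downstream auth failure into an immediate, specific error message.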

auth/customCredentialSupplierAws.js renamed to auth/customcredentials/aws/customCredentialSupplierAws.js

Lines changed: 65 additions & 13 deletions
@@ -15,7 +15,10 @@
 // [START auth_custom_credential_supplier_aws]
 const {AwsClient} = require('google-auth-library');
 const {fromNodeProviderChain} = require('@aws-sdk/credential-providers');
+const fs = require('fs');
+const path = require('path');
 const {STSClient} = require('@aws-sdk/client-sts');
+const {Storage} = require('@google-cloud/storage');
 
 /**
  * Custom AWS Security Credentials Supplier.
@@ -93,36 +96,86 @@ async function authenticateWithAwsCredentials(
   audience,
   impersonationUrl
 ) {
-  // 1. Instantiate the custom supplier.
   const customSupplier = new CustomAwsSupplier();
 
-  // 2. Configure the AwsClient options.
   const clientOptions = {
     audience: audience,
     subject_token_type: 'urn:ietf:params:aws:token-type:aws4_request',
     service_account_impersonation_url: impersonationUrl,
     aws_security_credentials_supplier: customSupplier,
   };
 
-  // 3. Create the auth client
-  const client = new AwsClient(clientOptions);
+  const authClient = new AwsClient(clientOptions);
 
-  // 4. Make an authenticated request to GCS.
-  const bucketUrl = `https://storage.googleapis.com/storage/v1/b/${bucketName}`;
-  const res = await client.request({url: bucketUrl});
-  return res.data;
+  const storage = new Storage({
+    authClient: authClient,
+  });
+
+  const [metadata] = await storage.bucket(bucketName).getMetadata();
+  return metadata;
 }
 // [END auth_custom_credential_supplier_aws]
 
+/**
+ * If a local secrets file is present, load it into the process environment.
+ * This is a "just-in-time" configuration for local development. These
+ * variables are only set for the current process.
+ */
+function loadConfigFromFile() {
+  const secretsFile = 'custom-credentials-aws-secrets.json';
+  const secretsPath = path.resolve(__dirname, secretsFile);
+
+  if (!fs.existsSync(secretsPath)) {
+    return;
+  }
+
+  try {
+    const secrets = JSON.parse(fs.readFileSync(secretsPath, 'utf8'));
+
+    if (!secrets) {
+      return;
+    }
+
+    // AWS SDK for Node.js looks for environment variables with specific names.
+    if (secrets.aws_access_key_id) {
+      process.env.AWS_ACCESS_KEY_ID = secrets.aws_access_key_id;
+    }
+    if (secrets.aws_secret_access_key) {
+      process.env.AWS_SECRET_ACCESS_KEY = secrets.aws_secret_access_key;
+    }
+    if (secrets.aws_region) {
+      process.env.AWS_REGION = secrets.aws_region;
+    }
+
+    // Set custom GCP variables so they can be retrieved from process.env.
+    if (secrets.gcp_workload_audience) {
+      process.env.GCP_WORKLOAD_AUDIENCE = secrets.gcp_workload_audience;
+    }
+    if (secrets.gcs_bucket_name) {
+      process.env.GCS_BUCKET_NAME = secrets.gcs_bucket_name;
+    }
+    if (secrets.gcp_service_account_impersonation_url) {
+      process.env.GCP_SERVICE_ACCOUNT_IMPERSONATION_URL =
        secrets.gcp_service_account_impersonation_url;
+    }
+  } catch (error) {
+    console.error(`Error reading secrets file: ${error.message}`);
+  }
+}
+
 async function main() {
-  require('dotenv').config();
+  // Reads the secrets.json if running locally.
+  loadConfigFromFile();
+
   const gcpAudience = process.env.GCP_WORKLOAD_AUDIENCE;
   const saImpersonationUrl = process.env.GCP_SERVICE_ACCOUNT_IMPERSONATION_URL;
   const gcsBucketName = process.env.GCS_BUCKET_NAME;
 
   if (!gcpAudience || !gcsBucketName) {
     throw new Error(
-      'Missing required environment variables: GCP_WORKLOAD_AUDIENCE, GCS_BUCKET_NAME'
+      'Missing required configuration. Please provide it in a ' +
+        'secrets.json file or as environment variables: ' +
+        'GCP_WORKLOAD_AUDIENCE, GCS_BUCKET_NAME'
     );
   }
 
@@ -134,11 +187,10 @@ async function main() {
     saImpersonationUrl
   );
   console.log('\n--- SUCCESS! ---');
-  console.log('Bucket Name:', bucketMetadata.name);
-  console.log('Bucket Location:', bucketMetadata.location);
+  console.log('Bucket Metadata:', JSON.stringify(bucketMetadata, null, 2));
 } catch (error) {
   console.error('\n--- FAILED ---');
-  console.error(error.response?.data || error);
+  console.error(error.message || error);
   process.exitCode = 1;
 }
 }
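The hunks above construct a `CustomAwsSupplier`, but its class body falls outside the diff context shown. Judging from the imports the commit touches (`fromNodeProviderChain`, `STSClient`), the supplier likely looks something like this hedged sketch. The method names follow google-auth-library's `AwsSecurityCredentialsSupplier` interface; everything else here is an assumption, not the committed code.

```javascript
// Hedged sketch: the committed CustomAwsSupplier body is not visible in the
// hunks above; this reconstruction is based on the imports the diff adds.

// Maps the AWS SDK v3 credential shape to the shape google-auth-library's
// AwsSecurityCredentialsSupplier contract expects.
function toGoogleAwsCredentials(awsCreds) {
  return {
    accessKeyId: awsCreds.accessKeyId,
    secretAccessKey: awsCreds.secretAccessKey,
    token: awsCreds.sessionToken, // undefined for long-lived keys
  };
}

class CustomAwsSupplier {
  // AwsClient calls this to learn which region to sign requests for.
  async getAwsRegion() {
    return process.env.AWS_REGION || 'us-east-1';
  }

  // AwsClient calls this whenever it needs fresh AWS credentials. The default
  // Node provider chain covers env vars, shared config, ECS, and EKS IRSA.
  async getAwsSecurityCredentials() {
    // Lazy require so this sketch loads without the AWS SDK installed.
    const {fromNodeProviderChain} = require('@aws-sdk/credential-providers');
    const creds = await fromNodeProviderChain()();
    return toGoogleAwsCredentials(creds);
  }
}
```

Delegating to the provider chain is what lets the same sample work both locally (where `loadConfigFromFile` sets `AWS_ACCESS_KEY_ID` and friends) and on EKS, where IRSA injects web-identity credentials the chain discovers automatically.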
