```go
opts := []func(*awsconfig.LoadOptions) error{}

if cfg.Region != "" {
	opts = append(opts, awsconfig.WithRegion(cfg.Region))
}
```
@alanshaw you'll need to do something like this when creating the S3 client. AWS injects the region into the ECS task's metadata, so by default the task will access resources in the region it's running in.

I kept the config property because I don't know whether it is used when the service is deployed as a container in a local smelt stack. If it isn't, it can be removed.
```hcl
data "aws_iam_policy_document" "task_upload_service_dynamodb_query_document" {
  statement {
    actions = [
      "dynamodb:Query",
    ]
    resources = [
      data.aws_dynamodb_table.agent_index_table.arn,
      data.aws_dynamodb_table.blob_registry_table.arn,
      data.aws_dynamodb_table.consumer_table.arn,
      data.aws_dynamodb_table.customer_table.arn,
      data.aws_dynamodb_table.delegation_table.arn,
      data.aws_dynamodb_table.space_metrics_table.arn,
      data.aws_dynamodb_table.admin_metrics_table.arn,
      data.aws_dynamodb_table.replica_table.arn,
      data.aws_dynamodb_table.revocation_table.arn,
      data.aws_dynamodb_table.storage_provider_table.arn,
      data.aws_dynamodb_table.subscription_table.arn,
      data.aws_dynamodb_table.space_diff_table.arn,
      data.aws_dynamodb_table.upload_table.arn,
    ]
  }
}
```
As is, this policy only allows the service to query all of the tables. I assume it will also need to write, so the policy will need to be tweaked to allow whatever access patterns the service requires.

Bear in mind that, aside from the required actions, access to a table's indices needs to be granted explicitly by adding them to the resources list (something like `"${data.aws_dynamodb_table.consumer_table.arn}/index/consumer"` should work; take a look at the etracker's external.tf file, which does something similar).
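To illustrate the shape of a tightened statement: a hedged sketch, where the extra actions and the `consumer` index name are assumptions for illustration, not the service's actual access patterns:

```hcl
data "aws_iam_policy_document" "task_upload_service_dynamodb_document" {
  statement {
    # Illustrative actions only; grant exactly what the service uses.
    actions = [
      "dynamodb:Query",
      "dynamodb:GetItem",
      "dynamodb:PutItem",
    ]
    resources = [
      data.aws_dynamodb_table.consumer_table.arn,
      # Indices are separate resources and must be listed explicitly;
      # the table ARN alone does not cover them.
      "${data.aws_dynamodb_table.consumer_table.arn}/index/consumer",
    ]
  }
}
```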
This PR adds storoku-based deployment files and configs to sprue. The deployment workflow is set up to deploy to the forge environments. DIDs and private keys are configured so the service can impersonate the upload-service of the corresponding environment.

Big caveat: these changes allow sprue to access the upload-service tables, but the infra will still be managed by w3infra. At some point, when the main service is sprue rather than the upload-service, we'll want to add the tables and buckets to sprue's deployment and import the existing infra, so that it can be managed from here. I didn't get time to do it myself (sorry), but this should be good enough for a while.