
feat: storokufy sprue #14

Open
volmedo wants to merge 7 commits into main from vic/chore/storokufy

Conversation


@volmedo volmedo commented Mar 20, 2026

This PR adds storoku-based deployment files and configs to sprue. The deployment workflow is set up to deploy to the forge environments. DIDs and private keys are configured for the service to impersonate the upload-service of the corresponding environment.

Big caveat: these changes allow sprue to access the upload-service tables, but the infra will still be handled by w3infra. At some point, when the main service is sprue and not the upload-service, we'll want to add the tables and buckets to sprue's deployment and import the existing infra, so that it can be managed from here. I didn't get time to do it myself (sorry), but this should be good enough for a while.

@volmedo volmedo requested a review from alanshaw March 20, 2026 17:02
@volmedo volmedo self-assigned this Mar 20, 2026
Comment thread pkg/dynamo/store.go
Comment on lines +55 to +58
opts := []func(*awsconfig.LoadOptions) error{}

if cfg.Region != "" {
opts = append(opts, awsconfig.WithRegion(cfg.Region))

@alanshaw you'll need to do something like this when creating the S3 client. AWS injects the region into the ECS task's metadata, so by default the task will attempt to access resources in the region it's running in.

I kept the config property because I don't know if it is used when the service is deployed as a container in a local smelt stack. If it isn't, it can be removed.
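For reference, a minimal sketch of what the equivalent S3 client setup could look like, mirroring the option-slice pattern in this store (the `s3` import path and the error-wrapping style are assumptions, not part of this PR):

```go
import (
	"context"
	"fmt"

	awsconfig "github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// newS3Client is a hypothetical helper: it only pins the region when one is
// configured, so the ECS task metadata default still applies otherwise.
func newS3Client(ctx context.Context, region string) (*s3.Client, error) {
	opts := []func(*awsconfig.LoadOptions) error{}
	if region != "" {
		opts = append(opts, awsconfig.WithRegion(region))
	}
	awsCfg, err := awsconfig.LoadDefaultConfig(ctx, opts...)
	if err != nil {
		return nil, fmt.Errorf("loading AWS config: %w", err)
	}
	return s3.NewFromConfig(awsCfg), nil
}
```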

Comment on lines +106 to +127
data "aws_iam_policy_document" "task_upload_service_dynamodb_query_document" {
statement {
actions = [
"dynamodb:Query",
]
resources = [
data.aws_dynamodb_table.agent_index_table.arn,
data.aws_dynamodb_table.blob_registry_table.arn,
data.aws_dynamodb_table.consumer_table.arn,
data.aws_dynamodb_table.customer_table.arn,
data.aws_dynamodb_table.delegation_table.arn,
data.aws_dynamodb_table.space_metrics_table.arn,
data.aws_dynamodb_table.admin_metrics_table.arn,
data.aws_dynamodb_table.replica_table.arn,
data.aws_dynamodb_table.revocation_table.arn,
data.aws_dynamodb_table.storage_provider_table.arn,
data.aws_dynamodb_table.subscription_table.arn,
data.aws_dynamodb_table.space_diff_table.arn,
data.aws_dynamodb_table.upload_table.arn,
]
}
}

As is, this policy only allows the service to query all the tables. I assume it will also need to write, so the policy will need to be tweaked to allow whatever access patterns the service requires.

Bear in mind that, aside from the required actions, access to indices on tables needs to be stated explicitly by adding them to the resources list (something like "${data.aws_dynamodb_table.consumer_table.arn}/index/consumer" should work; take a look at etracker's external.tf file, which does something similar).
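As a sketch, extending the statement's resources with index ARNs could look like this (the "consumer" index name is an assumption; use the real GSI names from the table definitions):

```hcl
# Hypothetical: grant Query on the table AND its index explicitly.
resources = [
  data.aws_dynamodb_table.consumer_table.arn,
  "${data.aws_dynamodb_table.consumer_table.arn}/index/consumer",
]
```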

@volmedo volmedo marked this pull request as ready for review March 20, 2026 17:12
