Web application for the simple, structured and harmonized recording of geological assets.
This document provides a high-level overview of how to get the project up and running; for more specific information, please refer to the Developer Docs and the Release Process.
The following components must be installed on the development computer:
✔️ Git
✔️ Docker
✔️ Node.js 22 LTS
Follow these steps to set up the development environment on your local machine:
Install node modules:
```bash
npm install
```

Generate the Prisma client for database access:

```bash
npm run prisma -- generate
```

Files are stored in a local MinIO instance, which is accessed by the backend, frontend and OCR service using an access key with a "god" policy. The `createbuckets` service is used to create the storage as well as the access keys, so you should not have to worry about that.
If you need to change the user or password, be sure to:

- Adjust the environment variables in `development/.env` and, if need be, the directory parameters in `docker-compose.yaml`
- Adjust the environment variables in `apps/server-asset-sg/.env` so they match the changes to Docker Compose
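For orientation, a `createbuckets`-style service usually runs the MinIO client (`mc`) once against the MinIO container. The following is an illustrative sketch only, not the project's actual service definition; service names and flags are assumptions, so check `development/docker-compose.yaml` for the real setup:

```yaml
# Illustrative sketch of a bucket-provisioning service (not the actual definition).
createbuckets:
  image: minio/mc
  depends_on:
    - minio
  entrypoint: >
    /bin/sh -c "
    mc alias set local http://minio:9000 $${STORAGE_USER} $${STORAGE_PASSWORD};
    mc mb --ignore-existing local/asset-sg;
    exit 0;
    "
```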
By default, our user is not an admin and cannot access admin functionality. In order to do so, log in once and then manually set the user to admin directly in the database:

```bash
docker compose exec db sh -c 'psql --dbname=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/${POSTGRES_DB} -c "UPDATE asset_user SET is_admin = true WHERE email = '\''admin@assets.swissgeol.ch'\''"'
```

Start development services:
```bash
cd development
docker compose up
```

Important: make sure that both `development/volumes/elasticsearch` and `development/volumes/pgadmin` are writable by the container user, otherwise the Elasticsearch and pgAdmin containers will be stuck in an infinite boot loop.
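On Linux, a quick way to ensure this is to create the directories up front and make them world-writable. This is a pragmatic sketch for a local development machine, not a recommendation for shared systems:

```bash
# Create the bind-mount directories and make them writable for the container users.
mkdir -p development/volumes/elasticsearch development/volumes/pgadmin
chmod -R a+rwX development/volumes/elasticsearch development/volumes/pgadmin
```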
Start the application:
```bash
npm run start

# Or individually:
npm run start:server
npm run start:client
```

| 🔖App/Service | 🔗Link | 🧞User | 🔐Password |
|---|---|---|---|
| Assets (client) | localhost:4200 | admin | admin |
| Assets REST API (server) | localhost:3333/api/ | n/a | n/a |
| PostgreSQL (docker) | localhost:5432 | .env $DB_USER | .env $DB_PASSWORD |
| Elasticsearch (docker) | localhost:9200 | n/a | n/a |
| Kibana (docker) | localhost:5601 | n/a | n/a |
| pgAdmin (docker) | localhost:5051 | .env $PGADMIN_EMAIL | .env $PGADMIN_PASSWORD |
| MinIO (docker) | localhost:9001 | .env $STORAGE_USER | .env $STORAGE_PASSWORD |
| smtp4dev (docker) | localhost:5000 | n/a | n/a |
| oidc-server (docker) | localhost:4011 | n/a | n/a |
| createbuckets (docker) | n/a | n/a | n/a |
You can dump data from a remote environment into a local file so you can initialize your development database with it.
To do so, use the following commands.
Be aware that you need to manually insert the {DB_*} values beforehand.
```bash
cd development
docker compose exec db sh -c 'pg_dump --dbname=postgresql://{DB_USERNAME}:{DB_PASSWORD}@{DB_HOST}:5432/{DB_DATABASE} --data-only --exclude-table _prisma_migrations --exclude-table asset_test --exclude-table asset_user_bak -n public > /dump.sql'
```

The export will output warnings related to circular foreign-key constraints. These can be safely ignored.
The export will only contain the database's data, not its structure. Data related to the authentication process is also excluded, so we don't run into conflicts when using a different eIAM provider.
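Instead of editing the command by hand each time, the `{DB_*}` placeholders can be filled in from shell variables. A small sketch; the values below are purely hypothetical stand-ins that you would replace with your remote environment's credentials:

```bash
# Hypothetical values – replace with the remote environment's credentials.
DB_USERNAME=assets
DB_PASSWORD=secret
DB_HOST=db.example.com
DB_DATABASE=assets

# Build the connection URL once, then reuse it in the pg_dump invocation:
DB_URL="postgresql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:5432/${DB_DATABASE}"
echo "$DB_URL"
```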
To import the dumped data, run the following commands. Make sure to start your database service beforehand.
```bash
# Reset the database:
npm run prisma -- migrate reset -f

# Switch to the directory containing the database's `docker-compose.yml`:
cd development

# Remove the initial workgroup, as it will collide with the import:
docker compose exec db sh -c 'psql --dbname=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/${POSTGRES_DB} -c "DELETE FROM workgroup"'

# Import example data:
docker compose exec db sh -c 'psql --dbname=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/${POSTGRES_DB} -v ON_ERROR_STOP=1 -f /dump.sql -f /docker-entrypoint-initdb.d/01_roles.sql'
```

You will need to manually sync the data to Elasticsearch via the admin panel in the web UI, reachable through the cogwheel symbol in the bottom left of the GUI (requires `is_admin` to be true for the given user).
Tests execute automatically on every push to the Git repository.
The local tests require running instances of both PostgreSQL and Elasticsearch. Make sure that your local development environment is fully shut down, and then run the test services:
```bash
cd development
docker compose down
docker compose -f docker-compose.test.yml up
```

Then run all tests:

```bash
npm run test
```

It is also possible to run only specific tests:
```bash
# Run only the server tests:
nx run server-asset-sg:test

# Run only a specific test suite:
nx run server-asset-sg:test -t 'AssetRepo'

# Run only a specific, nested test suite:
nx run server-asset-sg:test -t 'AssetRepo create'
```

The file `apps/server-asset-sg/.env` contains the configuration for the SwissGeol Assets server.
By default, it is configured to work with the Docker services found in `development/docker-compose.yml`.
| Variable | Example | Description |
|---|---|---|
| FRONTEND_URL | http://localhost:4200 | Public URL of the SwissGeol Asset web client. |
| S3_REGION | local | Region of the S3 instance. |
| S3_ENDPOINT | http://localhost:9000 | URL to the S3 instance. |
| S3_ACCESS_KEY_ID | AP6wpeXraSc0IH4d42IN | Access Key for the S3 instance. |
| S3_SECRET_ACCESS_KEY | fSx5Bfib0OeAyG1mwtslKA04Qj6oPStLcpnkACmF | Secret Key for the S3 instance. |
| S3_BUCKET_NAME | asset-sg | S3 bucket name. |
| S3_ASSET_FOLDER | asset-sg | Folder within the S3 bucket into which objects are stored. |
| DATABASE_URL | postgres://postgres:postgres@localhost:5432/postgres?schema=public | PostgreSQL access URL. |
| OAUTH_ISSUER | http://localhost:4011 | OAuth API URL. |
| OAUTH_CLIENT_ID | assets | Name of the client within the OAuth issuer. |
| OAUTH_SCOPE | openid profile email cognito | The scopes requested on each OAuth login. |
| OAUTH_SHOW_DEBUG_INFO | true | Whether to show debug info about the OAuth process. |
| OAUTH_TOKEN_ENDPOINT | http://localhost:4011/connect/token | The endpoint at which OAuth tokens can be fetched. |
| OAUTH_AUTHORIZED_GROUPS | assets.swissgeol | The name of the groups (comma-separated) which should grant access. |
| OCR_URL | | Leave empty. |
| OCR_CALLBACK_URL | | Leave empty. |
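Put together, an `apps/server-asset-sg/.env` built from the example values above would look like this (the values are the table's examples for local development, not production credentials):

```env
FRONTEND_URL=http://localhost:4200
S3_REGION=local
S3_ENDPOINT=http://localhost:9000
S3_ACCESS_KEY_ID=AP6wpeXraSc0IH4d42IN
S3_SECRET_ACCESS_KEY=fSx5Bfib0OeAyG1mwtslKA04Qj6oPStLcpnkACmF
S3_BUCKET_NAME=asset-sg
S3_ASSET_FOLDER=asset-sg
DATABASE_URL=postgres://postgres:postgres@localhost:5432/postgres?schema=public
OAUTH_ISSUER=http://localhost:4011
OAUTH_CLIENT_ID=assets
OAUTH_SCOPE=openid profile email cognito
OAUTH_SHOW_DEBUG_INFO=true
OAUTH_TOKEN_ENDPOINT=http://localhost:4011/connect/token
OAUTH_AUTHORIZED_GROUPS=assets.swissgeol
OCR_URL=
OCR_CALLBACK_URL=
```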
The file development/.env configures secrets for the services used in local development.
By default, these secrets align with the server's configuration.
| Variable | Description |
|---|---|
| STORAGE_USER | Username for the MinIO container. |
| STORAGE_PASSWORD | Password for the MinIO container. |
| DB_USER | Username for the PostgreSQL container. |
| DB_PASSWORD | Password for the PostgreSQL container. |
| PGADMIN_EMAIL | Email for the PgAdmin container. |
| PGADMIN_PASSWORD | Password for the PgAdmin container. |
```bash
git update-index --no-skip-worktree development/.env
git update-index --no-skip-worktree apps/server-asset-sg/.env.local
```

Then, after having committed your changes, mark the files as skipped again:

```bash
git update-index --skip-worktree development/.env
git update-index --skip-worktree apps/server-asset-sg/.env.local
```

Note that worktree modifications need to be committed, just like file changes.
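To see which tracked files currently carry the skip-worktree flag, `git ls-files -v` marks them with an uppercase `S`. A self-contained demonstration in a throwaway repository:

```bash
# Demonstrate the skip-worktree flag in a throwaway repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email dev@example.com
git config user.name dev

echo "KEY=value" > .env
git add .env
git commit -qm "add .env"

# Mark the file so local modifications are ignored by `git status`:
git update-index --skip-worktree .env

# Skip-worktree entries are prefixed with an uppercase `S`:
git ls-files -v .env   # prints: S .env
```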
This project uses Prisma as its database ORM. The schema can be found at libs/persistence/prisma/schema.prisma.
To run prisma commands, you can use the following shortcut:
```bash
npm run prisma -- {command}
```

To apply all new migrations to your local database, run the following command:

```bash
npm run prisma -- migrate deploy
```

If your local database's state can't be migrated from, you might have to fully reset your database. This can happen when manually modifying the database, and will remove all local data.

```bash
npm run prisma -- migrate reset
```

To create a new migration, first modify the Prisma schema. Then, create a shadow database:

```bash
docker compose exec db sh -c 'psql --dbname=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/${POSTGRES_DB} -c "CREATE DATABASE postgres_shadow;"'
```

Afterward, you can generate the new migration:

```bash
npm run prisma -- migrate dev --create-only
```

You can find and modify your new migration within the `migrations/` directory.
The finalized migration can be applied like any other migration:

```bash
npm run prisma -- migrate deploy
```

Migrations are automatically applied to any environment running the `swissgeol-assets-api` Docker image.
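As an example of the schema-change step, a new migration might begin with an edit like the following. The model and field names here are purely hypothetical; consult `libs/persistence/prisma/schema.prisma` for the real models:

```prisma
// Hypothetical change in libs/persistence/prisma/schema.prisma:
model Asset {
  // ... existing fields ...
  description String? // new, optional column to be created by the migration
}
```

Running `npm run prisma -- migrate dev --create-only` afterwards would then produce a migration containing the corresponding `ALTER TABLE ... ADD COLUMN` statement.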
File processing runs through an external OCR service, followed by data extraction using AI models. The API version can be specified in the server configuration via a class property (or set to `undefined`, as is currently the case for OCR).
Interfaces for the data extraction service can be generated by running:

```bash
npm run generate-interfaces:data-extraction
```

This command fetches the OpenAPI specification from the local data extraction service and generates the interfaces. Note that when the version changes, the command needs to be adjusted as well.
We use Semantic Versioning for versioning. The new version number is determined by this script. Merges to develop will either increase the minor version or the prerelease version based on the previous release version. Merges to main from develop will increase the minor version if it is not the same as the minor version on develop, otherwise the prerelease version is increased. Merges to main from hotfix branches will increase the patch version. Releases to prod will remove the prerelease tag from the release candidate.
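The develop-branch rule above can be sketched as a small shell function. This is a simplification, not the actual script; in particular, the `-rc.N` prerelease naming is an assumption:

```bash
# Simplified sketch of the develop-branch versioning rule (not the actual script):
# - previous version has no prerelease tag -> bump the minor, start a new prerelease
# - previous version has a prerelease tag  -> bump the prerelease counter
next_develop_version() {
  version=$1
  case "$version" in
    *-*) # e.g. 1.4.0-rc.2 -> 1.4.0-rc.3
      base=${version%.*}
      n=${version##*.}
      echo "$base.$((n + 1))"
      ;;
    *)   # e.g. 1.3.5 -> 1.4.0-rc.1
      major=${version%%.*}
      rest=${version#*.}
      minor=${rest%%.*}
      echo "$major.$((minor + 1)).0-rc.1"
      ;;
  esac
}

next_develop_version "1.3.5"      # → 1.4.0-rc.1
next_develop_version "1.4.0-rc.2" # → 1.4.0-rc.3
```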
> **Note**
> Currently, this describes the process for setting up a new environment on an existing cluster, e.g. setting up VIEW on the existing INT cluster. Setting up a completely new cluster requires more setup, which is currently not documented.
Setting up a new environment requires multiple steps.

1. Order the necessary infrastructure (e.g. AWS resources, Vault secrets, etc.) via a ticket to the Cloudteam (see e.g. this ticket as a guideline).
2. Create all necessary secrets in the Vault; consider existing environments for reference.
   - Create a folder for the new environment, e.g. `assets-ext` (Important: Depending on which environment you are setting up, you might implicitly set up multiple environments, as e.g. `VIEW` directly also requires `SYNC-VIEW`).
   - Create the `namespace` secret, containing the namespace you are deploying.
   - Create the `helm_secrets` secret (see ./k8s secrets for reference).
     - As for elastic encryption: the key is identical within environments and can be reused if it already exists.
     - For the elastic password, generate a secure one with enough characters and use it in the actual apps.
   - Create the `helm_values` secret (see ./k8s secrets for reference).
     - OAuth settings can easily be found in the AWS console.
     - Lots of settings are identical between e.g. PROD and INT; if copy-pasting, be very aware that they sometimes differ in suffixes only.
3. Make sure to store ALL secrets you get in step 1 and use in step 2 in KeePass as well.
4. Except for VIEW, populate the following tables once by dumping them from INT and importing them into the new environment: `legal_item_code`, `nat_rel_item`, `man_cat_label_item`, `contact_kind_item`, `language_item`, `asset_format_item`, `asset_kind_item`.
5. Except for VIEW, copy all existing contacts from a reference environment (if there is one) to the new environment, if desired.
6. Make sure that there are no GitHub Action conditions that prevent the new environment from being deployed, e.g. by adding the new environment to the condition in the deploy action.
7. If everything is set, run `Deploy k8s` for the given cluster.
8. If the action succeeds, check if everything runs smoothly:
   - Access via browser, log in.
   - Make yourself admin directly in the database.
   - First: Trigger an Elasticsearch sync to make sure the search index is populated!
   - Try uploading a file and check its processing chain.
   - Trigger a cronjob manually via k8s to see if all sync jobs run properly.
9. If everything works, notify the customer and test in action.
- Service accounts do not have proper permissions for S3 buckets; this is evident from checking the logs of the OCR or data extraction services.
  - Solution: Inform the cloud team.