An example of a local deployment on KinD is provided https://github.com/camptocamp/devops-stack/tree/main/examples/kind[here]. Clone this repository and modify the files at your convenience.

In the folder, as in a standard https://developer.hashicorp.com/terraform/tutorials/modules/module#what-is-a-terraform-module[Terraform module], you will find the following files:

* *`terraform.tf`* - declaration of the Terraform providers used in this project as well as their configuration;
* *`locals.tf`* - local variables used by the DevOps Stack modules;
* *`main.tf`* - definition of all the deployed modules;
* *`s3_bucket.tf`* - configuration of the MinIO bucket, used as backend for Loki and Thanos;
* *`outputs.tf`* - the output variables of the DevOps Stack, e.g. credentials and the `.kubeconfig` file to use with `kubectl`.

== Requirements

On your local machine, you need to have the following tools installed:

* https://docs.docker.com/get-docker[Docker] to deploy the KinD containers;
* https://www.terraform.io/[Terraform] to provision the whole stack;
* https://kubernetes.io/docs/reference/kubectl/[`kubectl`] or https://github.com/derailed/k9s[`k9s`] to interact with your cluster.

== Specificities and explanations

=== Local Load Balancer

https://metallb.universe.tf/[MetalLB] is used as a load balancer for the cluster. This allows us to have a multi-node KinD cluster without the need to use Traefik in a single replica with a NodePort configuration.
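For illustration only, the MetalLB deployment could be declared as a Terraform module along these lines; the module source and the `subnet` variable below are assumptions for this sketch, not verbatim from the example:

[source,terraform]
----
# Hypothetical sketch of a MetalLB module declaration; the module source
# and the `subnet` variable name are assumptions for illustration.
module "metallb" {
  source = "git::https://github.com/camptocamp/devops-stack-module-metallb.git"

  # Address range handed out to LoadBalancer services. It must be a free
  # range inside the Docker network the KinD nodes are attached to, so
  # that the host can reach the assigned IPs.
  subnet = "172.18.1.0/24"
}
----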

=== OIDC authentication

IMPORTANT: The DevOps Stack modules are developed with OIDC in mind. In production, you should have an identity provider that supports OIDC and use it to authenticate to the DevOps Stack applications.

TIP: You can have a local variable containing the OIDC configuration properly structured for the DevOps Stack applications and simply use an external OIDC provider instead of using Keycloak. Check https://github.com/camptocamp/devops-stack-module-keycloak/blob/main/oidc_bootstrap/locals.tf[this `locals.tf` on the Keycloak module] for an example.

To quickly deploy a testing environment on KinD, you can use the Keycloak module, as shown in the example.

After deploying Keycloak, you can use the OIDC bootstrap module to create the Keycloak realm, groups, users, etc.

The `user_map` variable of that module allows you to create the OIDC users used to authenticate to the DevOps Stack applications. The module will generate a password for each user, which you can check after the deployment.

TIP: If you do not provide a value for the `user_map` variable, the module will create a user named `devopsadmin` with a random password.
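As an illustration, a `user_map` definition might be sketched like this; the attribute names are assumptions, so check the OIDC bootstrap module's variables for the authoritative schema:

[source,terraform]
----
# Hypothetical sketch of the user_map variable: one entry per OIDC user.
# The attribute names are assumptions for illustration.
user_map = {
  jdoe = {
    username   = "jdoe"
    email      = "john.doe@example.com"
    first_name = "John"
    last_name  = "Doe"
  }
}
----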

=== Self-signed SSL certificates

Since KinD is deployed on your machine, there is no easy way of creating valid SSL certificates for the ingresses using Let's Encrypt. As such, `cert-manager` is configured to use a self-signed Certificate Authority and the remaining modules are configured to ignore the SSL warnings/errors that are a consequence of that.

NOTE: When accessing the ingresses in your browser, you'll obviously see warnings saying that the certificate is not valid. You can safely ignore them.

== Deployment

1. Clone the repository and `cd` into the `examples/kind` folder.

2. Check out the modules you want to deploy in the `main.tf` file, and comment out the others;
+
TIP: You can also add your own Terraform modules in this file or any other file on the root folder. A good place to start writing your own module is to clone the https://github.com/camptocamp/devops-stack-module-template[devops-stack-module-template] repository and adapt it to your needs.

3. On the `oidc` module, adapt the `user_map` variable as you wish (please check the <<oidc-authentication,OIDC section>> for more information).

4. From the source of the example deployment, initialize Terraform, which downloads all the required providers and modules locally (they will be stored in the hidden folder `.terraform`);
+
[source,bash]
----
terraform init
----

5. Configure the variables in `locals.tf` to your preference:
+
[source,terraform]
----
include::example$deploy_examples/kind/locals.tf[]
----

6. Finally, run `terraform apply` and accept the proposed changes to create the Kubernetes nodes as Docker containers and populate them with our services;
+
[source,bash]
----
terraform apply
----

7. After the first deployment (please note the troubleshooting step related to Argo CD), you can go to the locals and enable the _ServiceMonitor_ boolean to activate the Prometheus exporters that will send metrics to Prometheus;
+
IMPORTANT: This flag needs to be set to `false` for the first bootstrap of the cluster, otherwise the applications will fail to deploy while the Custom Resource Definitions of the kube-prometheus-stack are not yet created.
+
NOTE: You can either set the flag to `true` in the `locals.tf` file or simply delete the line in the modules' declarations, since this variable is set to `true` by default on each module.
+
TIP: Take note of the local called `app_autosync`. If you set the condition of the ternary operator to `false`, you will disable the auto-sync for all the DevOps Stack modules. This allows you to choose when to manually sync the module on the Argo CD interface and is useful for troubleshooting purposes.
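The `app_autosync` local can be sketched as follows; the attribute names are assumptions modeled on Argo CD's automated sync policy, not verbatim from the example:

[source,terraform]
----
locals {
  # Flip the ternary condition to false to disable auto-sync for every
  # DevOps Stack module and sync them manually in the Argo CD interface.
  # The attribute names are assumptions modeled on Argo CD's automated
  # sync policy.
  app_autosync = true ? {
    allow_empty = false
    prune       = true
    self_heal   = true
  } : {}
}
----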

== Access the cluster and the DevOps Stack applications

Typically, the KinD Terraform provider used in our code already appends the credentials to your default kubeconfig, so you should be able to access the cluster right away.

Otherwise, you can use the content of the `kubernetes_kubeconfig` output to manually generate a kubeconfig file, or you can use the one automatically created in the root folder of the project.
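If you go the manual route, pointing your tools at the generated file could look like this sketch; the output name `kubernetes_kubeconfig` comes from the example, while the file name `kind.kubeconfig` is an arbitrary choice:

[source,shell]
----
# The terraform step is shown as a comment because it requires the
# deployed stack:
#   terraform output -raw kubernetes_kubeconfig > kind.kubeconfig
# Point kubectl/k9s at the generated file for the current session.
export KUBECONFIG="$PWD/kind.kubeconfig"
echo "$KUBECONFIG"
----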

Then you can use `kubectl` or `k9s` to interact with the cluster:

[source,bash]
----
kubectl get nodes
----
As for the DevOps Stack applications, you can access them through the ingress domain that you can find in the `ingress_domain` output. If you used the code from the example without modifying the outputs, you will see something like this on your terminal after the `terraform apply` has done its job:

== Stop the cluster

To definitively stop the cluster with a single command (that is the reason we delete some resources from the state file), you can use the following command:

[source,bash]
----
rm terraform.state
----

== Conclusion

That's it, you have deployed the DevOps Stack locally! For more information, keep on reading the https://devops-stack.io/docs/latest/[documentation]. **You can explore the possibilities of each module and get the link to the source code on their respective documentation pages.**

== Troubleshooting

=== `connection_error` during the first deployment

In some cases, you could encounter an error like this during the first deployment:

[source,shell]
----
╷
│ Error: Error while waiting for application argocd to be created
│
│ with module.argocd.argocd_application.this,
│ on .terraform/modules/argocd/main.tf line 55, in resource "argocd_application" "this":
│ 55: resource "argocd_application" "this" {
│
│ error while waiting for application argocd to be synced and healthy: rpc error: code = Unavailable desc = connection error: desc = "transport: error while dialing: dial tcp 127.0.0.1:45729: connect: connection refused"
╵
----

This error is due to the way we provision Argo CD in the final steps of the deployment. We use the bootstrap Argo CD to deploy the final Argo CD module, which causes a redeployment of Argo CD and consequently a momentary loss of connection between the Argo CD Terraform provider and the Argo CD server.

*You can simply re-run the command `terraform apply` to finalize the bootstrap of the cluster.*

TIP: For more information about the Argo CD module, please refer to the xref:argocd:ROOT:README.adoc[respective documentation page].

=== Argo CD interface reload loop when clicking on login

If you encounter a loop when clicking on the login button on the Argo CD interface, you can try to delete the Argo CD server pod and let it be recreated.

TIP: For more information about the Argo CD module, please refer to the xref:argocd:ROOT:README.adoc[respective documentation page].

=== `loki-stack-promtail` pods stuck with status `CrashLoopBackOff`

You could stumble upon `loki-stack-promtail` stuck in a creation loop with the following logs:

[source]
----
level=error ts=2023-05-09T06:32:38.495673778Z caller=main.go:117 msg="error creating promtail" error="failed to make file target manager: too many open files"
Stream closed EOF for loki-stack/loki-stack-promtail-bxcmw (promtail)
----

If that's the case, you will have to increase the upper limit on the number of INotify instances that can be created per real user ID:

[source,bash]
----
# Increase the limit until next reboot
sudo sysctl fs.inotify.max_user_instances=512
# Increase the limit permanently (run this command as root)
----
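A common way to make the change above permanent (an assumption here, not from the original text) is a drop-in file under `/etc/sysctl.d/`. The sketch below stages the file without root:

[source,shell]
----
# Compose the sysctl drop-in that raises the limit permanently. As root
# you would place it at /etc/sysctl.d/99-inotify.conf and run
# `sysctl --system` to load it; the path and file name are assumptions.
conf_line="fs.inotify.max_user_instances=512"
staging_file="$(mktemp)"
printf '%s\n' "$conf_line" > "$staging_file"
cat "$staging_file"
----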