Commit add5d47

Switch from docker-compose to the one built into docker (docker compose) (#1065)

* Switch from docker-compose to the one built into docker (docker compose):
  GitHub removed docker-compose from their old images, so moving to
  docker compose seemed like the best solution.
* Remove version from docker-compose.yml

1 parent 2fda3e0 commit add5d47
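The rename is mechanical: every `docker-compose …` invocation becomes `docker compose …`, while filenames such as `docker-compose.yml` stay untouched. A rough sketch of that rewrite with GNU sed (the sample Makefile content below is illustrative, not taken from this commit):

```shell
# Create a small example Makefile that still uses the legacy binary name.
printf 'up:\n\tdocker-compose up\n\ndown:\n\tdocker-compose down\n' > Makefile.example

# Rewrite only command invocations: the trailing space in the pattern keeps
# file names like docker-compose.yml and docker-compose-scale.yml untouched.
# (-i is GNU sed's in-place flag; BSD sed needs -i '')
sed -i 's/docker-compose /docker compose /g' Makefile.example

cat Makefile.example
```

After such a pass, grepping for `docker-compose ` (with the trailing space) is an easy way to confirm no invocation was missed.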

20 files changed: 158 additions & 166 deletions

CONTRIBUTING.md
Lines changed: 1 addition & 1 deletion

@@ -89,7 +89,7 @@ tablespaces live in the same location across replicas. This necessitates
 matching directory structures across the nodes, and thus, multiple,
 simultaneously running containers.
 
-Interaction with each node is done using `docker-compose` commands. Refer to
+Interaction with each node is done using `docker compose` commands. Refer to
 the [Makefile](tests/tablespaces/Makefile) in the test directory for examples.
 
 To run the tests from the top-level directory:

README.md
Lines changed: 2 additions & 2 deletions

@@ -105,7 +105,7 @@ The main documentation for pg_auto_failover includes the following 3 tutorial:
 
 - The main [pg_auto_failover
   Tutorial](https://pg-auto-failover.readthedocs.io/en/main/tutorial.html)
-  uses docker-compose on your local computer to start multiple Postgres
+  uses docker compose on your local computer to start multiple Postgres
   nodes and implement your first failover.
 
 - The complete [pg_auto_failover Azure VM
@@ -117,7 +117,7 @@ The main documentation for pg_auto_failover includes the following 3 tutorial:
 
 - The [Citus Cluster Quick
   Start](https://pg-auto-failover.readthedocs.io/en/main/citus-quickstart.html)
-  tutorial uses docker-compose to create a full Citus cluster and guide
+  tutorial uses docker compose to create a full Citus cluster and guide
   you to a worker failover and then a coordinator failover.
 
 ## Reporting Security Issues

docker-compose.yml
Lines changed: 0 additions & 1 deletion

@@ -1,4 +1,3 @@
-version: "3.9" # optional since v1.27.0
 services:
   monitor:
     image: citusdata/pg_auto_failover:demo
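The deleted `version:` line reflects a Compose behavior change: as the file's own comment notes, the top-level `version:` key has been optional since Docker Compose v1.27.0, and Compose v2 ignores it (recent releases warn that the attribute is obsolete). A compose file sketch without it, matching the shape of the file above:

```yaml
# Top-level "version:" key omitted on purpose: Compose v2 infers the
# file format and only warns if the key is present.
services:
  monitor:
    image: citusdata/pg_auto_failover:demo
```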

docs/citus-quickstart.rst
Lines changed: 36 additions & 36 deletions

@@ -8,7 +8,7 @@ workers. Every node will have a secondary for failover. We’ll simulate
 failure in the coordinator and worker nodes and see how the system continues
 to function.
 
-This tutorial uses `docker-compose`__ in order to separate the architecture
+This tutorial uses `docker compose`__ in order to separate the architecture
 design from some of the implementation details. This allows reasoning at
 the architecture level within this tutorial, and better see which software
 component needs to be deployed and run on which node.
@@ -24,7 +24,7 @@ pg_auto_failover to provide HA to a Citus formation.
 Pre-requisites
 --------------
 
-When using `docker-compose` we describe a list of services, each service may
+When using `docker compose` we describe a list of services, each service may
 run on one or more nodes, and each service just runs a single isolated
 process in a container.
 
@@ -45,12 +45,12 @@ or run the docker build command directly:
 $ cd pg_auto_failover/docs/cluster
 
 $ docker build -t pg_auto_failover:citus -f Dockerfile ../..
-$ docker-compose build
+$ docker compose build
 
 Our first Citus Cluster
 -----------------------
 
-To create a cluster we use the following docker-compose definition:
+To create a cluster we use the following docker compose definition:
 
 .. literalinclude:: citus/docker-compose-scale.yml
    :language: yaml
@@ -62,7 +62,7 @@ following command:
 
 ::
 
-$ docker-compose up --scale coord=2 --scale worker=6
+$ docker compose up --scale coord=2 --scale worker=6
 
 The command above starts the services up. The command also specifies a
 ``--scale`` option that is different for each service. We need:
@@ -91,11 +91,11 @@ that we don't have to. In a High Availability setup, every node should be
 ready to be promoted primary at any time, so knowing which node in a group
 is assigned primary first is not very interesting.
 
-While the cluster is being provisionned by docker-compose, you can run the
+While the cluster is being provisionned by docker compose, you can run the
 following command and have a dynamic dashboard to follow what's happening.
 The following command is like ``top`` for pg_auto_failover::
 
-$ docker-compose exec monitor pg_autoctl watch
+$ docker compose exec monitor pg_autoctl watch
 
 Because the ``pg_basebackup`` operation that is used to create the secondary
 nodes takes some time when using Citus, because of the first CHECKPOINT
@@ -104,7 +104,7 @@ might see the following output:
 
 .. code-block:: bash
 
-$ docker-compose exec monitor pg_autoctl show state
+$ docker compose exec monitor pg_autoctl show state
 Name | Node | Host:Port | TLI: LSN | Connection | Reported State | Assigned State
 ---------+-------+-------------------+----------------+--------------+---------------------+--------------------
 coord0a | 0/1 | cd52db444544:5432 | 1: 0/200C4A0 | read-write | wait_primary | wait_primary
@@ -121,7 +121,7 @@ same command again for stable result:
 
 .. code-block:: bash
 
-$ docker-compose exec monitor pg_autoctl show state
+$ docker compose exec monitor pg_autoctl show state
 
 Name | Node | Host:Port | TLI: LSN | Connection | Reported State | Assigned State
 ---------+-------+-------------------+----------------+--------------+---------------------+--------------------
@@ -142,7 +142,7 @@ and supports reads and writes.
 We can review the available Postgres URIs with the
 :ref:`pg_autoctl_show_uri` command::
 
-$ docker-compose exec monitor pg_autoctl show uri
+$ docker compose exec monitor pg_autoctl show uri
 Type | Name | Connection String
 -------------+---------+-------------------------------
 monitor | monitor | postgres://autoctl_node@552dd89d5d63:5432/pg_auto_failover?sslmode=require
@@ -153,15 +153,15 @@ can run a psql session right from the coordinator container:
 
 .. code-block:: bash
 
-$ docker-compose exec coord psql -d citus -c 'select * from citus_get_active_worker_nodes();'
+$ docker compose exec coord psql -d citus -c 'select * from citus_get_active_worker_nodes();'
 node_name | node_port
 --------------+-----------
 dae7c062e2c1 | 5432
 5bf86f9ef784 | 5432
 c23610380024 | 5432
 (3 rows)
 
-We are now reaching the limits of using a simplified docker-compose setup.
+We are now reaching the limits of using a simplified docker compose setup.
 When using the ``--scale`` option, it is not possible to give a specific
 hostname to each running node, and then we get a randomly generated string
 instead or useful node names such as ``worker1a`` or ``worker3b``.
@@ -170,14 +170,14 @@ Create a Citus Cluster, take two
 --------------------------------
 
 In order to implement the following architecture, we need to introduce a
-more complex docker-compose file than in the previous section.
+more complex docker compose file than in the previous section.
 
 .. figure:: ./tikz/arch-citus.svg
    :alt: pg_auto_failover architecture with a Citus formation
 
   pg_auto_failover architecture with a Citus formation
 
-This time we create a cluster using the following docker-compose definition:
+This time we create a cluster using the following docker compose definition:
 
 .. literalinclude:: citus/docker-compose.yml
    :language: yaml
@@ -200,13 +200,13 @@ We start this cluster with a simplified command line this time:
 
 ::
 
-$ docker-compose up
+$ docker compose up
 
 And this time we get the following cluster as a result:
 
 ::
 
-$ docker-compose exec monitor pg_autoctl show state
+$ docker compose exec monitor pg_autoctl show state
 Name | Node | Host:Port | TLI: LSN | Connection | Reported State | Assigned State
 ---------+-------+---------------+----------------+--------------+---------------------+--------------------
 coord0a | 0/3 | coord0a:5432 | 1: 0/312B040 | read-write | primary | primary
@@ -223,7 +223,7 @@ And then we have the following application connection string to use:
 
 ::
 
-$ docker-compose exec monitor pg_autoctl show uri
+$ docker compose exec monitor pg_autoctl show uri
 Type | Name | Connection String
 -------------+---------+-------------------------------
 monitor | monitor | postgres://autoctl_node@f0135b83edcd:5432/pg_auto_failover?sslmode=require
@@ -234,7 +234,7 @@ sense:
 
 ::
 
-$ docker-compose exec coord0a psql -d citus -c 'select * from citus_get_active_worker_nodes()'
+$ docker compose exec coord0a psql -d citus -c 'select * from citus_get_active_worker_nodes()'
 node_name | node_port
 -----------+-----------
 worker1a | 5432
@@ -264,7 +264,7 @@ then. With pg_auto_failover, this is as easy as doing:
 
 ::
 
-$ docker-compose exec monitor pg_autoctl perform failover --group 2
+$ docker compose exec monitor pg_autoctl perform failover --group 2
 15:40:03 9246 INFO Waiting 60 secs for a notification with state "primary" in formation "default" and group 2
 15:40:03 9246 INFO Listening monitor notifications about state changes in formation "default" and group 2
 15:40:03 9246 INFO Following table displays times when notifications are received
@@ -291,7 +291,7 @@ the resulting cluster state.
 
 ::
 
-$ docker-compose exec monitor pg_autoctl show state
+$ docker compose exec monitor pg_autoctl show state
 Name | Node | Host:Port | TLI: LSN | Connection | Reported State | Assigned State
 ---------+-------+---------------+----------------+--------------+---------------------+--------------------
 coord0a | 0/3 | coord0a:5432 | 1: 0/312ADA8 | read-write | primary | primary
@@ -307,7 +307,7 @@ Which seen from the Citus coordinator, looks like the following:
 
 ::
 
-$ docker-compose exec coord0a psql -d citus -c 'select * from citus_get_active_worker_nodes()'
+$ docker compose exec coord0a psql -d citus -c 'select * from citus_get_active_worker_nodes()'
 node_name | node_port
 -----------+-----------
 worker1a | 5432
@@ -322,7 +322,7 @@ Let's create a database schema with a single distributed table.
 
 ::
 
-$ docker-compose exec app psql
+$ docker compose exec app psql
 
 .. code-block:: sql
 
@@ -357,10 +357,10 @@ registers the secondary instead.
 
 # the pg_auto_failover keeper process will be unable to resurrect
 # the worker node if pg_control has been removed
-$ docker-compose exec worker1a rm /tmp/pgaf/global/pg_control
+$ docker compose exec worker1a rm /tmp/pgaf/global/pg_control
 
 # shut it down
-$ docker-compose exec worker1a /usr/lib/postgresql/14/bin/pg_ctl stop -D /tmp/pgaf
+$ docker compose exec worker1a /usr/lib/postgresql/14/bin/pg_ctl stop -D /tmp/pgaf
 
 The keeper will attempt to start worker 1a three times and then report the
 failure to the monitor, who promotes worker1b to replace worker1a. Citus
@@ -372,7 +372,7 @@ and worker3a:
 
 ::
 
-$ docker-compose exec app psql -c 'select * from master_get_active_worker_nodes();'
+$ docker compose exec app psql -c 'select * from master_get_active_worker_nodes();'
 
 node_name | node_port
 -----------+-----------
@@ -385,7 +385,7 @@ Finally, verify that all rows of data are still present:
 
 ::
 
-$ docker-compose exec app psql -c 'select count(*) from companies;'
+$ docker compose exec app psql -c 'select count(*) from companies;'
 count
 -------
 75
@@ -398,7 +398,7 @@ secondary.
 
 ::
 
-$ docker-compose exec monitor pg_autoctl show state
+$ docker compose exec monitor pg_autoctl show state
 Name | Node | Host:Port | TLI: LSN | Connection | Reported State | Assigned State
 ---------+-------+---------------+----------------+--------------+---------------------+--------------------
 coord0a | 0/3 | coord0a:5432 | 1: 0/3178B20 | read-write | primary | primary
@@ -424,15 +424,15 @@ primary coordinator, we can watch how the monitor promotes the secondary.
 
 ::
 
-$ docker-compose exec coord0a rm /tmp/pgaf/global/pg_control
-$ docker-compose exec coord0a /usr/lib/postgresql/14/bin/pg_ctl stop -D /tmp/pgaf
+$ docker compose exec coord0a rm /tmp/pgaf/global/pg_control
+$ docker compose exec coord0a /usr/lib/postgresql/14/bin/pg_ctl stop -D /tmp/pgaf
 
 After some time, coordinator A's keeper heals it, and the cluster converges
 in this state:
 
 ::
 
-$ docker-compose exec monitor pg_autoctl show state
+$ docker compose exec monitor pg_autoctl show state
 Name | Node | Host:Port | TLI: LSN | Connection | Reported State | Assigned State
 ---------+-------+---------------+----------------+--------------+---------------------+--------------------
 coord0a | 0/3 | coord0a:5432 | 2: 0/50000D8 | read-only | secondary | secondary
@@ -450,7 +450,7 @@ node too:
 
 ::
 
-$ docker-compose exec app psql -c 'select count(*) from companies;'
+$ docker compose exec app psql -c 'select count(*) from companies;'
 count
 -------
 75
@@ -462,24 +462,24 @@ To dispose of the entire tutorial environment, just use the following command:
 
 ::
 
-$ docker-compose down
+$ docker compose down
 
 Next steps
 ----------
 
 As mentioned in the first section of this tutorial, the way we use
-docker-compose here is not meant to be production ready. It's useful to
+docker compose here is not meant to be production ready. It's useful to
 understand and play with a distributed system such as Citus though, and
 makes it simple to introduce faults and see how the pg_auto_failover High
 Availability reacts to those faults.
 
 One obvious missing element to better test the system is the lack of
-persistent volumes in our docker-compose based test rig. It is possible to
-create external volumes and use them for each node in the docker-compose
+persistent volumes in our docker compose based test rig. It is possible to
+create external volumes and use them for each node in the docker compose
 definition. This allows restarting nodes over the same data set.
 
 See the command :ref:`pg_autoctl_do_tmux_compose_session` for more details
-about how to run a docker-compose test environment with docker-compose,
+about how to run a docker compose test environment with docker compose,
 including external volumes for each node.
 
 Now is a good time to go read `Citus Documentation`__ too, so that you know

docs/citus/Makefile
Lines changed: 8 additions & 8 deletions

@@ -6,27 +6,27 @@ scale: build scale-down scale-up ;
 
 build:
 	docker build -t $(CONTAINER_NAME) -f Dockerfile ../..
-	docker-compose build
+	docker compose build
 
 scale-up:
-	docker-compose -f docker-compose-scale.yml up --scale coord=2 --scale worker=6
+	docker compose -f docker-compose-scale.yml up --scale coord=2 --scale worker=6
 
 scale-down:
-	docker-compose -f docker-compose-scale.yml down
+	docker compose -f docker-compose-scale.yml down
 
 up:
-	docker-compose up
+	docker compose up
 
 down:
-	docker-compose down
+	docker compose down
 
 state:
-	docker-compose exec monitor pg_autoctl show state
+	docker compose exec monitor pg_autoctl show state
 
 failover:
-	docker-compose exec monitor pg_autoctl perform failover --group 1
+	docker compose exec monitor pg_autoctl perform failover --group 1
 
 nodes:
-	docker-compose exec coord psql -d analytics -c 'table pg_dist_node'
+	docker compose exec coord psql -d analytics -c 'table pg_dist_node'
 
 .PHONY: all scale build scale-up scale-down up down state failover nodes
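Makefile targets like the ones above assume the Compose v2 plugin is available (`docker compose`). On machines that still only ship the standalone binary, a small wrapper can pick whichever entry point exists; the `COMPOSE` variable here is a hypothetical helper, not something this repository defines:

```shell
# Prefer the docker CLI plugin ("docker compose"), fall back to the
# legacy standalone binary, and report what was found.
if docker compose version >/dev/null 2>&1; then
    COMPOSE="docker compose"
elif command -v docker-compose >/dev/null 2>&1; then
    COMPOSE="docker-compose"
else
    COMPOSE=""
fi
echo "compose entry point: ${COMPOSE:-none}"
```

A Makefile could then invoke `$(COMPOSE) up` and friends, keeping one set of targets working across both environments.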

docs/citus/app.py
Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 
 #
 # Write a Python application that knows how to exit gracefully when
-# receiving SIGTERM (at docker-compose down time), but doesn't know how to
+# receiving SIGTERM (at docker compose down time), but doesn't know how to
 # do much else.
 #
 
docs/citus/docker-compose.yml
Lines changed: 0 additions & 2 deletions

@@ -1,5 +1,3 @@
-version: "3.9" # optional since v1.27.0
-
 x-coord: &coordinator
   image: pg_auto_failover:citus
   environment:

docs/how-to.rst
Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ To understand which replication settings to use in your case, see
 To follow a step by step guide that you can reproduce on your own Azure
 subscription and create a production Postgres setup from VMs, see the
 :ref:`azure_tutorial` section. To get started with a local setup using
-docker-compose to run multiple Postgres nodes, see the :ref:`tutorial`
+docker compose to run multiple Postgres nodes, see the :ref:`tutorial`
 section.
 
 To understand how to setup pg_auto_failover in a way that is compliant with
