tooling: dev docker db script and readme nits (#6456)

* tooling: dev docker db script and readme nits

* nits

hugocasa
2025-08-25 14:38:35 +02:00
committed by GitHub
parent 3845744492
commit 1074b22900
3 changed files with 75 additions and 62 deletions


@@ -332,40 +332,40 @@ you to have it being synced automatically everyday.
## Environment Variables
| Environment Variable name | Default | Description | Api Server/Worker/All |
| ----------------------------------- | -------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| DATABASE_URL | | The Postgres database url. | All |
| WORKER_GROUP                        | default                          | The worker group the worker belongs to and gets its configuration pulled from                                                                                                                        | Worker                |
| MODE                                | standalone                       | The mode of the binary. Possible values: standalone, worker, server, agent                                                                                                                           | All                   |
| METRICS_ADDR | None | (ee only) The socket addr at which to expose Prometheus metrics at the /metrics path. Set to "true" to expose it on port 8001 | All |
| JSON_FMT | false | Output the logs in json format instead of logfmt | All |
| BASE_URL                            | http://localhost:8000            | The base url that is exposed publicly to access your instance. It is overridden by the instance settings if any.                                                                                     | Server                |
| ZOMBIE_JOB_TIMEOUT                  | 30                               | The timeout after which a job is considered to be a zombie if the worker did not send pings about processing the job (the server checks for zombie jobs every 30s)                                    | Server                |
| RESTART_ZOMBIE_JOBS | true | If true then a zombie job is restarted (in-place with the same uuid and some logs), if false the zombie job is failed | Server |
| SLEEP_QUEUE | 50 | The number of ms to sleep in between the last check for new jobs in the DB. It is multiplied by NUM_WORKERS such that in average, for one worker instance, there is one pull every SLEEP_QUEUE ms. | Worker |
| KEEP_JOB_DIR | false | Keep the job directory after the job is done. Useful for debugging. | Worker |
| LICENSE_KEY (EE only) | None | License key checked at startup for the Enterprise Edition of Windmill | Worker |
| SLACK_SIGNING_SECRET | None | The signing secret of your Slack app. See [Slack documentation](https://api.slack.com/authentication/verifying-requests-from-slack) | Server |
| COOKIE_DOMAIN | None | The domain of the cookie. If not set, the cookie will be set by the browser based on the full origin | Server |
| DENO_PATH | /usr/bin/deno | The path to the deno binary. | Worker |
| PYTHON_PATH                         |                                  | The path to the python binary if you do not want it managed by uv.                                                                                                                                   | Worker                |
| GO_PATH | /usr/bin/go | The path to the go binary. | Worker |
| GOPRIVATE | | The GOPRIVATE env variable to use private go modules | Worker |
| GOPROXY | | The GOPROXY env variable to use | Worker |
| NETRC                               |                                  | The netrc content to use with a private go registry                                                                                                                                                  | Worker                |
| PY_CONCURRENT_DOWNLOADS | 20 | Sets the maximum number of in-flight concurrent python downloads that windmill will perform at any given time. | Worker |
| PATH | None | The path environment variable, usually inherited | Worker |
| HOME                                | None                             | The home directory to use for Go and Bash, usually inherited                                                                                                                                         | Worker                |
| DATABASE_CONNECTIONS | 50 (Server)/3 (Worker) | The max number of connections in the database connection pool | All |
| SUPERADMIN_SECRET                   | None                             | A token that lets the caller act as a virtual superadmin (superadmin@windmill.dev)                                                                                                                   | Server                |
| TIMEOUT_WAIT_RESULT                 | 20                               | The number of seconds to wait before timing out on the 'run_wait_result' endpoint                                                                                                                    | Worker                |
| QUEUE_LIMIT_WAIT_RESULT             | None                             | The max number of jobs in the queue before immediately rejecting the request on the 'run_wait_result' endpoint. Takes precedence over the query arg. If none is specified, there is no limit.         | Worker                |
| DENO_AUTH_TOKENS | None | Custom DENO_AUTH_TOKENS to pass to worker to allow the use of private modules | Worker |
| DISABLE_RESPONSE_LOGS | false | Disable response logs | Server |
| CREATE_WORKSPACE_REQUIRE_SUPERADMIN | true | If true, only superadmins can create new workspaces | Server |
| MIN_FREE_DISK_SPACE_MB | 15000 | Minimum amount of free space on worker. Sends critical alert if worker has less free space. | Worker |
| RUN_UPDATE_CA_CERTIFICATE_AT_START | false | If true, runs CA certificate update command at startup before other initialization | All |
| RUN_UPDATE_CA_CERTIFICATE_PATH | /usr/sbin/update-ca-certificates | Path to the CA certificate update command/script to run when RUN_UPDATE_CA_CERTIFICATE_AT_START is true | All |
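For example, a minimal standalone configuration might look like this (the values are illustrative; only `DATABASE_URL` is strictly required):

```shell
# Minimal environment for a standalone instance (example values)
export DATABASE_URL="postgres://postgres:changeme@localhost:5432/windmill"
export MODE="standalone"      # standalone runs both server and worker
export WORKER_GROUP="default" # group this worker pulls its configuration from
export JSON_FMT="false"       # keep logfmt output; "true" switches to JSON logs
```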
## Run a local dev setup
@@ -374,7 +374,6 @@ Using [Nix](./frontend/README_DEV.md#nix) (Recommended).
See the [./frontend/README_DEV.md](./frontend/README_DEV.md) file for all
running options.
### only Frontend
This will use the backend of <https://app.windmill.dev> but your own frontend
@@ -400,29 +399,27 @@ npm run generate-backend-client-mac
See the [./frontend/README_DEV.md](./frontend/README_DEV.md) file for all
running options.
1. Start a local Postgres database, for instance with the `start-dev-db.sh` script, which will make a database available at `postgres://postgres:changeme@localhost:5432/windmill`.
Then run the migrations using the following commands:
```
cargo install sqlx-cli
# run from the backend/ folder, where the migrations live
env DATABASE_URL=<YOUR_DATABASE_URL> sqlx migrate run
```
This will also avoid compile-time issues with sqlx's `query!` macro.
2. (optional, Linux only) Install [nsjail](https://github.com/google/nsjail) and have it accessible in
your PATH
3. Install bun, deno and python3 (+ any languages you want to use), and have the bins at `/usr/bin/bun`, `/usr/bin/deno`, and
   `/usr/local/bin/python3`, or set the corresponding environment variables.
4. (optional) Install the [lld linker](https://lld.llvm.org/)
5. Go to `frontend/`:
1. `npm install`, `npm run generate-backend-client` then `REMOTE=http://localhost:8000 npm run dev`
2. You might need to set some extra heap space for the node runtime
`export NODE_OPTIONS="--max-old-space-size=4096"`
3. Create an empty `frontend/build` folder using `mkdir frontend/build`
6. Go to `backend/`:
1. `env DATABASE_URL=<YOUR_DATABASE_URL> RUST_LOG=info cargo run`
2. You can specify any feature flag you want to enable, for example `cargo run --features python` to enable the python executor.
7. Et voilà, windmill should be available at `http://localhost:3000`
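If your database does not match the default connection string, the pieces assemble in the usual Postgres URL shape; a small sketch (the helper name is made up for illustration, defaults match `start-dev-db.sh`):

```shell
# Build a Postgres connection URL from its parts.
make_db_url() {
  local user="${1:-postgres}" pass="${2:-changeme}" host="${3:-localhost}"
  local port="${4:-5432}" db="${5:-windmill}"
  printf 'postgres://%s:%s@%s:%s/%s\n' "$user" "$pass" "$host" "$port" "$db"
}

# e.g. export DATABASE_URL="$(make_db_url)"
```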
## Contributors


@@ -15,44 +15,45 @@ That's it! You are ready to go.
> Using **direnv** is highly recommended, since it can load the shell automatically based on your CWD. It can also give you hints.
### Development
```bash
# enter a dev shell containing all necessary packages. `direnv allow` if direnv is installed.
nix develop
## or skip this if you have `direnv`

# Start db (if not started already)
sudo docker compose up db -d

# run the frontend.
wm

# In another shell:
nix develop
## or skip this if you have `direnv`
cd backend
# You don't need to install anything extra. All dependencies are already in place!
cargo run --features all_languages
```
The default proxy is set up to use the local backend: <http://localhost:8000>.
### wm-\* Commands
Nix shell provides you with several helper commands prefixed with `wm-`
```bash
# Start minio server (implements S3)
wm-minio
# Note: You will need access to the EE private repo in order to compile; don't forget the "enterprise" and "parquet" features as well.

# Generate keys for local dev.
wm-minio-keys
# Minio data as well as generated keys are stored in `backend/.minio-data`
```
You can read about all the other commands individually in [flake.nix](../flake.nix).
### dev.nu
@@ -106,7 +107,7 @@ REMOTE=http://localhost REMOTE_LSP=http://localhost npm run dev
Sometimes it is important to build the Docker image for your branch locally. It is a crucial part of testing, since the local environment may differ from the containerized one.
That's why we provide [docker/dev.nu](../docker/dev.nu). It is a helper that can build images locally and execute them: it can build the image and run it from the local repository.
@@ -150,7 +151,7 @@ If you develop wasm parser for new language you can also pass `--wasm-pkg <langu
In the root folder:
```bash
./start-dev-db.sh
```
In the backend folder:
@@ -218,6 +219,7 @@ nix flake update # update the lock file.
```
Some cargo dependencies use fixed git revisions, which are also fixed in `flake.nix`:
```nix
outputHashes = {
"php-parser-rs-0.1.3" = "sha256-ZeI3KgUPmtjlRfq6eAYveqt8Ay35gwj6B9iOQRjQa9A=";
@@ -226,6 +228,7 @@ outputHashes = {
```
When updating a revision, replace the incorrect `sha256` with `pkgs.lib.fakeHash`:
```diff
- "php-parser-rs-0.1.3" = "sha256-ZeI3KgUPmtjlRfq6eAYveqt8Ay35gwj6B9iOQRjQa9A=";
+ "php-parser-rs-0.1.3" = pkgs.lib.fakeHash;
```
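The rebuild then fails with a hash mismatch that reports the correct value; paste that reported hash back in place of the fake one (the hash below is a placeholder, not the real value):

```diff
- "php-parser-rs-0.1.3" = pkgs.lib.fakeHash;
+ "php-parser-rs-0.1.3" = "sha256-<hash reported by the failed build>";
```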

start-dev-db.sh Executable file

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -e
# Start a disposable Postgres 16 container for local development.
# --rm removes the container on stop; data persists in the named volume,
# so `docker stop windmill-db-dev` is safe.
docker run --rm -d \
  --name windmill-db-dev \
  -e POSTGRES_PASSWORD=changeme \
  -e POSTGRES_DB=windmill \
  -p 5432:5432 \
  -v windmill_db_data:/var/lib/postgresql/data \
  postgres:16
echo "PostgreSQL database started successfully!"
echo "Connection string: postgres://postgres:changeme@localhost:5432/windmill"