build/test/docs: Docker image, Darwin fix, logo

This commit is contained in:
2026-03-14 15:33:45 +01:00
parent 8c8d5a8abb
commit 708e26e4f4
13 changed files with 1151 additions and 83 deletions

20
.env.example Normal file

@@ -0,0 +1,20 @@
PARRHESIA_IMAGE=parrhesia:latest
PARRHESIA_HOST_PORT=4000
POSTGRES_DB=parrhesia
POSTGRES_USER=parrhesia
POSTGRES_PASSWORD=parrhesia
DATABASE_URL=ecto://parrhesia:parrhesia@db:5432/parrhesia
POOL_SIZE=20
# Optional runtime overrides:
# PARRHESIA_RELAY_URL=ws://localhost:4000/relay
# PARRHESIA_POLICIES_AUTH_REQUIRED_FOR_WRITES=false
# PARRHESIA_POLICIES_AUTH_REQUIRED_FOR_READS=false
# PARRHESIA_POLICIES_MIN_POW_DIFFICULTY=0
# PARRHESIA_FEATURES_VERIFY_EVENT_SIGNATURES=true
# PARRHESIA_METRICS_ENABLED_ON_MAIN_ENDPOINT=true
# PARRHESIA_METRICS_PRIVATE_NETWORKS_ONLY=true
# PARRHESIA_METRICS_AUTH_TOKEN=
# PARRHESIA_EXTRA_CONFIG=/config/parrhesia.runtime.exs

332
README.md

@@ -1,5 +1,7 @@
# Parrhesia
<img alt="Parrhesia Logo" src="./docs/logo.svg" width="150" align="right">
Parrhesia is a Nostr relay server written in Elixir/OTP with PostgreSQL storage.
It exposes:
@@ -20,6 +22,7 @@ Current `supported_nips` list:
- Elixir `~> 1.19`
- Erlang/OTP 28
- PostgreSQL (18 used in the dev environment; 16+ recommended)
- Docker or Podman plus Docker Compose support if you want to run the published container image
---
@@ -65,78 +68,177 @@ ws://localhost:4000/relay
## Production configuration
### Minimal setup
Before a Nostr client can publish its first event successfully, make sure these pieces are in place:
1. PostgreSQL is reachable from Parrhesia.
Set `DATABASE_URL` and create/migrate the database with `Parrhesia.Release.migrate()` or `mix ecto.migrate`.
2. Parrhesia is reachable behind your reverse proxy.
Parrhesia itself listens on plain HTTP on port `4000`, and the reverse proxy is expected to terminate TLS and forward WebSocket traffic to `/relay`.
3. `:relay_url` matches the public relay URL clients should use.
Set `PARRHESIA_RELAY_URL` to the public relay URL exposed by the reverse proxy.
In the normal deployment model, this should be your public `wss://.../relay` URL.
4. The database schema is migrated before starting normal traffic.
The app image does not auto-run migrations on boot.
That is the actual minimum. With default policy settings, writes do not require auth, event signatures are verified, and no extra Nostr-specific bootstrap step is needed before posting ordinary events.
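The prerequisites above lend themselves to a quick mechanical pre-flight check. As an illustrative sketch (the `ecto://` URL shape comes from the examples in this README; this check is not part of Parrhesia itself):

```bash
# Illustrative pre-flight check for step 1: does DATABASE_URL have the
# ecto://USER:PASS@HOST/DATABASE shape used elsewhere in this README?
DATABASE_URL="${DATABASE_URL:-ecto://parrhesia:parrhesia@db:5432/parrhesia}"
case "$DATABASE_URL" in
  ecto://*:*@*/*) echo "DATABASE_URL looks well-formed" ;;
  *) echo "DATABASE_URL does not look like an ecto:// URL" >&2; exit 1 ;;
esac
```

The remaining steps (proxy reachability, `PARRHESIA_RELAY_URL`, migrations) are deployment-specific and are covered in the Deploy section.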
In `prod`, these environment variables are used:
- `DATABASE_URL` (**required**), e.g. `ecto://USER:PASS@HOST/parrhesia_prod`
- `POOL_SIZE` (optional, default `32`)
- `PORT` (optional, default `4000`)
- `PARRHESIA_*` runtime overrides for relay config, limits, policies, metrics, and features
- `PARRHESIA_EXTRA_CONFIG` (optional path to an extra runtime config file)
`config/runtime.exs` reads these values at runtime in production releases.
### Runtime env naming
For runtime overrides, use the `PARRHESIA_...` prefix:
- `PARRHESIA_RELAY_URL`
- `PARRHESIA_MODERATION_CACHE_ENABLED`
- `PARRHESIA_ENABLE_EXPIRATION_WORKER`
- `PARRHESIA_LIMITS_*`
- `PARRHESIA_POLICIES_*`
- `PARRHESIA_METRICS_*`
- `PARRHESIA_FEATURES_*`
- `PARRHESIA_METRICS_ENDPOINT_*`
Examples:
```bash
export PARRHESIA_POLICIES_AUTH_REQUIRED_FOR_WRITES=true
export PARRHESIA_FEATURES_VERIFY_EVENT_SIGNATURES=true
export PARRHESIA_METRICS_ALLOWED_CIDRS="10.0.0.0/8,192.168.0.0/16"
export PARRHESIA_LIMITS_OUTBOUND_OVERFLOW_STRATEGY=drop_oldest
```
For settings that are awkward to express as env vars, mount an extra config file and set `PARRHESIA_EXTRA_CONFIG` to its path inside the container.
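As a sketch of that workflow (the keyword shape shown for `:storage` is an assumption inferred from the config reference table, which marks `:storage.events` as "config-file override only"):

```bash
# Sketch: create an extra runtime config file and point PARRHESIA_EXTRA_CONFIG
# at it. The :storage keyword shape is an assumption, not a confirmed API.
cat > parrhesia.runtime.exs <<'EOF'
import Config

config :parrhesia,
  storage: [events: Parrhesia.Storage.Adapters.Postgres.Events]
EOF
export PARRHESIA_EXTRA_CONFIG="$PWD/parrhesia.runtime.exs"
echo "extra config at: $PARRHESIA_EXTRA_CONFIG"
```

In a container deployment the file would be mounted and the env var would use the in-container path.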
### Config reference
CSV env vars use comma-separated values. Boolean env vars accept `1/0`, `true/false`, `yes/no`, or `on/off`.
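The boolean rule can be illustrated with a small shell re-implementation (illustrative only; the relay does this in the `bool_env` helper in `config/runtime.exs`):

```bash
# Shell re-implementation of the boolean env parsing rule above.
parse_bool() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    1|true|yes|on)  echo true ;;
    0|false|no|off) echo false ;;
    *) echo "not a boolean: $1" >&2; return 1 ;;
  esac
}
parse_bool YES   # -> true
parse_bool off   # -> false
```

Anything outside the accepted set is rejected rather than silently defaulted, matching the `raise` in `bool_env`.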
#### Top-level `:parrhesia`
| Atom key | ENV | Default | Notes |
| --- | --- | --- | --- |
| `:relay_url` | `PARRHESIA_RELAY_URL` | `ws://localhost:4000/relay` | Advertised relay URL and auth relay tag target |
| `:moderation_cache_enabled` | `PARRHESIA_MODERATION_CACHE_ENABLED` | `true` | Toggle moderation cache |
| `:enable_expiration_worker` | `PARRHESIA_ENABLE_EXPIRATION_WORKER` | `true` | Toggle background expiration worker |
| `:limits` | `PARRHESIA_LIMITS_*` | see table below | Runtime override group |
| `:policies` | `PARRHESIA_POLICIES_*` | see table below | Runtime override group |
| `:metrics` | `PARRHESIA_METRICS_*` | see table below | Runtime override group |
| `:features` | `PARRHESIA_FEATURES_*` | see table below | Runtime override group |
| `:storage.events` | `-` | `Parrhesia.Storage.Adapters.Postgres.Events` | Config-file override only |
| `:storage.moderation` | `-` | `Parrhesia.Storage.Adapters.Postgres.Moderation` | Config-file override only |
| `:storage.groups` | `-` | `Parrhesia.Storage.Adapters.Postgres.Groups` | Config-file override only |
| `:storage.admin` | `-` | `Parrhesia.Storage.Adapters.Postgres.Admin` | Config-file override only |
#### `Parrhesia.Repo`
| Atom key | ENV | Default | Notes |
| --- | --- | --- | --- |
| `:url` | `DATABASE_URL` | required | Example: `ecto://USER:PASS@HOST/DATABASE` |
| `:pool_size` | `POOL_SIZE` | `32` | DB connection pool size |
| `:queue_target` | `DB_QUEUE_TARGET_MS` | `1000` | Ecto queue target in ms |
| `:queue_interval` | `DB_QUEUE_INTERVAL_MS` | `5000` | Ecto queue interval in ms |
| `:types` | `-` | `Parrhesia.PostgresTypes` | Internal config-file setting |
#### `Parrhesia.Web.Endpoint`
| Atom key | ENV | Default | Notes |
| --- | --- | --- | --- |
| `:port` | `PORT` | `4000` | Main HTTP/WebSocket listener |
#### `Parrhesia.Web.MetricsEndpoint`
| Atom key | ENV | Default | Notes |
| --- | --- | --- | --- |
| `:enabled` | `PARRHESIA_METRICS_ENDPOINT_ENABLED` | `false` | Enables dedicated metrics listener |
| `:ip` | `PARRHESIA_METRICS_ENDPOINT_IP` | `127.0.0.1` | IPv4 only |
| `:port` | `PARRHESIA_METRICS_ENDPOINT_PORT` | `9568` | Dedicated metrics port |
#### `:limits`
| Atom key | ENV | Default |
| --- | --- | --- |
| `:max_frame_bytes` | `PARRHESIA_LIMITS_MAX_FRAME_BYTES` | `1048576` |
| `:max_event_bytes` | `PARRHESIA_LIMITS_MAX_EVENT_BYTES` | `262144` |
| `:max_filters_per_req` | `PARRHESIA_LIMITS_MAX_FILTERS_PER_REQ` | `16` |
| `:max_filter_limit` | `PARRHESIA_LIMITS_MAX_FILTER_LIMIT` | `500` |
| `:max_subscriptions_per_connection` | `PARRHESIA_LIMITS_MAX_SUBSCRIPTIONS_PER_CONNECTION` | `32` |
| `:max_event_future_skew_seconds` | `PARRHESIA_LIMITS_MAX_EVENT_FUTURE_SKEW_SECONDS` | `900` |
| `:max_event_ingest_per_window` | `PARRHESIA_LIMITS_MAX_EVENT_INGEST_PER_WINDOW` | `120` |
| `:event_ingest_window_seconds` | `PARRHESIA_LIMITS_EVENT_INGEST_WINDOW_SECONDS` | `1` |
| `:auth_max_age_seconds` | `PARRHESIA_LIMITS_AUTH_MAX_AGE_SECONDS` | `600` |
| `:max_outbound_queue` | `PARRHESIA_LIMITS_MAX_OUTBOUND_QUEUE` | `256` |
| `:outbound_drain_batch_size` | `PARRHESIA_LIMITS_OUTBOUND_DRAIN_BATCH_SIZE` | `64` |
| `:outbound_overflow_strategy` | `PARRHESIA_LIMITS_OUTBOUND_OVERFLOW_STRATEGY` | `:close` |
| `:max_negentropy_payload_bytes` | `PARRHESIA_LIMITS_MAX_NEGENTROPY_PAYLOAD_BYTES` | `4096` |
| `:max_negentropy_sessions_per_connection` | `PARRHESIA_LIMITS_MAX_NEGENTROPY_SESSIONS_PER_CONNECTION` | `8` |
| `:max_negentropy_total_sessions` | `PARRHESIA_LIMITS_MAX_NEGENTROPY_TOTAL_SESSIONS` | `10000` |
| `:negentropy_session_idle_timeout_seconds` | `PARRHESIA_LIMITS_NEGENTROPY_SESSION_IDLE_TIMEOUT_SECONDS` | `60` |
| `:negentropy_session_sweep_interval_seconds` | `PARRHESIA_LIMITS_NEGENTROPY_SESSION_SWEEP_INTERVAL_SECONDS` | `10` |
#### `:policies`
| Atom key | ENV | Default |
| --- | --- | --- |
| `:auth_required_for_writes` | `PARRHESIA_POLICIES_AUTH_REQUIRED_FOR_WRITES` | `false` |
| `:auth_required_for_reads` | `PARRHESIA_POLICIES_AUTH_REQUIRED_FOR_READS` | `false` |
| `:min_pow_difficulty` | `PARRHESIA_POLICIES_MIN_POW_DIFFICULTY` | `0` |
| `:accept_ephemeral_events` | `PARRHESIA_POLICIES_ACCEPT_EPHEMERAL_EVENTS` | `true` |
| `:mls_group_event_ttl_seconds` | `PARRHESIA_POLICIES_MLS_GROUP_EVENT_TTL_SECONDS` | `300` |
| `:marmot_require_h_for_group_queries` | `PARRHESIA_POLICIES_MARMOT_REQUIRE_H_FOR_GROUP_QUERIES` | `true` |
| `:marmot_group_max_h_values_per_filter` | `PARRHESIA_POLICIES_MARMOT_GROUP_MAX_H_VALUES_PER_FILTER` | `32` |
| `:marmot_group_max_query_window_seconds` | `PARRHESIA_POLICIES_MARMOT_GROUP_MAX_QUERY_WINDOW_SECONDS` | `2592000` |
| `:marmot_media_max_imeta_tags_per_event` | `PARRHESIA_POLICIES_MARMOT_MEDIA_MAX_IMETA_TAGS_PER_EVENT` | `8` |
| `:marmot_media_max_field_value_bytes` | `PARRHESIA_POLICIES_MARMOT_MEDIA_MAX_FIELD_VALUE_BYTES` | `1024` |
| `:marmot_media_max_url_bytes` | `PARRHESIA_POLICIES_MARMOT_MEDIA_MAX_URL_BYTES` | `2048` |
| `:marmot_media_allowed_mime_prefixes` | `PARRHESIA_POLICIES_MARMOT_MEDIA_ALLOWED_MIME_PREFIXES` | `[]` |
| `:marmot_media_reject_mip04_v1` | `PARRHESIA_POLICIES_MARMOT_MEDIA_REJECT_MIP04_V1` | `true` |
| `:marmot_push_server_pubkeys` | `PARRHESIA_POLICIES_MARMOT_PUSH_SERVER_PUBKEYS` | `[]` |
| `:marmot_push_max_relay_tags` | `PARRHESIA_POLICIES_MARMOT_PUSH_MAX_RELAY_TAGS` | `16` |
| `:marmot_push_max_payload_bytes` | `PARRHESIA_POLICIES_MARMOT_PUSH_MAX_PAYLOAD_BYTES` | `65536` |
| `:marmot_push_max_trigger_age_seconds` | `PARRHESIA_POLICIES_MARMOT_PUSH_MAX_TRIGGER_AGE_SECONDS` | `120` |
| `:marmot_push_require_expiration` | `PARRHESIA_POLICIES_MARMOT_PUSH_REQUIRE_EXPIRATION` | `true` |
| `:marmot_push_max_expiration_window_seconds` | `PARRHESIA_POLICIES_MARMOT_PUSH_MAX_EXPIRATION_WINDOW_SECONDS` | `120` |
| `:marmot_push_max_server_recipients` | `PARRHESIA_POLICIES_MARMOT_PUSH_MAX_SERVER_RECIPIENTS` | `1` |
| `:management_auth_required` | `PARRHESIA_POLICIES_MANAGEMENT_AUTH_REQUIRED` | `true` |
#### `:metrics`
| Atom key | ENV | Default |
| --- | --- | --- |
| `:enabled_on_main_endpoint` | `PARRHESIA_METRICS_ENABLED_ON_MAIN_ENDPOINT` | `true` |
| `:public` | `PARRHESIA_METRICS_PUBLIC` | `false` |
| `:private_networks_only` | `PARRHESIA_METRICS_PRIVATE_NETWORKS_ONLY` | `true` |
| `:allowed_cidrs` | `PARRHESIA_METRICS_ALLOWED_CIDRS` | `[]` |
| `:auth_token` | `PARRHESIA_METRICS_AUTH_TOKEN` | `nil` |
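For example, `:allowed_cidrs` takes the CSV form described above (values illustrative; how the CIDR list interacts with `:private_networks_only` is relay policy, only the env format is shown here):

```bash
# Illustrative metrics env vars: a boolean plus a CSV list of CIDR ranges.
export PARRHESIA_METRICS_PRIVATE_NETWORKS_ONLY=true
export PARRHESIA_METRICS_ALLOWED_CIDRS="10.0.0.0/8,192.168.0.0/16"
# The CSV parsing splits on commas:
echo "$PARRHESIA_METRICS_ALLOWED_CIDRS" | tr ',' '\n'
```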
#### `:features`
| Atom key | ENV | Default |
| --- | --- | --- |
| `:verify_event_signatures` | `PARRHESIA_FEATURES_VERIFY_EVENT_SIGNATURES` | `true` |
| `:nip_45_count` | `PARRHESIA_FEATURES_NIP_45_COUNT` | `true` |
| `:nip_50_search` | `PARRHESIA_FEATURES_NIP_50_SEARCH` | `true` |
| `:nip_77_negentropy` | `PARRHESIA_FEATURES_NIP_77_NEGENTROPY` | `true` |
| `:marmot_push_notifications` | `PARRHESIA_FEATURES_MARMOT_PUSH_NOTIFICATIONS` | `false` |
#### Extra runtime config
| Atom key | ENV | Default | Notes |
| --- | --- | --- | --- |
| extra runtime config file | `PARRHESIA_EXTRA_CONFIG` | unset | Imports an additional runtime `.exs` file |
---
## Deploy
@@ -150,15 +252,15 @@ export POOL_SIZE=20
mix deps.get --only prod
mix compile
mix release
_build/prod/rel/parrhesia/bin/parrhesia eval "Parrhesia.Release.migrate()"
_build/prod/rel/parrhesia/bin/parrhesia foreground
```
For systemd/process managers, run the release command in foreground mode.
### Option B: Nix release package (`default.nix`)
Build:
@@ -168,6 +270,110 @@ nix-build
Run the built release from `./result/bin/parrhesia` (release command interface).
### Option C: Docker image via Nix flake
Build the image tarball:
```bash
nix build .#dockerImage
# or with explicit build target:
nix build .#packages.x86_64-linux.dockerImage
```
Load it into Docker:
```bash
docker load < result
```
Run database migrations:
```bash
docker run --rm \
-e DATABASE_URL="ecto://USER:PASS@HOST/parrhesia_prod" \
parrhesia:latest \
eval "Parrhesia.Release.migrate()"
```
Start the relay:
```bash
docker run --rm \
-p 4000:4000 \
-e DATABASE_URL="ecto://USER:PASS@HOST/parrhesia_prod" \
-e POOL_SIZE=20 \
parrhesia:latest
```
### Option D: Docker Compose with PostgreSQL
The repo includes [`compose.yaml`](./compose.yaml) and [`.env.example`](./.env.example) so Docker users can run Postgres and Parrhesia together.
Set up the environment file:
```bash
cp .env.example .env
```
If you are building locally from source, build and load the image first:
```bash
nix build .#dockerImage
docker load < result
```
Then start the stack:
```bash
docker compose up -d db
docker compose run --rm migrate
docker compose up -d parrhesia
```
The relay will be available on:
```text
ws://localhost:4000/relay
```
Notes:
- `compose.yaml` keeps PostgreSQL in a separate container; the Parrhesia image only runs the app release.
- The container listens on port `4000`; use `PARRHESIA_HOST_PORT` if you want a different published host port.
- Migrations are run explicitly through the one-shot `migrate` service instead of on every app boot.
- Common runtime overrides can go straight into `.env`; see [`.env.example`](./.env.example) for examples.
- For more specialized overrides, mount a file and set `PARRHESIA_EXTRA_CONFIG=/path/in/container/runtime.exs`.
- When a GHCR image is published, set `PARRHESIA_IMAGE=ghcr.io/<owner>/parrhesia:<tag>` in `.env` and reuse the same compose flow.
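For instance, a `.env` that publishes on a different host port and requires auth for writes could add (names from `.env.example`; values illustrative):

```bash
# Append illustrative runtime overrides to .env (creates the file if absent).
cat >> .env <<'EOF'
PARRHESIA_HOST_PORT=8080
PARRHESIA_POLICIES_AUTH_REQUIRED_FOR_WRITES=true
EOF
grep '^PARRHESIA_' .env
```

After editing `.env`, recreate the `parrhesia` service so the new values take effect.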
---
## Benchmark
The benchmark compares Parrhesia against [`strfry`](https://github.com/hoytech/strfry) and [`nostr-rs-relay`](https://sr.ht/~gheartsfield/nostr-rs-relay/) using [`nostr-bench`](https://github.com/rnostr/nostr-bench).
Run it with:
```bash
mix bench
```
Current comparison results from [BENCHMARK.md](./BENCHMARK.md):
| metric | parrhesia | strfry | nostr-rs-relay | strfry/parrhesia | nostr-rs/parrhesia |
| --- | ---: | ---: | ---: | ---: | ---: |
| connect avg latency (ms) ↓ | 13.50 | 3.00 | 2.00 | **0.22x** | **0.15x** |
| connect max latency (ms) ↓ | 22.50 | 5.50 | 3.00 | **0.24x** | **0.13x** |
| echo throughput (TPS) ↑ | 80385.00 | 61673.00 | 164516.00 | 0.77x | **2.05x** |
| echo throughput (MiB/s) ↑ | 44.00 | 34.45 | 90.10 | 0.78x | **2.05x** |
| event throughput (TPS) ↑ | 2000.00 | 3404.50 | 788.00 | **1.70x** | 0.39x |
| event throughput (MiB/s) ↑ | 1.30 | 2.20 | 0.50 | **1.69x** | 0.38x |
| req throughput (TPS) ↑ | 3664.00 | 1808.50 | 877.50 | 0.49x | 0.24x |
| req throughput (MiB/s) ↑ | 20.75 | 11.75 | 2.45 | 0.57x | 0.12x |
Higher is better for `↑` metrics. Lower is better for `↓` metrics.
(Results from a Linux container on a 6-core Intel i5-8400T with NVMe drive)
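The ratio columns match the competitor's value divided by Parrhesia's value; as a quick arithmetic check against two rows of the table:

```bash
# Reproduce two of the ratio cells: competitor value / parrhesia value.
awk 'BEGIN {
  printf "strfry/parrhesia connect avg latency: %.2fx\n", 3.00 / 13.50
  printf "nostr-rs/parrhesia echo TPS:          %.2fx\n", 164516 / 80385
}'
```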
---
## Development quality checks
@@ -178,13 +384,13 @@ Before opening a PR:
mix precommit
```
Additional external CLI end-to-end checks with `nak`:
```bash
mix test.nak_e2e
```
For Marmot client end-to-end checks (TypeScript/Node suite using `marmot-ts`, included in `precommit`):
```bash
mix test.marmot_e2e

42
compose.yaml Normal file

@@ -0,0 +1,42 @@
services:
db:
image: postgres:17
restart: unless-stopped
environment:
POSTGRES_DB: ${POSTGRES_DB:-parrhesia}
POSTGRES_USER: ${POSTGRES_USER:-parrhesia}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-parrhesia}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
interval: 5s
timeout: 5s
retries: 12
volumes:
- postgres-data:/var/lib/postgresql/data
migrate:
image: ${PARRHESIA_IMAGE:-parrhesia:latest}
profiles: ["tools"]
restart: "no"
depends_on:
db:
condition: service_healthy
environment:
DATABASE_URL: ${DATABASE_URL:-ecto://parrhesia:parrhesia@db:5432/parrhesia}
POOL_SIZE: ${POOL_SIZE:-20}
command: ["eval", "Parrhesia.Release.migrate()"]
parrhesia:
image: ${PARRHESIA_IMAGE:-parrhesia:latest}
restart: unless-stopped
depends_on:
db:
condition: service_healthy
environment:
DATABASE_URL: ${DATABASE_URL:-ecto://parrhesia:parrhesia@db:5432/parrhesia}
POOL_SIZE: ${POOL_SIZE:-20}
ports:
- "${PARRHESIA_HOST_PORT:-4000}:4000"
volumes:
postgres-data:


@@ -1,33 +1,373 @@
import Config
string_env = fn name, default ->
case System.get_env(name) do
nil -> default
"" -> default
value -> value
end
end
int_env = fn name, default ->
case System.get_env(name) do
nil -> default
value -> String.to_integer(value)
end
end
bool_env = fn name, default ->
case System.get_env(name) do
nil ->
default
value ->
case String.downcase(value) do
"1" -> true
"true" -> true
"yes" -> true
"on" -> true
"0" -> false
"false" -> false
"no" -> false
"off" -> false
_other -> raise "environment variable #{name} must be a boolean value"
end
end
end
csv_env = fn name, default ->
case System.get_env(name) do
nil ->
default
value ->
value
|> String.split(",", trim: true)
|> Enum.map(&String.trim/1)
|> Enum.reject(&(&1 == ""))
end
end
outbound_overflow_strategy_env = fn name, default ->
case System.get_env(name) do
nil ->
default
"close" ->
:close
"drop_oldest" ->
:drop_oldest
"drop_newest" ->
:drop_newest
_other ->
raise "environment variable #{name} must be one of: close, drop_oldest, drop_newest"
end
end
ipv4_env = fn name, default ->
case System.get_env(name) do
nil ->
default
value ->
case String.split(value, ".", parts: 4) do
[a, b, c, d] ->
octets = Enum.map([a, b, c, d], &String.to_integer/1)
if Enum.all?(octets, &(&1 >= 0 and &1 <= 255)) do
List.to_tuple(octets)
else
raise "environment variable #{name} must be a valid IPv4 address"
end
_other ->
raise "environment variable #{name} must be a valid IPv4 address"
end
end
end
if config_env() == :prod do
database_url =
System.get_env("DATABASE_URL") ||
raise "environment variable DATABASE_URL is missing. Example: ecto://USER:PASS@HOST/DATABASE"
repo_defaults = Application.get_env(:parrhesia, Parrhesia.Repo, [])
relay_url_default = Application.get_env(:parrhesia, :relay_url)
moderation_cache_enabled_default =
Application.get_env(:parrhesia, :moderation_cache_enabled, true)
enable_expiration_worker_default =
Application.get_env(:parrhesia, :enable_expiration_worker, true)
limits_defaults = Application.get_env(:parrhesia, :limits, [])
policies_defaults = Application.get_env(:parrhesia, :policies, [])
metrics_defaults = Application.get_env(:parrhesia, :metrics, [])
features_defaults = Application.get_env(:parrhesia, :features, [])
metrics_endpoint_defaults = Application.get_env(:parrhesia, Parrhesia.Web.MetricsEndpoint, [])
default_pool_size = Keyword.get(repo_defaults, :pool_size, 32)
default_queue_target = Keyword.get(repo_defaults, :queue_target, 1_000)
default_queue_interval = Keyword.get(repo_defaults, :queue_interval, 5_000)
pool_size = int_env.("POOL_SIZE", default_pool_size)
queue_target = int_env.("DB_QUEUE_TARGET_MS", default_queue_target)
queue_interval = int_env.("DB_QUEUE_INTERVAL_MS", default_queue_interval)
limits = [
max_frame_bytes:
int_env.(
"PARRHESIA_LIMITS_MAX_FRAME_BYTES",
Keyword.get(limits_defaults, :max_frame_bytes, 1_048_576)
),
max_event_bytes:
int_env.(
"PARRHESIA_LIMITS_MAX_EVENT_BYTES",
Keyword.get(limits_defaults, :max_event_bytes, 262_144)
),
max_filters_per_req:
int_env.(
"PARRHESIA_LIMITS_MAX_FILTERS_PER_REQ",
Keyword.get(limits_defaults, :max_filters_per_req, 16)
),
max_filter_limit:
int_env.(
"PARRHESIA_LIMITS_MAX_FILTER_LIMIT",
Keyword.get(limits_defaults, :max_filter_limit, 500)
),
max_subscriptions_per_connection:
int_env.(
"PARRHESIA_LIMITS_MAX_SUBSCRIPTIONS_PER_CONNECTION",
Keyword.get(limits_defaults, :max_subscriptions_per_connection, 32)
),
max_event_future_skew_seconds:
int_env.(
"PARRHESIA_LIMITS_MAX_EVENT_FUTURE_SKEW_SECONDS",
Keyword.get(limits_defaults, :max_event_future_skew_seconds, 900)
),
max_event_ingest_per_window:
int_env.(
"PARRHESIA_LIMITS_MAX_EVENT_INGEST_PER_WINDOW",
Keyword.get(limits_defaults, :max_event_ingest_per_window, 120)
),
event_ingest_window_seconds:
int_env.(
"PARRHESIA_LIMITS_EVENT_INGEST_WINDOW_SECONDS",
Keyword.get(limits_defaults, :event_ingest_window_seconds, 1)
),
auth_max_age_seconds:
int_env.(
"PARRHESIA_LIMITS_AUTH_MAX_AGE_SECONDS",
Keyword.get(limits_defaults, :auth_max_age_seconds, 600)
),
max_outbound_queue:
int_env.(
"PARRHESIA_LIMITS_MAX_OUTBOUND_QUEUE",
Keyword.get(limits_defaults, :max_outbound_queue, 256)
),
outbound_drain_batch_size:
int_env.(
"PARRHESIA_LIMITS_OUTBOUND_DRAIN_BATCH_SIZE",
Keyword.get(limits_defaults, :outbound_drain_batch_size, 64)
),
outbound_overflow_strategy:
outbound_overflow_strategy_env.(
"PARRHESIA_LIMITS_OUTBOUND_OVERFLOW_STRATEGY",
Keyword.get(limits_defaults, :outbound_overflow_strategy, :close)
),
max_negentropy_payload_bytes:
int_env.(
"PARRHESIA_LIMITS_MAX_NEGENTROPY_PAYLOAD_BYTES",
Keyword.get(limits_defaults, :max_negentropy_payload_bytes, 4096)
),
max_negentropy_sessions_per_connection:
int_env.(
"PARRHESIA_LIMITS_MAX_NEGENTROPY_SESSIONS_PER_CONNECTION",
Keyword.get(limits_defaults, :max_negentropy_sessions_per_connection, 8)
),
max_negentropy_total_sessions:
int_env.(
"PARRHESIA_LIMITS_MAX_NEGENTROPY_TOTAL_SESSIONS",
Keyword.get(limits_defaults, :max_negentropy_total_sessions, 10_000)
),
negentropy_session_idle_timeout_seconds:
int_env.(
"PARRHESIA_LIMITS_NEGENTROPY_SESSION_IDLE_TIMEOUT_SECONDS",
Keyword.get(limits_defaults, :negentropy_session_idle_timeout_seconds, 60)
),
negentropy_session_sweep_interval_seconds:
int_env.(
"PARRHESIA_LIMITS_NEGENTROPY_SESSION_SWEEP_INTERVAL_SECONDS",
Keyword.get(limits_defaults, :negentropy_session_sweep_interval_seconds, 10)
)
]
policies = [
auth_required_for_writes:
bool_env.(
"PARRHESIA_POLICIES_AUTH_REQUIRED_FOR_WRITES",
Keyword.get(policies_defaults, :auth_required_for_writes, false)
),
auth_required_for_reads:
bool_env.(
"PARRHESIA_POLICIES_AUTH_REQUIRED_FOR_READS",
Keyword.get(policies_defaults, :auth_required_for_reads, false)
),
min_pow_difficulty:
int_env.(
"PARRHESIA_POLICIES_MIN_POW_DIFFICULTY",
Keyword.get(policies_defaults, :min_pow_difficulty, 0)
),
accept_ephemeral_events:
bool_env.(
"PARRHESIA_POLICIES_ACCEPT_EPHEMERAL_EVENTS",
Keyword.get(policies_defaults, :accept_ephemeral_events, true)
),
mls_group_event_ttl_seconds:
int_env.(
"PARRHESIA_POLICIES_MLS_GROUP_EVENT_TTL_SECONDS",
Keyword.get(policies_defaults, :mls_group_event_ttl_seconds, 300)
),
marmot_require_h_for_group_queries:
bool_env.(
"PARRHESIA_POLICIES_MARMOT_REQUIRE_H_FOR_GROUP_QUERIES",
Keyword.get(policies_defaults, :marmot_require_h_for_group_queries, true)
),
marmot_group_max_h_values_per_filter:
int_env.(
"PARRHESIA_POLICIES_MARMOT_GROUP_MAX_H_VALUES_PER_FILTER",
Keyword.get(policies_defaults, :marmot_group_max_h_values_per_filter, 32)
),
marmot_group_max_query_window_seconds:
int_env.(
"PARRHESIA_POLICIES_MARMOT_GROUP_MAX_QUERY_WINDOW_SECONDS",
Keyword.get(policies_defaults, :marmot_group_max_query_window_seconds, 2_592_000)
),
marmot_media_max_imeta_tags_per_event:
int_env.(
"PARRHESIA_POLICIES_MARMOT_MEDIA_MAX_IMETA_TAGS_PER_EVENT",
Keyword.get(policies_defaults, :marmot_media_max_imeta_tags_per_event, 8)
),
marmot_media_max_field_value_bytes:
int_env.(
"PARRHESIA_POLICIES_MARMOT_MEDIA_MAX_FIELD_VALUE_BYTES",
Keyword.get(policies_defaults, :marmot_media_max_field_value_bytes, 1024)
),
marmot_media_max_url_bytes:
int_env.(
"PARRHESIA_POLICIES_MARMOT_MEDIA_MAX_URL_BYTES",
Keyword.get(policies_defaults, :marmot_media_max_url_bytes, 2048)
),
marmot_media_allowed_mime_prefixes:
csv_env.(
"PARRHESIA_POLICIES_MARMOT_MEDIA_ALLOWED_MIME_PREFIXES",
Keyword.get(policies_defaults, :marmot_media_allowed_mime_prefixes, [])
),
marmot_media_reject_mip04_v1:
bool_env.(
"PARRHESIA_POLICIES_MARMOT_MEDIA_REJECT_MIP04_V1",
Keyword.get(policies_defaults, :marmot_media_reject_mip04_v1, true)
),
marmot_push_server_pubkeys:
csv_env.(
"PARRHESIA_POLICIES_MARMOT_PUSH_SERVER_PUBKEYS",
Keyword.get(policies_defaults, :marmot_push_server_pubkeys, [])
),
marmot_push_max_relay_tags:
int_env.(
"PARRHESIA_POLICIES_MARMOT_PUSH_MAX_RELAY_TAGS",
Keyword.get(policies_defaults, :marmot_push_max_relay_tags, 16)
),
marmot_push_max_payload_bytes:
int_env.(
"PARRHESIA_POLICIES_MARMOT_PUSH_MAX_PAYLOAD_BYTES",
Keyword.get(policies_defaults, :marmot_push_max_payload_bytes, 65_536)
),
marmot_push_max_trigger_age_seconds:
int_env.(
"PARRHESIA_POLICIES_MARMOT_PUSH_MAX_TRIGGER_AGE_SECONDS",
Keyword.get(policies_defaults, :marmot_push_max_trigger_age_seconds, 120)
),
marmot_push_require_expiration:
bool_env.(
"PARRHESIA_POLICIES_MARMOT_PUSH_REQUIRE_EXPIRATION",
Keyword.get(policies_defaults, :marmot_push_require_expiration, true)
),
marmot_push_max_expiration_window_seconds:
int_env.(
"PARRHESIA_POLICIES_MARMOT_PUSH_MAX_EXPIRATION_WINDOW_SECONDS",
Keyword.get(policies_defaults, :marmot_push_max_expiration_window_seconds, 120)
),
marmot_push_max_server_recipients:
int_env.(
"PARRHESIA_POLICIES_MARMOT_PUSH_MAX_SERVER_RECIPIENTS",
Keyword.get(policies_defaults, :marmot_push_max_server_recipients, 1)
),
management_auth_required:
bool_env.(
"PARRHESIA_POLICIES_MANAGEMENT_AUTH_REQUIRED",
Keyword.get(policies_defaults, :management_auth_required, true)
)
]
metrics = [
enabled_on_main_endpoint:
bool_env.(
"PARRHESIA_METRICS_ENABLED_ON_MAIN_ENDPOINT",
Keyword.get(metrics_defaults, :enabled_on_main_endpoint, true)
),
public:
bool_env.(
"PARRHESIA_METRICS_PUBLIC",
Keyword.get(metrics_defaults, :public, false)
),
private_networks_only:
bool_env.(
"PARRHESIA_METRICS_PRIVATE_NETWORKS_ONLY",
Keyword.get(metrics_defaults, :private_networks_only, true)
),
allowed_cidrs:
csv_env.(
"PARRHESIA_METRICS_ALLOWED_CIDRS",
Keyword.get(metrics_defaults, :allowed_cidrs, [])
),
auth_token:
string_env.(
"PARRHESIA_METRICS_AUTH_TOKEN",
Keyword.get(metrics_defaults, :auth_token)
)
]
features = [
verify_event_signatures:
bool_env.(
"PARRHESIA_FEATURES_VERIFY_EVENT_SIGNATURES",
Keyword.get(features_defaults, :verify_event_signatures, true)
),
nip_45_count:
bool_env.(
"PARRHESIA_FEATURES_NIP_45_COUNT",
Keyword.get(features_defaults, :nip_45_count, true)
),
nip_50_search:
bool_env.(
"PARRHESIA_FEATURES_NIP_50_SEARCH",
Keyword.get(features_defaults, :nip_50_search, true)
),
nip_77_negentropy:
bool_env.(
"PARRHESIA_FEATURES_NIP_77_NEGENTROPY",
Keyword.get(features_defaults, :nip_77_negentropy, true)
),
marmot_push_notifications:
bool_env.(
"PARRHESIA_FEATURES_MARMOT_PUSH_NOTIFICATIONS",
Keyword.get(features_defaults, :marmot_push_notifications, false)
)
]
config :parrhesia, Parrhesia.Repo,
url: database_url,
@@ -35,6 +375,39 @@ if config_env() == :prod do
queue_target: queue_target,
queue_interval: queue_interval
config :parrhesia, Parrhesia.Web.Endpoint, port: int_env.("PORT", 4000)
config :parrhesia, Parrhesia.Web.MetricsEndpoint,
enabled:
bool_env.(
"PARRHESIA_METRICS_ENDPOINT_ENABLED",
Keyword.get(metrics_endpoint_defaults, :enabled, false)
),
ip:
ipv4_env.(
"PARRHESIA_METRICS_ENDPOINT_IP",
Keyword.get(metrics_endpoint_defaults, :ip, {127, 0, 0, 1})
),
port:
int_env.(
"PARRHESIA_METRICS_ENDPOINT_PORT",
Keyword.get(metrics_endpoint_defaults, :port, 9568)
)
config :parrhesia,
relay_url: string_env.("PARRHESIA_RELAY_URL", relay_url_default),
moderation_cache_enabled:
bool_env.("PARRHESIA_MODERATION_CACHE_ENABLED", moderation_cache_enabled_default),
enable_expiration_worker:
bool_env.("PARRHESIA_ENABLE_EXPIRATION_WORKER", enable_expiration_worker_default),
limits: limits,
policies: policies,
metrics: metrics,
features: features
case System.get_env("PARRHESIA_EXTRA_CONFIG") do
nil -> :ok
"" -> :ok
path -> import_config path
end
end

BIN
docs/logo.afdesign Normal file

Binary file not shown.

1
docs/logo.svg Normal file

File diff suppressed because one or more lines are too long


Width:  |  Height:  |  Size: 37 KiB

279
docs/slop/HARDEN.md Normal file

@@ -0,0 +1,279 @@
# Hardening Review: Parrhesia Nostr Relay
You are a security engineer specialising in real-time WebSocket servers, Erlang/OTP systems, and protocol-level abuse. You are reviewing **Parrhesia**, a Nostr relay (NIP-01 compliant) written in Elixir, for hardening opportunities — with a primary focus on **denial-of-service resilience** and a secondary focus on the full attack surface.
Produce a prioritised list of **specific, actionable recommendations** with rationale. For each recommendation, state:
1. The attack or failure mode it mitigates
2. Suggested implementation (config change, code change, or architectural change)
3. Severity estimate (critical / high / medium / low)
---
## 1. Architecture Overview
| Component | Technology | Notes |
|---|---|---|
| Runtime | Elixir/OTP 27, BEAM VM | Each WS connection is a separate process |
| HTTP server | Bandit (pure Elixir) | HTTP/1.1 only, no HTTP/2 |
| WebSocket | `websock_adapter` | Text frames only; binary rejected |
| Database | PostgreSQL via Ecto | Range-partitioned `events` table by `created_at` |
| Caching | ETS | Config snapshot + moderation ban/allow lists |
| Multi-node | Erlang `:pg` groups | Fanout across BEAM cluster nodes |
| Metrics | Prometheus (Telemetry) | `/metrics` endpoint |
| TLS termination | **Out of scope** — handled by reverse proxy (nginx/Caddy) |
### Supervision Tree
```
Parrhesia.Supervisor
├─ Telemetry (Prometheus exporter)
├─ Config (ETS snapshot of runtime config)
├─ Storage.Supervisor (Ecto repo + moderation cache)
├─ Subscriptions.Supervisor (ETS subscription index for fanout)
├─ Auth.Supervisor (NIP-42 challenge GenServer)
├─ Policy.Supervisor (policy enforcement)
├─ Web.Endpoint (Bandit listener)
└─ Tasks.Supervisor (ExpirationWorker, 30s GC loop)
```
### Data Flow
1. Client connects via WebSocket at `/relay`
2. NIP-42 AUTH challenge issued immediately (16-byte random, base64url)
3. Inbound text frames are: size-checked → JSON-decoded → rate-limited → protocol-dispatched
4. EVENT messages: validated → policy-checked → stored in Postgres → ACK → async fanout to matching subscriptions
5. REQ messages: filters validated → Postgres query → results streamed → EOSE → live subscription registered
6. Fanout: post-ingest, subscription index (ETS) is traversed; matching connection processes receive events via `send/2`
---
## 2. Current Defences Inventory
### Connection Layer
| Defence | Value | Enforcement Point |
|---|---|---|
| Max WebSocket frame size | **1,048,576 bytes (1 MiB)** | Checked in `handle_in` *before* JSON decode, and at Bandit upgrade (`max_frame_size`) |
| WebSocket upgrade timeout | **60,000 ms** | Passed to `WebSockAdapter.upgrade` |
| Binary frame rejection | Returns NOTICE, connection stays open | `handle_in` opcode check |
| Outbound queue limit | **256 events** per connection | Overflow strategy: **`:close`** (WS 1008) |
| Outbound drain batch | **64 events** | Async drain via `send(self(), :drain_outbound_queue)` |
| Outbound pressure telemetry | Threshold at **75%** of queue | Emits telemetry event only, no enforcement |
| IP blocking | Via moderation cache (ETS) | Management API can add blocked IPs |
### Protocol Layer
| Defence | Value | Notes |
|---|---|---|
| Max event JSON size | **262,144 bytes (256 KiB)** | Re-serialises decoded event and checks byte size |
| Max filters per REQ | **16** | Rejected at filter validation |
| Max filter `limit` | **500** | `min(client_limit, 500)` applied at query time |
| Max subscriptions per connection | **32** | Existing sub IDs updated without counting toward limit |
| Subscription ID max length | **64 characters** | Must be non-empty |
| Event kind range | **0–65,535** | Integer range check |
| Max future event skew | **900 seconds (15 min)** | Events with `created_at > now + 900` rejected |
| Unknown filter keys | **Rejected** | Allowed: `ids`, `authors`, `kinds`, `since`, `until`, `limit`, `search`, `#<letter>` |
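
The allowed-key rule in the last row can be sketched as follows (module name and helper shapes are illustrative, not the relay's actual code):

```elixir
defmodule FilterKeys do
  @moduledoc "Sketch of the unknown-filter-key rejection described above."

  @fixed_keys ~w(ids authors kinds since until limit search)

  # A key is allowed if it is one of the fixed keys or a
  # single-letter tag query such as "#e" or "#p".
  def allowed?(key) when is_binary(key) do
    key in @fixed_keys or tag_query?(key)
  end

  defp tag_query?(<<"#", letter::binary-size(1)>>), do: letter =~ ~r/^[a-zA-Z]$/
  defp tag_query?(_), do: false

  def validate(filter) when is_map(filter) do
    case Enum.reject(Map.keys(filter), &allowed?/1) do
      [] -> :ok
      unknown -> {:error, {:unknown_filter_keys, unknown}}
    end
  end
end
```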
### Event Validation Pipeline
Strict order:
1. Required fields present (`id`, `pubkey`, `created_at`, `kind`, `tags`, `content`, `sig`)
2. `id` — 64-char lowercase hex
3. `pubkey` — 64-char lowercase hex
4. `created_at` — non-negative integer, max 900s future skew
5. `kind` — integer in [0, 65535]
6. `tags` — list of non-empty string arrays (**no length limit on tags array or individual tag values**)
7. `content` — any binary string
8. `sig` — 128-char lowercase hex
9. ID hash recomputation and comparison
10. Schnorr signature verification via `lib_secp256k1` (gated by `verify_event_signatures` flag, default `true`)
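
Steps 2, 3, and 9 can be sketched as below. The id recomputation follows NIP-01, which defines the id as the lowercase hex SHA-256 of the canonical JSON array `[0, pubkey, created_at, kind, tags, content]`; the module name is illustrative, and a real implementation must match NIP-01's canonical serialisation rules (no whitespace, specific escaping) exactly:

```elixir
defmodule EventId do
  @moduledoc "Sketch of the hex-field checks and NIP-01 id recomputation."

  # 64-char lowercase hex (used for both `id` and `pubkey`).
  def hex64?(value) when is_binary(value), do: value =~ ~r/^[0-9a-f]{64}$/
  def hex64?(_), do: false

  def expected_id(%{
        "pubkey" => pk,
        "created_at" => ts,
        "kind" => kind,
        "tags" => tags,
        "content" => content
      }) do
    # Canonical NIP-01 serialisation: [0, pubkey, created_at, kind, tags, content]
    payload = JSON.encode!([0, pk, ts, kind, tags, content])
    :crypto.hash(:sha256, payload) |> Base.encode16(case: :lower)
  end

  def id_matches?(%{"id" => id} = event) do
    hex64?(id) and expected_id(event) == id
  end
end
```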
### Rate Limiting
| Defence | Value | Notes |
|---|---|---|
| Event ingest rate | **120 events per window** | Per-connection sliding window |
| Ingest window | **1 second** | Resets on first event after expiry |
| No per-IP connection rate limiting | — | Must be handled at reverse proxy |
| No global connection count ceiling | — | BEAM handles thousands but no configured limit |
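
The ingest limiter described above (120 events per 1-second window, resetting on the first event after expiry) is a fixed window rather than a true sliding one. A minimal sketch, with illustrative names and state shape:

```elixir
defmodule IngestWindow do
  @moduledoc "Sketch of the per-connection ingest limiter: a fixed window that resets on the first event after expiry."

  defstruct limit: 120, window_ms: 1_000, window_start: nil, count: 0

  def check(%__MODULE__{} = state, now_ms) do
    cond do
      # First event ever, or first event after the window expired: start a new window.
      state.window_start == nil or now_ms - state.window_start >= state.window_ms ->
        {:ok, %{state | window_start: now_ms, count: 1}}

      state.count < state.limit ->
        {:ok, %{state | count: state.count + 1}}

      true ->
        {:rate_limited, state}
    end
  end
end
```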
### Authentication (NIP-42)
- Challenge issued to **all** connections on connect (optional escalation model)
- AUTH event must: pass full NIP-01 validation, be kind `22242`, contain matching `challenge` tag, contain matching `relay` tag
- `created_at` freshness: must be `>= now - 600s` (10 min)
- On success: pubkey added to `authenticated_pubkeys` MapSet; challenge rotated
- Supports multiple authenticated pubkeys per connection
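
The acceptance rules above (applied after full NIP-01 validation) can be sketched as one boolean check; module name and tag-lookup helper are illustrative:

```elixir
defmodule AuthCheck do
  @moduledoc "Sketch of the NIP-42 AUTH acceptance rules listed above."

  @max_age_s 600

  def accept?(event, expected_challenge, relay_url, now) do
    event["kind"] == 22_242 and
      tag_value(event, "challenge") == expected_challenge and
      tag_value(event, "relay") == relay_url and
      event["created_at"] >= now - @max_age_s
  end

  # First value of the first tag whose name matches, e.g. ["challenge", "..."].
  defp tag_value(%{"tags" => tags}, name) do
    Enum.find_value(tags, fn
      [^name, value | _] -> value
      _ -> nil
    end)
  end
end
```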
### Authentication (NIP-98 HTTP)
- Management endpoint (`POST /management`) requires NIP-98 header
- Auth event must be kind `27235`, `created_at` within **60 seconds** of now
- Must include `method` and `u` tags matching request exactly
### Access Control
- `auth_required_for_writes`: default **false** (configurable)
- `auth_required_for_reads`: default **false** (configurable)
- Protected events (NIP-70, tagged `["-"]`): require auth + pubkey match
- Giftwrap (kind 1059): unauthenticated REQ → CLOSED; authenticated REQ must include `#p` containing own pubkey
### Database
- All queries use Ecto parameterised bindings — no raw string interpolation
- LIKE search patterns escaped (`%`, `_`, `\` characters)
- Deletion enforces `pubkey == deleter_pubkey` in WHERE clause
- Soft-delete via `deleted_at`; hard-delete only via vanish (NIP-62) or expiration purge
- DB pool: **32 connections** (prod), queue target 1s, interval 5s
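
The LIKE-pattern escaping mentioned above can be sketched as follows (names illustrative); the backslash must be escaped first so that the later replacements are not double-escaped:

```elixir
defmodule LikeEscape do
  @moduledoc "Sketch of escaping %, _ and \\ in user-supplied search terms before building a LIKE pattern."

  def escape(term) when is_binary(term) do
    term
    |> String.replace("\\", "\\\\")
    |> String.replace("%", "\\%")
    |> String.replace("_", "\\_")
  end

  def contains_pattern(term), do: "%" <> escape(term) <> "%"
end
```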
### Moderation
- Banned pubkeys, allowed pubkeys, banned events, blocked IPs stored in ETS cache
- Management API (NIP-98 authed) for CRUD on moderation lists
- Cache invalidated atomically on writes
---
## 3. Known Gaps and Areas of Concern
The following are areas where the current implementation may be vulnerable or where defences could be strengthened. **Please evaluate each and provide recommendations.**
### 3.1 Connection Exhaustion
- There is **no global limit on concurrent WebSocket connections**. Each connection is an Elixir process (~23 KiB base), but subscriptions, auth state, and outbound queues add per-connection memory.
- There is **no per-IP connection rate limiting at the application layer**. IP blocking exists but is reactive (management API), not automatic.
- There is **no idle timeout** after the WebSocket upgrade completes. A connection can remain open indefinitely without sending or receiving messages.
**Questions:**
- What connection limits should be configured at the Bandit/BEAM level?
- Should an idle timeout be implemented? If so, what value balances real-time subscription use against resource waste?
- Should per-IP connection counting be implemented at the application layer, or is this strictly a reverse proxy concern?
### 3.2 Subscription Abuse
- A single connection can hold **32 subscriptions**, each with up to **16 filters**. That's 512 filter predicates per connection being evaluated on every fanout.
- Filter arrays (`ids`, `authors`, `kinds`, tag values) have **no element count limits**. A filter could contain thousands of author pubkeys.
- There is no cost accounting for "expensive" subscriptions (e.g., wide open filters matching all events).
**Questions:**
- Should filter array element counts be bounded? If so, what limits per field?
- Should there be a per-connection "filter complexity" budget?
- How expensive is the current ETS subscription index traversal at scale (e.g., 10K concurrent connections × 32 subs each)?
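
One possible answer to the first question, bounding list-valued filter fields, could look like the sketch below. The limits are placeholders for discussion, not the relay's current behaviour:

```elixir
defmodule FilterBounds do
  @moduledoc "Illustrative cap on element counts in list-valued filter fields."

  @max_elements %{"ids" => 1_000, "authors" => 1_000, "kinds" => 256}
  @default_max 256

  def check(filter) when is_map(filter) do
    oversized =
      for {key, value} <- filter,
          is_list(value),
          length(value) > Map.get(@max_elements, key, @default_max),
          do: key

    if oversized == [], do: :ok, else: {:error, {:filter_too_wide, oversized}}
  end
end
```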
### 3.3 Tag Array Size
- Event validation does **not limit the number of tags** or the length of individual tag values beyond the 256 KiB total event size cap.
- A maximally-tagged event could contain thousands of short tags, causing amplification in `event_tags` table inserts (one row per tag).
**Questions:**
- Should a max tag count be enforced? What is a reasonable limit?
- What is the insert cost of storing e.g. 1,000 tags per event? Could this be used for write amplification?
- Should individual tag value lengths be bounded?
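
If both a tag-count cap and a per-value length cap were adopted, the check could be as simple as the sketch below (limits are placeholders, not the relay's current behaviour; `tags` is the already-validated list of non-empty string arrays):

```elixir
defmodule TagBounds do
  @moduledoc "Illustrative bound on tag count and individual tag value length."

  @max_tags 2_000
  @max_tag_value_bytes 1_024

  def check(tags) when is_list(tags) do
    cond do
      length(tags) > @max_tags ->
        {:error, :too_many_tags}

      Enum.any?(tags, fn tag ->
        Enum.any?(tag, &(byte_size(&1) > @max_tag_value_bytes))
      end) ->
        {:error, :tag_value_too_long}

      true ->
        :ok
    end
  end
end
```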
### 3.4 AUTH Timing
- AUTH event `created_at` freshness only checks the **lower bound** (`>= now - 600`). An AUTH event with `created_at` far in the future passes validation.
- Regular events have a future skew cap of 900s, but AUTH events do not.
**Questions:**
- Should AUTH events also enforce a future `created_at` bound?
- Is a 600-second AUTH window too wide? Could it be reduced?
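
A symmetric window would close the gap described above; the future bound here is an illustrative value, not an existing setting:

```elixir
defmodule AuthFreshness do
  @moduledoc "Illustrative symmetric freshness window for AUTH events: bound both past and future skew."

  @max_past_s 600
  # Hypothetical: mirror the regular-event skew cap rather than leaving the future unbounded.
  @max_future_s 600

  def fresh?(created_at, now) do
    created_at >= now - @max_past_s and created_at <= now + @max_future_s
  end
end
```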
### 3.5 Outbound Amplification
- A single inbound EVENT can fan out to an unbounded number of matching subscriptions across all connections.
- The outbound queue (256 events, `:close` strategy) protects individual connections but does not limit total fanout work per event.
- The fanout traverses the ETS subscription index synchronously in the ingesting connection's process.
**Questions:**
- Should fanout be bounded per event (e.g., max N recipients before yielding)?
- Should fanout happen in a separate process pool rather than inline?
- Is the `:close` overflow strategy optimal, or would `:drop_oldest` be better for well-behaved clients with temporary backpressure?
### 3.6 Query Amplification
- A single REQ with 16 filters, each with `limit: 500`, could trigger 16 separate Postgres queries returning up to 8,000 events total.
- COUNT requests also execute per-filter queries (now deduplicated via UNION ALL).
- `search` filters use `ILIKE %pattern%` which cannot use B-tree indexes.
**Questions:**
- Should there be a per-REQ total result cap (across all filters)?
- Should `search` queries be rate-limited or require a minimum pattern length?
- Should COUNT be disabled or rate-limited separately?
- Are there missing indexes that would help common query patterns?
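
A minimum-pattern-length guard for `search` filters, one of the options raised above, could be as small as this sketch (the threshold is a placeholder):

```elixir
defmodule SearchGuard do
  @moduledoc "Illustrative minimum-length requirement for search patterns."

  @min_pattern_length 3

  def check(pattern) when is_binary(pattern) do
    if String.length(pattern) >= @min_pattern_length do
      :ok
    else
      {:error, :pattern_too_short}
    end
  end
end
```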
### 3.7 Multi-Node Trust
- Events received via `:remote_fanout_event` from peer BEAM nodes **skip all validation and policy checks** and go directly to the subscription index.
- This assumes all cluster peers are trusted.
**Questions:**
- If cluster membership is dynamic or spans trust boundaries, should remote events be re-validated?
- Should there be a shared secret or HMAC on inter-node messages?
### 3.8 Metrics Endpoint
- `/metrics` (Prometheus) is **unauthenticated**.
- Exposes internal telemetry: connection counts, event throughput, queue depths, database timing.
**Questions:**
- Should `/metrics` require authentication or be restricted to internal networks?
- Could metrics data be used to profile the relay's capacity and craft targeted attacks?
### 3.9 Negentropy Stub
- NEG-OPEN, NEG-MSG, NEG-CLOSE messages are accepted and acknowledged but the reconciliation logic is a stub (cursor counter only).
- Are there resource implications of accepting negentropy sessions without real implementation?
### 3.10 Event Re-Serialisation Cost
- To enforce the 256 KiB event size limit, the relay calls `JSON.encode!(event)` on the already-decoded event map. This re-serialisation happens on every inbound EVENT.
- Could this be replaced with a byte-length check on the raw frame payload (already available)?
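
The suggested replacement is a straight byte-size check on the raw text-frame payload before decoding (sketch below, names illustrative). Trade-off: the raw frame may contain insignificant whitespace, so this check is slightly stricter than one on the re-serialised event:

```elixir
defmodule FrameSizeCheck do
  @moduledoc "Sketch: bound the raw frame payload before JSON decoding instead of re-encoding the decoded event."

  @max_event_bytes 262_144

  def check_frame(payload) when is_binary(payload) do
    if byte_size(payload) > @max_event_bytes do
      {:error, :too_large}
    else
      :ok
    end
  end
end
```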
---
## 4. Specific Review Requests
Beyond the gaps above, please also evaluate:
1. **Bandit configuration**: Are there Bandit-level options (max connections, header limits, request timeouts, keepalive settings) that should be tuned for a public-facing relay?
2. **BEAM VM flags**: Are there any Erlang VM flags (`+P`, `+Q`, `+S`, memory limits) that should be set for production hardening?
3. **Ecto pool exhaustion**: With 32 DB connections and potentially thousands of concurrent REQ queries, what happens under pool exhaustion? Is the 1s queue target + 5s interval appropriate?
4. **ETS table sizing**: The subscription index and moderation cache use ETS. Are there memory limits or table options (`read_concurrency`, `write_concurrency`, `compressed`) that should be tuned?
5. **Process mailbox overflow**: Connection processes receive events via `send/2` during fanout. If a process is slow to consume, its mailbox grows. The outbound queue mechanism is application-level — but is the BEAM-level mailbox also protected?
6. **Reverse proxy recommendations**: What nginx/Caddy configuration should complement the relay's defences? (Rate limiting, connection limits, WebSocket-specific settings, request body size.)
7. **Monitoring and alerting**: What telemetry signals should trigger alerts? (Connection count spikes, queue overflow rates, DB pool saturation, error rates.)
---
## 5. Out of Scope
The following are **not** in scope for this review:
- TLS configuration (handled by reverse proxy)
- DNS and network-level DDoS mitigation
- Operating system hardening
- Key management for the relay identity
- Client-side security
- Nostr protocol design flaws (we implement the spec as-is)
---
## 6. Response Format
For each recommendation, use this format:
### [Severity] Title
**Attack/failure mode:** What goes wrong without this mitigation.
**Current state:** What exists today (or doesn't).
**Recommendation:** Specific change — config value, code change, or architectural decision.
**Trade-offs:** Any impact on legitimate users or operational complexity.

27
flake.lock generated Normal file

@@ -0,0 +1,27 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1773389992,
"narHash": "sha256-wvfdLLWJ2I9oEpDd9PfMA8osfIZicoQ5MT1jIwNs9Tk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "c06b4ae3d6599a672a6210b7021d699c351eebda",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

68
flake.nix Normal file

@@ -0,0 +1,68 @@
{
description = "Parrhesia Nostr relay";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
outputs = {nixpkgs, ...}: let
systems = [
"x86_64-linux"
"aarch64-linux"
"x86_64-darwin"
"aarch64-darwin"
];
forAllSystems = nixpkgs.lib.genAttrs systems;
in {
formatter = forAllSystems (system: (import nixpkgs {inherit system;}).alejandra);
packages = forAllSystems (
system: let
pkgs = import nixpkgs {inherit system;};
lib = pkgs.lib;
parrhesia = pkgs.callPackage ./default.nix {};
in
{
default = parrhesia;
inherit parrhesia;
}
// lib.optionalAttrs pkgs.stdenv.hostPlatform.isLinux {
dockerImage = pkgs.dockerTools.buildLayeredImage {
name = "parrhesia";
tag = "latest";
contents = [
parrhesia
pkgs.bash
pkgs.cacert
pkgs.coreutils
pkgs.fakeNss
];
extraCommands = ''
mkdir -p tmp
chmod 1777 tmp
'';
config = {
Entrypoint = ["${parrhesia}/bin/parrhesia"];
Cmd = ["foreground"];
ExposedPorts = {
"4000/tcp" = {};
};
WorkingDir = "/";
User = "65534:65534";
Env = [
"HOME=/tmp"
"LANG=C.UTF-8"
"LC_ALL=C.UTF-8"
"MIX_ENV=prod"
"PORT=4000"
"RELEASE_DISTRIBUTION=none"
"SSL_CERT_FILE=${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt"
];
};
};
}
);
};
}

35
lib/parrhesia/release.ex Normal file

@@ -0,0 +1,35 @@
defmodule Parrhesia.Release do
@moduledoc """
Helpers for running Ecto tasks from a production release.
"""
@app :parrhesia
def migrate do
load_app()
for repo <- repos() do
{:ok, _, _} =
Ecto.Migrator.with_repo(repo, fn repo ->
Ecto.Migrator.run(repo, :up, all: true)
end)
end
end
def rollback(repo, version) when is_atom(repo) and is_integer(version) do
load_app()
{:ok, _, _} =
Ecto.Migrator.with_repo(repo, fn repo ->
Ecto.Migrator.run(repo, :down, to: version)
end)
end
defp load_app do
Application.load(@app)
end
defp repos do
Application.fetch_env!(@app, :ecto_repos)
end
end


@@ -86,7 +86,24 @@ cleanup() {
 trap cleanup EXIT INT TERM
-if ss -ltn "( sport = :${TEST_HTTP_PORT} )" | tail -n +2 | grep -q .; then
+port_in_use() {
+  local port="$1"
+  if command -v ss >/dev/null 2>&1; then
+    ss -ltn "( sport = :${port} )" | tail -n +2 | grep -q .
+    return
+  fi
+  if command -v lsof >/dev/null 2>&1; then
+    lsof -nP -iTCP:"${port}" -sTCP:LISTEN >/dev/null 2>&1
+    return
+  fi
+  echo "Neither ss nor lsof is available for checking whether port ${port} is already in use." >&2
+  exit 1
+}
+
+if port_in_use "$TEST_HTTP_PORT"; then
   echo "Port ${TEST_HTTP_PORT} is already in use. Set ${PORT_ENV_VAR} to a free port." >&2
   exit 1
 fi


@@ -1,5 +1,5 @@
 defmodule Parrhesia.Protocol.EventValidatorSignatureTest do
-  use ExUnit.Case, async: true
+  use ExUnit.Case, async: false
   alias Parrhesia.Protocol.EventValidator


@@ -1,5 +1,5 @@
 defmodule Parrhesia.Storage.Adapters.Postgres.EventsTest do
-  use ExUnit.Case, async: true
+  use ExUnit.Case, async: false
   alias Parrhesia.Protocol.EventValidator
   alias Parrhesia.Storage.Adapters.Postgres.Events