30 Commits

Author SHA1 Message Date
3e5bf462e9 chore: Bump version to 0.6.0, fix tests
Some checks failed
CI / Test (OTP 27.2 / Elixir 1.18.2) (push) Failing after 0s
CI / Test (OTP 28.4 / Elixir 1.19.4 + E2E) (push) Failing after 0s
Release / Release Gate (push) Failing after 0s
Release / Build and publish image (push) Has been skipped
2026-03-18 21:58:08 +01:00
fc3d121599 Benchmark capture and plot
2026-03-18 21:23:23 +01:00
970cee2c0e Document embedded API surface
2026-03-18 20:22:12 +01:00
7a43ebd395 Expand in-memory storage indexes 2026-03-18 19:43:11 +01:00
4c40edfd83 Optimize memory-backed benchmark path 2026-03-18 18:56:47 +01:00
f60b8ba02a Add memory-backed benchmark profile 2026-03-18 18:39:53 +01:00
2225dfdc9e Improve public API documentation 2026-03-18 18:08:47 +01:00
9014912e9d Unify HTTP metadata handling 2026-03-18 18:00:07 +01:00
c30449b318 Expand relay metrics and observability 2026-03-18 17:39:13 +01:00
c377ed4b62 Separate read pool and harden fanout state handling 2026-03-18 17:21:58 +01:00
dce473662f Lock signature verification and add per-IP ingest limits 2026-03-18 16:46:32 +01:00
a2bdf11139 Add DB constraints for binary identifier lengths 2026-03-18 16:00:07 +01:00
bc66dfcbbe Upgrade NIP-50 search to ranked Postgres FTS 2026-03-18 15:56:45 +01:00
f732d9cf24 Implement full NIP-43 relay access flow 2026-03-18 15:28:15 +01:00
f2856d000e Implement NIP-66 relay discovery publishing 2026-03-18 14:50:25 +01:00
dc5f0c1e5d Add first-class listener connection caps 2026-03-18 14:21:43 +01:00
b56925f413 Decouple publish fanout and use ETS ingest counters
2026-03-18 14:10:32 +01:00
05718d4b91 Prevent NIP-98 token replay 2026-03-18 14:05:38 +01:00
1fef184f50 Add relay-wide event ingest limiter 2026-03-18 14:05:27 +01:00
57fdb4ed85 Add configurable tag guardrails 2026-03-18 14:05:09 +01:00
8dbf05b7fe docs: Opus review 2026-03-18 13:23:06 +01:00
7b2d92b714 fix: Sandbox owner checks in DB connection before exiting
The shared sandbox owner process exited without releasing its Postgrex
connection, causing intermittent "client exited" error logs on CI. The
owner now calls Sandbox.checkin before exiting, and on_exit waits for
the owner to finish before switching to manual mode.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 20:11:31 +01:00
a19b7d97f0 fix: Subscription workers restart strategy, sandbox ownership race condition
Clear OTP SSL PEM cache between listener terminate/restart so reloaded
certs are read from disk instead of serving stale cached data. Make
reconcile_worker idempotent to prevent unnecessary worker churn when
put_server is followed by start_server. Add request timeouts to
RelayInfoClient to prevent hanging connections.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 19:42:18 +01:00
65b47ec191 fix: Subscription workers restart strategy, sandbox ownership race condition
2026-03-17 18:49:50 +01:00
e13c08fd5a Strengthening the TLS reload test 2026-03-17 12:42:08 +01:00
101ded43cb Stabilize TLS and sync worker tests
2026-03-17 12:17:29 +01:00
f4d94c9fcb Refactor test runtime ownership
2026-03-17 12:06:32 +01:00
35c8d50db0 Stabilize TLS listener reload e2e
2026-03-17 04:12:42 +01:00
4d169c23ae Harden CI-sensitive integration tests
2026-03-17 03:55:49 +01:00
a1a8b30d12 Stabilize test harness and node sync e2e
2026-03-17 03:46:58 +01:00
141 changed files with 9188 additions and 1639 deletions


@@ -25,7 +25,7 @@ jobs:
 otp: "27.2"
 elixir: "1.18.2"
 main: false
-- name: Test (OTP 28.4 / Elixir 1.19.4 + Marmot E2E)
+- name: Test (OTP 28.4 / Elixir 1.19.4 + E2E)
 otp: "28.4"
 elixir: "1.19.4"
 main: true


@@ -1,33 +0,0 @@
Running 2 comparison run(s)...
Versions:
parrhesia 0.4.0
strfry 1.0.4 (nixpkgs)
nostr-rs-relay 0.9.0
nostr-bench 0.4.0
[run 1/2] Parrhesia
[run 1/2] strfry
[run 1/2] nostr-rs-relay
[run 2/2] Parrhesia
[run 2/2] strfry
[run 2/2] nostr-rs-relay
=== Bench comparison (averages) ===
metric parrhesia strfry nostr-rs-relay strfry/parrhesia nostr-rs/parrhesia
-------------------------- --------- -------- -------------- ---------------- ------------------
connect avg latency (ms) ↓ 10.50 4.00 3.00 0.38x 0.29x
connect max latency (ms) ↓ 19.50 7.50 4.00 0.38x 0.21x
echo throughput (TPS) ↑ 78520.00 60353.00 164420.50 0.77x 2.09x
echo throughput (MiB/s) ↑ 43.00 33.75 90.05 0.78x 2.09x
event throughput (TPS) ↑ 1919.50 3520.50 781.00 1.83x 0.41x
event throughput (MiB/s) ↑ 1.25 2.25 0.50 1.80x 0.40x
req throughput (TPS) ↑ 4608.50 1809.50 875.50 0.39x 0.19x
req throughput (MiB/s) ↑ 26.20 11.75 2.40 0.45x 0.09x
Legend: ↑ higher is better, ↓ lower is better.
Ratio columns are server/parrhesia (for ↓ metrics, <1.00x means that server is faster).
Run details:
run 1: parrhesia(echo_tps=78892, event_tps=1955, req_tps=4671, connect_avg_ms=10) | strfry(echo_tps=59132, event_tps=3462, req_tps=1806, connect_avg_ms=4) | nostr-rs-relay(echo_tps=159714, event_tps=785, req_tps=873, connect_avg_ms=3)
run 2: parrhesia(echo_tps=78148, event_tps=1884, req_tps=4546, connect_avg_ms=11) | strfry(echo_tps=61574, event_tps=3579, req_tps=1813, connect_avg_ms=4) | nostr-rs-relay(echo_tps=169127, event_tps=777, req_tps=878, connect_avg_ms=3)

README.md

@@ -2,7 +2,12 @@
 <img alt="Parrhesia Logo" src="./docs/logo.svg" width="150" align="right">
-Parrhesia is a Nostr relay server written in Elixir/OTP with PostgreSQL storage.
+Parrhesia is a Nostr relay server written in Elixir/OTP.
+Supported storage backends:
+- PostgreSQL, which is the primary and production-oriented backend
+- in-memory storage, which is useful for tests, local experiments, and benchmarks
 **ALPHA CONDITION BREAKING CHANGES MIGHT HAPPEN!**
@@ -32,9 +37,15 @@ Current `supported_nips` list:
 `1, 9, 11, 13, 17, 40, 42, 43, 44, 45, 50, 59, 62, 66, 70, 77, 86, 98`
+`43` is advertised when the built-in NIP-43 relay access flow is enabled. Parrhesia generates relay-signed `28935` invite responses on `REQ`, validates join and leave requests locally, and publishes the resulting signed `8000`, `8001`, and `13534` relay membership events into its own local event store.
+`50` uses ranked PostgreSQL full-text search over event `content` by default. Parrhesia applies the filter `limit` after ordering by match quality, and falls back to trigram-backed substring matching for short or symbol-heavy queries such as search-as-you-type prefixes, domains, and punctuation-rich tokens.
+`66` is advertised when the built-in NIP-66 publisher is enabled and has at least one relay target. The default config enables it for the `public` relay URL. Parrhesia probes those target relays, collects the resulting NIP-11 / websocket liveness data, and then publishes the signed `10166` and `30166` events locally on this relay.
 ## Requirements
-- Elixir `~> 1.19`
+- Elixir `~> 1.18`
 - Erlang/OTP 28
 - PostgreSQL (18 used in the dev environment; 16+ recommended)
 - Docker or Podman plus Docker Compose support if you want to run the published container image
@@ -103,6 +114,38 @@ GitHub CI currently runs the non-Docker node-sync e2e on the main Linux matrix j
 ---
+## Embedding in another Elixir app
+Parrhesia is usable as an embedded OTP dependency, not just as a standalone relay process.
+The intended in-process surface is `Parrhesia.API.*`, especially:
+- `Parrhesia.API.Events` for publish, query, and count
+- `Parrhesia.API.Stream` for local REQ-like subscriptions
+- `Parrhesia.API.Admin` for management operations
+- `Parrhesia.API.Identity`, `Parrhesia.API.ACL`, and `Parrhesia.API.Sync` for relay identity, protected sync ACLs, and outbound relay sync
+Start with:
+- [`docs/LOCAL_API.md`](./docs/LOCAL_API.md) for the embedding model and a minimal host setup
+- generated ExDoc for the `Embedded API` module group when running `mix docs`
+Important caveats for host applications:
+- Parrhesia is still alpha; expect some public API and config churn.
+- Parrhesia currently assumes a single runtime per BEAM node and uses globally registered process names.
+- The defaults in this repo's `config/*.exs` are not imported automatically when Parrhesia is used as a dependency. A host app must set `config :parrhesia, ...` explicitly.
+- The host app is responsible for migrating Parrhesia's schema, for example with `Parrhesia.Release.migrate()` or `mix ecto.migrate -r Parrhesia.Repo`.
+If you only want the in-process API and not the HTTP/WebSocket edge, configure:
+```elixir
+config :parrhesia, :listeners, %{}
+```
+The config reference below still applies when embedded. That is the primary place to document basic setup and runtime configuration changes.
+---
 ## Production configuration
 ### Minimal setup
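Picking up the embedding section in the diff above: a minimal host-side setup might look like the following. This is an illustrative sketch only — the `:listeners` override and module names come from the documented text, while the exact repo key and placement in the host's config files are assumptions.

```elixir
# In the host app's config (typically config/runtime.exs), since Parrhesia's
# own config/*.exs defaults are NOT imported when used as a dependency.
config :parrhesia, Parrhesia.Repo,
  url: System.get_env("DATABASE_URL")

# Run headless: in-process API only, no HTTP/WebSocket edge.
config :parrhesia, :listeners, %{}
```

Publishing and querying would then go through `Parrhesia.API.Events`; see `docs/LOCAL_API.md` for the actual call shapes.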
@@ -112,6 +155,9 @@ Before a Nostr client can publish its first event successfully, make sure these
 1. PostgreSQL is reachable from Parrhesia.
    Set `DATABASE_URL` and create/migrate the database with `Parrhesia.Release.migrate()` or `mix ecto.migrate`.
+   PostgreSQL is the supported production datastore. The in-memory backend is intended for
+   non-persistent runs such as tests and benchmarks.
 2. Parrhesia listeners are configured for your deployment.
    The default config exposes a `public` listener on plain HTTP port `4413`, and a reverse proxy can terminate TLS and forward WebSocket traffic to `/relay`. Additional listeners can be defined in `config/*.exs`.
@@ -129,7 +175,7 @@ In `prod`, these environment variables are used:
 - `DATABASE_URL` (**required**), e.g. `ecto://USER:PASS@HOST/parrhesia_prod`
 - `POOL_SIZE` (optional, default `32`)
 - `PORT` (optional, default `4413`)
-- `PARRHESIA_*` runtime overrides for relay config, limits, policies, listener-related metrics helpers, and features
+- `PARRHESIA_*` runtime overrides for relay config, metadata, identity, sync, ACL, limits, policies, listeners, retention, and features
 - `PARRHESIA_EXTRA_CONFIG` (optional path to an extra runtime config file)
 `config/runtime.exs` reads these values at runtime in production releases.
@@ -139,12 +185,20 @@ In `prod`, these environment variables are used:
 For runtime overrides, use the `PARRHESIA_...` prefix:
 - `PARRHESIA_RELAY_URL`
+- `PARRHESIA_METADATA_HIDE_VERSION`
+- `PARRHESIA_IDENTITY_*`
+- `PARRHESIA_SYNC_*`
+- `PARRHESIA_ACL_*`
 - `PARRHESIA_TRUSTED_PROXIES`
+- `PARRHESIA_PUBLIC_MAX_CONNECTIONS`
 - `PARRHESIA_MODERATION_CACHE_ENABLED`
 - `PARRHESIA_ENABLE_EXPIRATION_WORKER`
+- `PARRHESIA_ENABLE_PARTITION_RETENTION_WORKER`
+- `PARRHESIA_STORAGE_BACKEND`
 - `PARRHESIA_LIMITS_*`
 - `PARRHESIA_POLICIES_*`
 - `PARRHESIA_METRICS_*`
+- `PARRHESIA_METRICS_ENDPOINT_MAX_CONNECTIONS`
 - `PARRHESIA_RETENTION_*`
 - `PARRHESIA_FEATURES_*`
 - `PARRHESIA_METRICS_ENDPOINT_*`
@@ -153,12 +207,11 @@ Examples:
 ```bash
 export PARRHESIA_POLICIES_AUTH_REQUIRED_FOR_WRITES=true
-export PARRHESIA_FEATURES_VERIFY_EVENT_SIGNATURES=true
 export PARRHESIA_METRICS_ALLOWED_CIDRS="10.0.0.0/8,192.168.0.0/16"
 export PARRHESIA_LIMITS_OUTBOUND_OVERFLOW_STRATEGY=drop_oldest
 ```
-Listeners themselves are primarily configured under `config :parrhesia, :listeners, ...`. The current runtime env helpers tune the default public listener and the optional dedicated metrics listener.
+Listeners themselves are primarily configured under `config :parrhesia, :listeners, ...`. The current runtime env helpers tune the default public listener and the optional dedicated metrics listener, including their connection ceilings.
 For settings that are awkward to express as env vars, mount an extra config file and set `PARRHESIA_EXTRA_CONFIG` to its path inside the container.
@@ -171,8 +224,16 @@ CSV env vars use comma-separated values. Boolean env vars accept `1/0`, `true/fa
 | Atom key | ENV | Default | Notes |
 | --- | --- | --- | --- |
 | `:relay_url` | `PARRHESIA_RELAY_URL` | `ws://localhost:4413/relay` | Advertised relay URL and auth relay tag target |
+| `:metadata.hide_version?` | `PARRHESIA_METADATA_HIDE_VERSION` | `true` | Hides the relay version from outbound `User-Agent` and NIP-11 when enabled |
+| `:acl.protected_filters` | `PARRHESIA_ACL_PROTECTED_FILTERS` | `[]` | JSON-encoded protected filter list for sync ACL checks |
+| `:identity.path` | `PARRHESIA_IDENTITY_PATH` | `nil` | Optional path for persisted relay identity material |
+| `:identity.private_key` | `PARRHESIA_IDENTITY_PRIVATE_KEY` | `nil` | Optional inline relay private key |
 | `:moderation_cache_enabled` | `PARRHESIA_MODERATION_CACHE_ENABLED` | `true` | Toggle moderation cache |
 | `:enable_expiration_worker` | `PARRHESIA_ENABLE_EXPIRATION_WORKER` | `true` | Toggle background expiration worker |
+| `:nip43` | config-file driven | see table below | Built-in NIP-43 relay access invite / membership flow |
+| `:nip66` | config-file driven | see table below | Built-in NIP-66 discovery / monitor publisher |
+| `:sync.path` | `PARRHESIA_SYNC_PATH` | `nil` | Optional path to sync peer config |
+| `:sync.start_workers?` | `PARRHESIA_SYNC_START_WORKERS` | `true` | Start outbound sync workers on boot |
 | `:limits` | `PARRHESIA_LIMITS_*` | see table below | Runtime override group |
 | `:policies` | `PARRHESIA_POLICIES_*` | see table below | Runtime override group |
 | `:listeners` | config-file driven | see notes below | Ingress listeners with bind, transport, feature, auth, network, and baseline ACL settings |
@@ -193,18 +254,86 @@ CSV env vars use comma-separated values. Boolean env vars accept `1/0`, `true/fa
 | `:queue_interval` | `DB_QUEUE_INTERVAL_MS` | `5000` | Ecto queue interval in ms |
 | `:types` | `-` | `Parrhesia.PostgresTypes` | Internal config-file setting |
+#### `Parrhesia.ReadRepo`
+| Atom key | ENV | Default | Notes |
+| --- | --- | --- | --- |
+| `:url` | `DATABASE_URL` | required | Shares the primary DB URL with the write repo |
+| `:pool_size` | `DB_READ_POOL_SIZE` | `32` | Read-only query pool size |
+| `:queue_target` | `DB_READ_QUEUE_TARGET_MS` | `1000` | Read pool Ecto queue target in ms |
+| `:queue_interval` | `DB_READ_QUEUE_INTERVAL_MS` | `5000` | Read pool Ecto queue interval in ms |
+| `:types` | `-` | `Parrhesia.PostgresTypes` | Internal config-file setting |
 #### `:listeners`
 | Atom key | ENV | Default | Notes |
 | --- | --- | --- | --- |
 | `:public.bind.port` | `PORT` | `4413` | Default public listener port |
+| `:public.max_connections` | `PARRHESIA_PUBLIC_MAX_CONNECTIONS` | `20000` | Target total connection ceiling for the public listener |
 | `:public.proxy.trusted_cidrs` | `PARRHESIA_TRUSTED_PROXIES` | `[]` | Trusted reverse proxies for forwarded IP handling |
 | `:public.features.metrics.*` | `PARRHESIA_METRICS_*` | see below | Convenience runtime overrides for metrics on the public listener |
 | `:metrics.bind.port` | `PARRHESIA_METRICS_ENDPOINT_PORT` | `9568` | Optional dedicated metrics listener port |
+| `:metrics.max_connections` | `PARRHESIA_METRICS_ENDPOINT_MAX_CONNECTIONS` | `1024` | Target total connection ceiling for the dedicated metrics listener |
 | `:metrics.enabled` | `PARRHESIA_METRICS_ENDPOINT_ENABLED` | `false` | Enables the optional dedicated metrics listener |
+Listener `max_connections` is a first-class config field. Parrhesia translates it to ThousandIsland's per-acceptor `num_connections` limit based on the active acceptor count. Raw `bandit_options[:thousand_island_options]` can still override that for advanced tuning.
 Listener `transport.tls` supports `:disabled`, `:server`, `:mutual`, and `:proxy_terminated`. For TLS-enabled listeners, the main config-file fields are `certfile`, `keyfile`, optional `cacertfile`, optional `cipher_suite`, optional `client_pins`, and `proxy_headers` for proxy-terminated identity.
+Every listener supports this config-file schema:
+| Atom key | ENV | Default | Notes |
+| --- | --- | --- | --- |
+| `:id` | `-` | listener key or `:listener` | Listener identifier |
+| `:enabled` | public/metrics helpers only | `true` | Whether the listener is started |
+| `:bind.ip` | `-` | `0.0.0.0` (`public`) / `127.0.0.1` (`metrics`) | Bind address |
+| `:bind.port` | `PORT` / `PARRHESIA_METRICS_ENDPOINT_PORT` | `4413` / `9568` | Bind port |
+| `:max_connections` | `PARRHESIA_PUBLIC_MAX_CONNECTIONS` / `PARRHESIA_METRICS_ENDPOINT_MAX_CONNECTIONS` | `20000` / `1024` | Target total listener connection ceiling; accepts integer or `:infinity` in config files |
+| `:transport.scheme` | `-` | `:http` | Listener scheme |
+| `:transport.tls` | `-` | `%{mode: :disabled}` | TLS mode and TLS-specific options |
+| `:proxy.trusted_cidrs` | `PARRHESIA_TRUSTED_PROXIES` on `public` | `[]` | Trusted proxy CIDRs for forwarded identity / IP handling |
+| `:proxy.honor_x_forwarded_for` | `-` | `true` | Respect `X-Forwarded-For` from trusted proxies |
+| `:network.public` | `-` | `false` | Allow only public networks |
+| `:network.private_networks_only` | `-` | `false` | Allow only RFC1918 / local networks |
+| `:network.allow_cidrs` | `-` | `[]` | Explicit CIDR allowlist |
+| `:network.allow_all` | `-` | `true` | Allow all source IPs |
+| `:features.nostr.enabled` | `-` | `true` on `public`, `false` on metrics listener | Enables `/relay` |
+| `:features.admin.enabled` | `-` | `true` on `public`, `false` on metrics listener | Enables `/management` |
+| `:features.metrics.enabled` | `PARRHESIA_METRICS_ENABLED_ON_MAIN_ENDPOINT` on `public` | `true` on `public`, `true` on metrics listener | Enables `/metrics` |
+| `:features.metrics.auth_token` | `PARRHESIA_METRICS_AUTH_TOKEN` | `nil` | Optional bearer token for `/metrics` |
+| `:features.metrics.access.public` | `PARRHESIA_METRICS_PUBLIC` | `false` | Allow public-network access to `/metrics` |
+| `:features.metrics.access.private_networks_only` | `PARRHESIA_METRICS_PRIVATE_NETWORKS_ONLY` | `true` | Restrict `/metrics` to private networks |
+| `:features.metrics.access.allow_cidrs` | `PARRHESIA_METRICS_ALLOWED_CIDRS` | `[]` | Additional CIDR allowlist for `/metrics` |
+| `:features.metrics.access.allow_all` | `-` | `true` | Unconditional metrics access in config files |
+| `:auth.nip42_required` | `-` | `false` | Require NIP-42 for relay reads / writes |
+| `:auth.nip98_required_for_admin` | `PARRHESIA_POLICIES_MANAGEMENT_AUTH_REQUIRED` on `public` | `true` | Require NIP-98 for management API calls |
+| `:baseline_acl.read` | `-` | `[]` | Static read deny/allow rules |
+| `:baseline_acl.write` | `-` | `[]` | Static write deny/allow rules |
+| `:bandit_options` | `-` | `[]` | Advanced Bandit / ThousandIsland passthrough |
+#### `:nip66`
+| Atom key | ENV | Default | Notes |
+| --- | --- | --- | --- |
+| `:enabled` | `-` | `true` | Enables the built-in NIP-66 publisher worker |
+| `:publish_interval_seconds` | `-` | `900` | Republish cadence for `10166` and `30166` events |
+| `:publish_monitor_announcement?` | `-` | `true` | Publish a `10166` monitor announcement alongside discovery events |
+| `:timeout_ms` | `-` | `5000` | Probe timeout for websocket and NIP-11 checks |
+| `:checks` | `-` | `[:open, :read, :nip11]` | Checks advertised in `10166` and run against each target relay during probing |
+| `:targets` | `-` | `[]` | Optional explicit relay targets to probe; when empty, Parrhesia uses `:relay_url` for the `public` listener |
+NIP-66 targets are probe sources, not publish destinations. Parrhesia connects to each target relay, collects the configured liveness / discovery data, and stores the resulting signed `10166` / `30166` events in its own local event store so clients can query them here.
+#### `:nip43`
+| Atom key | ENV | Default | Notes |
+| --- | --- | --- | --- |
+| `:enabled` | `-` | `true` | Enables the built-in NIP-43 relay access flow and advertises `43` in NIP-11 |
+| `:invite_ttl_seconds` | `-` | `900` | Expiration window for generated invite claim strings returned by `REQ` filters targeting kind `28935` |
+| `:request_max_age_seconds` | `-` | `300` | Maximum allowed age for inbound join (`28934`) and leave (`28936`) requests |
+Parrhesia treats NIP-43 invite requests as synthetic relay output, not stored client input. A `REQ` for kind `28935` causes the relay to generate a fresh relay-signed invite event on the fly. Clients then submit that claim back in a protected kind `28934` join request. When a join or leave request is accepted, Parrhesia updates its local relay membership state and publishes the corresponding relay-signed `8000` / `8001` delta plus the latest `13534` membership snapshot locally.
 #### `:limits`
 | Atom key | ENV | Default |
@@ -213,6 +342,12 @@ Listener `transport.tls` supports `:disabled`, `:server`, `:mutual`, and `:proxy
 | `:max_event_bytes` | `PARRHESIA_LIMITS_MAX_EVENT_BYTES` | `262144` |
 | `:max_filters_per_req` | `PARRHESIA_LIMITS_MAX_FILTERS_PER_REQ` | `16` |
 | `:max_filter_limit` | `PARRHESIA_LIMITS_MAX_FILTER_LIMIT` | `500` |
+| `:max_tags_per_event` | `PARRHESIA_LIMITS_MAX_TAGS_PER_EVENT` | `256` |
+| `:max_tag_values_per_filter` | `PARRHESIA_LIMITS_MAX_TAG_VALUES_PER_FILTER` | `128` |
+| `:ip_max_event_ingest_per_window` | `PARRHESIA_LIMITS_IP_MAX_EVENT_INGEST_PER_WINDOW` | `1000` |
+| `:ip_event_ingest_window_seconds` | `PARRHESIA_LIMITS_IP_EVENT_INGEST_WINDOW_SECONDS` | `1` |
+| `:relay_max_event_ingest_per_window` | `PARRHESIA_LIMITS_RELAY_MAX_EVENT_INGEST_PER_WINDOW` | `10000` |
+| `:relay_event_ingest_window_seconds` | `PARRHESIA_LIMITS_RELAY_EVENT_INGEST_WINDOW_SECONDS` | `1` |
 | `:max_subscriptions_per_connection` | `PARRHESIA_LIMITS_MAX_SUBSCRIPTIONS_PER_CONNECTION` | `32` |
 | `:max_event_future_skew_seconds` | `PARRHESIA_LIMITS_MAX_EVENT_FUTURE_SKEW_SECONDS` | `900` |
 | `:max_event_ingest_per_window` | `PARRHESIA_LIMITS_MAX_EVENT_INGEST_PER_WINDOW` | `120` |
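As a reader-side illustration of how the new config-file-driven groups shown in this diff might translate into `config/*.exs` syntax — a sketch only: the key nesting under `:parrhesia` is assumed from the tables and the `PARRHESIA_*` ENV naming, not taken from the repo's config files:

```elixir
# Built-in NIP-66 publisher and NIP-43 access flow (defaults from the tables)
config :parrhesia, :nip66,
  enabled: true,
  publish_interval_seconds: 900,
  publish_monitor_announcement?: true,
  timeout_ms: 5_000,
  checks: [:open, :read, :nip11],
  # empty targets: probe this relay's own :relay_url for the public listener
  targets: []

config :parrhesia, :nip43,
  enabled: true,
  invite_ttl_seconds: 900,
  request_max_age_seconds: 300

# Ingest limiters: per-IP and relay-wide token windows
config :parrhesia, :limits,
  ip_max_event_ingest_per_window: 1_000,
  ip_event_ingest_window_seconds: 1,
  relay_max_event_ingest_per_window: 10_000,
  relay_event_ingest_window_seconds: 1
```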
@@ -224,6 +359,8 @@ Listener `transport.tls` supports `:disabled`, `:server`, `:mutual`, and `:proxy
 | `:max_negentropy_payload_bytes` | `PARRHESIA_LIMITS_MAX_NEGENTROPY_PAYLOAD_BYTES` | `4096` |
 | `:max_negentropy_sessions_per_connection` | `PARRHESIA_LIMITS_MAX_NEGENTROPY_SESSIONS_PER_CONNECTION` | `8` |
 | `:max_negentropy_total_sessions` | `PARRHESIA_LIMITS_MAX_NEGENTROPY_TOTAL_SESSIONS` | `10000` |
+| `:max_negentropy_items_per_session` | `PARRHESIA_LIMITS_MAX_NEGENTROPY_ITEMS_PER_SESSION` | `50000` |
+| `:negentropy_id_list_threshold` | `PARRHESIA_LIMITS_NEGENTROPY_ID_LIST_THRESHOLD` | `32` |
 | `:negentropy_session_idle_timeout_seconds` | `PARRHESIA_LIMITS_NEGENTROPY_SESSION_IDLE_TIMEOUT_SECONDS` | `60` |
 | `:negentropy_session_sweep_interval_seconds` | `PARRHESIA_LIMITS_NEGENTROPY_SESSION_SWEEP_INTERVAL_SECONDS` | `10` |
@@ -277,12 +414,14 @@ Listener `transport.tls` supports `:disabled`, `:server`, `:mutual`, and `:proxy
 | Atom key | ENV | Default |
 | --- | --- | --- |
-| `:verify_event_signatures` | `PARRHESIA_FEATURES_VERIFY_EVENT_SIGNATURES` | `true` |
+| `:verify_event_signatures` | `-` | `true` |
 | `:nip_45_count` | `PARRHESIA_FEATURES_NIP_45_COUNT` | `true` |
 | `:nip_50_search` | `PARRHESIA_FEATURES_NIP_50_SEARCH` | `true` |
 | `:nip_77_negentropy` | `PARRHESIA_FEATURES_NIP_77_NEGENTROPY` | `true` |
 | `:marmot_push_notifications` | `PARRHESIA_FEATURES_MARMOT_PUSH_NOTIFICATIONS` | `false` |
+`:verify_event_signatures` is config-file only. Production releases always verify event signatures.
 #### Extra runtime config
 | Atom key | ENV | Default | Notes |
@@ -315,7 +454,7 @@ For systemd/process managers, run the release command with `start`.
 Build:
 ```bash
-nix-build
+nix build
 ```
 Run the built release from `./result/bin/parrhesia` (release command interface).
@@ -399,7 +538,9 @@ Notes:
 ## Benchmark
-The benchmark compares Parrhesia against [`strfry`](https://github.com/hoytech/strfry) and [`nostr-rs-relay`](https://sr.ht/~gheartsfield/nostr-rs-relay/) using [`nostr-bench`](https://github.com/rnostr/nostr-bench).
+The benchmark compares two Parrhesia profiles, one backed by PostgreSQL and one backed by the in-memory adapter, against [`strfry`](https://github.com/hoytech/strfry) and [`nostr-rs-relay`](https://sr.ht/~gheartsfield/nostr-rs-relay/) using [`nostr-bench`](https://github.com/rnostr/nostr-bench). Benchmark runs also lift Parrhesia's relay-side limits by default so the benchmark client, not server guardrails, is the main bottleneck.
+`mix bench` is a sequential mixed-workload benchmark, not an isolated per-endpoint microbenchmark. Each relay instance runs `connect`, then `echo`, then `event`, then `req` against the same live process, so later phases measure against state and load created by earlier phases.
 Run it with:
@@ -409,16 +550,16 @@ mix bench
Current comparison results from [BENCHMARK.md](./BENCHMARK.md): Current comparison results from [BENCHMARK.md](./BENCHMARK.md):
| metric | parrhesia-pg | parrhesia-mem | nostr-rs-relay | mem/pg | nostr-rs/pg |
| --- | ---: | ---: | ---: | ---: | ---: |
| connect avg latency (ms) ↓ | 9.33 | 7.67 | 7.00 | **0.82x** | **0.75x** |
| connect max latency (ms) ↓ | 12.33 | 9.67 | 10.33 | **0.78x** | **0.84x** |
| echo throughput (TPS) ↑ | 64030.33 | 93656.33 | 140767.00 | **1.46x** | **2.20x** |
| echo throughput (MiB/s) ↑ | 35.07 | 51.27 | 77.07 | **1.46x** | **2.20x** |
| event throughput (TPS) ↑ | 5015.33 | 1505.33 | 2293.67 | 0.30x | 0.46x |
| event throughput (MiB/s) ↑ | 3.40 | 1.00 | 1.50 | 0.29x | 0.44x |
| req throughput (TPS) ↑ | 6416.33 | 14566.67 | 3035.67 | **2.27x** | 0.47x |
| req throughput (MiB/s) ↑ | 42.43 | 94.23 | 19.23 | **2.22x** | 0.45x |
Higher is better for `↑` metrics. Lower is better for `↓` metrics.
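The ratio columns are simply the comparison value divided by the parrhesia-pg value, rounded to two decimals; for `↓` latency metrics a ratio below 1.0 means the comparison relay is faster, and for `↑` throughput metrics a ratio above 1.0 does. A quick sketch recomputing the `mem/pg` column from the raw numbers above:

```python
# Recompute the mem/pg ratio column from the raw benchmark values.
# Each ratio is parrhesia-mem / parrhesia-pg, rounded to two decimals;
# whether a ratio favors the in-memory profile depends on the metric
# direction (↑ higher is better, ↓ lower is better).
rows = {
    "connect avg latency (ms)": (9.33, 7.67),        # (parrhesia-pg, parrhesia-mem)
    "echo throughput (TPS)": (64030.33, 93656.33),
    "event throughput (TPS)": (5015.33, 1505.33),
    "req throughput (TPS)": (6416.33, 14566.67),
}

ratios = {metric: round(mem / pg, 2) for metric, (pg, mem) in rows.items()}
print(ratios)
# → {'connect avg latency (ms)': 0.82, 'echo throughput (TPS)': 1.46,
#    'event throughput (TPS)': 0.3, 'req throughput (TPS)': 2.27}
```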

bench/chart.gnuplot
# bench/chart.gnuplot — multi-panel SVG showing relay performance over git tags.
#
# Invoked by scripts/run_bench_update.sh with:
# gnuplot -e "data_dir='...'" -e "output_file='...'" bench/chart.gnuplot
#
# The data_dir contains per-metric TSV files and a plot_commands.gnuplot
# fragment generated by the data-prep step that defines the actual plot
# directives (handling variable server columns).
set terminal svg enhanced size 1200,900 font "sans,11"
set output output_file
set style data linespoints
set key outside right top
set grid ytics
set xtics rotate by -30
set datafile separator "\t"
# parrhesia-pg: blue solid, parrhesia-memory: green solid
# strfry: orange dashed, nostr-rs-relay: red dashed
set linetype 1 lc rgb "#2563eb" lw 2 pt 7 ps 1.0
set linetype 2 lc rgb "#16a34a" lw 2 pt 9 ps 1.0
set linetype 3 lc rgb "#ea580c" lw 1.5 pt 5 ps 0.8 dt 2
set linetype 4 lc rgb "#dc2626" lw 1.5 pt 4 ps 0.8 dt 2
set multiplot layout 2,2 title "Parrhesia Relay Benchmark History" font ",14"
# Load dynamically generated plot commands (handles variable column counts)
load data_dir."/plot_commands.gnuplot"
unset multiplot
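Since `plot_commands.gnuplot` is regenerated per run (its column layout varies with the set of servers present in the data), it is not checked in. A hypothetical fragment for one panel, assuming a per-metric TSV named `event_tps.tsv` with a tag label in column 1 and one server per following column (both names are illustrative, not the actual generator output):

```gnuplot
# Hypothetical per-panel fragment; the real file is emitted by the
# data-prep step invoked from scripts/run_bench_update.sh.
set title "Event Throughput (TPS) — higher is better"
plot data_dir."/event_tps.tsv" using 2:xtic(1) title "parrhesia-pg" lt 1, \
     '' using 3 title "parrhesia-memory" lt 2, \
     '' using 4 title "nostr-rs-relay (avg)" lt 4
```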

bench/chart.svg

(Generated chart output, content elided. The SVG renders "Parrhesia Relay Benchmark History": a multi-panel plot over git tags (e.g. v0.5.0) with TPS on the y-axis, including panels "Event Throughput (TPS) — higher is better" and "Req Throughput (TPS) — higher is better", each plotting the series parrhesia-pg, parrhesia-memory, and nostr-rs-relay (avg).)
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M89.55,743.63 L368.73,743.63 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M89.55,743.63 L97.80,743.63 M368.73,743.63 L360.48,743.63 '/> <g transform="translate(81.86,747.21)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 90000</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M89.55,704.69 L368.73,704.69 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M89.55,704.69 L97.80,704.69 M368.73,704.69 L360.48,704.69 '/> <g transform="translate(81.86,708.27)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 100000</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M89.55,665.75 L368.73,665.75 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M89.55,665.75 L97.80,665.75 M368.73,665.75 L360.48,665.75 '/> <g transform="translate(81.86,669.33)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 110000</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M89.55,626.81 L368.73,626.81 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M89.55,626.81 L97.80,626.81 M368.73,626.81 L360.48,626.81 '/> <g transform="translate(81.86,630.39)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 120000</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M89.55,587.88 L368.73,587.88 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M89.55,587.88 L97.80,587.88 M368.73,587.88 L360.48,587.88 '/> <g transform="translate(81.86,591.46)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 130000</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M89.55,548.94 L368.73,548.94 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M89.55,548.94 L97.80,548.94 M368.73,548.94 L360.48,548.94 '/> <g transform="translate(81.86,552.52)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 140000</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M89.55,510.00 L368.73,510.00 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M89.55,510.00 L97.80,510.00 M368.73,510.00 L360.48,510.00 '/> <g transform="translate(81.86,513.58)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 150000</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M229.14,860.44 L229.14,852.19 M229.14,510.00 L229.14,518.25 '/> <g transform="translate(227.35,871.23) rotate(30.00)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="start">
<text><tspan font-family="sans" >v0.5.0</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M89.55,510.00 L89.55,860.44 L368.73,860.44 L368.73,510.00 L89.55,510.00 Z '/></g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g id="gnuplot_plot_1c" ><title>parrhesia-pg</title>
<g fill="none" color="white" stroke="black" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
<g transform="translate(537.91,521.83)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" >parrhesia-pg</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='rgb( 37, 99, 235)' d='M545.60,518.25 L584.61,518.25 M229.14,844.75 '/> <use xlink:href='#gpPt6' transform='translate(229.14,844.75) scale(4.12)' color='rgb( 37, 99, 235)'/>
<use xlink:href='#gpPt6' transform='translate(565.10,518.25) scale(4.12)' color='rgb( 37, 99, 235)'/>
</g>
</g>
<g id="gnuplot_plot_2c" ><title>parrhesia-memory</title>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
<g transform="translate(537.91,538.33)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" >parrhesia-memory</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='rgb( 22, 163, 74)' d='M545.60,534.75 L584.61,534.75 M229.14,729.39 '/> <use xlink:href='#gpPt8' transform='translate(229.14,729.39) scale(4.12)' color='rgb( 22, 163, 74)'/>
<use xlink:href='#gpPt8' transform='translate(565.10,534.75) scale(4.12)' color='rgb( 22, 163, 74)'/>
</g>
</g>
<g id="gnuplot_plot_3c" ><title>nostr-rs-relay (avg)</title>
<g fill="none" color="white" stroke="rgb( 22, 163, 74)" stroke-width="1.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.50" stroke-linecap="butt" stroke-linejoin="miter">
<g transform="translate(537.91,554.83)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" >nostr-rs-relay (avg)</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='rgb(234, 88, 12)' stroke-dasharray='3.8,6.0' d='M545.60,551.25 L584.61,551.25 M229.14,545.95 '/> <use xlink:href='#gpPt4' transform='translate(229.14,545.95) scale(3.30)' color='rgb(234, 88, 12)'/>
<use xlink:href='#gpPt4' transform='translate(565.10,551.25) scale(3.30)' color='rgb(234, 88, 12)'/>
</g>
</g>
<g fill="none" color="white" stroke="rgb(234, 88, 12)" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M89.55,510.00 L89.55,860.44 L368.73,860.44 L368.73,510.00 L89.55,510.00 Z '/> <g transform="translate(17.58,685.22) rotate(270.00)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="middle">
<text><tspan font-family="sans" >TPS</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<g transform="translate(229.14,488.83)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="middle">
<text><tspan font-family="sans" >Echo Throughput (TPS) — higher is better</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M666.48,860.44 L968.73,860.44 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M666.48,860.44 L674.73,860.44 M968.73,860.44 L960.48,860.44 '/> <g transform="translate(658.79,864.02)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 7</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M666.48,790.35 L968.73,790.35 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M666.48,790.35 L674.73,790.35 M968.73,790.35 L960.48,790.35 '/> <g transform="translate(658.79,793.93)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 7.5</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M666.48,720.26 L968.73,720.26 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M666.48,720.26 L674.73,720.26 M968.73,720.26 L960.48,720.26 '/> <g transform="translate(658.79,723.84)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 8</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M666.48,650.18 L968.73,650.18 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M666.48,650.18 L674.73,650.18 M968.73,650.18 L960.48,650.18 '/> <g transform="translate(658.79,653.76)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 8.5</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M666.48,580.09 L968.73,580.09 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M666.48,580.09 L674.73,580.09 M968.73,580.09 L960.48,580.09 '/> <g transform="translate(658.79,583.67)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 9</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="gray" stroke="currentColor" stroke-width="0.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='gray' stroke-dasharray='2,4' class="gridline" d='M666.48,510.00 L968.73,510.00 '/></g>
<g fill="none" color="gray" stroke="gray" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M666.48,510.00 L674.73,510.00 M968.73,510.00 L960.48,510.00 '/> <g transform="translate(658.79,513.58)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" > 9.5</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M817.61,860.44 L817.61,852.19 M817.61,510.00 L817.61,518.25 '/> <g transform="translate(815.82,871.23) rotate(30.00)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="start">
<text><tspan font-family="sans" >v0.5.0</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M666.48,510.00 L666.48,860.44 L968.73,860.44 L968.73,510.00 L666.48,510.00 Z '/></g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g id="gnuplot_plot_1d" ><title>parrhesia-pg</title>
<g fill="none" color="white" stroke="black" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
<g transform="translate(1137.91,521.83)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" >parrhesia-pg</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='rgb( 37, 99, 235)' d='M1145.60,518.25 L1184.61,518.25 M817.61,533.36 '/> <use xlink:href='#gpPt6' transform='translate(817.61,533.36) scale(4.12)' color='rgb( 37, 99, 235)'/>
<use xlink:href='#gpPt6' transform='translate(1165.10,518.25) scale(4.12)' color='rgb( 37, 99, 235)'/>
</g>
</g>
<g id="gnuplot_plot_2d" ><title>parrhesia-memory</title>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
<g transform="translate(1137.91,538.33)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" >parrhesia-memory</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='rgb( 22, 163, 74)' d='M1145.60,534.75 L1184.61,534.75 M817.61,766.99 '/> <use xlink:href='#gpPt8' transform='translate(817.61,766.99) scale(4.12)' color='rgb( 22, 163, 74)'/>
<use xlink:href='#gpPt8' transform='translate(1165.10,534.75) scale(4.12)' color='rgb( 22, 163, 74)'/>
</g>
</g>
<g id="gnuplot_plot_3d" ><title>nostr-rs-relay (avg)</title>
<g fill="none" color="white" stroke="rgb( 22, 163, 74)" stroke-width="1.50" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.50" stroke-linecap="butt" stroke-linejoin="miter">
<g transform="translate(1137.91,554.83)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="end">
<text><tspan font-family="sans" >nostr-rs-relay (avg)</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.50" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='rgb(234, 88, 12)' stroke-dasharray='3.8,6.0' d='M1145.60,551.25 L1184.61,551.25 M817.61,860.44 '/> <use xlink:href='#gpPt4' transform='translate(817.61,860.44) scale(3.30)' color='rgb(234, 88, 12)'/>
<use xlink:href='#gpPt4' transform='translate(1165.10,551.25) scale(3.30)' color='rgb(234, 88, 12)'/>
</g>
</g>
<g fill="none" color="white" stroke="rgb(234, 88, 12)" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="2.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="black" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<path stroke='black' d='M666.48,510.00 L666.48,860.44 L968.73,860.44 L968.73,510.00 L666.48,510.00 Z '/> <g transform="translate(617.58,685.22) rotate(270.00)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="middle">
<text><tspan font-family="sans" >ms</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
<g transform="translate(817.60,488.83)" stroke="none" fill="black" font-family="sans" font-size="11.00" text-anchor="middle">
<text><tspan font-family="sans" >Connect Avg Latency (ms) — lower is better</tspan></text>
</g>
</g>
<g fill="none" color="black" stroke="currentColor" stroke-width="1.00" stroke-linecap="butt" stroke-linejoin="miter">
</g>
</g>
</svg>

Size: 53 KiB

1
bench/history.jsonl Normal file
View File

@@ -0,0 +1 @@
+{"timestamp":"2026-03-18T20:13:21Z","machine_id":"squirrel","git_tag":"v0.5.0","git_commit":"970cee2","runs":3,"servers":{"parrhesia-pg":{"connect_avg_ms":9.333333333333334,"connect_max_ms":12.333333333333334,"echo_tps":64030.333333333336,"echo_mibs":35.06666666666666,"event_tps":5015.333333333333,"event_mibs":3.4,"req_tps":6416.333333333333,"req_mibs":42.43333333333334},"parrhesia-memory":{"connect_avg_ms":7.666666666666667,"connect_max_ms":9.666666666666666,"echo_tps":93656.33333333333,"echo_mibs":51.26666666666667,"event_tps":1505.3333333333333,"event_mibs":1,"req_tps":14566.666666666666,"req_mibs":94.23333333333335},"nostr-rs-relay":{"connect_avg_ms":7,"connect_max_ms":10.333333333333334,"echo_tps":140767,"echo_mibs":77.06666666666666,"event_tps":2293.6666666666665,"event_mibs":1.5,"req_tps":3035.6666666666665,"req_mibs":19.23333333333333}}}
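Each line of `bench/history.jsonl` is one self-contained run record keyed by server name, so a history series can be folded out with a few lines of Elixir. A minimal sketch, assuming Elixir 1.18+ (whose standard library ships the `JSON` module — the same one `config.exs` hands to `:postgrex` as `:json_library`); the helper name is illustrative, not part of the repo:

```elixir
# Sketch: extract per-server REQ throughput from one history.jsonl record.
# `extract_req_tps` is a hypothetical helper; JSON.decode!/1 is the
# standard-library decoder available in Elixir >= 1.18.
extract_req_tps = fn line ->
  run = JSON.decode!(line)

  for {server, metrics} <- run["servers"] do
    {run["git_tag"], server, metrics["req_tps"]}
  end
end

# Folding the whole file is then one pipeline:
# "bench/history.jsonl" |> File.stream!() |> Enum.flat_map(extract_req_tps)
```

A plotting script can then serialize these tuples one point per line for gnuplot.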

View File

@@ -1,10 +1,38 @@
 import Config
 
+project_version =
+  case Mix.Project.config()[:version] do
+    version when is_binary(version) -> version
+    version -> to_string(version)
+  end
+
 config :postgrex, :json_library, JSON
 
 config :parrhesia,
+  metadata: [
+    name: "Parrhesia",
+    version: project_version,
+    hide_version?: true
+  ],
+  database: [
+    separate_read_pool?: config_env() != :test
+  ],
   moderation_cache_enabled: true,
+  enable_partition_retention_worker: true,
   relay_url: "ws://localhost:4413/relay",
+  nip43: [
+    enabled: true,
+    invite_ttl_seconds: 900,
+    request_max_age_seconds: 300
+  ],
+  nip66: [
+    enabled: true,
+    publish_interval_seconds: 900,
+    publish_monitor_announcement?: true,
+    timeout_ms: 5_000,
+    checks: [:open, :read, :nip11],
+    targets: []
+  ],
   identity: [
     path: nil,
     private_key: nil
@@ -18,6 +46,12 @@ config :parrhesia,
   max_event_bytes: 262_144,
   max_filters_per_req: 16,
   max_filter_limit: 500,
+  max_tags_per_event: 256,
+  max_tag_values_per_filter: 128,
+  ip_max_event_ingest_per_window: 1_000,
+  ip_event_ingest_window_seconds: 1,
+  relay_max_event_ingest_per_window: 10_000,
+  relay_event_ingest_window_seconds: 1,
   max_subscriptions_per_connection: 32,
   max_event_future_skew_seconds: 900,
   max_event_ingest_per_window: 120,
@@ -61,6 +95,7 @@ config :parrhesia,
   public: %{
     enabled: true,
     bind: %{ip: {0, 0, 0, 0}, port: 4413},
+    max_connections: 20_000,
     transport: %{scheme: :http, tls: %{mode: :disabled}},
     proxy: %{trusted_cidrs: [], honor_x_forwarded_for: true},
     network: %{allow_all: true},
@@ -85,6 +120,7 @@ config :parrhesia,
     max_partitions_to_drop_per_run: 1
   ],
   features: [
+    verify_event_signatures_locked?: config_env() == :prod,
    verify_event_signatures: true,
    nip_45_count: true,
    nip_50_search: true,
@@ -92,13 +128,16 @@ config :parrhesia,
     marmot_push_notifications: false
   ],
   storage: [
+    backend: :postgres,
     events: Parrhesia.Storage.Adapters.Postgres.Events,
+    acl: Parrhesia.Storage.Adapters.Postgres.ACL,
     moderation: Parrhesia.Storage.Adapters.Postgres.Moderation,
     groups: Parrhesia.Storage.Adapters.Postgres.Groups,
     admin: Parrhesia.Storage.Adapters.Postgres.Admin
   ]
 
 config :parrhesia, Parrhesia.Repo, types: Parrhesia.PostgresTypes
+config :parrhesia, Parrhesia.ReadRepo, types: Parrhesia.PostgresTypes
 
 config :parrhesia, ecto_repos: [Parrhesia.Repo]
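The new `backend:` key in the `storage:` block is the compile-time default that `config/runtime.exs` later overrides from `PARRHESIA_STORAGE_BACKEND`. As a hedged sketch, a memory-backed profile (such as the one the benchmark uses) would presumably swap the adapter modules as well — the `Memory` module names below are assumptions by analogy; only the Postgres adapters appear in this diff:

```elixir
# Hypothetical memory-backed storage profile; the
# Parrhesia.Storage.Adapters.Memory.* module names are assumed
# by analogy with the Postgres adapters configured above.
config :parrhesia,
  storage: [
    backend: :memory,
    events: Parrhesia.Storage.Adapters.Memory.Events,
    acl: Parrhesia.Storage.Adapters.Memory.ACL,
    moderation: Parrhesia.Storage.Adapters.Memory.Moderation,
    groups: Parrhesia.Storage.Adapters.Memory.Groups,
    admin: Parrhesia.Storage.Adapters.Memory.Admin
  ]
```

Keeping the adapter modules in config rather than hard-coding them is what lets the benchmark flip between Postgres and memory without code changes.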

View File

@@ -23,3 +23,13 @@ config :parrhesia,
     show_sensitive_data_on_connection_error: true,
     pool_size: 10
   ] ++ repo_host_opts
+
+config :parrhesia,
+       Parrhesia.ReadRepo,
+       [
+         username: System.get_env("PGUSER") || System.get_env("USER") || "agent",
+         password: System.get_env("PGPASSWORD"),
+         database: System.get_env("PGDATABASE") || "parrhesia_dev",
+         show_sensitive_data_on_connection_error: true,
+         pool_size: 10
+       ] ++ repo_host_opts

View File

@@ -5,4 +5,9 @@ config :parrhesia, Parrhesia.Repo,
   queue_target: 1_000,
   queue_interval: 5_000
 
+config :parrhesia, Parrhesia.ReadRepo,
+  pool_size: 32,
+  queue_target: 1_000,
+  queue_interval: 5_000
+
 # Production runtime configuration lives in config/runtime.exs.

View File

@@ -35,6 +35,20 @@ bool_env = fn name, default ->
end end
end end
storage_backend_env = fn name, default ->
case System.get_env(name) do
nil ->
default
value ->
case String.downcase(String.trim(value)) do
"postgres" -> :postgres
"memory" -> :memory
_other -> raise "environment variable #{name} must be one of: postgres, memory"
end
end
end
csv_env = fn name, default -> csv_env = fn name, default ->
case System.get_env(name) do case System.get_env(name) do
nil -> nil ->
@@ -125,12 +139,12 @@ ipv4_env = fn name, default ->
end end
if config_env() == :prod do if config_env() == :prod do
database_url =
System.get_env("DATABASE_URL") ||
raise "environment variable DATABASE_URL is missing. Example: ecto://USER:PASS@HOST/DATABASE"
repo_defaults = Application.get_env(:parrhesia, Parrhesia.Repo, []) repo_defaults = Application.get_env(:parrhesia, Parrhesia.Repo, [])
read_repo_defaults = Application.get_env(:parrhesia, Parrhesia.ReadRepo, [])
relay_url_default = Application.get_env(:parrhesia, :relay_url) relay_url_default = Application.get_env(:parrhesia, :relay_url)
metadata_defaults = Application.get_env(:parrhesia, :metadata, [])
database_defaults = Application.get_env(:parrhesia, :database, [])
storage_defaults = Application.get_env(:parrhesia, :storage, [])
moderation_cache_enabled_default = moderation_cache_enabled_default =
Application.get_env(:parrhesia, :moderation_cache_enabled, true) Application.get_env(:parrhesia, :moderation_cache_enabled, true)
@@ -138,6 +152,9 @@ if config_env() == :prod do
enable_expiration_worker_default = enable_expiration_worker_default =
Application.get_env(:parrhesia, :enable_expiration_worker, true) Application.get_env(:parrhesia, :enable_expiration_worker, true)
enable_partition_retention_worker_default =
Application.get_env(:parrhesia, :enable_partition_retention_worker, true)
limits_defaults = Application.get_env(:parrhesia, :limits, []) limits_defaults = Application.get_env(:parrhesia, :limits, [])
policies_defaults = Application.get_env(:parrhesia, :policies, []) policies_defaults = Application.get_env(:parrhesia, :policies, [])
listeners_defaults = Application.get_env(:parrhesia, :listeners, %{}) listeners_defaults = Application.get_env(:parrhesia, :listeners, %{})
@@ -148,10 +165,41 @@ if config_env() == :prod do
default_pool_size = Keyword.get(repo_defaults, :pool_size, 32) default_pool_size = Keyword.get(repo_defaults, :pool_size, 32)
default_queue_target = Keyword.get(repo_defaults, :queue_target, 1_000) default_queue_target = Keyword.get(repo_defaults, :queue_target, 1_000)
default_queue_interval = Keyword.get(repo_defaults, :queue_interval, 5_000) default_queue_interval = Keyword.get(repo_defaults, :queue_interval, 5_000)
default_read_pool_size = Keyword.get(read_repo_defaults, :pool_size, default_pool_size)
default_read_queue_target = Keyword.get(read_repo_defaults, :queue_target, default_queue_target)
default_read_queue_interval =
Keyword.get(read_repo_defaults, :queue_interval, default_queue_interval)
default_storage_backend =
storage_defaults
|> Keyword.get(:backend, :postgres)
|> case do
:postgres -> :postgres
:memory -> :memory
other -> raise "unsupported storage backend default: #{inspect(other)}"
end
storage_backend = storage_backend_env.("PARRHESIA_STORAGE_BACKEND", default_storage_backend)
postgres_backend? = storage_backend == :postgres
separate_read_pool? =
postgres_backend? and Keyword.get(database_defaults, :separate_read_pool?, true)
database_url =
if postgres_backend? do
System.get_env("DATABASE_URL") ||
raise "environment variable DATABASE_URL is missing. Example: ecto://USER:PASS@HOST/DATABASE"
else
nil
end
pool_size = int_env.("POOL_SIZE", default_pool_size)
queue_target = int_env.("DB_QUEUE_TARGET_MS", default_queue_target)
queue_interval = int_env.("DB_QUEUE_INTERVAL_MS", default_queue_interval)
read_pool_size = int_env.("DB_READ_POOL_SIZE", default_read_pool_size)
read_queue_target = int_env.("DB_READ_QUEUE_TARGET_MS", default_read_queue_target)
read_queue_interval = int_env.("DB_READ_QUEUE_INTERVAL_MS", default_read_queue_interval)
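The `int_env`, `bool_env`, and `infinity_or_int_env` calls above refer to helper closures defined earlier in `config/runtime.exs`, outside this hunk. A minimal sketch of what such helpers typically look like; the exact parsing rules here are assumptions, not the project's actual definitions:

```elixir
# Hypothetical sketch of the env-override helpers used in this config.
# Each falls back to the compile-time default when the variable is unset.
int_env = fn name, default ->
  case System.get_env(name) do
    nil -> default
    raw -> String.to_integer(String.trim(raw))
  end
end

bool_env = fn name, default ->
  case System.get_env(name) do
    nil -> default
    raw -> String.downcase(String.trim(raw)) in ["1", "true", "yes"]
  end
end

infinity_or_int_env = fn name, default ->
  case System.get_env(name) do
    nil -> default
    "infinity" -> :infinity
    raw -> String.to_integer(String.trim(raw))
  end
end
```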
limits = [
  max_frame_bytes:
@@ -174,6 +222,36 @@ if config_env() == :prod do
      "PARRHESIA_LIMITS_MAX_FILTER_LIMIT",
      Keyword.get(limits_defaults, :max_filter_limit, 500)
    ),
max_tags_per_event:
int_env.(
"PARRHESIA_LIMITS_MAX_TAGS_PER_EVENT",
Keyword.get(limits_defaults, :max_tags_per_event, 256)
),
max_tag_values_per_filter:
int_env.(
"PARRHESIA_LIMITS_MAX_TAG_VALUES_PER_FILTER",
Keyword.get(limits_defaults, :max_tag_values_per_filter, 128)
),
ip_max_event_ingest_per_window:
int_env.(
"PARRHESIA_LIMITS_IP_MAX_EVENT_INGEST_PER_WINDOW",
Keyword.get(limits_defaults, :ip_max_event_ingest_per_window, 1_000)
),
ip_event_ingest_window_seconds:
int_env.(
"PARRHESIA_LIMITS_IP_EVENT_INGEST_WINDOW_SECONDS",
Keyword.get(limits_defaults, :ip_event_ingest_window_seconds, 1)
),
relay_max_event_ingest_per_window:
int_env.(
"PARRHESIA_LIMITS_RELAY_MAX_EVENT_INGEST_PER_WINDOW",
Keyword.get(limits_defaults, :relay_max_event_ingest_per_window, 10_000)
),
relay_event_ingest_window_seconds:
int_env.(
"PARRHESIA_LIMITS_RELAY_EVENT_INGEST_WINDOW_SECONDS",
Keyword.get(limits_defaults, :relay_event_ingest_window_seconds, 1)
),
  max_subscriptions_per_connection:
    int_env.(
      "PARRHESIA_LIMITS_MAX_SUBSCRIPTIONS_PER_CONNECTION",
@@ -388,6 +466,11 @@ if config_env() == :prod do
    ip: Map.get(public_bind_defaults, :ip, {0, 0, 0, 0}),
    port: int_env.("PORT", Map.get(public_bind_defaults, :port, 4413))
  },
max_connections:
infinity_or_int_env.(
"PARRHESIA_PUBLIC_MAX_CONNECTIONS",
Map.get(public_listener_defaults, :max_connections, 20_000)
),
  transport: %{
    scheme: Map.get(public_transport_defaults, :scheme, :http),
    tls: Map.get(public_transport_defaults, :tls, %{mode: :disabled})
@@ -471,6 +554,11 @@ if config_env() == :prod do
      Map.get(metrics_listener_bind_defaults, :port, 9568)
    )
  },
max_connections:
infinity_or_int_env.(
"PARRHESIA_METRICS_ENDPOINT_MAX_CONNECTIONS",
Map.get(metrics_listener_defaults, :max_connections, 1_024)
),
  transport: %{
    scheme: Map.get(metrics_listener_transport_defaults, :scheme, :http),
    tls: Map.get(metrics_listener_transport_defaults, :tls, %{mode: :disabled})
@@ -553,11 +641,14 @@ if config_env() == :prod do
]
features = [
  verify_event_signatures_locked?:
    Keyword.get(features_defaults, :verify_event_signatures_locked?, false),
  verify_event_signatures:
    if Keyword.get(features_defaults, :verify_event_signatures_locked?, false) do
      true
    else
      Keyword.get(features_defaults, :verify_event_signatures, true)
    end,
  nip_45_count:
    bool_env.(
      "PARRHESIA_FEATURES_NIP_45_COUNT",
@@ -580,14 +671,57 @@ if config_env() == :prod do
    )
]
storage =
case storage_backend do
:postgres ->
[
backend: :postgres,
events: Parrhesia.Storage.Adapters.Postgres.Events,
acl: Parrhesia.Storage.Adapters.Postgres.ACL,
moderation: Parrhesia.Storage.Adapters.Postgres.Moderation,
groups: Parrhesia.Storage.Adapters.Postgres.Groups,
admin: Parrhesia.Storage.Adapters.Postgres.Admin
]
:memory ->
[
backend: :memory,
events: Parrhesia.Storage.Adapters.Memory.Events,
acl: Parrhesia.Storage.Adapters.Memory.ACL,
moderation: Parrhesia.Storage.Adapters.Memory.Moderation,
groups: Parrhesia.Storage.Adapters.Memory.Groups,
admin: Parrhesia.Storage.Adapters.Memory.Admin
]
end
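`Parrhesia.Storage` (seen later in this diff as `Storage.acl()`, `Storage.admin()`, and so on) presumably resolves these adapter modules from the `:storage` keyword list built above. A hedged sketch of that dispatch, with `StorageSketch` as a stand-in module name rather than the project's real implementation:

```elixir
defmodule StorageSketch do
  # Hypothetical adapter dispatch mirroring what Parrhesia.Storage
  # likely does: read the :storage keyword list from app config and
  # return the configured module for each storage concern.
  def events, do: adapter(:events, Parrhesia.Storage.Adapters.Postgres.Events)
  def acl, do: adapter(:acl, Parrhesia.Storage.Adapters.Postgres.ACL)

  defp adapter(key, default) do
    :parrhesia
    |> Application.get_env(:storage, [])
    |> Keyword.get(key, default)
  end
end
```

Callers then stay backend-agnostic: swapping `backend: :memory` in config changes every `StorageSketch.events()` call site at once.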
if postgres_backend? do
  config :parrhesia, Parrhesia.Repo,
    url: database_url,
    pool_size: pool_size,
    queue_target: queue_target,
    queue_interval: queue_interval
config :parrhesia, Parrhesia.ReadRepo,
url: database_url,
pool_size: read_pool_size,
queue_target: read_queue_target,
queue_interval: read_queue_interval
end
config :parrhesia,
database: [
separate_read_pool?: separate_read_pool?
],
  relay_url: string_env.("PARRHESIA_RELAY_URL", relay_url_default),
metadata: [
name: Keyword.get(metadata_defaults, :name, "Parrhesia"),
version: Keyword.get(metadata_defaults, :version, "0.0.0"),
hide_version?:
bool_env.(
"PARRHESIA_METADATA_HIDE_VERSION",
Keyword.get(metadata_defaults, :hide_version?, true)
)
],
  acl: [
    protected_filters:
      json_env.(
@@ -611,11 +745,17 @@ if config_env() == :prod do
    bool_env.("PARRHESIA_MODERATION_CACHE_ENABLED", moderation_cache_enabled_default),
  enable_expiration_worker:
    bool_env.("PARRHESIA_ENABLE_EXPIRATION_WORKER", enable_expiration_worker_default),
enable_partition_retention_worker:
bool_env.(
"PARRHESIA_ENABLE_PARTITION_RETENTION_WORKER",
enable_partition_retention_worker_default
),
  listeners: listeners,
  limits: limits,
  policies: policies,
  retention: retention,
  features: features,
  storage: storage
case System.get_env("PARRHESIA_EXTRA_CONFIG") do
  nil -> :ok


@@ -27,6 +27,7 @@ config :parrhesia, :listeners,
config :parrhesia,
  enable_expiration_worker: false,
  moderation_cache_enabled: false,
  nip66: [enabled: false],
  identity: [
    path: Path.join(System.tmp_dir!(), "parrhesia_test_identity.json"),
    private_key: nil


@@ -10,7 +10,7 @@
vips,
}: let
pname = "parrhesia";
version = "0.6.0";
beamPackages = beam.packages.erlang_28.extend (
  final: _prev: {


@@ -101,6 +101,8 @@ in {
nostr-bench
# Nostr reference servers
nostr-rs-relay
# Benchmark graph
gnuplot
]
++ lib.optionals pkgs.stdenv.hostPlatform.isx86_64 [
  strfry


@@ -82,16 +82,20 @@ Configured WS/HTTP Listeners (Bandit/Plug)
## 4) OTP supervision design
`Parrhesia.Runtime` children (top-level):
1. `Parrhesia.Telemetry` metric definitions/reporters
2. `Parrhesia.ConnectionStats` per-listener connection/subscription counters
3. `Parrhesia.Config` runtime config cache (ETS-backed)
4. `Parrhesia.Web.EventIngestLimiter` relay-wide event ingest rate limiter
5. `Parrhesia.Web.IPEventIngestLimiter` per-IP event ingest rate limiter
6. `Parrhesia.Storage.Supervisor` adapter processes (`Repo`, pools)
7. `Parrhesia.Subscriptions.Supervisor` subscription index + fanout workers
8. `Parrhesia.Auth.Supervisor` AUTH challenge/session tracking
9. `Parrhesia.Sync.Supervisor` outbound relay sync workers
10. `Parrhesia.Policy.Supervisor` rate limiters / ACL caches
11. `Parrhesia.Web.Endpoint` supervises configured WS + HTTP listeners
12. `Parrhesia.Tasks.Supervisor` background jobs (expiry purge, maintenance)
Failure model:


@@ -1,140 +0,0 @@
# Khatru-Inspired Runtime Improvements
This document collects refactoring and extension ideas learned from studying Khatru-style relay design.
It is intentionally **not** about the new public API surface or the sync ACL model. Those live in `docs/slop/LOCAL_API.md` and `docs/SYNC.md`.
The focus here is runtime shape, protocol behavior, and operator-visible relay features.
---
## 1. Why This Matters
Khatru appears mature mainly because it exposes clearer relay pipeline stages.
That gives three practical benefits:
- less policy drift between storage, websocket, and management code,
- easier feature addition without hard-coding more branches into one connection module,
- better composability for relay profiles with different trust and traffic models.
Parrhesia should borrow that clarity without copying Khatru's code-first hook model wholesale.
---
## 2. Proposed Runtime Refactors
### 2.1 Staged policy pipeline
Parrhesia should stop treating policy as one coarse `EventPolicy` module plus scattered special cases.
Recommended internal stages:
1. connection admission
2. authentication challenge and validation
3. publish/write authorization
4. query/count authorization
5. stream subscription authorization
6. negentropy authorization
7. response shaping
8. broadcast/fanout suppression
This is an internal runtime refactor. It does not imply a new public API.
### 2.2 Richer internal request context
The runtime should carry a structured request context through all stages.
Useful fields:
- authenticated pubkeys
- caller kind
- remote IP
- subscription id
- peer id
- negentropy session flag
- internal-call flag
This reduces ad-hoc branching and makes audit/telemetry more coherent.
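The fields above can be sketched as a single struct threaded through every stage; the module name and field defaults here are illustrative, not Parrhesia's actual types:

```elixir
defmodule RequestCtxSketch do
  # Hypothetical shape for the staged-pipeline request context.
  # Every policy stage reads from (and never mutates) this struct.
  defstruct authenticated_pubkeys: MapSet.new(),
            caller: :unknown,
            remote_ip: nil,
            subscription_id: nil,
            peer_id: nil,
            negentropy?: false,
            internal?: false
end
```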
### 2.3 Separate policy from storage presence tables
Moderation state should remain data.
Runtime enforcement should be a first-class layer that consumes that data, not a side effect of whether a table exists.
This is especially important for:
- blocked IP enforcement,
- pubkey allowlists,
- future kind- or tag-scoped restrictions.
---
## 3. Protocol and Relay Features
### 3.1 Real COUNT sketches
Parrhesia currently returns a synthetic `hll` payload for NIP-45-style count responses.
If approximate count exchange matters, implement a real reusable HLL sketch path instead of hashing `filters + count`.
### 3.2 Relay identity in NIP-11
Once Parrhesia owns a stable server identity, NIP-11 should expose the relay pubkey instead of returning `nil`.
This is useful beyond sync:
- operator visibility,
- relay fingerprinting,
- future trust tooling.
### 3.3 Connection-level IP enforcement
Blocked IP support should be enforced on actual connection admission, not only stored in management tables.
This should happen early, before expensive protocol handling.
### 3.4 Better response shaping
Introduce a narrow internal response shaping layer for cases where returned events or counts need controlled rewriting or suppression.
Examples:
- hide fields for specific relay profiles,
- suppress rebroadcast of locally-ingested remote sync traffic,
- shape relay notices consistently.
This should stay narrow and deterministic. It should not become arbitrary app semantics.
---
## 4. Suggested Extension Points
These should be internal runtime seams, not necessarily public interfaces:
- `ConnectionPolicy`
- `AuthPolicy`
- `ReadPolicy`
- `WritePolicy`
- `NegentropyPolicy`
- `ResponsePolicy`
- `BroadcastPolicy`
They may initially be plain modules with well-defined callbacks or functions.
The point is not pluggability for its own sake. The point is to make policy stages explicit and testable.
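One such seam could be expressed as an Elixir behaviour plus a small implementation; the callback shape below is an assumption for illustration, not Parrhesia's actual interface:

```elixir
defmodule WritePolicy do
  # Hypothetical seam: one callback per publish/write authorization decision.
  @callback authorize(event :: map(), ctx :: map()) :: :ok | {:error, term()}
end

defmodule AllowAuthorsWritePolicy do
  @behaviour WritePolicy

  # Toy stage: only accept events whose pubkey is in the allowlist.
  @impl true
  def authorize(%{"pubkey" => pk}, %{allowed_pubkeys: allowed}) do
    if MapSet.member?(allowed, pk), do: :ok, else: {:error, :restricted}
  end
end
```

Because each stage is a plain module with one callback, it can be unit-tested in isolation before being wired into the connection flow.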
---
## 5. Near-Term Priority
Recommended order:
1. enforce blocked IPs and any future connection-gating on the real connection path
2. split the current websocket flow into explicit read/write/negentropy policy stages
3. enrich runtime request context and telemetry metadata
4. expose relay pubkey in NIP-11 once identity lands
5. replace fake HLL payloads with a real approximate-count implementation if NIP-45 support matters operationally
This keeps the runtime improvements incremental and independent from the ongoing API and ACL implementation.

docs/LOCAL_API.md Normal file

@@ -0,0 +1,147 @@
# Parrhesia Local API
Parrhesia can run as a normal standalone relay application, but it also exposes a
first-class in-process API for Elixir callers that want to embed the relay inside a larger
OTP system.
This document describes that embedding surface. The runtime is still alpha, so treat the API
as usable but not yet frozen.
## What embedding means today
Embedding currently means:
- the host app adds `:parrhesia` as a dependency and OTP application
- the host app provides `config :parrhesia, ...` explicitly
- the host app migrates the Parrhesia database schema
- callers interact with the relay through `Parrhesia.API.*`
Current operational assumptions:
- Parrhesia runs one runtime per BEAM node
- core processes use global module names such as `Parrhesia.Config` and `Parrhesia.Web.Endpoint`
- the config defaults in this repo's `config/*.exs` are not imported automatically by a host app
If you want multiple isolated relay instances inside one VM, Parrhesia does not support that
cleanly yet.
## Minimal host setup
Add the dependency in your host app:
```elixir
defp deps do
[
{:parrhesia, path: "../parrhesia"}
]
end
```
Configure the runtime in your host app. At minimum you should carry over:
```elixir
import Config
config :postgrex, :json_library, JSON
config :parrhesia,
relay_url: "wss://relay.example.com/relay",
listeners: %{},
storage: [backend: :postgres]
config :parrhesia, Parrhesia.Repo,
url: System.fetch_env!("DATABASE_URL"),
pool_size: 10,
types: Parrhesia.PostgresTypes
config :parrhesia, Parrhesia.ReadRepo,
url: System.fetch_env!("DATABASE_URL"),
pool_size: 10,
types: Parrhesia.PostgresTypes
config :parrhesia, ecto_repos: [Parrhesia.Repo]
```
Notes:
- Set `listeners: %{}` if you only want the in-process API and no HTTP/WebSocket ingress.
- If you do want ingress, copy the listener shape from the config reference in
[README.md](../README.md).
- Production runtime overrides still use the `PARRHESIA_*` environment variables described in
[README.md](../README.md).
Migrate before serving traffic:
```elixir
Parrhesia.Release.migrate()
```
In development, `mix ecto.migrate -r Parrhesia.Repo` works too.
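`Parrhesia.Release.migrate/0` presumably follows the standard Ecto release-migration pattern; a rough sketch of the equivalent, run against the repos configured above (the real function may differ in detail):

```elixir
# Sketch only: run all pending migrations for each configured repo.
for repo <- Application.fetch_env!(:parrhesia, :ecto_repos) do
  {:ok, _fun_return, _apps} =
    Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
end
```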
## Starting the runtime
In the common case, letting OTP start the `:parrhesia` application is enough.
If you need to start the runtime explicitly under your own supervision tree, use
`Parrhesia.Runtime`:
```elixir
children = [
{Parrhesia.Runtime, name: Parrhesia.Supervisor}
]
```
## Primary modules
The in-process surface is centered on these modules:
- `Parrhesia.API.Events` for publish, query, and count
- `Parrhesia.API.Stream` for REQ-like local subscriptions
- `Parrhesia.API.Auth` for event validation and NIP-98 auth parsing
- `Parrhesia.API.Admin` for management operations
- `Parrhesia.API.Identity` for relay-owned key management
- `Parrhesia.API.ACL` for protected sync ACLs
- `Parrhesia.API.Sync` for outbound relay sync management
Generated ExDoc groups these modules under `Embedded API`.
## Request context
Most calls take a `Parrhesia.API.RequestContext`. This carries authenticated pubkeys and
caller metadata through policy checks.
```elixir
%Parrhesia.API.RequestContext{
caller: :local,
authenticated_pubkeys: MapSet.new()
}
```
If your host app has already authenticated a user or peer, put that pubkey into
`authenticated_pubkeys` before calling the API.
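For example, after your host app authenticates a user (the `user_pubkey` binding below is illustrative):

```elixir
context = %Parrhesia.API.RequestContext{
  caller: :local,
  authenticated_pubkeys: MapSet.new([user_pubkey])
}
```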
## Example
```elixir
alias Parrhesia.API.Events
alias Parrhesia.API.RequestContext
alias Parrhesia.API.Stream
context = %RequestContext{caller: :local}
{:ok, publish_result} = Events.publish(event, context: context)
{:ok, events} = Events.query([%{"kinds" => [1]}], context: context)
{:ok, ref} = Stream.subscribe(self(), "local-sub", [%{"kinds" => [1]}], context: context)
receive do
{:parrhesia, :event, ^ref, "local-sub", event} -> event
{:parrhesia, :eose, ^ref, "local-sub"} -> :ok
end
:ok = Stream.unsubscribe(ref)
```
## Where to look next
- [README.md](../README.md) for setup and the full config reference
- [docs/SYNC.md](./SYNC.md) for relay-to-relay sync semantics
- module docs under `Parrhesia.API.*` for per-function behavior

File diff suppressed because it is too large


@@ -1,17 +1,27 @@
defmodule Parrhesia do
@moduledoc """
Parrhesia is a Nostr relay runtime that can run standalone or as an embedded OTP service.
For embedded use, the main developer-facing surface is `Parrhesia.API.*`.
Start with:
- `Parrhesia.API.Events`
- `Parrhesia.API.Stream`
- `Parrhesia.API.Admin`
- `Parrhesia.API.Identity`
- `Parrhesia.API.ACL`
- `Parrhesia.API.Sync`
The host application is responsible for:
- setting `config :parrhesia, ...`
- migrating the configured Parrhesia repos
- deciding whether to expose listeners or use only the in-process API
See `README.md` and `docs/LOCAL_API.md` for the embedding model and configuration guide.
""" """
@doc """ @doc false
Hello world.
## Examples
iex> Parrhesia.hello()
:world
"""
def hello do def hello do
:world :world
end end


@@ -1,12 +1,37 @@
defmodule Parrhesia.API.ACL do
@moduledoc """
Public ACL API and rule matching for protected sync traffic.
ACL checks are only applied when the requested subject overlaps with
`config :parrhesia, :acl, protected_filters: [...]`.
The intended flow is:
1. mark a subset of sync traffic as protected with `protected_filters`
2. persist pubkey-based grants with `grant/2`
3. call `check/3` during sync reads and writes
Unprotected subjects always return `:ok`.
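A hypothetical end-to-end flow; the filter shape, `grantee_pubkey`, and `context` bindings are illustrative:

```elixir
# config/runtime.exs (illustrative protected surface)
config :parrhesia, :acl,
  protected_filters: [%{"kinds" => [30_000]}]

# grant a pubkey read access, then authorize during a sync read
:ok =
  Parrhesia.API.ACL.grant(%{
    principal_type: :pubkey,
    principal: grantee_pubkey,
    capability: :sync_read,
    match: %{"kinds" => [30_000]}
  })

:ok = Parrhesia.API.ACL.check(:sync_read, %{"kinds" => [30_000]}, context: context)
```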
""" """
alias Parrhesia.API.RequestContext alias Parrhesia.API.RequestContext
alias Parrhesia.Protocol.Filter alias Parrhesia.Protocol.Filter
alias Parrhesia.Storage alias Parrhesia.Storage
@doc """
Persists an ACL rule.
A typical rule looks like:
```elixir
%{
principal_type: :pubkey,
principal: "...64 hex chars...",
capability: :sync_read,
match: %{"kinds" => [5000], "#r" => ["tribes.accounts.user"]}
}
```
"""
@spec grant(map(), keyword()) :: :ok | {:error, term()}
def grant(rule, _opts \\ []) do
  with {:ok, _stored_rule} <- Storage.acl().put_rule(%{}, normalize_rule(rule)) do
@@ -14,16 +39,39 @@ defmodule Parrhesia.API.ACL do
  end
end
@doc """
Deletes ACL rules matching the given selector.
The selector is passed through to the configured storage adapter, which typically accepts an
id-based selector such as `%{id: rule_id}`.
"""
@spec revoke(map(), keyword()) :: :ok | {:error, term()}
def revoke(rule, _opts \\ []) do
  Storage.acl().delete_rule(%{}, normalize_delete_selector(rule))
end
@doc """
Lists persisted ACL rules.
Supported filters are:
- `:principal_type`
- `:principal`
- `:capability`
"""
@spec list(keyword()) :: {:ok, [map()]} | {:error, term()}
def list(opts \\ []) do
  Storage.acl().list_rules(%{}, normalize_list_opts(opts))
end
@doc """
Authorizes a protected sync read or write subject for the given request context.
Supported capabilities are `:sync_read` and `:sync_write`.
`opts[:context]` defaults to an empty `Parrhesia.API.RequestContext`, which means protected
subjects will fail with `{:error, :auth_required}` until authenticated pubkeys are present.
"""
@spec check(atom(), map(), keyword()) :: :ok | {:error, term()}
def check(capability, subject, opts \\ [])
@@ -44,6 +92,9 @@ defmodule Parrhesia.API.ACL do
def check(_capability, _subject, _opts), do: {:error, :invalid_acl_capability}
@doc """
Returns `true` when a filter overlaps the configured protected read surface.
"""
@spec protected_read?(map()) :: boolean()
def protected_read?(filter) when is_map(filter) do
  case protected_filters() do
@@ -57,6 +108,9 @@ defmodule Parrhesia.API.ACL do
def protected_read?(_filter), do: false
@doc """
Returns `true` when an event matches the configured protected write surface.
"""
@spec protected_write?(map()) :: boolean()
def protected_write?(event) when is_map(event) do
  case protected_filters() do


@@ -1,6 +1,14 @@
defmodule Parrhesia.API.Admin do
@moduledoc """
Public management API facade.
This module exposes the DX-friendly control plane for administrative tasks. It wraps
storage-backed management methods and a set of built-in helpers for ACL, identity, sync,
and listener management.
`execute/3` accepts the same method names used by NIP-86 style management endpoints, while
the dedicated functions (`stats/1`, `health/1`, `list_audit_logs/1`) are easier to call
from Elixir code.
""" """
alias Parrhesia.API.ACL alias Parrhesia.API.ACL
@@ -26,6 +34,22 @@ defmodule Parrhesia.API.Admin do
sync_sync_now sync_sync_now
) )
@doc """
Executes a management method by name.
Built-in methods include:
- `supportedmethods`
- `stats`
- `health`
- `list_audit_logs`
- `acl_grant`, `acl_revoke`, `acl_list`
- `identity_get`, `identity_ensure`, `identity_import`, `identity_rotate`
- `listener_reload`
- `sync_*`
Unknown methods are delegated to the configured `Parrhesia.Storage.Admin` implementation.
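An illustrative call, assuming `context` is a `Parrhesia.API.RequestContext` whose caller is already authorized for management methods:

```elixir
{:ok, stats} = Parrhesia.API.Admin.execute("stats", %{}, context: context)
```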
"""
@spec execute(String.t() | atom(), map(), keyword()) :: {:ok, map()} | {:error, term()}
def execute(method, params, opts \\ [])
@@ -41,6 +65,9 @@ defmodule Parrhesia.API.Admin do
def execute(method, _params, _opts),
  do: {:error, {:unsupported_method, normalize_method_name(method)}}
@doc """
Returns aggregate relay stats plus nested sync stats.
"""
@spec stats(keyword()) :: {:ok, map()} | {:error, term()}
def stats(opts \\ []) do
  with {:ok, relay_stats} <- relay_stats(),
@@ -49,6 +76,12 @@ defmodule Parrhesia.API.Admin do
  end
end
@doc """
Returns the overall management health payload.
The top-level `"status"` is currently derived from sync health, while relay-specific health
details remain delegated to storage-backed management methods.
"""
@spec health(keyword()) :: {:ok, map()} | {:error, term()}
def health(opts \\ []) do
  with {:ok, sync_health} <- Sync.sync_health(opts) do
@@ -60,6 +93,12 @@ defmodule Parrhesia.API.Admin do
  end
end
@doc """
Lists persisted audit log entries from the configured admin storage backend.
Supported options are storage-adapter specific. The built-in admin execution path forwards
`:limit`, `:method`, and `:actor_pubkey`.
"""
@spec list_audit_logs(keyword()) :: {:ok, [map()]} | {:error, term()}
def list_audit_logs(opts \\ []) do
  Storage.admin().list_audit_logs(%{}, opts)


@@ -1,6 +1,15 @@
defmodule Parrhesia.API.Auth do
@moduledoc """
Public helpers for event validation and NIP-98 HTTP authentication.
This module is intended for callers that need a programmatic API surface:
- `validate_event/1` returns validator reason atoms.
- `compute_event_id/1` computes the canonical Nostr event id.
- `validate_nip98/3` and `validate_nip98/4` turn an `Authorization` header into a
shared auth context that can be reused by the rest of the API surface.
For transport-facing validation messages, see `Parrhesia.Protocol.validate_event/1`.
""" """
alias Parrhesia.API.Auth.Context alias Parrhesia.API.Auth.Context
@@ -8,18 +17,46 @@ defmodule Parrhesia.API.Auth do
alias Parrhesia.Auth.Nip98 alias Parrhesia.Auth.Nip98
alias Parrhesia.Protocol.EventValidator alias Parrhesia.Protocol.EventValidator
@doc """
Validates a Nostr event and returns validator-friendly error atoms.
This is the low-level validation entrypoint used by the API surface. Unlike
`Parrhesia.Protocol.validate_event/1`, it preserves the raw validator reason so callers
can branch on it directly.
"""
@spec validate_event(map()) :: :ok | {:error, term()}
def validate_event(event), do: EventValidator.validate(event)
@doc """
Computes the canonical Nostr event id for an event payload.
The event does not need to be persisted first. This is useful when building or signing
events locally.
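The id is the lowercase hex SHA-256 of the NIP-01 serialization array. A standalone sketch of that computation, for illustration only; the real implementation lives in `Parrhesia.Protocol.EventValidator`:

```elixir
# Hypothetical standalone sketch of NIP-01 id computation:
# serialize [0, pubkey, created_at, kind, tags, content] as JSON,
# hash with SHA-256, and hex-encode in lowercase.
compute_id = fn event ->
  [0, event["pubkey"], event["created_at"], event["kind"], event["tags"], event["content"]]
  |> JSON.encode!()
  |> then(&:crypto.hash(:sha256, &1))
  |> Base.encode16(case: :lower)
end
```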
"""
@spec compute_event_id(map()) :: String.t()
def compute_event_id(event), do: EventValidator.compute_id(event)
@doc """
Validates a NIP-98 `Authorization` header using default options.
"""
@spec validate_nip98(String.t() | nil, String.t(), String.t()) ::
        {:ok, Context.t()} | {:error, term()}
def validate_nip98(authorization, method, url) do
  validate_nip98(authorization, method, url, [])
end
@doc """
Validates a NIP-98 `Authorization` header and returns a shared auth context.
The returned `Parrhesia.API.Auth.Context` includes:
- the decoded auth event
- the authenticated pubkey
- a `Parrhesia.API.RequestContext` with `caller: :http`
Supported options are forwarded to `Parrhesia.Auth.Nip98.validate_authorization_header/4`,
including `:max_age_seconds` and `:replay_cache`.
"""
@spec validate_nip98(String.t() | nil, String.t(), String.t(), keyword()) ::
        {:ok, Context.t()} | {:error, term()}
def validate_nip98(authorization, method, url, opts)


@@ -1,6 +1,10 @@
defmodule Parrhesia.API.Auth.Context do
@moduledoc """
Authenticated request details returned by shared auth helpers.
This is the higher-level result returned by `Parrhesia.API.Auth.validate_nip98/3` and
`validate_nip98/4`. The nested `request_context` is ready to be passed into the rest of the
public API surface.
""" """
alias Parrhesia.API.RequestContext alias Parrhesia.API.RequestContext


@@ -1,17 +1,28 @@
defmodule Parrhesia.API.Events do
@moduledoc """
Canonical event publish, query, and count API.
This is the main in-process API for working with Nostr events. It applies the same core
validation and policy checks used by the relay edge, but without going through a socket or
HTTP transport.
All public functions expect `opts[:context]` to contain a `Parrhesia.API.RequestContext`.
That context drives authorization, caller attribution, and downstream policy behavior.
`publish/2` intentionally returns `{:ok, %PublishResult{accepted: false}}` for policy and
storage rejections so callers can mirror relay `OK` semantics without treating a rejected
event as a process error.
""" """
alias Parrhesia.API.Events.PublishResult alias Parrhesia.API.Events.PublishResult
alias Parrhesia.API.RequestContext alias Parrhesia.API.RequestContext
alias Parrhesia.Fanout.Dispatcher
alias Parrhesia.Fanout.MultiNode alias Parrhesia.Fanout.MultiNode
alias Parrhesia.Groups.Flow alias Parrhesia.NIP43
alias Parrhesia.Policy.EventPolicy alias Parrhesia.Policy.EventPolicy
alias Parrhesia.Protocol alias Parrhesia.Protocol
alias Parrhesia.Protocol.Filter alias Parrhesia.Protocol.Filter
alias Parrhesia.Storage alias Parrhesia.Storage
alias Parrhesia.Subscriptions.Index
alias Parrhesia.Telemetry alias Parrhesia.Telemetry
@default_max_event_bytes 262_144 @default_max_event_bytes 262_144
@@ -29,26 +40,53 @@ defmodule Parrhesia.API.Events do
449 449
]) ])
@doc """
Validates, authorizes, persists, and fans out an event.
Required options:
- `:context` - a `Parrhesia.API.RequestContext`
Supported options:
- `:max_event_bytes` - overrides the configured max encoded event size
- `:path`, `:private_key`, `:configured_private_key` - forwarded to the NIP-43 helper flow
Return semantics:
- `{:ok, %PublishResult{accepted: true}}` for accepted events
- `{:ok, %PublishResult{accepted: false}}` for rejected or duplicate events
- `{:error, :invalid_context}` only when the call itself is malformed
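
An illustrative call site; the `event` and `context` bindings are assumed, and the
`message` field name is inferred from this module's own usage:

```elixir
case Parrhesia.API.Events.publish(event, context: context) do
  {:ok, %PublishResult{accepted: true}} -> :accepted
  {:ok, %PublishResult{accepted: false, message: message}} -> {:rejected, message}
  {:error, reason} -> {:invalid_call, reason}
end
```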
"""
@spec publish(map(), keyword()) :: {:ok, PublishResult.t()} | {:error, term()} @spec publish(map(), keyword()) :: {:ok, PublishResult.t()} | {:error, term()}
def publish(event, opts \\ []) def publish(event, opts \\ [])
def publish(event, opts) when is_map(event) and is_list(opts) do def publish(event, opts) when is_map(event) and is_list(opts) do
started_at = System.monotonic_time() started_at = System.monotonic_time()
event_id = Map.get(event, "id", "") event_id = Map.get(event, "id", "")
telemetry_metadata = telemetry_metadata_for_event(event)
with {:ok, context} <- fetch_context(opts), with {:ok, context} <- fetch_context(opts),
:ok <- validate_event_payload_size(event, max_event_bytes(opts)), :ok <- validate_event_payload_size(event, max_event_bytes(opts)),
:ok <- Protocol.validate_event(event), :ok <- Protocol.validate_event(event),
:ok <- EventPolicy.authorize_write(event, context.authenticated_pubkeys, context), :ok <- EventPolicy.authorize_write(event, context.authenticated_pubkeys, context),
:ok <- maybe_process_group_event(event), {:ok, publish_state} <- NIP43.prepare_publish(event, nip43_opts(opts, context)),
{:ok, _stored, message} <- persist_event(event) do {:ok, _stored, message} <- persist_event(event) do
Telemetry.emit( Telemetry.emit(
[:parrhesia, :ingest, :stop], [:parrhesia, :ingest, :stop],
%{duration: System.monotonic_time() - started_at}, %{duration: System.monotonic_time() - started_at},
telemetry_metadata_for_event(event) telemetry_metadata
) )
fanout_event(event) emit_ingest_result(telemetry_metadata, :accepted, :accepted)
message =
case NIP43.finalize_publish(event, publish_state, nip43_opts(opts, context)) do
{:ok, override} when is_binary(override) -> override
:ok -> message
end
Dispatcher.dispatch(event)
maybe_publish_multi_node(event) maybe_publish_multi_node(event)
{:ok, {:ok,
@@ -60,9 +98,12 @@ defmodule Parrhesia.API.Events do
        }}
     else
       {:error, :invalid_context} = error ->
+        emit_ingest_result(telemetry_metadata, :rejected, :invalid_context)
         error

       {:error, reason} ->
+        emit_ingest_result(telemetry_metadata, :rejected, reason)
+
         {:ok,
          %PublishResult{
            event_id: event_id,
@@ -75,47 +116,96 @@ defmodule Parrhesia.API.Events do
   def publish(_event, _opts), do: {:error, :invalid_event}

+  @doc """
+  Queries stored events plus any dynamic NIP-43 events visible to the caller.
+
+  Required options:
+  - `:context` - a `Parrhesia.API.RequestContext`
+
+  Supported options:
+  - `:max_filter_limit` - overrides the configured per-filter limit
+  - `:validate_filters?` - skips filter validation when `false`
+  - `:authorize_read?` - skips read policy checks when `false`
+
+  The skip flags are primarily for internal composition, such as `Parrhesia.API.Stream`.
+  External callers should normally leave them enabled.
+  """
   @spec query([map()], keyword()) :: {:ok, [map()]} | {:error, term()}
   def query(filters, opts \\ [])

   def query(filters, opts) when is_list(filters) and is_list(opts) do
     started_at = System.monotonic_time()
+    telemetry_metadata = telemetry_metadata_for_filters(filters, :query)

     with {:ok, context} <- fetch_context(opts),
          :ok <- maybe_validate_filters(filters, opts),
          :ok <- maybe_authorize_read(filters, context, opts),
          {:ok, events} <- Storage.events().query(%{}, filters, storage_query_opts(context, opts)) do
+      events = NIP43.dynamic_events(filters, nip43_opts(opts, context)) ++ events
+
       Telemetry.emit(
         [:parrhesia, :query, :stop],
-        %{duration: System.monotonic_time() - started_at},
-        telemetry_metadata_for_filters(filters)
+        %{duration: System.monotonic_time() - started_at, result_count: length(events)},
+        telemetry_metadata
       )

+      emit_query_result(telemetry_metadata, :ok)
       {:ok, events}
+    else
+      {:error, reason} = error ->
+        emit_query_result(telemetry_metadata, :error, reason)
+        error
     end
   end

   def query(_filters, _opts), do: {:error, :invalid_filters}
+  @doc """
+  Counts events matching the given filters.
+
+  Required options:
+  - `:context` - a `Parrhesia.API.RequestContext`
+
+  Supported options:
+  - `:validate_filters?` - skips filter validation when `false`
+  - `:authorize_read?` - skips read policy checks when `false`
+  - `:options` - when set to a map, returns a NIP-45-style payload instead of a bare integer
+
+  When `opts[:options]` is a map, the result shape is `%{"count" => count, "approximate" => false}`.
+  If `opts[:options]["hll"]` is `true` and the feature is enabled, an `"hll"` field is included.
+  """
   @spec count([map()], keyword()) :: {:ok, non_neg_integer() | map()} | {:error, term()}
   def count(filters, opts \\ [])

   def count(filters, opts) when is_list(filters) and is_list(opts) do
     started_at = System.monotonic_time()
+    telemetry_metadata = telemetry_metadata_for_filters(filters, :count)

     with {:ok, context} <- fetch_context(opts),
          :ok <- maybe_validate_filters(filters, opts),
          :ok <- maybe_authorize_read(filters, context, opts),
          {:ok, count} <-
            Storage.events().count(%{}, filters, requester_pubkeys: requester_pubkeys(context)),
+         count <- count + NIP43.dynamic_count(filters, nip43_opts(opts, context)),
          {:ok, result} <- maybe_build_count_result(filters, count, Keyword.get(opts, :options)) do
       Telemetry.emit(
         [:parrhesia, :query, :stop],
-        %{duration: System.monotonic_time() - started_at},
-        telemetry_metadata_for_filters(filters)
+        %{duration: System.monotonic_time() - started_at, result_count: count},
+        telemetry_metadata
       )

+      emit_query_result(telemetry_metadata, :ok)
       {:ok, result}
+    else
+      {:error, reason} = error ->
+        emit_query_result(telemetry_metadata, :error, reason)
+        error
     end
   end
@@ -184,14 +274,6 @@ defmodule Parrhesia.API.Events do
     |> Base.encode64()
   end

-  defp maybe_process_group_event(event) do
-    if Flow.group_related_kind?(Map.get(event, "kind")) do
-      Flow.handle_event(event)
-    else
-      :ok
-    end
-  end
-
   defp persist_event(event) do
     kind = Map.get(event, "kind")

@@ -230,20 +312,6 @@ defmodule Parrhesia.API.Events do
     end
   end

-  defp fanout_event(event) do
-    case Index.candidate_subscription_keys(event) do
-      candidates when is_list(candidates) ->
-        Enum.each(candidates, fn {owner_pid, subscription_id} ->
-          send(owner_pid, {:fanout_event, subscription_id, event})
-        end)
-
-      _other ->
-        :ok
-    end
-  catch
-    :exit, _reason -> :ok
-  end
-
   defp maybe_publish_multi_node(event) do
     MultiNode.publish(event)
     :ok

@@ -255,8 +323,8 @@ defmodule Parrhesia.API.Events do
     %{traffic_class: traffic_class_for_event(event)}
   end

-  defp telemetry_metadata_for_filters(filters) do
-    %{traffic_class: traffic_class_for_filters(filters)}
+  defp telemetry_metadata_for_filters(filters, operation) do
+    %{traffic_class: traffic_class_for_filters(filters), operation: operation}
   end

   defp traffic_class_for_filters(filters) do
@@ -289,6 +357,30 @@ defmodule Parrhesia.API.Events do
   defp traffic_class_for_event(_event), do: :generic

+  defp emit_ingest_result(metadata, outcome, reason) do
+    Telemetry.emit(
+      [:parrhesia, :ingest, :result],
+      %{count: 1},
+      Map.merge(metadata, %{outcome: outcome, reason: normalize_reason(reason)})
+    )
+  end
+
+  defp emit_query_result(metadata, outcome, reason \\ nil) do
+    Telemetry.emit(
+      [:parrhesia, :query, :result],
+      %{count: 1},
+      Map.merge(
+        metadata,
+        %{outcome: outcome, reason: normalize_reason(reason || outcome)}
+      )
+    )
+  end
+
+  defp normalize_reason(reason) when is_atom(reason), do: reason
+  defp normalize_reason(reason) when is_binary(reason), do: reason
+  defp normalize_reason(nil), do: :none
+  defp normalize_reason(_reason), do: :unknown
+
   defp fetch_context(opts) do
     case Keyword.get(opts, :context) do
       %RequestContext{} = context -> {:ok, context}

@@ -296,6 +388,11 @@ defmodule Parrhesia.API.Events do
     end
   end

+  defp nip43_opts(opts, %RequestContext{} = context) do
+    [context: context, relay_url: Application.get_env(:parrhesia, :relay_url)]
+    |> Kernel.++(Keyword.take(opts, [:path, :private_key, :configured_private_key]))
+  end
+
   defp error_message_for_publish_failure(:duplicate_event),
     do: "duplicate: event already stored"
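To illustrate the publish return semantics described in the new `@doc` above, here is a hedged sketch of how an embedding application might translate a publish result into a NIP-01 style `["OK", ...]` frame. The `PublishResult` struct is stubbed locally for the example; in the application it comes from `Parrhesia.API.Events.PublishResult`.

```elixir
defmodule PublishResultExample do
  # Local stub mirroring Parrhesia.API.Events.PublishResult for illustration only.
  defmodule PublishResult do
    defstruct [:event_id, :accepted, :message, :reason]
  end

  # Both accepted and rejected events arrive as {:ok, result}, so a rejection
  # maps to ["OK", id, false, message] rather than to an error path.
  def to_ok_frame({:ok, %PublishResult{} = result}) do
    ["OK", result.event_id, result.accepted, result.message || ""]
  end

  # {:error, _} is reserved for malformed calls (e.g. a missing context).
  def to_ok_frame({:error, reason}) do
    ["OK", "", false, "error: #{inspect(reason)}"]
  end
end
```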

View File

@@ -1,6 +1,14 @@
 defmodule Parrhesia.API.Events.PublishResult do
   @moduledoc """
   Result shape for event publish attempts.

+  This mirrors relay `OK` semantics:
+
+  - `accepted: true` means the event was accepted
+  - `accepted: false` means the event was rejected or identified as a duplicate
+
+  The surrounding call still returns `{:ok, result}` in both cases so callers can surface the
+  rejection message without treating it as a transport or process failure.
   """

   defstruct [:event_id, :accepted, :message, :reason]

View File

@@ -1,15 +1,40 @@
 defmodule Parrhesia.API.Identity do
   @moduledoc """
   Server-auth identity management.

+  Parrhesia uses a single server identity for flows that need the relay to sign events or
+  prove control of a pubkey.
+
+  Identity resolution follows this order:
+
+  1. `opts[:private_key]` or `opts[:configured_private_key]`
+  2. `Application.get_env(:parrhesia, :identity)`
+  3. the persisted file on disk
+
+  Supported options across this module:
+
+  - `:path` - overrides the identity file path
+  - `:private_key` / `:configured_private_key` - uses an explicit hex secret key
+
+  A configured private key is treated as read-only input and therefore cannot be rotated.
   """

   alias Parrhesia.API.Auth

+  @typedoc """
+  Public identity metadata returned to callers.
+  """
   @type identity_metadata :: %{
           pubkey: String.t(),
           source: :configured | :persisted | :generated | :imported
         }

+  @doc """
+  Returns the current server identity metadata.
+
+  This does not generate a new identity. If no configured or persisted identity exists, it
+  returns `{:error, :identity_not_found}`.
+  """
   @spec get(keyword()) :: {:ok, identity_metadata()} | {:error, term()}
   def get(opts \\ []) do
     with {:ok, identity} <- fetch_existing_identity(opts) do

@@ -17,6 +42,9 @@ defmodule Parrhesia.API.Identity do
     end
   end

+  @doc """
+  Returns the current identity, generating and persisting one when necessary.
+  """
   @spec ensure(keyword()) :: {:ok, identity_metadata()} | {:error, term()}
   def ensure(opts \\ []) do
     with {:ok, identity} <- ensure_identity(opts) do

@@ -24,6 +52,12 @@ defmodule Parrhesia.API.Identity do
     end
   end

+  @doc """
+  Imports an explicit secret key and persists it as the server identity.
+
+  The input map must contain `:secret_key` or `"secret_key"` as a 64-character lowercase or
+  uppercase hex string.
+  """
   @spec import(map(), keyword()) :: {:ok, identity_metadata()} | {:error, term()}
   def import(identity, opts \\ [])

@@ -37,6 +71,12 @@ defmodule Parrhesia.API.Identity do
   def import(_identity, _opts), do: {:error, :invalid_identity}

+  @doc """
+  Generates and persists a fresh server identity.
+
+  Rotation is rejected with `{:error, :configured_identity_cannot_rotate}` when the active
+  identity comes from configuration rather than the persisted file.
+  """
   @spec rotate(keyword()) :: {:ok, identity_metadata()} | {:error, term()}
   def rotate(opts \\ []) do
     with :ok <- ensure_rotation_allowed(opts),

@@ -46,6 +86,18 @@ defmodule Parrhesia.API.Identity do
     end
   end

+  @doc """
+  Signs an event with the current server identity.
+
+  The incoming event must already include the fields required to compute a Nostr id:
+
+  - `"created_at"`
+  - `"kind"`
+  - `"tags"`
+  - `"content"`
+
+  On success the returned event includes `"pubkey"`, `"id"`, and `"sig"`.
+  """
   @spec sign_event(map(), keyword()) :: {:ok, map()} | {:error, term()}
   def sign_event(event, opts \\ [])

@@ -59,6 +111,9 @@ defmodule Parrhesia.API.Identity do
   def sign_event(_event, _opts), do: {:error, :invalid_event}

+  @doc """
+  Returns the default filesystem path for the persisted server identity.
+  """
   def default_path do
     Path.join([default_data_dir(), "server_identity.json"])
   end

View File

@@ -1,6 +1,15 @@
 defmodule Parrhesia.API.RequestContext do
   @moduledoc """
   Shared request context used across API and policy surfaces.

+  This struct carries caller identity and transport metadata through authorization and storage
+  boundaries.
+
+  The most important field for external callers is `authenticated_pubkeys`. For example:
+
+  - `Parrhesia.API.Events` uses it for read and write policy checks
+  - `Parrhesia.API.Stream` uses it for subscription authorization
+  - `Parrhesia.API.ACL` uses it when evaluating protected sync traffic
   """

   defstruct authenticated_pubkeys: MapSet.new(),

@@ -23,6 +32,11 @@ defmodule Parrhesia.API.RequestContext do
           metadata: map()
         }

+  @doc """
+  Merges arbitrary metadata into the context.
+
+  Existing keys are overwritten by the incoming map.
+  """
   @spec put_metadata(t(), map()) :: t()
   def put_metadata(%__MODULE__{} = context, metadata) when is_map(metadata) do
     %__MODULE__{context | metadata: Map.merge(context.metadata, metadata)}
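The merge semantics of `put_metadata/2` can be sketched in isolation. This local stand-in mirrors only the `metadata` field of `Parrhesia.API.RequestContext`; it is an illustration, not the real struct:

```elixir
defmodule RequestContextSketch do
  # Minimal stand-in for Parrhesia.API.RequestContext, reduced to the fields
  # needed to show put_metadata/2: incoming keys overwrite existing ones.
  defstruct authenticated_pubkeys: MapSet.new(), metadata: %{}

  def put_metadata(%__MODULE__{} = context, metadata) when is_map(metadata) do
    %__MODULE__{context | metadata: Map.merge(context.metadata, metadata)}
  end
end
```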

View File

@@ -1,6 +1,15 @@
 defmodule Parrhesia.API.Stream do
   @moduledoc """
   In-process subscription API with relay-equivalent catch-up and live fanout semantics.

+  Subscriptions are process-local bridges. After subscribing, the caller receives messages in
+  the same order a relay client would expect:
+
+  - `{:parrhesia, :event, ref, subscription_id, event}` for catch-up and live events
+  - `{:parrhesia, :eose, ref, subscription_id}` after the initial replay finishes
+
+  This API requires a `Parrhesia.API.RequestContext` so read policies are applied exactly as
+  they would be for a transport-backed subscriber.
   """

   alias Parrhesia.API.Events

@@ -9,6 +18,16 @@ defmodule Parrhesia.API.Stream do
   alias Parrhesia.Policy.EventPolicy
   alias Parrhesia.Protocol.Filter

+  @doc """
+  Starts an in-process subscription for a subscriber pid.
+
+  `opts[:context]` must be a `Parrhesia.API.RequestContext`.
+
+  On success the returned reference is both:
+
+  - the subscription handle used by `unsubscribe/1`
+  - the value embedded in emitted subscriber messages
+  """
   @spec subscribe(pid(), String.t(), [map()], keyword()) :: {:ok, reference()} | {:error, term()}
   def subscribe(subscriber, subscription_id, filters, opts \\ [])

@@ -42,6 +61,11 @@ defmodule Parrhesia.API.Stream do
   def subscribe(_subscriber, _subscription_id, _filters, _opts),
     do: {:error, :invalid_subscription}

+  @doc """
+  Stops a subscription previously created with `subscribe/4`.
+
+  This function is idempotent. Unknown or already-stopped references return `:ok`.
+  """
   @spec unsubscribe(reference()) :: :ok
   def unsubscribe(ref) when is_reference(ref) do
     case Registry.lookup(Parrhesia.API.Stream.Registry, ref) do
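A subscriber's receive loop over the two message shapes documented above can be sketched as a small collector that drains catch-up events until the EOSE marker arrives. `StreamDrainSketch` is a hypothetical helper, not part of `Parrhesia.API.Stream`; the message tuples match the moduledoc:

```elixir
defmodule StreamDrainSketch do
  # Collects events for one subscription until {:parrhesia, :eose, ...} arrives,
  # returning them in arrival order. The `after` clause bounds the wait so a
  # missing EOSE cannot block the caller forever.
  def collect_until_eose(ref, subscription_id, acc \\ []) do
    receive do
      {:parrhesia, :event, ^ref, ^subscription_id, event} ->
        collect_until_eose(ref, subscription_id, [event | acc])

      {:parrhesia, :eose, ^ref, ^subscription_id} ->
        Enum.reverse(acc)
    after
      1_000 -> Enum.reverse(acc)
    end
  end
end
```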

View File

@@ -1,10 +1,11 @@
 defmodule Parrhesia.API.Stream.Subscription do
   @moduledoc false

-  use GenServer
+  use GenServer, restart: :temporary

   alias Parrhesia.Protocol.Filter
   alias Parrhesia.Subscriptions.Index
+  alias Parrhesia.Telemetry

   defstruct [
     :ref,

@@ -57,6 +58,7 @@ defmodule Parrhesia.API.Stream.Subscription do
         buffered_events: []
       }

+      Telemetry.emit_process_mailbox_depth(:subscription)
       {:ok, state}
     else
       {:error, reason} -> {:stop, reason}

@@ -72,20 +74,27 @@ defmodule Parrhesia.API.Stream.Subscription do
     end)

     {:reply, :ok, %__MODULE__{state | ready?: true, buffered_events: []}}
+    |> emit_mailbox_depth()
   end

   @impl true
   def handle_info({:fanout_event, subscription_id, event}, %__MODULE__{} = state)
       when is_binary(subscription_id) and is_map(event) do
-    handle_fanout_event(state, subscription_id, event)
+    state
+    |> handle_fanout_event(subscription_id, event)
+    |> emit_mailbox_depth()
   end

   def handle_info({:DOWN, monitor_ref, :process, subscriber, _reason}, %__MODULE__{} = state)
       when monitor_ref == state.subscriber_monitor_ref and subscriber == state.subscriber do
     {:stop, :normal, state}
+    |> emit_mailbox_depth()
   end

-  def handle_info(_message, %__MODULE__{} = state), do: {:noreply, state}
+  def handle_info(_message, %__MODULE__{} = state) do
+    {:noreply, state}
+    |> emit_mailbox_depth()
+  end

   @impl true
   def terminate(reason, %__MODULE__{} = state) do

@@ -175,4 +184,9 @@ defmodule Parrhesia.API.Stream.Subscription do
       {:noreply, %__MODULE__{state | buffered_events: buffered_events}}
     end
   end

+  defp emit_mailbox_depth(result) do
+    Telemetry.emit_process_mailbox_depth(:subscription)
+    result
+  end
 end

View File

@@ -1,12 +1,45 @@
 defmodule Parrhesia.API.Sync do
   @moduledoc """
   Sync server control-plane API.

+  This module manages outbound relay sync definitions and exposes runtime status for each
+  configured sync worker.
+
+  The main entrypoint is `put_server/2`. Accepted server maps are normalized into a stable
+  internal shape and persisted by the sync manager. The expected input shape is:
+
+  ```elixir
+  %{
+    "id" => "tribes-primary",
+    "url" => "wss://relay-a.example/relay",
+    "enabled?" => true,
+    "auth_pubkey" => "...64 hex chars...",
+    "filters" => [%{"kinds" => [5000]}],
+    "mode" => "req_stream",
+    "overlap_window_seconds" => 300,
+    "auth" => %{"type" => "nip42"},
+    "tls" => %{
+      "mode" => "required",
+      "hostname" => "relay-a.example",
+      "pins" => [%{"type" => "spki_sha256", "value" => "..."}]
+    },
+    "metadata" => %{}
+  }
+  ```
+
+  Most functions accept `:manager` or `:name` in `opts` to target a non-default manager.
   """

   alias Parrhesia.API.Sync.Manager

+  @typedoc """
+  Normalized sync server configuration returned by the sync manager.
+  """
   @type server :: map()

+  @doc """
+  Creates or replaces a sync server definition.
+  """
   @spec put_server(map(), keyword()) :: {:ok, server()} | {:error, term()}
   def put_server(server, opts \\ [])

@@ -16,6 +49,9 @@ defmodule Parrhesia.API.Sync do
   def put_server(_server, _opts), do: {:error, :invalid_server}

+  @doc """
+  Removes a stored sync server definition and stops its worker if it is running.
+  """
   @spec remove_server(String.t(), keyword()) :: :ok | {:error, term()}
   def remove_server(server_id, opts \\ [])

@@ -25,6 +61,11 @@ defmodule Parrhesia.API.Sync do
   def remove_server(_server_id, _opts), do: {:error, :invalid_server_id}

+  @doc """
+  Fetches a single normalized sync server definition.
+
+  Returns `:error` when the server id is unknown.
+  """
   @spec get_server(String.t(), keyword()) :: {:ok, server()} | :error | {:error, term()}
   def get_server(server_id, opts \\ [])

@@ -34,11 +75,17 @@ defmodule Parrhesia.API.Sync do
   def get_server(_server_id, _opts), do: {:error, :invalid_server_id}

+  @doc """
+  Lists all configured sync servers, including their runtime state.
+  """
   @spec list_servers(keyword()) :: {:ok, [server()]} | {:error, term()}
   def list_servers(opts \\ []) when is_list(opts) do
     Manager.list_servers(manager_name(opts))
   end

+  @doc """
+  Marks a sync server as running and reconciles its worker state.
+  """
   @spec start_server(String.t(), keyword()) :: :ok | {:error, term()}
   def start_server(server_id, opts \\ [])

@@ -48,6 +95,9 @@ defmodule Parrhesia.API.Sync do
   def start_server(_server_id, _opts), do: {:error, :invalid_server_id}

+  @doc """
+  Stops a sync server and records a disconnect timestamp in runtime state.
+  """
   @spec stop_server(String.t(), keyword()) :: :ok | {:error, term()}
   def stop_server(server_id, opts \\ [])

@@ -57,6 +107,9 @@ defmodule Parrhesia.API.Sync do
   def stop_server(_server_id, _opts), do: {:error, :invalid_server_id}

+  @doc """
+  Triggers an immediate sync run for a server.
+  """
   @spec sync_now(String.t(), keyword()) :: :ok | {:error, term()}
   def sync_now(server_id, opts \\ [])

@@ -66,6 +119,11 @@ defmodule Parrhesia.API.Sync do
   def sync_now(_server_id, _opts), do: {:error, :invalid_server_id}

+  @doc """
+  Returns runtime counters and timestamps for a single sync server.
+
+  Returns `:error` when the server id is unknown.
+  """
   @spec server_stats(String.t(), keyword()) :: {:ok, map()} | :error | {:error, term()}
   def server_stats(server_id, opts \\ [])

@@ -75,16 +133,25 @@ defmodule Parrhesia.API.Sync do
   def server_stats(_server_id, _opts), do: {:error, :invalid_server_id}

+  @doc """
+  Returns aggregate counters across all configured sync servers.
+  """
   @spec sync_stats(keyword()) :: {:ok, map()} | {:error, term()}
   def sync_stats(opts \\ []) when is_list(opts) do
     Manager.sync_stats(manager_name(opts))
   end

+  @doc """
+  Returns a health summary for the sync subsystem.
+  """
   @spec sync_health(keyword()) :: {:ok, map()} | {:error, term()}
   def sync_health(opts \\ []) when is_list(opts) do
     Manager.sync_health(manager_name(opts))
   end

+  @doc """
+  Returns the default filesystem path for persisted sync server state.
+  """
   def default_path do
     Path.join([default_data_dir(), "sync_servers.json"])
   end

View File

@@ -74,6 +74,7 @@ defmodule Parrhesia.API.Sync.Manager do
       {:ok, normalized_server} ->
         updated_state =
           state
+          |> stop_worker_if_running(normalized_server.id)
          |> put_server_state(normalized_server)
          |> persist_and_reconcile!(normalized_server.id)

@@ -248,9 +249,7 @@ defmodule Parrhesia.API.Sync.Manager do
         state

       desired_running?(state, server_id) ->
-        state
-        |> stop_worker_if_running(server_id)
-        |> maybe_start_worker(server_id)
+        maybe_start_worker(state, server_id)

       true ->
         stop_worker_if_running(state, server_id)

View File

@@ -5,19 +5,6 @@ defmodule Parrhesia.Application do
   @impl true
   def start(_type, _args) do
-    children = [
-      Parrhesia.Telemetry,
-      Parrhesia.Config,
-      Parrhesia.Storage.Supervisor,
-      Parrhesia.Subscriptions.Supervisor,
-      Parrhesia.Auth.Supervisor,
-      Parrhesia.Sync.Supervisor,
-      Parrhesia.Policy.Supervisor,
-      Parrhesia.Web.Endpoint,
-      Parrhesia.Tasks.Supervisor
-    ]
-
-    opts = [strategy: :one_for_one, name: Parrhesia.Supervisor]
-    Supervisor.start_link(children, opts)
+    Parrhesia.Runtime.start_link(name: Parrhesia.Supervisor)
   end
 end

View File

@@ -3,6 +3,7 @@ defmodule Parrhesia.Auth.Nip98 do
   Minimal NIP-98 HTTP auth validation.
   """

+  alias Parrhesia.Auth.Nip98ReplayCache
   alias Parrhesia.Protocol.EventValidator

   @max_age_seconds 60

@@ -23,7 +24,8 @@ defmodule Parrhesia.Auth.Nip98 do
     with {:ok, event_json} <- decode_base64(encoded_event),
          {:ok, event} <- JSON.decode(event_json),
          :ok <- validate_event_shape(event, opts),
-         :ok <- validate_http_binding(event, method, url) do
+         :ok <- validate_http_binding(event, method, url),
+         :ok <- consume_replay_token(event, opts) do
       {:ok, event}
     else
       {:error, reason} -> {:error, reason}

@@ -95,4 +97,14 @@ defmodule Parrhesia.Auth.Nip98 do
       true -> :ok
     end
   end

+  defp consume_replay_token(%{"id" => event_id, "created_at" => created_at}, opts)
+       when is_binary(event_id) and is_integer(created_at) do
+    case Keyword.get(opts, :replay_cache, Nip98ReplayCache) do
+      nil -> :ok
+      replay_cache -> Nip98ReplayCache.consume(replay_cache, event_id, created_at, opts)
+    end
+  end
+
+  defp consume_replay_token(_event, _opts), do: {:error, :invalid_event}
 end

View File

@@ -0,0 +1,56 @@
+defmodule Parrhesia.Auth.Nip98ReplayCache do
+  @moduledoc """
+  Tracks recently accepted NIP-98 auth event ids to prevent replay.
+  """
+
+  use GenServer
+
+  @default_max_age_seconds 60
+
+  @spec start_link(keyword()) :: GenServer.on_start()
+  def start_link(opts \\ []) do
+    case Keyword.get(opts, :name, __MODULE__) do
+      nil -> GenServer.start_link(__MODULE__, opts)
+      name -> GenServer.start_link(__MODULE__, opts, name: name)
+    end
+  end
+
+  @spec consume(GenServer.server(), String.t(), integer(), keyword()) ::
+          :ok | {:error, :replayed_auth_event}
+  def consume(server \\ __MODULE__, event_id, created_at, opts \\ [])
+      when is_binary(event_id) and is_integer(created_at) and is_list(opts) do
+    GenServer.call(server, {:consume, event_id, created_at, opts})
+  end
+
+  @impl true
+  def init(_opts) do
+    {:ok, %{entries: %{}}}
+  end
+
+  @impl true
+  def handle_call({:consume, event_id, created_at, opts}, _from, state) do
+    now_ms = System.monotonic_time(:millisecond)
+    entries = prune_expired(state.entries, now_ms)
+
+    case Map.has_key?(entries, event_id) do
+      true ->
+        {:reply, {:error, :replayed_auth_event}, %{state | entries: entries}}
+
+      false ->
+        expires_at_ms = replay_expiration_ms(now_ms, created_at, opts)
+        next_entries = Map.put(entries, event_id, expires_at_ms)
+        {:reply, :ok, %{state | entries: next_entries}}
+    end
+  end
+
+  defp prune_expired(entries, now_ms) do
+    Map.reject(entries, fn {_event_id, expires_at_ms} -> expires_at_ms <= now_ms end)
+  end
+
+  defp replay_expiration_ms(now_ms, created_at, opts) do
+    max_age_seconds = Keyword.get(opts, :max_age_seconds, max_age_seconds())
+    max(now_ms, created_at * 1000) + max_age_seconds * 1000
+  end
+
+  defp max_age_seconds, do: @default_max_age_seconds
+end

View File

@@ -13,6 +13,7 @@ defmodule Parrhesia.Auth.Supervisor do
   def init(_init_arg) do
     children = [
       {Parrhesia.Auth.Challenges, name: Parrhesia.Auth.Challenges},
+      {Parrhesia.Auth.Nip98ReplayCache, name: Parrhesia.Auth.Nip98ReplayCache},
       {Parrhesia.API.Identity.Manager, []}
     ]

View File

@@ -1,6 +1,9 @@
 defmodule Parrhesia.Config do
   @moduledoc """
   Runtime configuration cache backed by ETS.
+
+  The application environment is copied into ETS at startup so hot-path reads do not need to
+  traverse the application environment repeatedly.
   """

   use GenServer
@@ -8,6 +11,9 @@ defmodule Parrhesia.Config do
   @table __MODULE__
   @root_key :config

+  @doc """
+  Starts the config cache server.
+  """
   def start_link(init_arg \\ []) do
     GenServer.start_link(__MODULE__, init_arg, name: __MODULE__)
   end
@@ -26,6 +32,9 @@ defmodule Parrhesia.Config do
     {:ok, %{}}
   end

+  @doc """
+  Returns the cached top-level Parrhesia application config.
+  """
   @spec all() :: map() | keyword()
   def all do
     case :ets.lookup(@table, @root_key) do
@@ -34,6 +43,11 @@ defmodule Parrhesia.Config do
     end
   end

+  @doc """
+  Reads a nested config value by path.
+
+  The path may traverse maps or keyword lists. Missing paths return `default`.
+  """
   @spec get([atom()], term()) :: term()
   def get(path, default \\ nil) when is_list(path) do
     case fetch(path) do

@@ -0,0 +1,89 @@
defmodule Parrhesia.ConnectionStats do
@moduledoc """
Per-listener connection and subscription counters.
Tracks active connection and subscription counts per listener and emits
`[:parrhesia, :listener, :population]` telemetry events on each change.
"""
use GenServer
alias Parrhesia.Telemetry
defstruct connections: %{}, subscriptions: %{}
@type state :: %__MODULE__{
connections: %{(atom() | String.t()) => non_neg_integer()},
subscriptions: %{(atom() | String.t()) => non_neg_integer()}
}
@spec start_link(keyword()) :: GenServer.on_start()
def start_link(opts \\ []) do
name = Keyword.get(opts, :name, __MODULE__)
GenServer.start_link(__MODULE__, %__MODULE__{}, name: name)
end
@spec connection_open(atom() | String.t()) :: :ok
def connection_open(listener_id), do: cast({:connection_open, listener_id})
@spec connection_close(atom() | String.t()) :: :ok
def connection_close(listener_id), do: cast({:connection_close, listener_id})
@spec subscriptions_change(atom() | String.t(), integer()) :: :ok
def subscriptions_change(listener_id, delta) when is_integer(delta) do
cast({:subscriptions_change, listener_id, delta})
end
@impl true
def init(%__MODULE__{} = state), do: {:ok, state}
@impl true
def handle_cast({:connection_open, listener_id}, %__MODULE__{} = state) do
listener_id = normalize_listener_id(listener_id)
next_state = %{state | connections: increment(state.connections, listener_id, 1)}
emit_population(listener_id, next_state)
{:noreply, next_state}
end
def handle_cast({:connection_close, listener_id}, %__MODULE__{} = state) do
listener_id = normalize_listener_id(listener_id)
next_state = %{state | connections: increment(state.connections, listener_id, -1)}
emit_population(listener_id, next_state)
{:noreply, next_state}
end
def handle_cast({:subscriptions_change, listener_id, delta}, %__MODULE__{} = state) do
listener_id = normalize_listener_id(listener_id)
next_state = %{state | subscriptions: increment(state.subscriptions, listener_id, delta)}
emit_population(listener_id, next_state)
{:noreply, next_state}
end
defp cast(message) do
GenServer.cast(__MODULE__, message)
:ok
catch
:exit, {:noproc, _details} -> :ok
:exit, {:normal, _details} -> :ok
end
defp increment(counts, key, delta) do
current = Map.get(counts, key, 0)
Map.put(counts, key, max(current + delta, 0))
end
defp emit_population(listener_id, %__MODULE__{} = state) do
Telemetry.emit(
[:parrhesia, :listener, :population],
%{
connections: Map.get(state.connections, listener_id, 0),
subscriptions: Map.get(state.subscriptions, listener_id, 0)
},
%{listener_id: listener_id}
)
end
defp normalize_listener_id(listener_id) when is_atom(listener_id), do: listener_id
defp normalize_listener_id(listener_id) when is_binary(listener_id), do: listener_id
defp normalize_listener_id(_listener_id), do: :unknown
end
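The counter update above clamps at zero, so a late or duplicate close message can never drive a listener's gauge negative before the telemetry event is emitted. A standalone sketch of that clamp, mirroring `increment/3`:

```elixir
# Clamp counter updates at zero, as increment/3 above does.
increment = fn counts, key, delta ->
  Map.put(counts, key, max(Map.get(counts, key, 0) + delta, 0))
end

counts =
  %{}
  |> increment.(:public, 1)
  |> increment.(:public, 1)
  # a stray oversized decrement cannot push the gauge below zero
  |> increment.(:public, -3)
```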

@@ -0,0 +1,46 @@
defmodule Parrhesia.Fanout.Dispatcher do
@moduledoc """
Asynchronous local fanout dispatcher.
"""
use GenServer
alias Parrhesia.Subscriptions.Index
@spec start_link(keyword()) :: GenServer.on_start()
def start_link(opts \\ []) do
name = Keyword.get(opts, :name, __MODULE__)
GenServer.start_link(__MODULE__, :ok, name: name)
end
@spec dispatch(map()) :: :ok
def dispatch(event), do: dispatch(__MODULE__, event)
@spec dispatch(GenServer.server(), map()) :: :ok
def dispatch(server, event) when is_map(event) do
GenServer.cast(server, {:dispatch, event})
end
@impl true
def init(:ok), do: {:ok, %{}}
@impl true
def handle_cast({:dispatch, event}, state) do
dispatch_to_candidates(event)
{:noreply, state}
end
defp dispatch_to_candidates(event) do
case Index.candidate_subscription_keys(event) do
candidates when is_list(candidates) ->
Enum.each(candidates, fn {owner_pid, subscription_id} ->
send(owner_pid, {:fanout_event, subscription_id, event})
end)
_other ->
:ok
end
catch
:exit, _reason -> :ok
end
end
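Subscriber processes receive dispatched events as plain Erlang messages; a self-contained demo of that message shape, using the current process as the subscriber (the subscription id and event payload are illustrative):

```elixir
event = %{"kind" => 1, "content" => "hello"}

# Pretend this process owns subscription "sub-1"; the dispatcher would do this send.
send(self(), {:fanout_event, "sub-1", event})

received =
  receive do
    {:fanout_event, subscription_id, %{"kind" => kind}} -> {subscription_id, kind}
  after
    100 -> :timeout
  end
```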

@@ -5,7 +5,7 @@ defmodule Parrhesia.Fanout.MultiNode do
   use GenServer

-  alias Parrhesia.Subscriptions.Index
+  alias Parrhesia.Fanout.Dispatcher

   @group __MODULE__
@@ -44,11 +44,7 @@ defmodule Parrhesia.Fanout.MultiNode do
   @impl true
   def handle_info({:remote_fanout_event, event}, state) do
-    Index.candidate_subscription_keys(event)
-    |> Enum.each(fn {owner_pid, subscription_id} ->
-      send(owner_pid, {:fanout_event, subscription_id, event})
-    end)
+    Dispatcher.dispatch(event)

     {:noreply, state}
   end

@@ -1,52 +1,62 @@
 defmodule Parrhesia.Groups.Flow do
   @moduledoc """
-  Minimal group and membership flow handling for NIP-29/NIP-43 related kinds.
+  Relay access membership projection backed by the shared group storage adapter.
   """

   alias Parrhesia.Storage

-  @membership_request_kind 8_000
-  @membership_approval_kind 8_001
-  @relay_metadata_kind 28_934
-  @relay_admins_kind 28_935
-  @relay_rules_kind 28_936
-  @membership_event_kind 13_534
+  @relay_access_group_id "__relay_access__"
+  @add_user_kind 8_000
+  @remove_user_kind 8_001
+  @join_request_kind 28_934
+  @invite_request_kind 28_935
+  @leave_request_kind 28_936
+  @membership_list_kind 13_534

   @spec handle_event(map()) :: :ok | {:error, term()}
   def handle_event(event) when is_map(event) do
     case Map.get(event, "kind") do
-      @membership_request_kind -> upsert_membership(event, "requested")
-      @membership_approval_kind -> upsert_membership(event, "member")
-      @membership_event_kind -> upsert_membership(event, "member")
-      @relay_metadata_kind -> :ok
-      @relay_admins_kind -> :ok
-      @relay_rules_kind -> :ok
+      @join_request_kind -> put_member(event, membership_pubkey_from_event(event))
+      @leave_request_kind -> delete_member(event, membership_pubkey_from_event(event))
+      @add_user_kind -> put_member(event, tagged_pubkey(event, "p"))
+      @remove_user_kind -> delete_member(event, tagged_pubkey(event, "p"))
+      @membership_list_kind -> replace_membership_snapshot(event)
+      @invite_request_kind -> :ok
       _other -> :ok
     end
   end

-  @spec group_related_kind?(non_neg_integer()) :: boolean()
-  def group_related_kind?(kind)
+  @spec relay_access_kind?(non_neg_integer()) :: boolean()
+  def relay_access_kind?(kind)
       when kind in [
-             @membership_request_kind,
-             @membership_approval_kind,
-             @relay_metadata_kind,
-             @relay_admins_kind,
-             @relay_rules_kind,
-             @membership_event_kind
+             @add_user_kind,
+             @remove_user_kind,
+             @join_request_kind,
+             @invite_request_kind,
+             @leave_request_kind,
+             @membership_list_kind
           ],
         do: true

-  def group_related_kind?(_kind), do: false
+  def relay_access_kind?(_kind), do: false

-  defp upsert_membership(event, role) do
-    with {:ok, group_id} <- group_id_from_event(event),
-         {:ok, pubkey} <- pubkey_from_event(event) do
+  @spec get_membership(binary()) :: {:ok, map() | nil} | {:error, term()}
+  def get_membership(pubkey) when is_binary(pubkey) do
+    Storage.groups().get_membership(%{}, @relay_access_group_id, pubkey)
+  end
+
+  @spec list_memberships() :: {:ok, [map()]} | {:error, term()}
+  def list_memberships do
+    Storage.groups().list_memberships(%{}, @relay_access_group_id)
+  end
+
+  defp put_member(event, {:ok, pubkey}) do
+    with {:ok, metadata} <- membership_metadata(event) do
       Storage.groups().put_membership(%{}, %{
-        group_id: group_id,
+        group_id: @relay_access_group_id,
         pubkey: pubkey,
-        role: role,
-        metadata: %{"source_kind" => Map.get(event, "kind")}
+        role: "member",
+        metadata: metadata
       })
       |> case do
         {:ok, _membership} -> :ok
@@ -55,21 +65,85 @@ defmodule Parrhesia.Groups.Flow do
     end
   end

-  defp group_id_from_event(event) do
-    group_id =
-      event
-      |> Map.get("tags", [])
-      |> Enum.find_value(fn
-        ["h", value | _rest] when is_binary(value) and value != "" -> value
-        _tag -> nil
-      end)
-
-    case group_id do
-      nil -> {:error, :missing_group_id}
-      value -> {:ok, value}
-    end
-  end
-
-  defp pubkey_from_event(%{"pubkey" => pubkey}) when is_binary(pubkey), do: {:ok, pubkey}
-  defp pubkey_from_event(_event), do: {:error, :missing_pubkey}
+  defp put_member(_event, {:error, reason}), do: {:error, reason}
+
+  defp delete_member(_event, {:ok, pubkey}) do
+    Storage.groups().delete_membership(%{}, @relay_access_group_id, pubkey)
+  end
+
+  defp delete_member(_event, {:error, reason}), do: {:error, reason}
+
+  defp replace_membership_snapshot(event) do
+    with {:ok, tagged_members} <- tagged_pubkeys(event, "member"),
+         {:ok, existing_memberships} <- list_memberships() do
+      incoming_pubkeys = MapSet.new(tagged_members)
+      existing_pubkeys = MapSet.new(Enum.map(existing_memberships, & &1.pubkey))
+
+      remove_members =
+        existing_pubkeys
+        |> MapSet.difference(incoming_pubkeys)
+        |> MapSet.to_list()
+
+      add_members =
+        incoming_pubkeys
+        |> MapSet.to_list()
+
+      :ok = remove_memberships(remove_members)
+      add_memberships(event, add_members)
+    else
+      {:error, reason} -> {:error, reason}
+    end
+  end
+
+  defp membership_pubkey_from_event(%{"pubkey" => pubkey}) when is_binary(pubkey),
+    do: {:ok, pubkey}
+
+  defp membership_pubkey_from_event(_event), do: {:error, :missing_pubkey}
+
+  defp tagged_pubkey(event, tag_name) do
+    event
+    |> tagged_pubkeys(tag_name)
+    |> case do
+      {:ok, [pubkey]} -> {:ok, pubkey}
+      {:ok, []} -> {:error, :missing_pubkey}
+      {:ok, _pubkeys} -> {:error, :invalid_pubkey}
+    end
+  end
+
+  defp tagged_pubkeys(event, tag_name) do
+    pubkeys =
+      event
+      |> Map.get("tags", [])
+      |> Enum.flat_map(fn
+        [^tag_name, pubkey | _rest] when is_binary(pubkey) and pubkey != "" -> [pubkey]
+        _tag -> []
+      end)
+
+    {:ok, Enum.uniq(pubkeys)}
+  end
+
+  defp membership_metadata(event) do
+    {:ok,
+     %{
+       "source_kind" => Map.get(event, "kind"),
+       "source_event_id" => Map.get(event, "id")
+     }}
+  end
+
+  defp remove_memberships(pubkeys) when is_list(pubkeys) do
+    Enum.each(pubkeys, fn pubkey ->
+      :ok = Storage.groups().delete_membership(%{}, @relay_access_group_id, pubkey)
+    end)
+
+    :ok
+  end
+
+  defp add_memberships(event, pubkeys) when is_list(pubkeys) do
+    Enum.reduce_while(pubkeys, :ok, fn pubkey, :ok ->
+      case put_member(event, {:ok, pubkey}) do
+        :ok -> {:cont, :ok}
+        {:error, _reason} = error -> {:halt, error}
+      end
+    end)
+  end
 end
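The kind-13534 snapshot handling reconciles by set difference: members present locally but absent from the incoming list are deleted, and every incoming member is re-put (an upsert, so already-present members are harmless). A sketch of that reconciliation with illustrative pubkeys:

```elixir
incoming = MapSet.new(["alice", "bob"])
existing = MapSet.new(["bob", "carol"])

# Members present locally but absent from the snapshot get their membership deleted.
to_remove = existing |> MapSet.difference(incoming) |> MapSet.to_list()

# Every incoming member is upserted, whether or not it already exists.
to_add = MapSet.to_list(incoming)
```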

lib/parrhesia/http.ex (new file)

@@ -0,0 +1,48 @@
defmodule Parrhesia.HTTP do
@moduledoc false
alias Parrhesia.Metadata
@default_headers [{"user-agent", Metadata.user_agent()}]
@spec default_headers() :: [{String.t(), String.t()}]
def default_headers, do: @default_headers
@spec get(Keyword.t()) :: {:ok, Req.Response.t()} | {:error, Exception.t()}
def get(options) when is_list(options) do
Req.get(put_default_headers(options))
end
@spec post(Keyword.t()) :: {:ok, Req.Response.t()} | {:error, Exception.t()}
def post(options) when is_list(options) do
Req.post(put_default_headers(options))
end
@spec put_default_headers(Keyword.t()) :: Keyword.t()
def put_default_headers(options) when is_list(options) do
Keyword.update(options, :headers, @default_headers, &merge_headers(&1, @default_headers))
end
defp merge_headers(headers, defaults) do
existing_names =
headers
|> List.wrap()
|> Enum.reduce(MapSet.new(), fn
{name, _value}, acc -> MapSet.put(acc, normalize_header_name(name))
_other, acc -> acc
end)
headers ++
Enum.reject(defaults, fn {name, _value} ->
MapSet.member?(existing_names, normalize_header_name(name))
end)
end
defp normalize_header_name(name) when is_atom(name) do
name
|> Atom.to_string()
|> String.downcase()
end
defp normalize_header_name(name) when is_binary(name), do: String.downcase(name)
end
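The merge above gives caller-supplied headers precedence over the defaults, comparing header names case-insensitively. A standalone sketch of that precedence rule (the module name is illustrative):

```elixir
defmodule HeaderMergeSketch do
  # Keep caller headers as-is; append only defaults whose (lowercased) name
  # is not already present.
  def merge(headers, defaults) do
    existing = MapSet.new(headers, fn {name, _value} -> String.downcase(to_string(name)) end)

    headers ++
      Enum.reject(defaults, fn {name, _value} ->
        MapSet.member?(existing, String.downcase(to_string(name)))
      end)
  end
end
```

So a caller passing `{"User-Agent", "custom"}` suppresses the default `user-agent`, while callers with no headers inherit it.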

lib/parrhesia/metadata.ex (new file)

@@ -0,0 +1,29 @@
defmodule Parrhesia.Metadata do
@moduledoc false
@metadata Application.compile_env(:parrhesia, :metadata, [])
@name Keyword.get(@metadata, :name, "Parrhesia")
@version Keyword.get(@metadata, :version, "0.0.0")
@hide_version? Keyword.get(@metadata, :hide_version?, true)
@spec name() :: String.t()
def name, do: @name
@spec version() :: String.t()
def version, do: @version
@spec hide_version?() :: boolean()
def hide_version?, do: @hide_version?
@spec name_and_version() :: String.t()
def name_and_version, do: "#{@name}/#{@version}"
@spec user_agent() :: String.t()
def user_agent do
if hide_version?() do
name()
else
name_and_version()
end
end
end
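Because `hide_version?` defaults to `true`, outbound requests identify only the relay name unless the operator opts in. A sketch with an assumed config entry (the values below are illustrative, not taken from the source):

```elixir
# Assumed application config, e.g. in config/config.exs:
#   config :parrhesia, :metadata, name: "Parrhesia", version: "0.6.0", hide_version?: false
metadata = [name: "Parrhesia", version: "0.6.0", hide_version?: false]

name = Keyword.get(metadata, :name, "Parrhesia")
version = Keyword.get(metadata, :version, "0.0.0")
hide_version? = Keyword.get(metadata, :hide_version?, true)

# Mirrors user_agent/0: name only when the version is hidden, "name/version" otherwise.
user_agent = if hide_version?, do: name, else: "#{name}/#{version}"
```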

lib/parrhesia/nip43.ex (new file)

@@ -0,0 +1,389 @@
defmodule Parrhesia.NIP43 do
@moduledoc false
alias Parrhesia.API.Events
alias Parrhesia.API.Identity
alias Parrhesia.API.RequestContext
alias Parrhesia.Groups.Flow
alias Parrhesia.Protocol
alias Parrhesia.Protocol.Filter
@join_request_kind 28_934
@invite_request_kind 28_935
@leave_request_kind 28_936
@add_user_kind 8_000
@remove_user_kind 8_001
@membership_list_kind 13_534
@claim_token_kind 31_943
@default_invite_ttl_seconds 900
@type publish_state ::
:ok
| %{action: :join, duplicate?: boolean(), message: String.t()}
| %{action: :leave, duplicate?: boolean(), message: String.t()}
@spec enabled?(keyword()) :: boolean()
def enabled?(opts \\ []) do
config(opts)
|> Keyword.get(:enabled, true)
|> Kernel.==(true)
end
@spec prepare_publish(map(), keyword()) :: {:ok, publish_state()} | {:error, term()}
def prepare_publish(event, opts \\ []) when is_map(event) and is_list(opts) do
if enabled?(opts) do
prepare_enabled_publish(event, opts)
else
prepare_disabled_publish(event)
end
end
@spec finalize_publish(map(), publish_state(), keyword()) :: :ok | {:ok, String.t()}
def finalize_publish(event, publish_state, opts \\ [])
def finalize_publish(event, :ok, _opts) when is_map(event) do
case Map.get(event, "kind") do
kind when kind in [@add_user_kind, @remove_user_kind, @membership_list_kind] ->
Flow.handle_event(event)
_other ->
:ok
end
end
def finalize_publish(event, %{action: :join, duplicate?: true, message: message}, _opts)
when is_map(event) do
{:ok, message}
end
def finalize_publish(event, %{action: :join, duplicate?: false, message: message}, opts)
when is_map(event) do
opts = Keyword.put_new(opts, :now, Map.get(event, "created_at"))
:ok = Flow.handle_event(event)
publish_membership_events(Map.get(event, "pubkey"), :add, opts)
{:ok, message}
end
def finalize_publish(event, %{action: :leave, duplicate?: true, message: message}, _opts)
when is_map(event) do
{:ok, message}
end
def finalize_publish(event, %{action: :leave, duplicate?: false, message: message}, opts)
when is_map(event) do
opts = Keyword.put_new(opts, :now, Map.get(event, "created_at"))
:ok = Flow.handle_event(event)
publish_membership_events(Map.get(event, "pubkey"), :remove, opts)
{:ok, message}
end
@spec dynamic_events([map()], keyword()) :: [map()]
def dynamic_events(filters, opts \\ []) when is_list(filters) and is_list(opts) do
if enabled?(opts) and requests_invite?(filters) do
filters
|> build_invite_event(opts)
|> maybe_wrap_event()
else
[]
end
end
@spec dynamic_count([map()], keyword()) :: non_neg_integer()
def dynamic_count(filters, opts \\ []) do
filters
|> dynamic_events(opts)
|> length()
end
defp prepare_enabled_publish(%{"kind" => @join_request_kind, "pubkey" => pubkey} = event, opts)
when is_binary(pubkey) do
with {:ok, _claim} <- validate_claim_from_event(event),
{:ok, membership} <- Flow.get_membership(pubkey) do
if membership_active?(membership) do
{:ok,
%{
action: :join,
duplicate?: true,
message: "duplicate: you are already a member of this relay."
}}
else
{:ok,
%{
action: :join,
duplicate?: false,
message: "info: welcome to #{relay_url(opts)}!"
}}
end
end
end
defp prepare_enabled_publish(%{"kind" => @leave_request_kind, "pubkey" => pubkey}, _opts)
when is_binary(pubkey) do
with {:ok, membership} <- Flow.get_membership(pubkey) do
if membership_active?(membership) do
{:ok, %{action: :leave, duplicate?: false, message: "info: membership revoked."}}
else
{:ok,
%{
action: :leave,
duplicate?: true,
message: "duplicate: you are not a member of this relay."
}}
end
end
end
defp prepare_enabled_publish(%{"kind" => @invite_request_kind}, _opts) do
{:error, "restricted: kind 28935 invite claims are generated via REQ"}
end
defp prepare_enabled_publish(%{"kind" => kind, "pubkey" => pubkey}, _opts)
when kind in [@add_user_kind, @remove_user_kind, @membership_list_kind] and
is_binary(pubkey) do
case relay_pubkey() do
{:ok, ^pubkey} -> {:ok, :ok}
{:ok, _other} -> {:error, "restricted: relay access metadata must be relay-signed"}
{:error, _reason} -> {:error, "error: relay identity unavailable"}
end
end
defp prepare_enabled_publish(_event, _opts), do: {:ok, :ok}
defp prepare_disabled_publish(%{"kind" => kind})
when kind in [
@join_request_kind,
@invite_request_kind,
@leave_request_kind,
@add_user_kind,
@remove_user_kind,
@membership_list_kind
] do
{:error, "blocked: NIP-43 relay access requests are disabled"}
end
defp prepare_disabled_publish(_event), do: {:ok, :ok}
defp build_invite_event(filters, opts) do
now = Keyword.get(opts, :now, System.system_time(:second))
identity_opts = identity_opts(opts)
with {:ok, claim} <- issue_claim(now, opts),
{:ok, signed_event} <-
%{
"created_at" => now,
"kind" => @invite_request_kind,
"tags" => [["-"], ["claim", claim]],
"content" => ""
}
|> Identity.sign_event(identity_opts),
true <- Filter.matches_any?(signed_event, filters) do
{:ok, signed_event}
else
_other -> :error
end
end
defp maybe_wrap_event({:ok, event}), do: [event]
defp maybe_wrap_event(_other), do: []
defp requests_invite?(filters) do
Enum.any?(filters, fn filter ->
case Map.get(filter, "kinds") do
kinds when is_list(kinds) -> @invite_request_kind in kinds
_other -> false
end
end)
end
defp issue_claim(now, opts) do
ttl_seconds =
config(opts)
|> Keyword.get(:invite_ttl_seconds, @default_invite_ttl_seconds)
|> normalize_positive_integer(@default_invite_ttl_seconds)
identity_opts = identity_opts(opts)
token_event = %{
"created_at" => now,
"kind" => @claim_token_kind,
"tags" => [["exp", Integer.to_string(now + ttl_seconds)]],
"content" => Base.encode16(:crypto.strong_rand_bytes(16), case: :lower)
}
with {:ok, signed_token} <- Identity.sign_event(token_event, identity_opts) do
signed_token
|> JSON.encode!()
|> Base.url_encode64(padding: false)
|> then(&{:ok, &1})
end
end
defp validate_claim_from_event(event) do
claim =
event
|> Map.get("tags", [])
|> Enum.find_value(fn
["claim", value | _rest] when is_binary(value) and value != "" -> value
_tag -> nil
end)
case claim do
nil -> {:error, "restricted: that is an invalid invite code."}
value -> validate_claim(value)
end
end
defp validate_claim(claim) when is_binary(claim) do
with {:ok, payload} <- Base.url_decode64(claim, padding: false),
{:ok, decoded} <- JSON.decode(payload),
:ok <- Protocol.validate_event(decoded),
:ok <- validate_claim_token(decoded) do
{:ok, decoded}
else
{:error, :expired_claim} ->
{:error, "restricted: that invite code is expired."}
_other ->
{:error, "restricted: that is an invalid invite code."}
end
end
defp validate_claim(_claim), do: {:error, "restricted: that is an invalid invite code."}
defp validate_claim_token(%{
"kind" => @claim_token_kind,
"pubkey" => pubkey,
"tags" => tags
}) do
with {:ok, relay_pubkey} <- relay_pubkey(),
true <- pubkey == relay_pubkey,
{:ok, expires_at} <- fetch_expiration(tags),
true <- expires_at >= System.system_time(:second) do
:ok
else
false -> {:error, :invalid_claim}
{:error, _reason} -> {:error, :invalid_claim}
end
end
defp validate_claim_token(_event), do: {:error, :invalid_claim}
defp fetch_expiration(tags) when is_list(tags) do
case Enum.find(tags, &match?(["exp", _value | _rest], &1)) do
["exp", value | _rest] ->
parse_expiration(value)
_other ->
{:error, :invalid_claim}
end
end
defp parse_expiration(value) when is_binary(value) do
case Integer.parse(value) do
{expires_at, ""} when expires_at > 0 -> validate_expiration(expires_at)
_other -> {:error, :invalid_claim}
end
end
defp parse_expiration(_value), do: {:error, :invalid_claim}
defp validate_expiration(expires_at) when is_integer(expires_at) do
if expires_at >= System.system_time(:second) do
{:ok, expires_at}
else
{:error, :expired_claim}
end
end
defp validate_expiration(_expires_at), do: {:error, :expired_claim}
defp publish_membership_events(member_pubkey, action, opts) when is_binary(member_pubkey) do
now = Keyword.get(opts, :now, System.system_time(:second))
identity_opts = identity_opts(opts)
context = Keyword.get(opts, :context, %RequestContext{})
action
|> build_membership_delta_event(member_pubkey, now)
|> sign_and_publish(context, identity_opts)
current_membership_snapshot(now)
|> sign_and_publish(context, identity_opts)
:ok
end
defp build_membership_delta_event(:add, member_pubkey, now) do
%{
"created_at" => now,
"kind" => @add_user_kind,
"tags" => [["-"], ["p", member_pubkey]],
"content" => ""
}
end
defp build_membership_delta_event(:remove, member_pubkey, now) do
%{
"created_at" => now,
"kind" => @remove_user_kind,
"tags" => [["-"], ["p", member_pubkey]],
"content" => ""
}
end
defp current_membership_snapshot(now) do
tags =
case Flow.list_memberships() do
{:ok, memberships} ->
[["-"] | Enum.map(memberships, &["member", &1.pubkey])]
{:error, _reason} ->
[["-"]]
end
%{
"created_at" => now,
"kind" => @membership_list_kind,
"tags" => tags,
"content" => ""
}
end
defp sign_and_publish(unsigned_event, context, identity_opts) do
with {:ok, signed_event} <- Identity.sign_event(unsigned_event, identity_opts),
{:ok, %{accepted: true}} <- Events.publish(signed_event, context: context) do
:ok
else
_other -> :ok
end
end
defp membership_active?(nil), do: false
defp membership_active?(%{role: "member"}), do: true
defp membership_active?(_membership), do: false
defp relay_pubkey do
case Identity.get() do
{:ok, %{pubkey: pubkey}} when is_binary(pubkey) -> {:ok, pubkey}
{:error, reason} -> {:error, reason}
end
end
defp relay_url(opts) do
Keyword.get(opts, :relay_url, Application.get_env(:parrhesia, :relay_url))
end
defp identity_opts(opts) do
opts
|> Keyword.take([:path, :private_key, :configured_private_key])
end
defp config(opts) do
case Keyword.get(opts, :config) do
config when is_list(config) -> config
_other -> Application.get_env(:parrhesia, :nip43, [])
end
end
defp normalize_positive_integer(value, _default) when is_integer(value) and value > 0, do: value
defp normalize_positive_integer(_value, default), do: default
end
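The invite claim above is a relay-signed event serialized to JSON and Base64url-encoded without padding, carrying its expiry in an `exp` tag. A sketch of the encode/decode round trip, skipping the signing and signature-verification steps; the event shape is illustrative, and `JSON` is the standard-library module available since Elixir 1.18:

```elixir
now = System.system_time(:second)

# Unsigned stand-in for the kind-31943 claim token (signing omitted in this sketch).
token_event = %{
  "kind" => 31_943,
  "created_at" => now,
  "tags" => [["exp", Integer.to_string(now + 900)]],
  "content" => Base.encode16(:crypto.strong_rand_bytes(16), case: :lower)
}

# Encode: JSON, then Base64url without padding -- the claim string handed to clients.
claim = token_event |> JSON.encode!() |> Base.url_encode64(padding: false)

# Decode and check expiry, mirroring the validation path above.
{:ok, payload} = Base.url_decode64(claim, padding: false)
{:ok, decoded} = JSON.decode(payload)
["exp", exp_value | _rest] = Enum.find(decoded["tags"], &match?(["exp", _ | _], &1))
{expires_at, ""} = Integer.parse(exp_value)
valid? = expires_at >= System.system_time(:second)
```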

lib/parrhesia/nip66.ex (new file)

@@ -0,0 +1,400 @@
defmodule Parrhesia.NIP66 do
@moduledoc false
alias Parrhesia.API.Events
alias Parrhesia.API.Identity
alias Parrhesia.API.RequestContext
alias Parrhesia.NIP66.Probe
alias Parrhesia.Web.Listener
alias Parrhesia.Web.RelayInfo
@default_publish_interval_seconds 900
@default_timeout_ms 5_000
@default_checks [:open, :read, :nip11]
@allowed_requirement_keys MapSet.new(~w[auth writes pow payment])
@spec enabled?(keyword()) :: boolean()
def enabled?(opts \\ []) do
config = config(opts)
config_enabled?(config) and active_targets(config, listeners(opts)) != []
end
@spec publish_snapshot(keyword()) :: {:ok, [map()]}
def publish_snapshot(opts \\ []) when is_list(opts) do
config = config(opts)
targets = active_targets(config, listeners(opts))
if config_enabled?(config) and targets != [] do
probe_fun = Keyword.get(opts, :probe_fun, &Probe.probe/3)
context = Keyword.get(opts, :context, %RequestContext{})
now = Keyword.get(opts, :now, System.system_time(:second))
identity_opts = identity_opts(opts)
events =
maybe_publish_monitor_announcement(config, now, context, identity_opts)
|> Kernel.++(
publish_discovery_events(targets, config, probe_fun, now, context, identity_opts)
)
{:ok, events}
else
{:ok, []}
end
end
@spec publish_interval_ms(keyword()) :: pos_integer()
def publish_interval_ms(opts \\ []) when is_list(opts) do
config = config(opts)
config
|> Keyword.get(:publish_interval_seconds, @default_publish_interval_seconds)
|> normalize_positive_integer(@default_publish_interval_seconds)
|> Kernel.*(1_000)
end
defp maybe_publish_monitor_announcement(config, now, context, identity_opts) do
if Keyword.get(config, :publish_monitor_announcement?, true) do
config
|> build_monitor_announcement(now)
|> sign_and_publish(context, identity_opts)
|> maybe_wrap_event()
else
[]
end
end
defp publish_discovery_events(targets, config, probe_fun, now, context, identity_opts) do
probe_opts = [
timeout_ms:
config
|> Keyword.get(:timeout_ms, @default_timeout_ms)
|> normalize_positive_integer(@default_timeout_ms),
checks: normalize_checks(Keyword.get(config, :checks, @default_checks))
]
Enum.flat_map(targets, fn target ->
probe_result =
case probe_fun.(target, probe_opts, identity_opts) do
{:ok, result} when is_map(result) -> result
_other -> %{checks: [], metrics: %{}, relay_info: nil, relay_info_body: nil}
end
target
|> build_discovery_event(now, probe_result, identity_opts)
|> sign_and_publish(context, identity_opts)
|> maybe_wrap_event()
end)
end
defp sign_and_publish(event, context, identity_opts) do
with {:ok, signed_event} <- Identity.sign_event(event, identity_opts),
{:ok, %{accepted: true}} <- Events.publish(signed_event, context: context) do
{:ok, signed_event}
else
_other -> :error
end
end
defp maybe_wrap_event({:ok, event}), do: [event]
defp maybe_wrap_event(_other), do: []
defp build_monitor_announcement(config, now) do
checks = normalize_checks(Keyword.get(config, :checks, @default_checks))
timeout_ms = Keyword.get(config, :timeout_ms, @default_timeout_ms)
frequency = Keyword.get(config, :publish_interval_seconds, @default_publish_interval_seconds)
tags =
[
[
"frequency",
Integer.to_string(
normalize_positive_integer(frequency, @default_publish_interval_seconds)
)
]
] ++
Enum.map(checks, fn check ->
["timeout", Atom.to_string(check), Integer.to_string(timeout_ms)]
end) ++
Enum.map(checks, fn check -> ["c", Atom.to_string(check)] end) ++
maybe_geohash_tag(config)
%{
"created_at" => now,
"kind" => 10_166,
"tags" => tags,
"content" => ""
}
end
defp build_discovery_event(target, now, probe_result, identity_opts) do
relay_info = probe_result[:relay_info] || local_relay_info(target.listener, identity_opts)
content = probe_result[:relay_info_body] || JSON.encode!(relay_info)
tags =
[["d", target.relay_url]]
|> append_network_tag(target)
|> append_relay_type_tag(target)
|> append_geohash_tag(target)
|> append_topic_tags(target)
|> Kernel.++(nip_tags(relay_info))
|> Kernel.++(requirement_tags(relay_info))
|> Kernel.++(rtt_tags(probe_result[:metrics] || %{}))
%{
"created_at" => now,
"kind" => 30_166,
"tags" => tags,
"content" => content
}
end
defp nip_tags(relay_info) do
relay_info
|> Map.get("supported_nips", [])
|> Enum.map(&["N", Integer.to_string(&1)])
end
defp requirement_tags(relay_info) do
limitation = Map.get(relay_info, "limitation", %{})
[
requirement_value("auth", Map.get(limitation, "auth_required", false)),
requirement_value("writes", Map.get(limitation, "restricted_writes", false)),
requirement_value("pow", Map.get(limitation, "min_pow_difficulty", 0) > 0),
requirement_value("payment", Map.get(limitation, "payment_required", false))
]
|> Enum.filter(&MapSet.member?(@allowed_requirement_keys, String.trim_leading(&1, "!")))
|> Enum.map(&["R", &1])
end
defp requirement_value(name, true), do: name
defp requirement_value(name, false), do: "!" <> name
defp rtt_tags(metrics) when is_map(metrics) do
[]
|> maybe_put_metric_tag("rtt-open", Map.get(metrics, :rtt_open_ms))
|> maybe_put_metric_tag("rtt-read", Map.get(metrics, :rtt_read_ms))
|> maybe_put_metric_tag("rtt-write", Map.get(metrics, :rtt_write_ms))
end
defp append_network_tag(tags, target) do
case target.network do
nil -> tags
value -> tags ++ [["n", value]]
end
end
defp append_relay_type_tag(tags, target) do
case target.relay_type do
nil -> tags
value -> tags ++ [["T", value]]
end
end
defp append_geohash_tag(tags, target) do
case target.geohash do
nil -> tags
value -> tags ++ [["g", value]]
end
end
defp append_topic_tags(tags, target) do
tags ++ Enum.map(target.topics, &["t", &1])
end
defp maybe_put_metric_tag(tags, _name, nil), do: tags
defp maybe_put_metric_tag(tags, name, value) when is_integer(value) and value >= 0 do
tags ++ [[name, Integer.to_string(value)]]
end
defp maybe_put_metric_tag(tags, _name, _value), do: tags
defp local_relay_info(listener, identity_opts) do
relay_info = RelayInfo.document(listener)
case Identity.get(identity_opts) do
{:ok, %{pubkey: pubkey}} ->
relay_info
|> Map.put("pubkey", pubkey)
|> Map.put("self", pubkey)
{:error, _reason} ->
relay_info
end
end
defp maybe_geohash_tag(config) do
case fetch_value(config, :geohash) do
value when is_binary(value) and value != "" -> [["g", value]]
_other -> []
end
end
defp active_targets(config, listeners) do
listeners_by_id = Map.new(listeners, &{&1.id, &1})
raw_targets =
case Keyword.get(config, :targets, []) do
[] -> [default_target()]
targets when is_list(targets) -> targets
_other -> []
end
Enum.flat_map(raw_targets, fn raw_target ->
case normalize_target(raw_target, listeners_by_id) do
{:ok, target} -> [target]
:error -> []
end
end)
end
defp normalize_target(target, listeners_by_id) when is_map(target) or is_list(target) do
listener_id = fetch_value(target, :listener) || :public
relay_url = fetch_value(target, :relay_url) || Application.get_env(:parrhesia, :relay_url)
with %{} = listener <- Map.get(listeners_by_id, normalize_listener_id(listener_id)),
true <- listener.enabled and Listener.feature_enabled?(listener, :nostr),
{:ok, normalized_relay_url} <- normalize_relay_url(relay_url) do
{:ok,
%{
listener: listener,
relay_url: normalized_relay_url,
network: normalize_network(fetch_value(target, :network), normalized_relay_url),
relay_type: normalize_optional_string(fetch_value(target, :relay_type)),
geohash: normalize_optional_string(fetch_value(target, :geohash)),
topics: normalize_string_list(fetch_value(target, :topics))
}}
else
_other -> :error
end
end
defp normalize_target(_target, _listeners_by_id), do: :error
defp normalize_relay_url(relay_url) when is_binary(relay_url) and relay_url != "" do
case URI.parse(relay_url) do
%URI{scheme: scheme, host: host} = uri
when scheme in ["ws", "wss"] and is_binary(host) and host != "" ->
normalized_uri = %URI{
uri
| scheme: String.downcase(scheme),
host: String.downcase(host),
path: normalize_path(uri.path),
query: nil,
fragment: nil,
port: normalize_port(uri.port, scheme)
}
{:ok, URI.to_string(normalized_uri)}
_other ->
:error
end
end
defp normalize_relay_url(_relay_url), do: :error
defp normalize_path(nil), do: "/"
defp normalize_path(""), do: "/"
defp normalize_path(path), do: path
defp normalize_port(80, "ws"), do: nil
defp normalize_port(443, "wss"), do: nil
defp normalize_port(port, _scheme), do: port
defp normalize_network(value, _relay_url)
when is_binary(value) and value in ["clearnet", "tor", "i2p", "loki"],
do: value
defp normalize_network(_value, relay_url) do
relay_url
|> URI.parse()
|> Map.get(:host)
|> infer_network()
end
defp infer_network(host) when is_binary(host) do
cond do
String.ends_with?(host, ".onion") -> "tor"
String.ends_with?(host, ".i2p") -> "i2p"
true -> "clearnet"
end
end
defp infer_network(_host), do: "clearnet"
defp normalize_checks(checks) when is_list(checks) do
checks
|> Enum.map(&normalize_check/1)
|> Enum.reject(&is_nil/1)
|> Enum.uniq()
end
defp normalize_checks(_checks), do: @default_checks
defp normalize_check(:open), do: :open
defp normalize_check("open"), do: :open
defp normalize_check(:read), do: :read
defp normalize_check("read"), do: :read
defp normalize_check(:nip11), do: :nip11
defp normalize_check("nip11"), do: :nip11
defp normalize_check(_check), do: nil
defp listeners(opts) do
case Keyword.get(opts, :listeners) do
listeners when is_list(listeners) -> listeners
_other -> Listener.all()
end
end
defp identity_opts(opts) do
opts
|> Keyword.take([:path, :private_key, :configured_private_key])
end
defp config(opts) do
case Keyword.get(opts, :config) do
config when is_list(config) -> config
_other -> Application.get_env(:parrhesia, :nip66, [])
end
end
defp config_enabled?(config), do: Keyword.get(config, :enabled, true)
defp default_target do
%{listener: :public, relay_url: Application.get_env(:parrhesia, :relay_url)}
end
defp normalize_listener_id(value) when is_atom(value), do: value
defp normalize_listener_id(value) when is_binary(value) do
String.to_existing_atom(value)
rescue
ArgumentError -> :public
end
defp normalize_listener_id(_value), do: :public
defp normalize_positive_integer(value, _default) when is_integer(value) and value > 0, do: value
defp normalize_positive_integer(_value, default), do: default
defp normalize_optional_string(value) when is_binary(value) and value != "", do: value
defp normalize_optional_string(_value), do: nil
defp normalize_string_list(values) when is_list(values) do
Enum.filter(values, &(is_binary(&1) and &1 != ""))
end
defp normalize_string_list(_values), do: []
defp fetch_value(map, key) when is_map(map) do
Map.get(map, key) || Map.get(map, Atom.to_string(key))
end
defp fetch_value(list, key) when is_list(list) do
if Keyword.keyword?(list), do: Keyword.get(list, key), else: nil
end
defp fetch_value(_container, _key), do: nil
end
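The URL helpers above canonicalize relay addresses before they are compared or published: the host is lowercased, default ports are dropped, empty paths become `"/"`, and query/fragment are stripped, so equivalent URLs serialize identically. A standalone sketch of the same rules using only the standard-library `URI` module (an illustrative anonymous function, not this module's API):

```elixir
# Illustrative re-implementation of the normalization rules above,
# using only the standard library URI module.
normalize = fn url ->
  uri = URI.parse(url)

  if uri.scheme in ["ws", "wss"] and is_binary(uri.host) and uri.host != "" do
    # Default ports are dropped so equivalent URLs serialize identically.
    port =
      case {uri.port, uri.scheme} do
        {80, "ws"} -> nil
        {443, "wss"} -> nil
        {port, _scheme} -> port
      end

    {:ok,
     URI.to_string(%URI{
       uri
       | host: String.downcase(uri.host),
         path: uri.path || "/",
         query: nil,
         fragment: nil,
         port: port
     })}
  else
    :error
  end
end

{:ok, "wss://relay.example.com/"} = normalize.("wss://Relay.Example.Com:443")
:error = normalize.("https://example.com")
```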


@@ -0,0 +1,218 @@
defmodule Parrhesia.NIP66.Probe do
@moduledoc false
alias Parrhesia.HTTP
alias Parrhesia.Sync.Transport.WebSockexClient
@type result :: %{
checks: [atom()],
metrics: map(),
relay_info: map() | nil,
relay_info_body: String.t() | nil
}
@spec probe(map(), keyword(), keyword()) :: {:ok, result()}
def probe(target, opts \\ [], publish_opts \\ [])
def probe(target, opts, _publish_opts) when is_map(target) and is_list(opts) do
timeout_ms = Keyword.get(opts, :timeout_ms, 5_000)
checks = normalize_checks(Keyword.get(opts, :checks, [:open, :read, :nip11]))
initial = %{checks: [], metrics: %{}, relay_info: nil, relay_info_body: nil}
result =
Enum.reduce(checks, initial, fn check, acc ->
merge_probe_result(acc, check_result(check, target, timeout_ms))
end)
{:ok, result}
end
def probe(_target, _opts, _publish_opts),
do: {:ok, %{checks: [], metrics: %{}, relay_info: nil, relay_info_body: nil}}
defp merge_probe_result(acc, %{check: check, metric_key: metric_key, metric_value: metric_value}) do
acc
|> Map.update!(:checks, &[check | &1])
|> Map.update!(:metrics, &Map.put(&1, metric_key, metric_value))
end
defp merge_probe_result(acc, %{
check: check,
relay_info: relay_info,
relay_info_body: relay_info_body
}) do
acc
|> Map.update!(:checks, &[check | &1])
|> Map.put(:relay_info, relay_info)
|> Map.put(:relay_info_body, relay_info_body)
end
defp merge_probe_result(acc, :skip), do: acc
defp merge_probe_result(acc, {:error, _reason}), do: acc
defp check_result(:open, target, timeout_ms) do
case measure_websocket_connect(Map.fetch!(target, :relay_url), timeout_ms) do
{:ok, metric_value} ->
%{check: :open, metric_key: :rtt_open_ms, metric_value: metric_value}
{:error, reason} ->
{:error, reason}
end
end
defp check_result(:read, %{listener: listener} = target, timeout_ms) do
if listener.auth.nip42_required do
:skip
else
case measure_websocket_read(Map.fetch!(target, :relay_url), timeout_ms) do
{:ok, metric_value} ->
%{check: :read, metric_key: :rtt_read_ms, metric_value: metric_value}
{:error, reason} ->
{:error, reason}
end
end
end
defp check_result(:nip11, target, timeout_ms) do
case fetch_nip11(Map.fetch!(target, :relay_url), timeout_ms) do
{:ok, relay_info, relay_info_body, _metric_value} ->
%{check: :nip11, relay_info: relay_info, relay_info_body: relay_info_body}
{:error, reason} ->
{:error, reason}
end
end
defp check_result(_check, _target, _timeout_ms), do: :skip
defp measure_websocket_connect(relay_url, timeout_ms) do
with {:ok, websocket} <- connect(relay_url, timeout_ms),
{:ok, metric_value} <- await_connected(websocket, timeout_ms) do
:ok = WebSockexClient.close(websocket)
{:ok, metric_value}
end
end
defp measure_websocket_read(relay_url, timeout_ms) do
with {:ok, websocket} <- connect(relay_url, timeout_ms),
{:ok, started_at} <- await_connected_started_at(websocket, timeout_ms),
:ok <- WebSockexClient.send_json(websocket, ["COUNT", "nip66-probe", %{"kinds" => [1]}]),
{:ok, metric_value} <- await_count_response(websocket, timeout_ms, started_at) do
:ok = WebSockexClient.close(websocket)
{:ok, metric_value}
end
end
defp connect(relay_url, timeout_ms) do
server = %{url: relay_url, tls: tls_config(relay_url)}
WebSockexClient.connect(self(), server, websocket_opts: [timeout: timeout_ms, protocols: nil])
end
defp await_connected(websocket, timeout_ms) do
with {:ok, started_at} <- await_connected_started_at(websocket, timeout_ms) do
{:ok, monotonic_duration_ms(started_at)}
end
end
defp await_connected_started_at(websocket, timeout_ms) do
started_at = System.monotonic_time()
receive do
{:sync_transport, ^websocket, :connected, _metadata} -> {:ok, started_at}
{:sync_transport, ^websocket, :disconnected, reason} -> {:error, reason}
after
timeout_ms -> {:error, :timeout}
end
end
defp await_count_response(websocket, timeout_ms, started_at) do
receive do
{:sync_transport, ^websocket, :frame, ["COUNT", "nip66-probe", _payload]} ->
{:ok, monotonic_duration_ms(started_at)}
{:sync_transport, ^websocket, :frame, ["CLOSED", "nip66-probe", _message]} ->
{:error, :closed}
{:sync_transport, ^websocket, :disconnected, reason} ->
{:error, reason}
after
timeout_ms -> {:error, :timeout}
end
end
defp fetch_nip11(relay_url, timeout_ms) do
started_at = System.monotonic_time()
case HTTP.get(
url: relay_info_url(relay_url),
headers: [{"accept", "application/nostr+json"}],
decode_body: false,
connect_options: [timeout: timeout_ms],
receive_timeout: timeout_ms
) do
{:ok, %Req.Response{status: 200, body: body}} when is_binary(body) ->
case JSON.decode(body) do
{:ok, relay_info} when is_map(relay_info) ->
{:ok, relay_info, body, monotonic_duration_ms(started_at)}
{:error, reason} ->
{:error, reason}
_other ->
{:error, :invalid_relay_info}
end
{:ok, %Req.Response{status: status}} ->
{:error, {:relay_info_request_failed, status}}
{:error, reason} ->
{:error, reason}
end
end
defp relay_info_url(relay_url) do
relay_url
|> URI.parse()
|> Map.update!(:scheme, fn
"wss" -> "https"
"ws" -> "http"
end)
|> URI.to_string()
end
defp tls_config(relay_url) do
case URI.parse(relay_url) do
%URI{scheme: "wss", host: host} when is_binary(host) and host != "" ->
%{mode: :required, hostname: host, pins: []}
_other ->
%{mode: :disabled}
end
end
defp normalize_checks(checks) when is_list(checks) do
checks
|> Enum.map(&normalize_check/1)
|> Enum.reject(&is_nil/1)
|> Enum.uniq()
end
defp normalize_checks(_checks), do: []
defp normalize_check(:open), do: :open
defp normalize_check("open"), do: :open
defp normalize_check(:read), do: :read
defp normalize_check("read"), do: :read
defp normalize_check(:nip11), do: :nip11
defp normalize_check("nip11"), do: :nip11
defp normalize_check(_check), do: nil
defp monotonic_duration_ms(started_at) do
System.monotonic_time()
|> Kernel.-(started_at)
|> System.convert_time_unit(:native, :millisecond)
end
end
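Every RTT metric in this module follows the same pattern: capture `System.monotonic_time/0` before the operation and convert the delta to milliseconds afterwards, which keeps the measurement immune to wall-clock adjustments. A minimal sketch of that pattern in isolation:

```elixir
# Sketch of the monotonic duration measurement behind the rtt_* metrics.
started_at = System.monotonic_time()

# Stand-in for the websocket round trip being timed.
Process.sleep(10)

elapsed_ms =
  System.monotonic_time()
  |> Kernel.-(started_at)
  |> System.convert_time_unit(:native, :millisecond)

true = elapsed_ms >= 10
```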


@@ -686,7 +686,14 @@ defmodule Parrhesia.Policy.EventPolicy do
_tag -> false
end)
cond do
not protected? ->
:ok
nip43_relay_access_kind?(Map.get(event, "kind")) ->
:ok
true ->
pubkey = Map.get(event, "pubkey")
cond do
@@ -694,11 +701,14 @@ defmodule Parrhesia.Policy.EventPolicy do
MapSet.member?(authenticated_pubkeys, pubkey) -> :ok
true -> {:error, :protected_event_pubkey_mismatch}
end
end
end
defp nip43_relay_access_kind?(kind) when kind in [8_000, 8_001, 13_534, 28_934, 28_935, 28_936],
do: true
defp nip43_relay_access_kind?(_kind), do: false
defp config_bool([scope, key], default) do
case Application.get_env(:parrhesia, scope, []) |> Keyword.get(key, default) do
true -> true


@@ -0,0 +1,73 @@
defmodule Parrhesia.PostgresRepos do
@moduledoc false
alias Parrhesia.Config
alias Parrhesia.ReadRepo
alias Parrhesia.Repo
@spec write() :: module()
def write, do: Repo
@spec read() :: module()
def read do
if separate_read_pool_enabled?() and is_pid(Process.whereis(ReadRepo)) do
ReadRepo
else
Repo
end
end
@spec started_repos() :: [module()]
def started_repos do
cond do
not postgres_enabled?() ->
[]
separate_read_pool_enabled?() ->
[Repo, ReadRepo]
true ->
[Repo]
end
end
@spec postgres_enabled?() :: boolean()
def postgres_enabled? do
case Process.whereis(Config) do
pid when is_pid(pid) ->
Config.get([:storage, :backend], storage_backend_default()) == :postgres
nil ->
storage_backend_default() == :postgres
end
end
@spec separate_read_pool_enabled?() :: boolean()
def separate_read_pool_enabled? do
case {postgres_enabled?(), Process.whereis(Config)} do
{false, _pid} ->
false
{true, pid} when is_pid(pid) ->
Config.get(
[:database, :separate_read_pool?],
application_default(:separate_read_pool?, false)
)
{true, nil} ->
application_default(:separate_read_pool?, false)
end
end
defp application_default(key, default) do
:parrhesia
|> Application.get_env(:database, [])
|> Keyword.get(key, default)
end
defp storage_backend_default do
:parrhesia
|> Application.get_env(:storage, [])
|> Keyword.get(:backend, :postgres)
end
end
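Callers are expected to route queries through `write/0` and `read/0` rather than naming a repo module directly; `read/0` falls back to the primary pool unless the read pool is both configured and actually running. The routing decision itself can be sketched in isolation (the `MyApp.*` module names below are placeholders, not the real repos):

```elixir
# Sketch of the read-routing fallback: prefer the read pool only when it
# is enabled AND its process is actually started; otherwise fall back to
# the write repo. MyApp.Repo / MyApp.ReadRepo are placeholder names.
choose_read_repo = fn enabled?, running? ->
  if enabled? and running?, do: MyApp.ReadRepo, else: MyApp.Repo
end

MyApp.ReadRepo = choose_read_repo.(true, true)
MyApp.Repo = choose_read_repo.(true, false)
MyApp.Repo = choose_read_repo.(false, true)
```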


@@ -1 +1,4 @@
Postgrex.Types.define(Parrhesia.PostgresTypes, [],
json: JSON,
moduledoc: "Custom Postgrex type definitions used by `Parrhesia.Repo` and `Parrhesia.ReadRepo`."
)


@@ -1,6 +1,15 @@
defmodule Parrhesia.Protocol do
@moduledoc """
Nostr protocol message decode/encode helpers.
This module is transport-oriented: it turns websocket payloads into structured tuples and
back again.
For programmatic API calls inside the application, prefer the `Parrhesia.API.*` modules.
In particular:
- `validate_event/1` returns user-facing error strings
- `Parrhesia.API.Auth.validate_event/1` returns machine-friendly validator atoms
"""
alias Parrhesia.Protocol.EventValidator
@@ -41,6 +50,9 @@ defmodule Parrhesia.Protocol do
@count_options_keys MapSet.new(["hll", "approximate"])
@doc """
Decodes a client websocket payload into a structured protocol tuple.
"""
@spec decode_client(binary()) :: {:ok, client_message()} | {:error, decode_error()}
def decode_client(payload) when is_binary(payload) do
with {:ok, decoded} <- decode_json(payload) do
@@ -48,6 +60,9 @@ defmodule Parrhesia.Protocol do
end
end
@doc """
Validates an event and returns relay-facing error strings.
"""
@spec validate_event(event()) :: :ok | {:error, String.t()}
def validate_event(event) do
case EventValidator.validate(event) do
@@ -56,6 +71,9 @@ defmodule Parrhesia.Protocol do
end
end
@doc """
Encodes a relay message tuple into the JSON frame sent to clients.
"""
@spec encode_relay(relay_message()) :: binary()
def encode_relay(message) do
message
@@ -63,6 +81,9 @@ defmodule Parrhesia.Protocol do
|> JSON.encode!()
end
@doc """
Converts a decode error into the relay notice string that should be sent to a client.
"""
@spec decode_error_notice(decode_error()) :: String.t()
def decode_error_notice(reason) do
case reason do


@@ -6,6 +6,14 @@ defmodule Parrhesia.Protocol.EventValidator do
@required_fields ~w[id pubkey created_at kind tags content sig]
@max_kind 65_535
@default_max_event_future_skew_seconds 900
@default_max_tags_per_event 256
@default_nip43_request_max_age_seconds 300
@verify_event_signatures_locked Application.compile_env(
:parrhesia,
[:features, :verify_event_signatures_locked?],
false
)
@supported_mls_ciphersuites MapSet.new(~w[0x0001 0x0002 0x0003 0x0004 0x0005 0x0006 0x0007])
@required_mls_extensions MapSet.new(["0xf2ee", "0x000a"])
@supported_keypackage_ref_sizes [32, 48, 64]
@@ -17,6 +25,7 @@ defmodule Parrhesia.Protocol.EventValidator do
| :invalid_created_at
| :created_at_too_far_in_future
| :invalid_kind
| :too_many_tags
| :invalid_tags
| :invalid_content
| :invalid_sig
@@ -44,6 +53,22 @@ defmodule Parrhesia.Protocol.EventValidator do
| :missing_marmot_group_tag
| :invalid_marmot_group_tag
| :invalid_marmot_group_content
| :missing_nip66_d_tag
| :invalid_nip66_d_tag
| :invalid_nip66_discovery_tag
| :missing_nip66_frequency_tag
| :invalid_nip66_frequency_tag
| :invalid_nip66_timeout_tag
| :invalid_nip66_check_tag
| :missing_nip43_protected_tag
| :missing_nip43_claim_tag
| :invalid_nip43_claim_tag
| :missing_nip43_member_tag
| :invalid_nip43_member_tag
| :missing_nip43_pubkey_tag
| :invalid_nip43_pubkey_tag
| :stale_nip43_join_request
| :stale_nip43_leave_request
@spec validate(map()) :: :ok | {:error, error_reason()}
def validate(event) when is_map(event) do
@@ -87,6 +112,7 @@ defmodule Parrhesia.Protocol.EventValidator do
created_at_too_far_in_future:
"invalid: event creation date is too far off from the current time",
invalid_kind: "invalid: kind must be an integer between 0 and 65535",
too_many_tags: "invalid: event tags exceed configured limit",
invalid_tags: "invalid: tags must be an array of non-empty string arrays",
invalid_content: "invalid: content must be a string",
invalid_sig: "invalid: sig must be 64-byte lowercase hex",
@@ -127,7 +153,35 @@ defmodule Parrhesia.Protocol.EventValidator do
missing_marmot_group_tag: "invalid: kind 445 must include at least one h tag with a group id",
invalid_marmot_group_tag:
"invalid: kind 445 h tags must contain 32-byte lowercase hex group ids",
invalid_marmot_group_content: "invalid: kind 445 content must be non-empty base64",
missing_nip66_d_tag:
"invalid: kind 30166 must include a single [\"d\", <normalized ws/wss url or relay pubkey>] tag",
invalid_nip66_d_tag:
"invalid: kind 30166 must include a single [\"d\", <normalized ws/wss url or relay pubkey>] tag",
invalid_nip66_discovery_tag: "invalid: kind 30166 includes malformed NIP-66 discovery tags",
missing_nip66_frequency_tag:
"invalid: kind 10166 must include a single [\"frequency\", <seconds>] tag",
invalid_nip66_frequency_tag:
"invalid: kind 10166 must include a single [\"frequency\", <seconds>] tag",
invalid_nip66_timeout_tag:
"invalid: kind 10166 timeout tags must be [\"timeout\", <check>, <ms>]",
invalid_nip66_check_tag: "invalid: kind 10166 c tags must contain lowercase check names",
missing_nip43_protected_tag:
"invalid: NIP-43 events must include a NIP-70 protected [\"-\"] tag",
missing_nip43_claim_tag:
"invalid: kinds 28934 and 28935 must include a single [\"claim\", <invite code>] tag",
invalid_nip43_claim_tag:
"invalid: kinds 28934 and 28935 must include a single [\"claim\", <invite code>] tag",
missing_nip43_member_tag:
"invalid: kind 13534 must include at least one [\"member\", <hex pubkey>] tag",
invalid_nip43_member_tag:
"invalid: kind 13534 member tags must contain lowercase hex pubkeys",
missing_nip43_pubkey_tag:
"invalid: kinds 8000 and 8001 must include a single [\"p\", <hex pubkey>] tag",
invalid_nip43_pubkey_tag:
"invalid: kinds 8000 and 8001 must include a single [\"p\", <hex pubkey>] tag",
stale_nip43_join_request: "invalid: kind 28934 created_at must be recent",
stale_nip43_leave_request: "invalid: kind 28936 created_at must be recent"
}
@spec error_message(error_reason()) :: String.t()
@@ -169,16 +223,25 @@ defmodule Parrhesia.Protocol.EventValidator do
defp validate_kind(kind) when is_integer(kind) and kind >= 0 and kind <= @max_kind, do: :ok
defp validate_kind(_kind), do: {:error, :invalid_kind}
defp validate_tags(tags) when is_list(tags), do: validate_tags(tags, max_tags_per_event(), 0)
defp validate_tags(_tags), do: {:error, :invalid_tags}
defp validate_tags([], _max_tags, _count), do: :ok
defp validate_tags([tag | rest], max_tags, count) do
cond do
count + 1 > max_tags ->
{:error, :too_many_tags}
valid_tag?(tag) ->
validate_tags(rest, max_tags, count + 1)
true ->
{:error, :invalid_tags}
end
end
defp validate_content(content) when is_binary(content), do: :ok
defp validate_content(_content), do: {:error, :invalid_content}
@@ -197,7 +260,7 @@ defmodule Parrhesia.Protocol.EventValidator do
end
defp validate_signature(event) do
if @verify_event_signatures_locked or verify_event_signatures?() do
verify_signature(event)
else
:ok
@@ -240,6 +303,27 @@ defmodule Parrhesia.Protocol.EventValidator do
defp validate_kind_specific(%{"kind" => 1059} = event),
do: validate_giftwrap_event(event)
defp validate_kind_specific(%{"kind" => 30_166} = event),
do: validate_nip66_discovery_event(event)
defp validate_kind_specific(%{"kind" => 10_166} = event),
do: validate_nip66_monitor_announcement(event)
defp validate_kind_specific(%{"kind" => 13_534} = event),
do: validate_nip43_membership_list(event)
defp validate_kind_specific(%{"kind" => kind} = event) when kind in [8_000, 8_001],
do: validate_nip43_membership_delta(event)
defp validate_kind_specific(%{"kind" => 28_934} = event),
do: validate_nip43_join_request(event)
defp validate_kind_specific(%{"kind" => 28_935} = event),
do: validate_nip43_invite_response(event)
defp validate_kind_specific(%{"kind" => 28_936} = event),
do: validate_nip43_leave_request(event)
defp validate_kind_specific(_event), do: :ok
defp validate_marmot_keypackage_event(event) do
@@ -313,6 +397,184 @@ defmodule Parrhesia.Protocol.EventValidator do
end
end
defp validate_nip66_discovery_event(event) do
tags = Map.get(event, "tags", [])
with :ok <- validate_nip66_d_tag(tags),
:ok <-
validate_optional_single_string_tag_with_predicate(
tags,
"n",
:invalid_nip66_discovery_tag,
&(&1 in ["clearnet", "tor", "i2p", "loki"])
),
:ok <-
validate_optional_single_string_tag_with_predicate(
tags,
"T",
:invalid_nip66_discovery_tag,
&valid_pascal_case?/1
),
:ok <-
validate_optional_single_string_tag_with_predicate(
tags,
"g",
:invalid_nip66_discovery_tag,
&non_empty_string?/1
),
:ok <-
validate_optional_repeated_tag(
tags,
"N",
&positive_integer_string?/1,
:invalid_nip66_discovery_tag
),
:ok <-
validate_optional_repeated_tag(
tags,
"R",
&valid_nip66_requirement_value?/1,
:invalid_nip66_discovery_tag
),
:ok <-
validate_optional_repeated_tag(
tags,
"k",
&valid_nip66_kind_value?/1,
:invalid_nip66_discovery_tag
),
:ok <-
validate_optional_repeated_tag(
tags,
"t",
&non_empty_string?/1,
:invalid_nip66_discovery_tag
),
:ok <-
validate_optional_single_string_tag_with_predicate(
tags,
"rtt-open",
:invalid_nip66_discovery_tag,
&positive_integer_string?/1
),
:ok <-
validate_optional_single_string_tag_with_predicate(
tags,
"rtt-read",
:invalid_nip66_discovery_tag,
&positive_integer_string?/1
) do
validate_optional_single_string_tag_with_predicate(
tags,
"rtt-write",
:invalid_nip66_discovery_tag,
&positive_integer_string?/1
)
end
end
defp validate_nip66_monitor_announcement(event) do
tags = Map.get(event, "tags", [])
with :ok <-
validate_single_string_tag_with_predicate(
tags,
"frequency",
:missing_nip66_frequency_tag,
:invalid_nip66_frequency_tag,
&positive_integer_string?/1
),
:ok <- validate_optional_repeated_timeout_tags(tags),
:ok <-
validate_optional_repeated_tag(
tags,
"c",
&valid_nip66_check_name?/1,
:invalid_nip66_check_tag
) do
validate_optional_single_string_tag_with_predicate(
tags,
"g",
:invalid_nip66_discovery_tag,
&non_empty_string?/1
)
end
end
defp validate_nip43_membership_list(event) do
tags = Map.get(event, "tags", [])
case validate_protected_tag(tags) do
:ok -> validate_optional_repeated_pubkey_tag(tags, "member", :invalid_nip43_member_tag)
{:error, _reason} = error -> error
end
end
defp validate_nip43_membership_delta(event) do
tags = Map.get(event, "tags", [])
case validate_protected_tag(tags) do
:ok ->
validate_single_pubkey_tag(
tags,
"p",
:missing_nip43_pubkey_tag,
:invalid_nip43_pubkey_tag
)
{:error, _reason} = error ->
error
end
end
defp validate_nip43_join_request(event) do
tags = Map.get(event, "tags", [])
case validate_protected_tag(tags) do
:ok ->
with :ok <-
validate_single_string_tag_with_predicate(
tags,
"claim",
:missing_nip43_claim_tag,
:invalid_nip43_claim_tag,
&non_empty_string?/1
) do
validate_recent_created_at(event, :stale_nip43_join_request)
end
{:error, _reason} = error ->
error
end
end
defp validate_nip43_invite_response(event) do
tags = Map.get(event, "tags", [])
case validate_protected_tag(tags) do
:ok ->
validate_single_string_tag_with_predicate(
tags,
"claim",
:missing_nip43_claim_tag,
:invalid_nip43_claim_tag,
&non_empty_string?/1
)
{:error, _reason} = error ->
error
end
end
defp validate_nip43_leave_request(event) do
tags = Map.get(event, "tags", [])
case validate_protected_tag(tags) do
:ok -> validate_recent_created_at(event, :stale_nip43_leave_request)
{:error, _reason} = error -> error
end
end
defp validate_non_empty_base64_content(event),
do: validate_non_empty_base64_content(event, :invalid_marmot_keypackage_content)
@@ -394,6 +656,25 @@ defmodule Parrhesia.Protocol.EventValidator do
end
end
defp validate_optional_single_string_tag_with_predicate(
tags,
tag_name,
invalid_error,
predicate
)
when is_function(predicate, 1) do
case Enum.filter(tags, &match_tag_name?(&1, tag_name)) do
[] ->
:ok
[[^tag_name, value]] ->
if predicate.(value), do: :ok, else: {:error, invalid_error}
_other ->
{:error, invalid_error}
end
end
defp validate_mls_extensions_tag(tags) do
with {:ok, ["mls_extensions" | extensions]} <-
fetch_single_tag(tags, "mls_extensions", :missing_marmot_extensions_tag),
@@ -432,6 +713,89 @@ defmodule Parrhesia.Protocol.EventValidator do
end
end
defp validate_nip66_d_tag(tags) do
with {:ok, ["d", value]} <- fetch_single_tag(tags, "d", :missing_nip66_d_tag),
true <- valid_websocket_url?(value) or lowercase_hex?(value, 32) do
:ok
else
{:ok, _invalid_tag_shape} -> {:error, :invalid_nip66_d_tag}
false -> {:error, :invalid_nip66_d_tag}
{:error, _reason} = error -> error
end
end
defp validate_optional_repeated_timeout_tags(tags) do
timeout_tags = Enum.filter(tags, &match_tag_name?(&1, "timeout"))
if Enum.all?(timeout_tags, &valid_nip66_timeout_tag?/1) do
:ok
else
{:error, :invalid_nip66_timeout_tag}
end
end
defp validate_optional_repeated_tag(tags, tag_name, predicate, invalid_error)
when is_function(predicate, 1) do
tags
|> Enum.filter(&match_tag_name?(&1, tag_name))
|> Enum.reduce_while(:ok, fn
[^tag_name, value], :ok ->
if predicate.(value), do: {:cont, :ok}, else: {:halt, {:error, invalid_error}}
_other, :ok ->
{:halt, {:error, invalid_error}}
end)
end
defp validate_protected_tag(tags) do
if Enum.any?(tags, &match?(["-"], &1)) do
:ok
else
{:error, :missing_nip43_protected_tag}
end
end
defp validate_single_pubkey_tag(tags, tag_name, missing_error, invalid_error) do
case fetch_single_tag(tags, tag_name, missing_error) do
{:ok, [^tag_name, value]} ->
if lowercase_hex?(value, 32) do
:ok
else
{:error, invalid_error}
end
{:ok, _invalid_tag_shape} ->
{:error, invalid_error}
{:error, _reason} = error ->
error
end
end
defp validate_optional_repeated_pubkey_tag(tags, tag_name, invalid_error) do
matching_tags = Enum.filter(tags, &match_tag_name?(&1, tag_name))
if Enum.all?(matching_tags, fn
[^tag_name, pubkey | _rest] -> lowercase_hex?(pubkey, 32)
_other -> false
end) do
:ok
else
{:error, invalid_error}
end
end
defp validate_recent_created_at(%{"created_at" => created_at}, error_reason)
when is_integer(created_at) do
if created_at >= System.system_time(:second) - nip43_request_max_age_seconds() do
:ok
else
{:error, error_reason}
end
end
defp validate_recent_created_at(_event, error_reason), do: {:error, error_reason}
defp fetch_single_tag(tags, tag_name, missing_error) do
case Enum.filter(tags, &match_tag_name?(&1, tag_name)) do
[tag] -> {:ok, tag}
@@ -488,6 +852,49 @@ defmodule Parrhesia.Protocol.EventValidator do
defp valid_websocket_url?(_url), do: false
defp valid_nip66_timeout_tag?(["timeout", milliseconds]),
do: positive_integer_string?(milliseconds)
defp valid_nip66_timeout_tag?(["timeout", check, milliseconds]) do
valid_nip66_check_name?(check) and positive_integer_string?(milliseconds)
end
defp valid_nip66_timeout_tag?(_tag), do: false
defp valid_nip66_requirement_value?(value) when is_binary(value) do
normalized = String.trim_leading(value, "!")
normalized in ["auth", "writes", "pow", "payment"]
end
defp valid_nip66_requirement_value?(_value), do: false
defp valid_nip66_kind_value?(<<"!", rest::binary>>), do: positive_integer_string?(rest)
defp valid_nip66_kind_value?(value), do: positive_integer_string?(value)
defp valid_nip66_check_name?(value) when is_binary(value) do
String.match?(value, ~r/^[a-z0-9-]+$/)
end
defp valid_nip66_check_name?(_value), do: false
defp valid_pascal_case?(value) when is_binary(value) do
String.match?(value, ~r/^[A-Z][A-Za-z0-9]*$/)
end
defp valid_pascal_case?(_value), do: false
defp positive_integer_string?(value) when is_binary(value) do
case Integer.parse(value) do
{integer, ""} when integer >= 0 -> true
_other -> false
end
end
defp positive_integer_string?(_value), do: false
defp non_empty_string?(value) when is_binary(value), do: value != ""
defp non_empty_string?(_value), do: false
defp valid_keypackage_ref?(value) when is_binary(value) do
Enum.any?(@supported_keypackage_ref_sizes, &lowercase_hex?(value, &1))
end
@@ -510,4 +917,17 @@ defmodule Parrhesia.Protocol.EventValidator do
|> Application.get_env(:limits, [])
|> Keyword.get(:max_event_future_skew_seconds, @default_max_event_future_skew_seconds)
end
defp max_tags_per_event do
case Application.get_env(:parrhesia, :limits, []) |> Keyword.get(:max_tags_per_event) do
value when is_integer(value) and value > 0 -> value
_other -> @default_max_tags_per_event
end
end
defp nip43_request_max_age_seconds do
:parrhesia
|> Application.get_env(:nip43, [])
|> Keyword.get(:request_max_age_seconds, @default_nip43_request_max_age_seconds)
end
end
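Several of the NIP-66 predicates added above reduce to the same `Integer.parse/1` idiom: a binary only counts as an integer string when the parse remainder is empty. A quick sketch of that check in isolation:

```elixir
# Sketch of the Integer.parse-based check used by positive_integer_string?/1:
# the value must parse fully, with no trailing remainder, to a non-negative integer.
int_string? = fn value ->
  is_binary(value) and match?({n, ""} when n >= 0, Integer.parse(value))
end

true = int_string?.("1500")
false = int_string?.("1500ms")
false = int_string?.("-1")
```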


@@ -5,6 +5,7 @@ defmodule Parrhesia.Protocol.Filter do
@max_kind 65_535
@default_max_filters_per_req 16
@default_max_tag_values_per_filter 128
@type validation_error ::
:invalid_filters
@@ -19,6 +20,7 @@ defmodule Parrhesia.Protocol.Filter do
| :invalid_until
| :invalid_limit
| :invalid_search
| :too_many_tag_values
| :invalid_tag_filter
@allowed_keys MapSet.new(["ids", "authors", "kinds", "since", "until", "limit", "search"])
@@ -36,6 +38,7 @@ defmodule Parrhesia.Protocol.Filter do
invalid_until: "invalid: until must be a non-negative integer",
invalid_limit: "invalid: limit must be a positive integer",
invalid_search: "invalid: search must be a non-empty string",
too_many_tag_values: "invalid: tag filters exceed configured value limit",
invalid_tag_filter:
"invalid: tag filters must use #<single-letter> with non-empty string arrays"
}
@@ -178,19 +181,33 @@ defmodule Parrhesia.Protocol.Filter do
filter
|> Enum.filter(fn {key, _value} -> valid_tag_filter_key?(key) end)
|> Enum.reduce_while(:ok, fn {_key, values}, :ok ->
case validate_tag_filter_values(values) do
:ok -> {:cont, :ok}
{:error, reason} -> {:halt, {:error, reason}}
end
end)
end
defp validate_tag_filter_values(values) when is_list(values),
do: validate_tag_filter_values(values, max_tag_values_per_filter(), 0)
defp validate_tag_filter_values(_values), do: {:error, :invalid_tag_filter}
defp validate_tag_filter_values([], _max_values, 0), do: {:error, :invalid_tag_filter}
defp validate_tag_filter_values([], _max_values, _count), do: :ok
defp validate_tag_filter_values([value | rest], max_values, count) do
cond do
count + 1 > max_values ->
{:error, :too_many_tag_values}
is_binary(value) ->
validate_tag_filter_values(rest, max_values, count + 1)
true ->
{:error, :invalid_tag_filter}
end
end
defp filter_predicates(event, filter) do defp filter_predicates(event, filter) do
[ [
@@ -278,4 +295,12 @@ defmodule Parrhesia.Protocol.Filter do
|> Application.get_env(:limits, []) |> Application.get_env(:limits, [])
|> Keyword.get(:max_filters_per_req, @default_max_filters_per_req) |> Keyword.get(:max_filters_per_req, @default_max_filters_per_req)
end end
defp max_tag_values_per_filter do
case Application.get_env(:parrhesia, :limits, [])
|> Keyword.get(:max_tag_values_per_filter) do
value when is_integer(value) and value > 0 -> value
_other -> @default_max_tag_values_per_filter
end
end
end end
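The new `max_tag_values_per_filter/0` helper reads its cap from the `:limits` application environment, so operators can tune it per deployment. A minimal config sketch — both numeric values here are illustrative choices, not the library defaults:

```elixir
# config/runtime.exs — cap the number of values allowed per "#<letter>" tag filter.
# Non-positive or non-integer values fall back to the compiled-in default.
config :parrhesia, :limits,
  max_filters_per_req: 20,
  max_tag_values_per_filter: 1_000
```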


@@ -0,0 +1,9 @@
defmodule Parrhesia.ReadRepo do
@moduledoc """
PostgreSQL repository dedicated to read-heavy workloads when a separate read pool is enabled.
"""
use Ecto.Repo,
otp_app: :parrhesia,
adapter: Ecto.Adapters.Postgres
end
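`Parrhesia.ReadRepo` only matters when a separate read pool is enabled. A sketch of a two-pool setup, assuming standard `Ecto.Repo` configuration keys — the environment variable names and pool sizes are placeholders, not values taken from this repository:

```elixir
# config/runtime.exs — independent connection pools for write and read traffic.
config :parrhesia, Parrhesia.Repo,
  url: System.get_env("DATABASE_URL"),
  pool_size: 10

# Point the read pool at a replica if one exists; otherwise reuse the primary.
config :parrhesia, Parrhesia.ReadRepo,
  url: System.get_env("READ_DATABASE_URL") || System.get_env("DATABASE_URL"),
  pool_size: 20
```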


@@ -1,10 +1,18 @@
 defmodule Parrhesia.Release do
   @moduledoc """
   Helpers for running Ecto tasks from a production release.
+
+  Intended for use from a release `eval` command where Mix is not available:
+
+      bin/parrhesia eval "Parrhesia.Release.migrate()"
+      bin/parrhesia eval "Parrhesia.Release.rollback(Parrhesia.Repo, 20260101000000)"
   """

   @app :parrhesia

+  @doc """
+  Runs all pending Ecto migrations for every configured repo.
+  """
   def migrate do
     load_app()
@@ -16,6 +24,9 @@ defmodule Parrhesia.Release do
     end
   end

+  @doc """
+  Rolls back the given `repo` to the specified migration `version`.
+  """
   def rollback(repo, version) when is_atom(repo) and is_integer(version) do
     load_app()


@@ -1,6 +1,9 @@
 defmodule Parrhesia.Repo do
   @moduledoc """
-  PostgreSQL repository for storage adapter persistence.
+  PostgreSQL repository for write traffic and storage adapter persistence.
+
+  Separated from `Parrhesia.ReadRepo` so that ingest writes and read-heavy
+  queries use independent connection pools.
   """

   use Ecto.Repo,

lib/parrhesia/runtime.ex Normal file

@@ -0,0 +1,52 @@
defmodule Parrhesia.Runtime do
@moduledoc """
Top-level Parrhesia supervisor.
In normal standalone use, the `:parrhesia` application starts this supervisor automatically.
Host applications can also embed it directly under their own supervision tree:
children = [
{Parrhesia.Runtime, name: Parrhesia.Supervisor}
]
Parrhesia currently assumes a single runtime per BEAM node and uses globally registered
process names for core services.
"""
use Supervisor
@doc """
Starts the Parrhesia runtime supervisor.
Accepts a `:name` option (defaults to `Parrhesia.Supervisor`).
"""
def start_link(opts \\ []) do
name = Keyword.get(opts, :name, Parrhesia.Supervisor)
Supervisor.start_link(__MODULE__, opts, name: name)
end
@impl true
def init(_opts) do
Supervisor.init(children(), strategy: :one_for_one)
end
@doc """
Returns the list of child specifications started by the runtime supervisor.
"""
def children do
[
Parrhesia.Telemetry,
Parrhesia.ConnectionStats,
Parrhesia.Config,
Parrhesia.Web.EventIngestLimiter,
Parrhesia.Web.IPEventIngestLimiter,
Parrhesia.Storage.Supervisor,
Parrhesia.Subscriptions.Supervisor,
Parrhesia.Auth.Supervisor,
Parrhesia.Sync.Supervisor,
Parrhesia.Policy.Supervisor,
Parrhesia.Web.Endpoint,
Parrhesia.Tasks.Supervisor
]
end
end
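As the moduledoc notes, a host application can embed the runtime under its own supervision tree instead of starting the `:parrhesia` application. A hedged sketch of such an embedding — `MyApp` and its other children are hypothetical:

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      MyApp.Repo,
      # Embed the Parrhesia relay runtime. It registers globally named
      # processes, so only one instance may run per BEAM node.
      {Parrhesia.Runtime, name: Parrhesia.Supervisor},
      MyAppWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```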


@@ -4,6 +4,9 @@ defmodule Parrhesia.Storage do
   Domain/runtime code should resolve behavior modules through this module instead of
   depending on concrete adapter implementations directly.
+
+  Each accessor validates that the configured module is loaded and declares the expected
+  behaviour before returning it.
   """

   @default_modules [
@@ -14,18 +17,33 @@ defmodule Parrhesia.Storage do
     admin: Parrhesia.Storage.Adapters.Postgres.Admin
   ]

+  @doc """
+  Returns the configured events storage module.
+  """
   @spec events() :: module()
   def events, do: fetch_module!(:events, Parrhesia.Storage.Events)

+  @doc """
+  Returns the configured moderation storage module.
+  """
   @spec moderation() :: module()
   def moderation, do: fetch_module!(:moderation, Parrhesia.Storage.Moderation)

+  @doc """
+  Returns the configured ACL storage module.
+  """
   @spec acl() :: module()
   def acl, do: fetch_module!(:acl, Parrhesia.Storage.ACL)

+  @doc """
+  Returns the configured groups storage module.
+  """
   @spec groups() :: module()
   def groups, do: fetch_module!(:groups, Parrhesia.Storage.Groups)

+  @doc """
+  Returns the configured admin storage module.
+  """
   @spec admin() :: module()
   def admin, do: fetch_module!(:admin, Parrhesia.Storage.Admin)


@@ -6,6 +6,9 @@ defmodule Parrhesia.Storage.Adapters.Memory.Admin do
   alias Parrhesia.Storage.Adapters.Memory.Store

   @behaviour Parrhesia.Storage.Admin

+  @default_limit 100
+  @max_limit 1_000
+  @max_audit_logs 1_000

   @impl true
   def execute(_context, method, _params) do
@@ -17,18 +20,59 @@ defmodule Parrhesia.Storage.Adapters.Memory.Admin do
   @impl true
   def append_audit_log(_context, audit_entry) when is_map(audit_entry) do
-    Store.update(fn state -> update_in(state.audit_logs, &[audit_entry | &1]) end)
+    Store.update(fn state ->
+      update_in(state.audit_logs, fn logs ->
+        [audit_entry | logs] |> Enum.take(@max_audit_logs)
+      end)
+    end)

     :ok
   end

   def append_audit_log(_context, _audit_entry), do: {:error, :invalid_audit_entry}

   @impl true
-  def list_audit_logs(_context, _opts) do
-    {:ok, Store.get(fn state -> Enum.reverse(state.audit_logs) end)}
+  def list_audit_logs(_context, opts) when is_list(opts) do
+    limit = normalize_limit(Keyword.get(opts, :limit, @default_limit))
+    method = normalize_method_filter(Keyword.get(opts, :method))
+    actor_pubkey = Keyword.get(opts, :actor_pubkey)
+
+    logs =
+      Store.get(fn state ->
+        state.audit_logs
+        |> Enum.filter(&matches_filters?(&1, method, actor_pubkey))
+        |> Enum.take(limit)
+      end)
+
+    {:ok, logs}
   end

+  def list_audit_logs(_context, _opts), do: {:error, :invalid_opts}
+
   defp normalize_method(method) when is_binary(method), do: method
   defp normalize_method(method) when is_atom(method), do: Atom.to_string(method)
   defp normalize_method(method), do: inspect(method)

+  defp normalize_limit(limit) when is_integer(limit) and limit > 0, do: min(limit, @max_limit)
+  defp normalize_limit(_limit), do: @default_limit
+
+  defp normalize_method_filter(nil), do: nil
+  defp normalize_method_filter(method), do: normalize_method(method)
+
+  defp matches_method?(_entry, nil), do: true
+
+  defp matches_method?(entry, method) do
+    normalize_method(Map.get(entry, :method) || Map.get(entry, "method")) == method
+  end
+
+  defp matches_actor_pubkey?(_entry, nil), do: true
+
+  defp matches_actor_pubkey?(entry, actor_pubkey) do
+    Map.get(entry, :actor_pubkey) == actor_pubkey or
+      Map.get(entry, "actor_pubkey") == actor_pubkey
+  end
+
+  defp matches_filters?(entry, method, actor_pubkey) do
+    matches_method?(entry, method) and matches_actor_pubkey?(entry, actor_pubkey)
+  end
 end
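With the new options, callers can bound and narrow in-memory audit-log reads. A usage sketch — the `"ban_pubkey"` method name and `admin_pubkey` variable are hypothetical illustrations, not identifiers from this codebase:

```elixir
# Fetch up to 50 entries for one method, recorded by one admin pubkey.
# :limit is clamped to @max_limit (1_000); a non-keyword-list opts value
# returns {:error, :invalid_opts}.
{:ok, logs} =
  Parrhesia.Storage.Adapters.Memory.Admin.list_audit_logs(nil,
    limit: 50,
    method: "ban_pubkey",
    actor_pubkey: admin_pubkey
  )
```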


@@ -12,71 +12,75 @@ defmodule Parrhesia.Storage.Adapters.Memory.Events do
   def put_event(_context, event) do
     event_id = Map.fetch!(event, "id")

-    result =
-      Store.get_and_update(fn state ->
-        if Map.has_key?(state.events, event_id) do
-          {{:error, :duplicate_event}, state}
-        else
-          next_state = put_in(state.events[event_id], event)
-          {{:ok, event}, next_state}
-        end
-      end)
-
-    result
+    case Store.put_event(event_id, event) do
+      :ok -> {:ok, event}
+      {:error, :duplicate_event} -> {:error, :duplicate_event}
+    end
   end

   @impl true
   def get_event(_context, event_id) do
-    deleted? = Store.get(fn state -> MapSet.member?(state.deleted, event_id) end)
-
-    if deleted? do
-      {:ok, nil}
-    else
-      {:ok, Store.get(fn state -> Map.get(state.events, event_id) end)}
-    end
+    case Store.get_event(event_id) do
+      {:ok, _event, true} -> {:ok, nil}
+      {:ok, event, false} -> {:ok, event}
+      :error -> {:ok, nil}
+    end
   end

   @impl true
   def query(_context, filters, opts) do
     with :ok <- Filter.validate_filters(filters) do
-      state = Store.get(& &1)
       requester_pubkeys = Keyword.get(opts, :requester_pubkeys, [])

       events =
-        state.events
-        |> Map.values()
-        |> Enum.filter(fn event ->
-          not MapSet.member?(state.deleted, event["id"]) and
-            Filter.matches_any?(event, filters) and
-            giftwrap_visible_to_requester?(event, requester_pubkeys)
-        end)
+        filters
+        |> Enum.flat_map(&matching_events_for_filter(&1, requester_pubkeys, opts))
+        |> deduplicate_events()
+        |> sort_events()
+        |> maybe_apply_query_limit(opts)

       {:ok, events}
     end
   end

   @impl true
-  def query_event_refs(context, filters, opts) do
-    with {:ok, events} <- query(context, filters, opts) do
+  def query_event_refs(_context, filters, opts) do
+    with :ok <- Filter.validate_filters(filters) do
+      requester_pubkeys = Keyword.get(opts, :requester_pubkeys, [])
+      query_opts = Keyword.put(opts, :apply_filter_limits?, false)
+
+      {_, refs} =
+        reduce_unique_matching_events(
+          filters,
+          requester_pubkeys,
+          query_opts,
+          {MapSet.new(), []},
+          &append_unique_event_ref/2
+        )
+
       refs =
-        events
-        |> Enum.map(fn event ->
-          %{
-            created_at: Map.fetch!(event, "created_at"),
-            id: Base.decode16!(Map.fetch!(event, "id"), case: :mixed)
-          }
-        end)
-        |> Enum.sort(&(compare_event_refs(&1, &2) != :gt))
-        |> maybe_limit_event_refs(opts)
+        refs |> Enum.sort(&(compare_event_refs(&1, &2) != :gt)) |> maybe_limit_event_refs(opts)

       {:ok, refs}
     end
   end

   @impl true
-  def count(context, filters, opts) do
-    with {:ok, events} <- query(context, filters, opts) do
-      {:ok, length(events)}
+  def count(_context, filters, opts) do
+    with :ok <- Filter.validate_filters(filters) do
+      requester_pubkeys = Keyword.get(opts, :requester_pubkeys, [])
+      query_opts = Keyword.put(opts, :apply_filter_limits?, false)
+
+      {_seen_ids, count} =
+        reduce_unique_matching_events(
+          filters,
+          requester_pubkeys,
+          query_opts,
+          {MapSet.new(), 0},
+          &count_unique_event/2
+        )
+
+      {:ok, count}
     end
   end
@@ -107,22 +111,14 @@ defmodule Parrhesia.Storage.Adapters.Memory.Events do
     end)

     coordinate_delete_ids =
-      Store.get(fn state ->
-        state.events
-        |> Map.values()
-        |> Enum.filter(fn candidate ->
-          matches_delete_coordinate?(candidate, delete_coordinates, deleter_pubkey)
-        end)
-        |> Enum.map(& &1["id"])
-      end)
+      delete_coordinates
+      |> coordinate_delete_candidates(deleter_pubkey)
+      |> Enum.filter(&matches_delete_coordinate?(&1, delete_coordinates, deleter_pubkey))
+      |> Enum.map(& &1["id"])

     all_delete_ids = Enum.uniq(delete_event_ids ++ coordinate_delete_ids)

-    Store.update(fn state ->
-      Enum.reduce(all_delete_ids, state, fn event_id, acc ->
-        update_in(acc.deleted, &MapSet.put(&1, event_id))
-      end)
-    end)
+    Enum.each(all_delete_ids, &Store.mark_deleted/1)

     {:ok, length(all_delete_ids)}
   end
@@ -132,18 +128,11 @@ defmodule Parrhesia.Storage.Adapters.Memory.Events do
     pubkey = Map.get(event, "pubkey")

     deleted_ids =
-      Store.get(fn state ->
-        state.events
-        |> Map.values()
-        |> Enum.filter(fn candidate -> candidate["pubkey"] == pubkey end)
-        |> Enum.map(& &1["id"])
-      end)
+      pubkey
+      |> vanish_candidates(Map.get(event, "created_at"))
+      |> Enum.map(& &1["id"])

-    Store.update(fn state ->
-      Enum.reduce(deleted_ids, state, fn event_id, acc ->
-        update_in(acc.deleted, &MapSet.put(&1, event_id))
-      end)
-    end)
+    Enum.each(deleted_ids, &Store.mark_deleted/1)

     {:ok, length(deleted_ids)}
   end
@@ -224,4 +213,311 @@ defmodule Parrhesia.Storage.Adapters.Memory.Events do
       _other -> refs
     end
   end
defp matching_events_for_filter(filter, requester_pubkeys, opts) do
cond do
Map.has_key?(filter, "ids") ->
direct_id_lookup_events(filter, requester_pubkeys, opts)
indexed_candidate_spec(filter) != nil ->
indexed_tag_lookup_events(filter, requester_pubkeys, opts)
true ->
scan_filter_matches(filter, requester_pubkeys, opts)
end
end
defp direct_id_lookup_events(filter, requester_pubkeys, opts) do
filter
|> Map.get("ids", [])
|> Enum.reduce([], fn event_id, acc ->
maybe_prepend_direct_lookup_match(acc, event_id, filter, requester_pubkeys)
end)
|> deduplicate_events()
|> sort_events()
|> maybe_take_filter_limit(filter, opts)
end
defp scan_filter_matches(filter, requester_pubkeys, opts) do
limit =
if Keyword.get(opts, :apply_filter_limits?, true) do
effective_filter_limit(filter, opts)
else
nil
end
{matches, _count} =
Store.reduce_events_newest(
{[], 0},
&reduce_scan_match(&1, &2, filter, requester_pubkeys, limit)
)
matches
|> Enum.reverse()
|> sort_events()
end
defp indexed_tag_lookup_events(filter, requester_pubkeys, opts) do
filter
|> indexed_candidate_events()
|> Enum.filter(&filter_match_visible?(&1, filter, requester_pubkeys))
|> maybe_take_filter_limit(filter, opts)
end
defp indexed_tag_filter(filter) do
filter
|> Enum.filter(fn
{"#" <> _tag_name, values} when is_list(values) -> values != []
_entry -> false
end)
|> Enum.sort_by(fn {key, _values} -> key end)
|> List.first()
|> case do
{"#" <> tag_name, values} -> {tag_name, values}
nil -> nil
end
end
defp indexed_candidate_spec(filter) do
authors = Map.get(filter, "authors")
kinds = Map.get(filter, "kinds")
tag_filter = indexed_tag_filter(filter)
cond do
is_tuple(tag_filter) ->
{tag_name, tag_values} = tag_filter
{:tag, tag_name, effective_indexed_tag_values(filter, tag_values)}
is_list(authors) and is_list(kinds) ->
{:pubkey_kind, authors, kinds}
is_list(authors) ->
{:pubkey, authors}
is_list(kinds) ->
{:kind, kinds}
true ->
nil
end
end
defp indexed_candidate_events(filter) do
case indexed_candidate_spec(filter) do
{:tag, tag_name, tag_values} ->
Store.tagged_events(tag_name, tag_values)
{:pubkey_kind, authors, kinds} ->
Store.events_by_pubkeys_and_kinds(authors, kinds)
{:pubkey, authors} ->
Store.events_by_pubkeys(authors)
{:kind, kinds} ->
Store.events_by_kinds(kinds)
nil ->
[]
end
end
defp effective_indexed_tag_values(filter, tag_values) do
case Map.get(filter, "limit") do
limit when is_integer(limit) and limit == 1 ->
Enum.take(tag_values, 1)
_other ->
tag_values
end
end
defp filter_match_visible?(event, filter, requester_pubkeys) do
Filter.matches_filter?(event, filter) and
giftwrap_visible_to_requester?(event, requester_pubkeys)
end
defp maybe_prepend_direct_lookup_match(acc, event_id, filter, requester_pubkeys) do
case Store.get_event(event_id) do
{:ok, event, false} ->
if filter_match_visible?(event, filter, requester_pubkeys) do
[event | acc]
else
acc
end
_other ->
acc
end
end
defp reduce_scan_match(event, {acc, count}, filter, requester_pubkeys, limit) do
if filter_match_visible?(event, filter, requester_pubkeys) do
maybe_halt_scan([event | acc], count + 1, limit)
else
{acc, count}
end
end
defp maybe_halt_scan(acc, count, limit) when is_integer(limit) and count >= limit do
{:halt, {acc, count}}
end
defp maybe_halt_scan(acc, count, _limit), do: {acc, count}
defp reduce_unique_matching_events(filters, requester_pubkeys, opts, acc, reducer) do
Enum.reduce(filters, acc, fn filter, current_acc ->
reduce_matching_events_for_filter(filter, requester_pubkeys, opts, current_acc, reducer)
end)
end
defp reduce_matching_events_for_filter(filter, requester_pubkeys, _opts, acc, reducer) do
cond do
Map.has_key?(filter, "ids") ->
filter
|> Map.get("ids", [])
|> Enum.reduce(acc, &reduce_event_id_match(&1, filter, requester_pubkeys, &2, reducer))
indexed_candidate_spec(filter) != nil ->
filter
|> indexed_candidate_events()
|> Enum.reduce(
acc,
&maybe_reduce_visible_event(&1, filter, requester_pubkeys, &2, reducer)
)
true ->
Store.reduce_events_newest(
acc,
&maybe_reduce_visible_event(&1, filter, requester_pubkeys, &2, reducer)
)
end
end
defp coordinate_delete_candidates(delete_coordinates, deleter_pubkey) do
delete_coordinates
|> Enum.flat_map(fn coordinate ->
cond do
coordinate.pubkey != deleter_pubkey ->
[]
addressable_kind?(coordinate.kind) ->
Store.events_by_addresses([{coordinate.kind, deleter_pubkey, coordinate.d_tag}])
replaceable_kind?(coordinate.kind) ->
Store.events_by_pubkeys_and_kinds([deleter_pubkey], [coordinate.kind])
true ->
[]
end
end)
|> deduplicate_events()
end
defp vanish_candidates(pubkey, created_at) do
own_events =
Store.events_by_pubkeys([pubkey])
|> Enum.filter(&(&1["created_at"] <= created_at))
giftwrap_events =
Store.tagged_events("p", [pubkey])
|> Enum.filter(&(&1["kind"] == 1059 and &1["created_at"] <= created_at))
deduplicate_events(own_events ++ giftwrap_events)
end
defp event_ref(event) do
%{
created_at: Map.fetch!(event, "created_at"),
id: Base.decode16!(Map.fetch!(event, "id"), case: :mixed)
}
end
defp append_unique_event_ref(event, {seen_ids, acc}) do
reduce_unique_event(event, {seen_ids, acc}, fn _event_id, next_seen_ids ->
{next_seen_ids, [event_ref(event) | acc]}
end)
end
defp count_unique_event(event, {seen_ids, acc}) do
reduce_unique_event(event, {seen_ids, acc}, fn _event_id, next_seen_ids ->
{next_seen_ids, acc + 1}
end)
end
defp reduce_unique_event(event, {seen_ids, acc}, fun) do
event_id = Map.fetch!(event, "id")
if MapSet.member?(seen_ids, event_id) do
{seen_ids, acc}
else
fun.(event_id, MapSet.put(seen_ids, event_id))
end
end
defp maybe_reduce_visible_event(event, filter, requester_pubkeys, acc, reducer) do
if filter_match_visible?(event, filter, requester_pubkeys) do
reducer.(event, acc)
else
acc
end
end
defp reduce_event_id_match(event_id, filter, requester_pubkeys, acc, reducer) do
case Store.get_event(event_id) do
{:ok, event, false} ->
maybe_reduce_visible_event(event, filter, requester_pubkeys, acc, reducer)
_other ->
acc
end
end
defp deduplicate_events(events) do
events
|> Enum.reduce(%{}, fn event, acc -> Map.put(acc, event["id"], event) end)
|> Map.values()
end
defp sort_events(events) do
Enum.sort(events, &chronological_sorter/2)
end
defp chronological_sorter(left, right) do
cond do
left["created_at"] > right["created_at"] -> true
left["created_at"] < right["created_at"] -> false
true -> left["id"] < right["id"]
end
end
defp maybe_apply_query_limit(events, opts) do
case Keyword.get(opts, :limit) do
limit when is_integer(limit) and limit > 0 -> Enum.take(events, limit)
_other -> events
end
end
defp maybe_take_filter_limit(events, filter, opts) do
case effective_filter_limit(filter, opts) do
limit when is_integer(limit) and limit > 0 -> Enum.take(events, limit)
_other -> events
end
end
defp effective_filter_limit(filter, opts) do
max_filter_limit = Keyword.get(opts, :max_filter_limit)
case Map.get(filter, "limit") do
limit
when is_integer(limit) and limit > 0 and is_integer(max_filter_limit) and
max_filter_limit > 0 ->
min(limit, max_filter_limit)
limit when is_integer(limit) and limit > 0 ->
limit
_other ->
nil
end
end
end
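The query path merges per-filter matches, deduplicates by event id, then sorts newest-first with ascending id as the tiebreaker. That core ordering logic is small enough to sketch standalone, outside the adapter:

```elixir
events = [
  %{"id" => "b", "created_at" => 10},
  %{"id" => "a", "created_at" => 10},
  # Duplicate id "b": the map-based pass keeps only one copy per id.
  %{"id" => "b", "created_at" => 10},
  %{"id" => "c", "created_at" => 20}
]

deduplicated =
  events
  |> Enum.reduce(%{}, fn event, acc -> Map.put(acc, event["id"], event) end)
  |> Map.values()

sorted =
  Enum.sort(deduplicated, fn left, right ->
    cond do
      left["created_at"] > right["created_at"] -> true
      left["created_at"] < right["created_at"] -> false
      # Equal timestamps: deterministic tiebreak on ascending id.
      true -> left["id"] < right["id"]
    end
  end)

IO.inspect(Enum.map(sorted, & &1["id"]))
# newest first, ties by id: ["c", "a", "b"]
```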


@@ -4,10 +4,15 @@ defmodule Parrhesia.Storage.Adapters.Memory.Store do
   use Agent

   @name __MODULE__

+  @events_table :parrhesia_memory_events
+  @events_by_time_table :parrhesia_memory_events_by_time
+  @events_by_tag_table :parrhesia_memory_events_by_tag
+  @events_by_pubkey_table :parrhesia_memory_events_by_pubkey
+  @events_by_kind_table :parrhesia_memory_events_by_kind
+  @events_by_pubkey_kind_table :parrhesia_memory_events_by_pubkey_kind
+  @events_by_address_table :parrhesia_memory_events_by_address

   @initial_state %{
-    events: %{},
-    deleted: MapSet.new(),
     bans: %{pubkeys: MapSet.new(), events: MapSet.new(), ips: MapSet.new()},
     allowed_pubkeys: MapSet.new(),
     acl_rules: [],
@@ -17,22 +22,142 @@ defmodule Parrhesia.Storage.Adapters.Memory.Store do
     audit_logs: []
   }

-  def ensure_started do
-    if Process.whereis(@name) do
-      :ok
-    else
-      start_store()
-    end
-  end
-
-  defp start_store do
-    case Agent.start_link(fn -> @initial_state end, name: @name) do
-      {:ok, _pid} -> :ok
-      {:error, {:already_started, _pid}} -> :ok
-      {:error, reason} -> {:error, reason}
-    end
-  end
+  def ensure_started, do: start_store()
+
+  def put_event(event_id, event) when is_binary(event_id) and is_map(event) do
+    :ok = ensure_started()
+
+    if :ets.insert_new(@events_table, {event_id, event, false}) do
+      true = :ets.insert(@events_by_time_table, {{sort_key(event), event_id}, event_id})
+      index_event_tags(event_id, event)
+      index_event_secondary_keys(event_id, event)
+      :ok
+    else
+      {:error, :duplicate_event}
+    end
+  end
+
+  def get_event(event_id) when is_binary(event_id) do
+    :ok = ensure_started()
+
+    case :ets.lookup(@events_table, event_id) do
+      [{^event_id, event, deleted?}] -> {:ok, event, deleted?}
+      [] -> :error
+    end
+  end
def mark_deleted(event_id) when is_binary(event_id) do
:ok = ensure_started()
case lookup_event(event_id) do
{:ok, event, false} ->
true = :ets.insert(@events_table, {event_id, event, true})
true = :ets.delete(@events_by_time_table, {sort_key(event), event_id})
unindex_event_tags(event_id, event)
unindex_event_secondary_keys(event_id, event)
:ok
{:ok, _event, true} ->
:ok
:error ->
:ok
end
end
def reduce_events(acc, fun) when is_function(fun, 2) do
:ok = ensure_started()
:ets.foldl(
fn {_event_id, event, deleted?}, current_acc ->
if deleted? do
current_acc
else
fun.(event, current_acc)
end
end,
acc,
@events_table
)
end
def reduce_events_newest(acc, fun) when is_function(fun, 2) do
:ok = ensure_started()
reduce_events_newest_from(:ets.first(@events_by_time_table), acc, fun)
end
def tagged_events(tag_name, tag_values) when is_binary(tag_name) and is_list(tag_values) do
:ok = ensure_started()
tag_values
|> Enum.flat_map(&indexed_events_for_value(@events_by_tag_table, {tag_name, &1}))
|> sort_and_deduplicate_events()
end
def events_by_pubkeys(pubkeys) when is_list(pubkeys) do
:ok = ensure_started()
pubkeys
|> Enum.flat_map(&indexed_events_for_value(@events_by_pubkey_table, &1))
|> sort_and_deduplicate_events()
end
def events_by_kinds(kinds) when is_list(kinds) do
:ok = ensure_started()
kinds
|> Enum.flat_map(&indexed_events_for_value(@events_by_kind_table, &1))
|> sort_and_deduplicate_events()
end
def events_by_pubkeys_and_kinds(pubkeys, kinds) when is_list(pubkeys) and is_list(kinds) do
:ok = ensure_started()
pubkeys
|> Enum.flat_map(fn pubkey ->
kinds
|> Enum.flat_map(&indexed_events_for_value(@events_by_pubkey_kind_table, {pubkey, &1}))
end)
|> sort_and_deduplicate_events()
end
def events_by_addresses(addresses) when is_list(addresses) do
:ok = ensure_started()
addresses
|> Enum.flat_map(&indexed_events_for_value(@events_by_address_table, &1))
|> sort_and_deduplicate_events()
end
defp reduce_events_newest_from(:"$end_of_table", acc, _fun), do: acc
defp reduce_events_newest_from(key, acc, fun) do
next_key = :ets.next(@events_by_time_table, key)
acc = reduce_indexed_event(key, acc, fun)
case acc do
{:halt, final_acc} -> final_acc
next_acc -> reduce_events_newest_from(next_key, next_acc, fun)
end
end
defp reduce_indexed_event(key, acc, fun) do
case :ets.lookup(@events_by_time_table, key) do
[{^key, event_id}] -> apply_reduce_fun(event_id, acc, fun)
[] -> acc
end
end
defp apply_reduce_fun(event_id, acc, fun) do
case lookup_event(event_id) do
{:ok, event, false} -> normalize_reduce_result(fun.(event, acc))
_other -> acc
end
end
defp normalize_reduce_result({:halt, next_acc}), do: {:halt, next_acc}
defp normalize_reduce_result(next_acc), do: next_acc
  def get(fun) do
    :ok = ensure_started()
    Agent.get(@name, fun)
@@ -47,4 +172,208 @@ defmodule Parrhesia.Storage.Adapters.Memory.Store do
    :ok = ensure_started()
    Agent.get_and_update(@name, fun)
  end
defp start_store do
case Agent.start_link(&init_state/0, name: @name) do
{:ok, _pid} -> :ok
{:error, {:already_started, _pid}} -> :ok
{:error, reason} -> {:error, reason}
end
end
defp init_state do
ensure_tables_started()
@initial_state
end
defp ensure_tables_started do
ensure_table(@events_table, [
:named_table,
:public,
:set,
read_concurrency: true,
write_concurrency: true
])
ensure_table(@events_by_time_table, [
:named_table,
:public,
:ordered_set,
read_concurrency: true,
write_concurrency: true
])
ensure_table(@events_by_tag_table, [
:named_table,
:public,
:bag,
read_concurrency: true,
write_concurrency: true
])
ensure_table(@events_by_pubkey_table, [
:named_table,
:public,
:bag,
read_concurrency: true,
write_concurrency: true
])
ensure_table(@events_by_kind_table, [
:named_table,
:public,
:bag,
read_concurrency: true,
write_concurrency: true
])
ensure_table(@events_by_pubkey_kind_table, [
:named_table,
:public,
:bag,
read_concurrency: true,
write_concurrency: true
])
ensure_table(@events_by_address_table, [
:named_table,
:public,
:bag,
read_concurrency: true,
write_concurrency: true
])
end
defp ensure_table(name, options) do
case :ets.whereis(name) do
:undefined -> :ets.new(name, options)
_table -> :ok
end
end
defp lookup_event(event_id) do
case :ets.lookup(@events_table, event_id) do
[{^event_id, event, deleted?}] -> {:ok, event, deleted?}
[] -> :error
end
end
defp index_event_tags(event_id, event) do
event
|> event_tag_index_entries(event_id)
|> Enum.each(fn entry ->
true = :ets.insert(@events_by_tag_table, entry)
end)
end
defp index_event_secondary_keys(event_id, event) do
event
|> secondary_index_entries(event_id)
|> Enum.each(fn {table, entry} ->
true = :ets.insert(table, entry)
end)
end
defp unindex_event_tags(event_id, event) do
event
|> event_tag_index_entries(event_id)
|> Enum.each(&:ets.delete_object(@events_by_tag_table, &1))
end
defp unindex_event_secondary_keys(event_id, event) do
event
|> secondary_index_entries(event_id)
|> Enum.each(fn {table, entry} ->
:ets.delete_object(table, entry)
end)
end
defp event_tag_index_entries(event, event_id) do
created_sort_key = sort_key(event)
event
|> Map.get("tags", [])
|> Enum.flat_map(fn
[tag_name, tag_value | _rest] when is_binary(tag_name) and is_binary(tag_value) ->
[{{tag_name, tag_value}, created_sort_key, event_id}]
_tag ->
[]
end)
end
defp secondary_index_entries(event, event_id) do
created_sort_key = sort_key(event)
pubkey = Map.get(event, "pubkey")
kind = Map.get(event, "kind")
[]
|> maybe_put_secondary_entry(@events_by_pubkey_table, pubkey, created_sort_key, event_id)
|> maybe_put_secondary_entry(@events_by_kind_table, kind, created_sort_key, event_id)
|> maybe_put_pubkey_kind_entry(pubkey, kind, created_sort_key, event_id)
|> maybe_put_address_entry(event, pubkey, kind, event_id)
end
defp maybe_put_secondary_entry(entries, _table, key, _created_sort_key, _event_id)
when is_nil(key),
do: entries
defp maybe_put_secondary_entry(entries, table, key, created_sort_key, event_id) do
[{table, {key, created_sort_key, event_id}} | entries]
end
defp maybe_put_pubkey_kind_entry(entries, pubkey, kind, created_sort_key, event_id)
when is_binary(pubkey) and is_integer(kind) do
[{@events_by_pubkey_kind_table, {{pubkey, kind}, created_sort_key, event_id}} | entries]
end
defp maybe_put_pubkey_kind_entry(entries, _pubkey, _kind, _created_sort_key, _event_id),
do: entries
defp maybe_put_address_entry(entries, event, pubkey, kind, event_id)
when is_binary(pubkey) and is_integer(kind) and kind >= 30_000 and kind < 40_000 do
d_tag =
event
|> Map.get("tags", [])
|> Enum.find_value("", fn
["d", value | _rest] -> value
_tag -> nil
end)
[{@events_by_address_table, {{kind, pubkey, d_tag}, sort_key(event), event_id}} | entries]
end
defp maybe_put_address_entry(entries, _event, _pubkey, _kind, _event_id), do: entries
defp indexed_events_for_value(_table, value)
when not is_binary(value) and not is_integer(value) and not is_tuple(value),
do: []
defp indexed_events_for_value(table, value) do
table
|> :ets.lookup(value)
|> Enum.reduce([], fn {^value, _created_sort_key, event_id}, acc ->
case lookup_event(event_id) do
{:ok, event, false} -> [event | acc]
_other -> acc
end
end)
end
defp sort_and_deduplicate_events(events) do
events
|> Enum.uniq_by(& &1["id"])
|> Enum.sort(&chronological_sorter/2)
end
defp chronological_sorter(left, right) do
cond do
left["created_at"] > right["created_at"] -> true
left["created_at"] < right["created_at"] -> false
true -> left["id"] < right["id"]
end
end
defp sort_key(event), do: -Map.get(event, "created_at", 0)
end
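The `sort_key/1` helper stores the negated `created_at`, so the `:ordered_set` time index naturally iterates newest-first via `:ets.first/1` and `:ets.next/2`, with the event id breaking timestamp ties. A standalone demonstration of the ordering trick:

```elixir
table = :ets.new(:demo_events_by_time, [:ordered_set, :public])

# Keys are {-created_at, event_id}: smaller keys sort first, so the most
# recent event carries the smallest (most negative) first element.
:ets.insert(table, {{-100, "older"}, "older"})
:ets.insert(table, {{-200, "newer"}, "newer"})
:ets.insert(table, {{-200, "tie"}, "tie"})

{_neg_created_at, first_id} = :ets.first(table)
IO.inspect(first_id)
# "newer" — {-200, "newer"} sorts before {-200, "tie"} and {-100, "older"}
```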


@@ -5,6 +5,7 @@ defmodule Parrhesia.Storage.Adapters.Postgres.ACL do
   import Ecto.Query

+  alias Parrhesia.PostgresRepos
   alias Parrhesia.Repo

   @behaviour Parrhesia.Storage.ACL
@@ -74,7 +75,8 @@ defmodule Parrhesia.Storage.Adapters.Postgres.ACL do
       |> maybe_filter_principal(Keyword.get(opts, :principal))
       |> maybe_filter_capability(Keyword.get(opts, :capability))

-    {:ok, Enum.map(Repo.all(query), &normalize_persisted_rule/1)}
+    repo = read_repo()
+    {:ok, Enum.map(repo.all(query), &normalize_persisted_rule/1)}
   end

   def list_rules(_context, _opts), do: {:error, :invalid_opts}
@@ -133,12 +135,16 @@ defmodule Parrhesia.Storage.Adapters.Postgres.ACL do
       }
     )

-    case Repo.one(query) do
+    repo = read_repo()
+
+    case repo.one(query) do
       nil -> nil
       stored_rule -> normalize_persisted_rule(stored_rule)
     end
   end

+  defp read_repo, do: PostgresRepos.read()
+
   defp insert_rule(normalized_rule) do
     now = DateTime.utc_now() |> DateTime.truncate(:microsecond)


@@ -5,6 +5,7 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Admin do
   import Ecto.Query

+  alias Parrhesia.PostgresRepos
   alias Parrhesia.Repo

   @behaviour Parrhesia.Storage.Admin
@@ -73,8 +74,8 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Admin do
       |> maybe_filter_actor_pubkey(Keyword.get(opts, :actor_pubkey))

     logs =
-      query
-      |> Repo.all()
+      read_repo()
+      |> then(fn repo -> repo.all(query) end)
       |> Enum.map(&to_audit_log_map/1)

     {:ok, logs}
@@ -83,11 +84,12 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Admin do
   def list_audit_logs(_context, _opts), do: {:error, :invalid_opts}

   defp relay_stats do
-    events_count = Repo.aggregate("events", :count, :id)
-    banned_pubkeys = Repo.aggregate("banned_pubkeys", :count, :pubkey)
-    allowed_pubkeys = Repo.aggregate("allowed_pubkeys", :count, :pubkey)
-    blocked_ips = Repo.aggregate("blocked_ips", :count, :ip)
-    acl_rules = Repo.aggregate("acl_rules", :count, :id)
+    repo = read_repo()
+    events_count = repo.aggregate("events", :count, :id)
+    banned_pubkeys = repo.aggregate("banned_pubkeys", :count, :pubkey)
+    allowed_pubkeys = repo.aggregate("allowed_pubkeys", :count, :pubkey)
+    blocked_ips = repo.aggregate("blocked_ips", :count, :ip)
+    acl_rules = repo.aggregate("acl_rules", :count, :id)

     %{
       "events" => events_count,
@@ -234,6 +236,8 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Admin do
   defp normalize_pubkey(_value), do: {:error, :invalid_actor_pubkey}

+  defp read_repo, do: PostgresRepos.read()
+
   defp invalid_key_reason(:params), do: :invalid_params
   defp invalid_key_reason(:result), do: :invalid_result

View File

@@ -5,10 +5,16 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   import Ecto.Query

+  alias Parrhesia.PostgresRepos
   alias Parrhesia.Protocol.Filter
   alias Parrhesia.Repo

   @behaviour Parrhesia.Storage.Events

+  @trigram_fallback_max_single_term_length 4
+  @trigram_fallback_pattern ~r/[^\p{L}\p{N}\s"]/u
+  @fts_match_fragment "to_tsvector('simple', ?) @@ websearch_to_tsquery('simple', ?)"
+  @fts_rank_fragment "ts_rank_cd(to_tsvector('simple', ?), websearch_to_tsquery('simple', ?))"
+  @trigram_rank_fragment "word_similarity(lower(?), lower(?))"
+
   @type normalized_event :: %{
           id: binary(),
@@ -62,7 +68,9 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
         }
       )

-    case Repo.one(event_query) do
+    repo = read_repo()
+
+    case repo.one(event_query) do
       nil ->
         {:ok, nil}
@@ -76,16 +84,17 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   def query(_context, filters, opts) when is_list(opts) do
     with :ok <- Filter.validate_filters(filters) do
       now = Keyword.get(opts, :now, System.system_time(:second))
+      repo = read_repo()

       persisted_events =
         filters
         |> Enum.flat_map(fn filter ->
           filter
           |> event_query_for_filter(now, opts)
-          |> Repo.all()
+          |> repo.all()
         end)
         |> deduplicate_events()
-        |> sort_persisted_events()
+        |> sort_persisted_events(filters)
         |> maybe_apply_query_limit(opts)

       {:ok, Enum.map(persisted_events, &to_nostr_event/1)}
@@ -360,30 +369,7 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   defp maybe_upsert_replaceable_state(normalized_event, now, deleted_at) do
     if replaceable_kind?(normalized_event.kind) do
-      lookup_query =
-        from(state in "replaceable_event_state",
-          where:
-            state.pubkey == ^normalized_event.pubkey and state.kind == ^normalized_event.kind,
-          select: %{event_created_at: state.event_created_at, event_id: state.event_id}
-        )
-
-      update_query =
-        from(state in "replaceable_event_state",
-          where:
-            state.pubkey == ^normalized_event.pubkey and
-              state.kind == ^normalized_event.kind
-        )
-
-      upsert_state_table(
-        "replaceable_event_state",
-        lookup_query,
-        update_query,
-        replaceable_state_row(normalized_event, now),
-        normalized_event,
-        now,
-        deleted_at,
-        :replaceable_state_update_failed
-      )
+      upsert_replaceable_state_table(normalized_event, now, deleted_at)
     else
       :ok
     end
@@ -391,157 +377,92 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   defp maybe_upsert_addressable_state(normalized_event, now, deleted_at) do
     if addressable_kind?(normalized_event.kind) do
-      lookup_query =
-        from(state in "addressable_event_state",
-          where:
-            state.pubkey == ^normalized_event.pubkey and
-              state.kind == ^normalized_event.kind and
-              state.d_tag == ^normalized_event.d_tag,
-          select: %{event_created_at: state.event_created_at, event_id: state.event_id}
-        )
-
-      update_query =
-        from(state in "addressable_event_state",
-          where:
-            state.pubkey == ^normalized_event.pubkey and
-              state.kind == ^normalized_event.kind and
-              state.d_tag == ^normalized_event.d_tag
-        )
-
-      upsert_state_table(
-        "addressable_event_state",
-        lookup_query,
-        update_query,
-        addressable_state_row(normalized_event, now),
-        normalized_event,
-        now,
-        deleted_at,
-        :addressable_state_update_failed
-      )
+      upsert_addressable_state_table(normalized_event, now, deleted_at)
     else
       :ok
     end
   end

-  defp upsert_state_table(
-         table_name,
-         lookup_query,
-         update_query,
-         insert_row,
-         normalized_event,
-         now,
-         deleted_at,
-         failure_reason
-       ) do
-    case Repo.one(lookup_query) do
-      nil ->
-        insert_state_or_resolve_race(
-          table_name,
-          lookup_query,
-          update_query,
-          insert_row,
-          normalized_event,
-          now,
-          deleted_at,
-          failure_reason
-        )
-
-      current_state ->
-        maybe_update_state(
-          update_query,
-          normalized_event,
-          current_state,
-          now,
-          deleted_at,
-          failure_reason
-        )
-    end
-  end
-
-  defp insert_state_or_resolve_race(
-         table_name,
-         lookup_query,
-         update_query,
-         insert_row,
-         normalized_event,
-         now,
-         deleted_at,
-         failure_reason
-       ) do
-    case Repo.insert_all(table_name, [insert_row], on_conflict: :nothing) do
-      {1, _result} ->
-        :ok
-
-      {0, _result} ->
-        resolve_state_race(
-          lookup_query,
-          update_query,
-          normalized_event,
-          now,
-          deleted_at,
-          failure_reason
-        )
-
-      {_inserted, _result} ->
-        Repo.rollback(failure_reason)
-    end
-  end
-
-  defp resolve_state_race(
-         lookup_query,
-         update_query,
-         normalized_event,
-         now,
-         deleted_at,
-         failure_reason
-       ) do
-    case Repo.one(lookup_query) do
-      nil ->
-        Repo.rollback(failure_reason)
-
-      current_state ->
-        maybe_update_state(
-          update_query,
-          normalized_event,
-          current_state,
-          now,
-          deleted_at,
-          failure_reason
-        )
-    end
-  end
-
-  defp maybe_update_state(
-         update_query,
-         normalized_event,
-         current_state,
-         now,
-         deleted_at,
-         failure_reason
-       ) do
-    if candidate_wins_state?(normalized_event, current_state) do
-      {updated, _result} =
-        Repo.update_all(update_query,
-          set: [
-            event_created_at: normalized_event.created_at,
-            event_id: normalized_event.id,
-            updated_at: now
-          ]
-        )
-
-      if updated == 1 do
-        retire_event!(
-          current_state.event_created_at,
-          current_state.event_id,
-          deleted_at,
-          failure_reason
-        )
-      else
-        Repo.rollback(failure_reason)
-      end
-    else
-      retire_event!(normalized_event.created_at, normalized_event.id, deleted_at, failure_reason)
-    end
-  end
+  defp upsert_replaceable_state_table(normalized_event, now, deleted_at) do
+    params = [
+      normalized_event.pubkey,
+      normalized_event.kind,
+      normalized_event.created_at,
+      normalized_event.id,
+      now,
+      now
+    ]
+
+    case Repo.query(replaceable_state_upsert_sql(), params) do
+      {:ok, %{rows: [row]}} ->
+        finalize_state_upsert(row, normalized_event, deleted_at, :replaceable_state_update_failed)
+
+      {:ok, _result} ->
+        Repo.rollback(:replaceable_state_update_failed)
+
+      {:error, _reason} ->
+        Repo.rollback(:replaceable_state_update_failed)
+    end
+  end
+
+  defp upsert_addressable_state_table(normalized_event, now, deleted_at) do
+    params = [
+      normalized_event.pubkey,
+      normalized_event.kind,
+      normalized_event.d_tag,
+      normalized_event.created_at,
+      normalized_event.id,
+      now,
+      now
+    ]
+
+    case Repo.query(addressable_state_upsert_sql(), params) do
+      {:ok, %{rows: [row]}} ->
+        finalize_state_upsert(row, normalized_event, deleted_at, :addressable_state_update_failed)
+
+      {:ok, _result} ->
+        Repo.rollback(:addressable_state_update_failed)
+
+      {:error, _reason} ->
+        Repo.rollback(:addressable_state_update_failed)
+    end
+  end
+
+  defp finalize_state_upsert(
+         [retired_event_created_at, retired_event_id, winner_event_created_at, winner_event_id],
+         normalized_event,
+         deleted_at,
+         failure_reason
+       ) do
+    case {winner_event_created_at, winner_event_id} do
+      {created_at, event_id}
+      when created_at == normalized_event.created_at and event_id == normalized_event.id ->
+        maybe_retire_previous_state_event(
+          retired_event_created_at,
+          retired_event_id,
+          deleted_at,
+          failure_reason
+        )
+
+      {_created_at, _event_id} ->
+        retire_event!(
+          normalized_event.created_at,
+          normalized_event.id,
+          deleted_at,
+          failure_reason
+        )
+    end
+  end
+
+  defp maybe_retire_previous_state_event(nil, nil, _deleted_at, _failure_reason), do: :ok
+
+  defp maybe_retire_previous_state_event(
+         retired_event_created_at,
+         retired_event_id,
+         deleted_at,
+         failure_reason
+       ) do
+    retire_event!(retired_event_created_at, retired_event_id, deleted_at, failure_reason)
+  end

   defp retire_event!(event_created_at, event_id, deleted_at, failure_reason) do
@@ -567,27 +488,147 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   defp addressable_kind?(kind), do: kind >= 30_000 and kind < 40_000

-  defp replaceable_state_row(normalized_event, now) do
-    %{
-      pubkey: normalized_event.pubkey,
-      kind: normalized_event.kind,
-      event_created_at: normalized_event.created_at,
-      event_id: normalized_event.id,
-      inserted_at: now,
-      updated_at: now
-    }
-  end
-
-  defp addressable_state_row(normalized_event, now) do
-    %{
-      pubkey: normalized_event.pubkey,
-      kind: normalized_event.kind,
-      d_tag: normalized_event.d_tag,
-      event_created_at: normalized_event.created_at,
-      event_id: normalized_event.id,
-      inserted_at: now,
-      updated_at: now
-    }
-  end
+  defp replaceable_state_upsert_sql do
+    """
+    WITH inserted AS (
+      INSERT INTO replaceable_event_state (
+        pubkey,
+        kind,
+        event_created_at,
+        event_id,
+        inserted_at,
+        updated_at
+      )
+      VALUES ($1, $2, $3, $4, $5, $6)
+      ON CONFLICT (pubkey, kind) DO NOTHING
+      RETURNING
+        NULL::bigint AS retired_event_created_at,
+        NULL::bytea AS retired_event_id,
+        event_created_at AS winner_event_created_at,
+        event_id AS winner_event_id
+    ),
+    updated AS (
+      UPDATE replaceable_event_state AS state
+      SET
+        event_created_at = $3,
+        event_id = $4,
+        updated_at = $6
+      FROM (
+        SELECT current.event_created_at, current.event_id
+        FROM replaceable_event_state AS current
+        WHERE current.pubkey = $1 AND current.kind = $2
+        FOR UPDATE
+      ) AS previous
+      WHERE
+        NOT EXISTS (SELECT 1 FROM inserted)
+        AND state.pubkey = $1
+        AND state.kind = $2
+        AND (
+          state.event_created_at < $3
+          OR (state.event_created_at = $3 AND state.event_id > $4)
+        )
+      RETURNING
+        previous.event_created_at AS retired_event_created_at,
+        previous.event_id AS retired_event_id,
+        state.event_created_at AS winner_event_created_at,
+        state.event_id AS winner_event_id
+    ),
+    current AS (
+      SELECT
+        NULL::bigint AS retired_event_created_at,
+        NULL::bytea AS retired_event_id,
+        state.event_created_at AS winner_event_created_at,
+        state.event_id AS winner_event_id
+      FROM replaceable_event_state AS state
+      WHERE
+        NOT EXISTS (SELECT 1 FROM inserted)
+        AND NOT EXISTS (SELECT 1 FROM updated)
+        AND state.pubkey = $1
+        AND state.kind = $2
+    )
+    SELECT *
+    FROM inserted
+    UNION ALL
+    SELECT *
+    FROM updated
+    UNION ALL
+    SELECT *
+    FROM current
+    LIMIT 1
+    """
+  end
+
+  defp addressable_state_upsert_sql do
+    """
+    WITH inserted AS (
+      INSERT INTO addressable_event_state (
+        pubkey,
+        kind,
+        d_tag,
+        event_created_at,
+        event_id,
+        inserted_at,
+        updated_at
+      )
+      VALUES ($1, $2, $3, $4, $5, $6, $7)
+      ON CONFLICT (pubkey, kind, d_tag) DO NOTHING
+      RETURNING
+        NULL::bigint AS retired_event_created_at,
+        NULL::bytea AS retired_event_id,
+        event_created_at AS winner_event_created_at,
+        event_id AS winner_event_id
+    ),
+    updated AS (
+      UPDATE addressable_event_state AS state
+      SET
+        event_created_at = $4,
+        event_id = $5,
+        updated_at = $7
+      FROM (
+        SELECT current.event_created_at, current.event_id
+        FROM addressable_event_state AS current
+        WHERE current.pubkey = $1 AND current.kind = $2 AND current.d_tag = $3
+        FOR UPDATE
+      ) AS previous
+      WHERE
+        NOT EXISTS (SELECT 1 FROM inserted)
+        AND state.pubkey = $1
+        AND state.kind = $2
+        AND state.d_tag = $3
+        AND (
+          state.event_created_at < $4
+          OR (state.event_created_at = $4 AND state.event_id > $5)
+        )
+      RETURNING
+        previous.event_created_at AS retired_event_created_at,
+        previous.event_id AS retired_event_id,
+        state.event_created_at AS winner_event_created_at,
+        state.event_id AS winner_event_id
+    ),
+    current AS (
+      SELECT
+        NULL::bigint AS retired_event_created_at,
+        NULL::bytea AS retired_event_id,
+        state.event_created_at AS winner_event_created_at,
+        state.event_id AS winner_event_id
+      FROM addressable_event_state AS state
+      WHERE
+        NOT EXISTS (SELECT 1 FROM inserted)
+        AND NOT EXISTS (SELECT 1 FROM updated)
+        AND state.pubkey = $1
+        AND state.kind = $2
+        AND state.d_tag = $3
+    )
+    SELECT *
+    FROM inserted
+    UNION ALL
+    SELECT *
+    FROM updated
+    UNION ALL
+    SELECT *
+    FROM current
+    LIMIT 1
+    """
+  end

   defp event_row(normalized_event, now) do
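Both upsert statements encode the same replacement rule: a candidate event wins the state row when it is strictly newer, or when it ties on `created_at` and has the smaller (binary-comparing) event id — hence `state.event_created_at < $3 OR (... AND state.event_id > $4)`. A pure-Elixir sketch of that predicate, useful for reasoning about the SQL (the module name is illustrative):

```elixir
defmodule StateWinner do
  # Mirrors the WHERE clause of the upsert CTEs: the stored row is replaced
  # when it is older than the candidate, or equally old with a larger event id.
  def candidate_wins?(candidate, current) do
    candidate.created_at > current.created_at or
      (candidate.created_at == current.created_at and candidate.id < current.id)
  end
end
```

Folding this rule into one statement lets the database resolve insert/update races that the old lookup-then-insert-then-update sequence had to retry in application code.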
@@ -607,11 +648,12 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   end

   defp event_query_for_filter(filter, now, opts) do
+    search_plan = search_plan(Map.get(filter, "search"))
     {base_query, remaining_tag_filters} = event_source_query(filter, now)

     base_query
-    |> apply_common_event_filters(filter, remaining_tag_filters, opts)
-    |> order_by([event: event], desc: event.created_at, asc: event.id)
+    |> apply_common_event_filters(filter, remaining_tag_filters, opts, search_plan)
+    |> maybe_order_by_search_rank(search_plan)
     |> select([event: event], %{
       id: event.id,
       pubkey: event.pubkey,
@@ -621,14 +663,16 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
       content: event.content,
       sig: event.sig
     })
+    |> maybe_select_search_score(search_plan)
     |> maybe_limit_query(effective_filter_limit(filter, opts))
   end

   defp event_id_query_for_filter(filter, now, opts) do
+    search_plan = search_plan(Map.get(filter, "search"))
     {base_query, remaining_tag_filters} = event_source_query(filter, now)

     base_query
-    |> apply_common_event_filters(filter, remaining_tag_filters, opts)
+    |> apply_common_event_filters(filter, remaining_tag_filters, opts, search_plan)
     |> select([event: event], event.id)
   end
@@ -647,10 +691,11 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   end

   defp event_ref_query_for_filter(filter, now, opts) do
+    search_plan = search_plan(Map.get(filter, "search"))
     {base_query, remaining_tag_filters} = event_source_query(filter, now)

     base_query
-    |> apply_common_event_filters(filter, remaining_tag_filters, opts)
+    |> apply_common_event_filters(filter, remaining_tag_filters, opts, search_plan)
     |> order_by([event: event], asc: event.created_at, asc: event.id)
     |> select([event: event], %{
       created_at: event.created_at,
@@ -674,13 +719,17 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   end

   defp fetch_event_refs([filter], now, opts) do
-    filter
-    |> event_ref_query_for_filter(now, opts)
-    |> maybe_limit_query(Keyword.get(opts, :limit))
-    |> Repo.all()
+    query =
+      filter
+      |> event_ref_query_for_filter(now, opts)
+      |> maybe_limit_query(Keyword.get(opts, :limit))
+
+    read_repo()
+    |> then(fn repo -> repo.all(query) end)
   end

   defp fetch_event_refs(filters, now, opts) do
-    filters
-    |> event_ref_union_query_for_filters(now, opts)
-    |> subquery()
+    query =
+      filters
+      |> event_ref_union_query_for_filters(now, opts)
+      |> subquery()
@@ -692,27 +741,35 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
         )
       end)
       |> maybe_limit_query(Keyword.get(opts, :limit))
-    |> Repo.all()
+
+    read_repo()
+    |> then(fn repo -> repo.all(query) end)
   end

   defp count_events([filter], now, opts) do
-    filter
-    |> event_id_query_for_filter(now, opts)
-    |> subquery()
-    |> then(fn query ->
-      from(event in query, select: count())
-    end)
-    |> Repo.one()
+    query =
+      filter
+      |> event_id_query_for_filter(now, opts)
+      |> subquery()
+      |> then(fn query ->
+        from(event in query, select: count())
+      end)
+
+    read_repo()
+    |> then(fn repo -> repo.one(query) end)
   end

   defp count_events(filters, now, opts) do
-    filters
-    |> event_id_distinct_union_query_for_filters(now, opts)
-    |> subquery()
-    |> then(fn union_query ->
-      from(event in union_query, select: count())
-    end)
-    |> Repo.one()
+    query =
+      filters
+      |> event_id_distinct_union_query_for_filters(now, opts)
+      |> subquery()
+      |> then(fn union_query ->
+        from(event in union_query, select: count())
+      end)
+
+    read_repo()
+    |> then(fn repo -> repo.one(query) end)
   end

   defp event_source_query(filter, now) do
@@ -744,14 +801,14 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
     end
   end

-  defp apply_common_event_filters(query, filter, remaining_tag_filters, opts) do
+  defp apply_common_event_filters(query, filter, remaining_tag_filters, opts, search_plan) do
     query
     |> maybe_filter_ids(Map.get(filter, "ids"))
     |> maybe_filter_authors(Map.get(filter, "authors"))
     |> maybe_filter_kinds(Map.get(filter, "kinds"))
     |> maybe_filter_since(Map.get(filter, "since"))
     |> maybe_filter_until(Map.get(filter, "until"))
-    |> maybe_filter_search(Map.get(filter, "search"))
+    |> maybe_filter_search(search_plan)
     |> filter_by_tag_filters(remaining_tag_filters)
     |> maybe_restrict_giftwrap_access(filter, opts)
   end
@@ -792,13 +849,19 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   defp maybe_filter_search(query, nil), do: query

-  defp maybe_filter_search(query, search) when is_binary(search) and search != "" do
+  defp maybe_filter_search(query, %{mode: :fts, query: search}) do
+    where(
+      query,
+      [event: event],
+      fragment(@fts_match_fragment, event.content, ^search)
+    )
+  end
+
+  defp maybe_filter_search(query, %{mode: :trigram, query: search}) do
     escaped_search = escape_like_pattern(search)
     where(query, [event: event], ilike(event.content, ^"%#{escaped_search}%"))
   end

-  defp maybe_filter_search(query, _search), do: query
-
   defp escape_like_pattern(search) do
     search
     |> String.replace("\\", "\\\\")
@@ -886,20 +949,90 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   defp maybe_limit_query(query, nil), do: query
   defp maybe_limit_query(query, limit), do: limit(query, ^limit)

+  defp maybe_order_by_search_rank(query, nil) do
+    order_by(query, [event: event], desc: event.created_at, asc: event.id)
+  end
+
+  defp maybe_order_by_search_rank(query, %{mode: :fts, query: search}) do
+    order_by(
+      query,
+      [event: event],
+      desc: fragment(@fts_rank_fragment, event.content, ^search),
+      desc: event.created_at,
+      asc: event.id
+    )
+  end
+
+  defp maybe_order_by_search_rank(query, %{mode: :trigram, query: search}) do
+    order_by(
+      query,
+      [event: event],
+      desc: fragment(@trigram_rank_fragment, ^search, event.content),
+      desc: event.created_at,
+      asc: event.id
+    )
+  end
+
+  defp maybe_select_search_score(query, nil), do: query
+
+  defp maybe_select_search_score(query, %{mode: :fts, query: search}) do
+    select_merge(
+      query,
+      [event: event],
+      %{search_score: fragment(@fts_rank_fragment, event.content, ^search)}
+    )
+  end
+
+  defp maybe_select_search_score(query, %{mode: :trigram, query: search}) do
+    select_merge(
+      query,
+      [event: event],
+      %{search_score: fragment(@trigram_rank_fragment, ^search, event.content)}
+    )
+  end
+
+  defp search_plan(nil), do: nil
+
+  defp search_plan(search) when is_binary(search) do
+    normalized_search = String.trim(search)
+
+    cond do
+      normalized_search == "" ->
+        nil
+
+      trigram_fallback_search?(normalized_search) ->
+        %{mode: :trigram, query: normalized_search}
+
+      true ->
+        %{mode: :fts, query: normalized_search}
+    end
+  end
+
+  defp trigram_fallback_search?(search) do
+    String.match?(search, @trigram_fallback_pattern) or short_single_term_search?(search)
+  end
+
+  defp short_single_term_search?(search) do
+    case String.split(search, ~r/\s+/, trim: true) do
+      [term] -> String.length(term) <= @trigram_fallback_max_single_term_length
+      _other -> false
+    end
+  end
+
   defp deduplicate_events(events) do
     events
-    |> Enum.reduce(%{}, fn event, acc -> Map.put_new(acc, event.id, event) end)
+    |> Enum.reduce(%{}, fn event, acc ->
+      Map.update(acc, event.id, event, fn existing -> preferred_event(existing, event) end)
+    end)
     |> Map.values()
   end

-  defp sort_persisted_events(events) do
-    Enum.sort(events, fn left, right ->
-      cond do
-        left.created_at > right.created_at -> true
-        left.created_at < right.created_at -> false
-        true -> left.id < right.id
-      end
-    end)
+  defp sort_persisted_events(events, filters) do
+    if Enum.any?(filters, &search_filter?/1) do
+      Enum.sort(events, &search_result_sorter/2)
+    else
+      Enum.sort(events, &chronological_sorter/2)
+    end
   end

   defp maybe_apply_query_limit(events, opts) do
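The plan selection above routes a NIP-50 query to trigram matching whenever websearch-style FTS is likely to behave poorly: queries containing punctuation other than quotes, or a single term of at most four characters. A standalone sketch of the same decision (module name illustrative, heuristics copied from the adapter):

```elixir
defmodule SearchPlanSketch do
  # Same heuristics as the adapter: empty -> nil, short single term or
  # non-word punctuation -> trigram fallback, everything else -> Postgres FTS.
  @max_single_term 4
  @fallback_pattern ~r/[^\p{L}\p{N}\s"]/u

  def plan(nil), do: nil

  def plan(search) when is_binary(search) do
    case String.trim(search) do
      "" ->
        nil

      normalized ->
        if Regex.match?(@fallback_pattern, normalized) or short_single_term?(normalized) do
          %{mode: :trigram, query: normalized}
        else
          %{mode: :fts, query: normalized}
        end
    end
  end

  defp short_single_term?(search) do
    case String.split(search, ~r/\s+/, trim: true) do
      [term] -> String.length(term) <= @max_single_term
      _other -> false
    end
  end
end
```

So `"npub"` (short single term) and `"foo-bar"` (punctuation) fall back to trigram similarity, while `"hello world"` goes through `websearch_to_tsquery`.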
@@ -921,6 +1054,50 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
     }
   end

+  defp preferred_event(existing, candidate) do
+    if search_result_sorter(candidate, existing) do
+      candidate
+    else
+      existing
+    end
+  end
+
+  defp search_filter?(filter) do
+    filter
+    |> Map.get("search")
+    |> search_plan()
+    |> Kernel.!=(nil)
+  end
+
+  defp search_result_sorter(left, right) do
+    left_score = search_score(left)
+    right_score = search_score(right)
+
+    cond do
+      left_score > right_score -> true
+      left_score < right_score -> false
+      true -> chronological_sorter(left, right)
+    end
+  end
+
+  defp chronological_sorter(left, right) do
+    cond do
+      left.created_at > right.created_at -> true
+      left.created_at < right.created_at -> false
+      true -> left.id < right.id
+    end
+  end
+
+  defp search_score(event) do
+    event
+    |> Map.get(:search_score, 0.0)
+    |> case do
+      score when is_float(score) -> score
+      score when is_integer(score) -> score / 1
+      _other -> 0.0
+    end
+  end
+
   defp normalize_persisted_tags(tags) when is_list(tags), do: tags
   defp normalize_persisted_tags(_tags), do: []
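When any filter carries a search term, merged results from the per-filter queries are re-ranked in memory: higher score first, ties broken reverse-chronologically, then by lower id. A reduced sketch of that comparison over plain maps (module name illustrative):

```elixir
defmodule ResultOrder do
  # Sort: higher search_score first; ties fall back to newer created_at,
  # then lower id, matching the adapter's chronological tie-break.
  def sort(events), do: Enum.sort(events, &sorter/2)

  defp sorter(left, right) do
    cond do
      score(left) > score(right) -> true
      score(left) < score(right) -> false
      left.created_at != right.created_at -> left.created_at > right.created_at
      true -> left.id < right.id
    end
  end

  defp score(event), do: Map.get(event, :search_score, 0.0)
end
```

Defaulting the score to `0.0` keeps events from non-search filters sortable alongside scored ones in the same pass.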
@@ -1066,4 +1243,6 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Events do
   end

   defp maybe_apply_mls_group_retention(expires_at, _kind, _created_at), do: expires_at
+
+  defp read_repo, do: PostgresRepos.read()
 end

View File

@@ -5,6 +5,7 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Groups do
   import Ecto.Query

+  alias Parrhesia.PostgresRepos
   alias Parrhesia.Repo

   @behaviour Parrhesia.Storage.Groups
@@ -46,7 +47,9 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Groups do
       limit: 1
     )

-    case Repo.one(query) do
+    repo = read_repo()
+
+    case repo.one(query) do
       nil ->
         {:ok, nil}
@@ -94,8 +97,8 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Groups do
     )

     memberships =
-      query
-      |> Repo.all()
+      read_repo()
+      |> then(fn repo -> repo.all(query) end)
       |> Enum.map(fn membership ->
         to_membership_map(
           membership.group_id,
@@ -163,8 +166,8 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Groups do
     )

     roles =
-      query
-      |> Repo.all()
+      read_repo()
+      |> then(fn repo -> repo.all(query) end)
       |> Enum.map(fn role ->
         to_role_map(role.group_id, role.pubkey, role.role, role.metadata)
       end)
@@ -242,6 +245,7 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Groups do
   defp unwrap_transaction_result({:ok, result}), do: {:ok, result}
   defp unwrap_transaction_result({:error, reason}), do: {:error, reason}

+  defp read_repo, do: PostgresRepos.read()
+
   defp fetch_required_string(map, key) do
     map

View File

@@ -5,6 +5,7 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Moderation do
   import Ecto.Query

+  alias Parrhesia.PostgresRepos
   alias Parrhesia.Repo

   @behaviour Parrhesia.Storage.Moderation
@@ -212,7 +213,8 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Moderation do
       select: field(record, ^field)
     )

-    Repo.all(query)
+    read_repo()
+    |> then(fn repo -> repo.all(query) end)
   end

   defp cache_put(scope, value) do
@@ -266,7 +268,9 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Moderation do
       limit: 1
     )

-    Repo.one(query) == 1
+    read_repo()
+    |> then(fn repo -> repo.one(query) end)
+    |> Kernel.==(1)
   end

   defp scope_populated_db?(table, field) do
@@ -276,7 +280,10 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Moderation do
       limit: 1
     )

-    not is_nil(Repo.one(query))
+    read_repo()
+    |> then(fn repo -> repo.one(query) end)
+    |> is_nil()
+    |> Kernel.not()
   end

   defp normalize_hex_or_binary(value, expected_bytes, _reason)
@@ -315,4 +322,6 @@ defmodule Parrhesia.Storage.Adapters.Postgres.Moderation do
   defp to_inet({_, _, _, _, _, _, _, _} = ip_tuple),
     do: %Postgrex.INET{address: ip_tuple, netmask: 128}

+  defp read_repo, do: PostgresRepos.read()
 end

View File

@@ -5,6 +5,7 @@ defmodule Parrhesia.Storage.Partitions do
   import Ecto.Query

+  alias Parrhesia.PostgresRepos
   alias Parrhesia.Repo

   @identifier_pattern ~r/^[a-zA-Z_][a-zA-Z0-9_]*$/
@@ -35,7 +36,8 @@ defmodule Parrhesia.Storage.Partitions do
       order_by: [asc: table.tablename]
     )

-    Repo.all(query)
+    read_repo()
+    |> then(fn repo -> repo.all(query) end)
   end

   @doc """
@@ -88,7 +90,9 @@ defmodule Parrhesia.Storage.Partitions do
   """
   @spec database_size_bytes() :: {:ok, non_neg_integer()} | {:error, term()}
   def database_size_bytes do
-    case Repo.query("SELECT pg_database_size(current_database())") do
+    repo = read_repo()
+
+    case repo.query("SELECT pg_database_size(current_database())") do
       {:ok, %{rows: [[size]]}} when is_integer(size) and size >= 0 -> {:ok, size}
       {:ok, _result} -> {:error, :unexpected_result}
       {:error, reason} -> {:error, reason}
@@ -219,7 +223,9 @@ defmodule Parrhesia.Storage.Partitions do
     LIMIT 1
     """

-    case Repo.query(query, [partition_name, parent_table_name]) do
+    repo = read_repo()
+
+    case repo.query(query, [partition_name, parent_table_name]) do
       {:ok, %{rows: [[1]]}} -> true
       {:ok, %{rows: []}} -> false
       {:ok, _result} -> false
@@ -278,6 +284,8 @@ defmodule Parrhesia.Storage.Partitions do
     |> DateTime.to_unix()
   end

+  defp read_repo, do: PostgresRepos.read()
+
   defp month_start(%Date{} = date), do: Date.new!(date.year, date.month, 1)

   defp shift_month(%Date{} = date, month_delta) when is_integer(month_delta) do

View File

@@ -5,18 +5,28 @@ defmodule Parrhesia.Storage.Supervisor do
   use Supervisor

+  alias Parrhesia.PostgresRepos
+
   def start_link(init_arg \\ []) do
     Supervisor.start_link(__MODULE__, init_arg, name: __MODULE__)
   end

   @impl true
   def init(_init_arg) do
-    children = [
-      {Parrhesia.Storage.Adapters.Postgres.ModerationCache,
-       name: Parrhesia.Storage.Adapters.Postgres.ModerationCache},
-      Parrhesia.Repo
-    ]
+    children = moderation_cache_children() ++ PostgresRepos.started_repos()

     Supervisor.init(children, strategy: :one_for_one)
   end
+
+  defp moderation_cache_children do
+    if PostgresRepos.postgres_enabled?() and
+         Application.get_env(:parrhesia, :moderation_cache_enabled, true) do
+      [
+        {Parrhesia.Storage.Adapters.Postgres.ModerationCache,
+         name: Parrhesia.Storage.Adapters.Postgres.ModerationCache}
+      ]
+    else
+      []
+    end
+  end
 end

View File

@@ -14,6 +14,7 @@ defmodule Parrhesia.Subscriptions.Supervisor do
     children =
       [
         {Parrhesia.Subscriptions.Index, name: Parrhesia.Subscriptions.Index},
+        {Parrhesia.Fanout.Dispatcher, name: Parrhesia.Fanout.Dispatcher},
         {Registry, keys: :unique, name: Parrhesia.API.Stream.Registry},
         {DynamicSupervisor, strategy: :one_for_one, name: Parrhesia.API.Stream.Supervisor}
       ] ++

View File

@@ -1,6 +1,7 @@
 defmodule Parrhesia.Sync.RelayInfoClient do
   @moduledoc false

+  alias Parrhesia.HTTP
   alias Parrhesia.Sync.TLS

   @spec verify_remote_identity(map(), keyword()) :: :ok | {:error, term()}
@@ -18,11 +19,12 @@ defmodule Parrhesia.Sync.RelayInfoClient do
   end

   defp default_request(url, opts) do
-    case Req.get(
+    case HTTP.get(
            url: url,
            headers: [{"accept", "application/nostr+json"}],
            decode_body: false,
-           connect_options: opts
+           connect_options: Keyword.merge([timeout: 5_000], opts),
+           receive_timeout: 5_000
          ) do
       {:ok, response} -> {:ok, response}
       {:error, reason} -> {:error, reason}
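Routing the call through a shared `Parrhesia.HTTP` facade lets every outbound request inherit the same timeout defaults while callers can still override them. A minimal sketch of how such a wrapper might merge defaults (the facade shape is an assumption; `receive_timeout` and `connect_options` are standard `Req` options):

```elixir
defmodule HTTPSketch do
  # Hypothetical facade: merge conservative timeout defaults into every
  # request, letting caller-supplied options win on conflicts.
  @defaults [receive_timeout: 5_000, connect_options: [timeout: 5_000]]

  def get(opts), do: Req.get(merge_defaults(opts))

  def merge_defaults(opts) do
    Keyword.merge(@defaults, opts, fn
      # connect_options is itself a keyword list, so merge it key-by-key
      :connect_options, default, override -> Keyword.merge(default, override)
      _key, _default, override -> override
    end)
  end
end
```

The nested merge matters: a caller passing only `connect_options: [transport_opts: ...]` should not accidentally drop the default connect timeout.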

View File

@@ -30,10 +30,19 @@ defmodule Parrhesia.Tasks.ExpirationWorker do
   def handle_info(:tick, state) do
     started_at = System.monotonic_time()

-    _result = Storage.events().purge_expired([])
+    purged_events =
+      case Storage.events().purge_expired([]) do
+        {:ok, count} when is_integer(count) and count >= 0 -> count
+        _other -> 0
+      end

     duration = System.monotonic_time() - started_at
-    Telemetry.emit([:parrhesia, :maintenance, :purge_expired, :stop], %{duration: duration}, %{})
+
+    Telemetry.emit(
+      [:parrhesia, :maintenance, :purge_expired, :stop],
+      %{duration: duration, purged_events: purged_events},
+      %{}
+    )

     schedule_tick(state.interval_ms)
     {:noreply, state}
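With `purged_events` now in the measurement map alongside the native-unit `duration`, a telemetry handler can derive a purge rate with no extra bookkeeping. A sketch of the pure computation such a handler might run (handler wiring via `:telemetry.attach/4` omitted; the module name is illustrative):

```elixir
defmodule PurgeStats do
  # Convert the emitted measurements into events purged per second.
  # `duration` is in native time units, as produced by a
  # System.monotonic_time/0 delta in the worker above.
  def rate(%{duration: duration, purged_events: purged}) when duration > 0 do
    seconds = System.convert_time_unit(duration, :native, :microsecond) / 1_000_000
    purged / seconds
  end

  def rate(_measurements), do: 0.0
end
```

Keeping the arithmetic in a plain function makes it testable without attaching a real telemetry handler.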

View File

@@ -0,0 +1,40 @@
+defmodule Parrhesia.Tasks.Nip66Publisher do
+  @moduledoc """
+  Periodic worker that publishes NIP-66 monitor and discovery events.
+  """
+
+  use GenServer
+
+  alias Parrhesia.NIP66
+
+  @spec start_link(keyword()) :: GenServer.on_start()
+  def start_link(opts \\ []) do
+    name = Keyword.get(opts, :name, __MODULE__)
+    GenServer.start_link(__MODULE__, opts, name: name)
+  end
+
+  @impl true
+  def init(opts) do
+    state = %{
+      interval_ms: Keyword.get(opts, :interval_ms, NIP66.publish_interval_ms()),
+      publish_opts: Keyword.drop(opts, [:name, :interval_ms, :nip66_module]),
+      nip66_module: Keyword.get(opts, :nip66_module, NIP66)
+    }
+
+    schedule_tick(0)
+    {:ok, state}
+  end
+
+  @impl true
+  def handle_info(:tick, state) do
+    _result = state.nip66_module.publish_snapshot(state.publish_opts)
+    schedule_tick(state.interval_ms)
+    {:noreply, state}
+  end
+
+  def handle_info(_message, state), do: {:noreply, state}
+
+  defp schedule_tick(interval_ms) do
+    Process.send_after(self(), :tick, interval_ms)
+  end
+end


@@ -11,7 +11,7 @@ defmodule Parrhesia.Tasks.Supervisor do
  @impl true
  def init(_init_arg) do
-    children = expiration_children() ++ partition_retention_children()
+    children = expiration_children() ++ partition_retention_children() ++ nip66_children()
    Supervisor.init(children, strategy: :one_for_one)
  end
@@ -25,8 +25,20 @@ defmodule Parrhesia.Tasks.Supervisor do
  end

  defp partition_retention_children do
-    [
-      {Parrhesia.Tasks.PartitionRetentionWorker, name: Parrhesia.Tasks.PartitionRetentionWorker}
-    ]
+    if Application.get_env(:parrhesia, :enable_partition_retention_worker, true) do
+      [
+        {Parrhesia.Tasks.PartitionRetentionWorker, name: Parrhesia.Tasks.PartitionRetentionWorker}
+      ]
+    else
+      []
+    end
+  end
+
+  defp nip66_children do
+    if Parrhesia.NIP66.enabled?() do
+      [{Parrhesia.Tasks.Nip66Publisher, name: Parrhesia.Tasks.Nip66Publisher}]
+    else
+      []
+    end
  end
end


@@ -1,12 +1,17 @@
defmodule Parrhesia.Telemetry do
  @moduledoc """
  Supervision entrypoint and helpers for relay telemetry.
+
+  Starts the Prometheus reporter and telemetry poller as supervised children.
+  All relay metrics are namespaced under `parrhesia.*` and exposed through the
+  `/metrics` endpoint in Prometheus exposition format.
  """

  use Supervisor

  import Telemetry.Metrics

+  @repo_query_handler_id "parrhesia-repo-query-handler"
  @prometheus_reporter __MODULE__.Prometheus

  @spec start_link(keyword()) :: Supervisor.on_start()
@@ -16,6 +21,8 @@ defmodule Parrhesia.Telemetry do
  @impl true
  def init(_init_arg) do
+    :ok = attach_repo_query_handlers()
+
    children = [
      {TelemetryMetricsPrometheus.Core, name: @prometheus_reporter, metrics: metrics()},
      {:telemetry_poller, measurements: periodic_measurements(), period: 10_000}
@@ -30,6 +37,12 @@ defmodule Parrhesia.Telemetry do
  @spec metrics() :: [Telemetry.Metrics.t()]
  def metrics do
    [
+      counter("parrhesia.ingest.events.count",
+        event_name: [:parrhesia, :ingest, :result],
+        measurement: :count,
+        tags: [:traffic_class, :outcome, :reason],
+        tag_values: &ingest_result_tag_values/1
+      ),
      distribution("parrhesia.ingest.duration.ms",
        event_name: [:parrhesia, :ingest, :stop],
        measurement: :duration,
@@ -38,14 +51,27 @@ defmodule Parrhesia.Telemetry do
        tags: [:traffic_class],
        tag_values: &traffic_class_tag_values/1,
        reporter_options: [buckets: [1, 5, 10, 25, 50, 100, 250, 500, 1000]]
      ),
+      counter("parrhesia.query.requests.count",
+        event_name: [:parrhesia, :query, :result],
+        measurement: :count,
+        tags: [:traffic_class, :operation, :outcome],
+        tag_values: &query_result_tag_values/1
+      ),
      distribution("parrhesia.query.duration.ms",
        event_name: [:parrhesia, :query, :stop],
        measurement: :duration,
        unit: {:native, :millisecond},
-        tags: [:traffic_class],
-        tag_values: &traffic_class_tag_values/1,
+        tags: [:traffic_class, :operation],
+        tag_values: &query_stop_tag_values/1,
        reporter_options: [buckets: [1, 5, 10, 25, 50, 100, 250, 500, 1000]]
      ),
+      distribution("parrhesia.query.results.count",
+        event_name: [:parrhesia, :query, :stop],
+        measurement: :result_count,
+        tags: [:traffic_class, :operation],
+        tag_values: &query_stop_tag_values/1,
+        reporter_options: [buckets: [0, 1, 5, 10, 25, 50, 100, 250, 500, 1000, 5000]]
+      ),
      distribution("parrhesia.fanout.duration.ms",
        event_name: [:parrhesia, :fanout, :stop],
        measurement: :duration,
@@ -54,6 +80,25 @@ defmodule Parrhesia.Telemetry do
        tag_values: &traffic_class_tag_values/1,
        reporter_options: [buckets: [1, 5, 10, 25, 50, 100, 250, 500, 1000]]
      ),
+      counter("parrhesia.fanout.events_considered.count",
+        event_name: [:parrhesia, :fanout, :stop],
+        measurement: :considered,
+        tags: [:traffic_class],
+        tag_values: &traffic_class_tag_values/1
+      ),
+      counter("parrhesia.fanout.events_enqueued.count",
+        event_name: [:parrhesia, :fanout, :stop],
+        measurement: :enqueued,
+        tags: [:traffic_class],
+        tag_values: &traffic_class_tag_values/1
+      ),
+      distribution("parrhesia.fanout.batch_size",
+        event_name: [:parrhesia, :fanout, :stop],
+        measurement: :enqueued,
+        tags: [:traffic_class],
+        tag_values: &traffic_class_tag_values/1,
+        reporter_options: [buckets: [0, 1, 5, 10, 25, 50, 100, 250, 500, 1000]]
+      ),
      last_value("parrhesia.connection.outbound_queue.depth",
        event_name: [:parrhesia, :connection, :outbound_queue],
        measurement: :depth,
@@ -80,11 +125,153 @@ defmodule Parrhesia.Telemetry do
        tags: [:traffic_class],
        tag_values: &traffic_class_tag_values/1
      ),
+      counter("parrhesia.connection.outbound_queue.drained_frames.count",
+        event_name: [:parrhesia, :connection, :outbound_queue, :drain],
+        measurement: :count
+      ),
+      distribution("parrhesia.connection.outbound_queue.drain_batch_size",
+        event_name: [:parrhesia, :connection, :outbound_queue, :drain],
+        measurement: :count,
+        reporter_options: [buckets: [0, 1, 5, 10, 25, 50, 100, 250]]
+      ),
+      counter("parrhesia.connection.outbound_queue.dropped_events.count",
+        event_name: [:parrhesia, :connection, :outbound_queue, :drop],
+        measurement: :count,
+        tags: [:strategy],
+        tag_values: &strategy_tag_values/1
+      ),
+      last_value("parrhesia.listener.connections.active",
+        event_name: [:parrhesia, :listener, :population],
+        measurement: :connections,
+        tags: [:listener_id],
+        tag_values: &listener_tag_values/1,
+        reporter_options: [prometheus_type: :gauge]
+      ),
+      last_value("parrhesia.listener.subscriptions.active",
+        event_name: [:parrhesia, :listener, :population],
+        measurement: :subscriptions,
+        tags: [:listener_id],
+        tag_values: &listener_tag_values/1,
+        reporter_options: [prometheus_type: :gauge]
+      ),
+      counter("parrhesia.rate_limit.hits.count",
+        event_name: [:parrhesia, :rate_limit, :hit],
+        measurement: :count,
+        tags: [:scope, :traffic_class],
+        tag_values: &rate_limit_tag_values/1
+      ),
+      last_value("parrhesia.process.mailbox.depth",
+        event_name: [:parrhesia, :process, :mailbox],
+        measurement: :depth,
+        tags: [:process_type],
+        tag_values: &process_tag_values/1,
+        reporter_options: [prometheus_type: :gauge]
+      ),
+      counter("parrhesia.db.query.count",
+        event_name: [:parrhesia, :db, :query],
+        measurement: :count,
+        tags: [:repo_role],
+        tag_values: &repo_query_tag_values/1
+      ),
+      distribution("parrhesia.db.query.total_time.ms",
+        event_name: [:parrhesia, :db, :query],
+        measurement: :total_time,
+        unit: {:native, :millisecond},
+        tags: [:repo_role],
+        tag_values: &repo_query_tag_values/1,
+        reporter_options: [buckets: [1, 5, 10, 25, 50, 100, 250, 500, 1000]]
+      ),
+      distribution("parrhesia.db.query.queue_time.ms",
+        event_name: [:parrhesia, :db, :query],
+        measurement: :queue_time,
+        unit: {:native, :millisecond},
+        tags: [:repo_role],
+        tag_values: &repo_query_tag_values/1,
+        reporter_options: [buckets: [0, 1, 5, 10, 25, 50, 100, 250, 500, 1000]]
+      ),
+      distribution("parrhesia.db.query.query_time.ms",
+        event_name: [:parrhesia, :db, :query],
+        measurement: :query_time,
+        unit: {:native, :millisecond},
+        tags: [:repo_role],
+        tag_values: &repo_query_tag_values/1,
+        reporter_options: [buckets: [0, 1, 5, 10, 25, 50, 100, 250, 500, 1000]]
+      ),
+      distribution("parrhesia.db.query.decode_time.ms",
+        event_name: [:parrhesia, :db, :query],
+        measurement: :decode_time,
+        unit: {:native, :millisecond},
+        tags: [:repo_role],
+        tag_values: &repo_query_tag_values/1,
+        reporter_options: [buckets: [0, 1, 5, 10, 25, 50, 100, 250, 500, 1000]]
+      ),
+      distribution("parrhesia.db.query.idle_time.ms",
+        event_name: [:parrhesia, :db, :query],
+        measurement: :idle_time,
+        unit: {:native, :millisecond},
+        tags: [:repo_role],
+        tag_values: &repo_query_tag_values/1,
+        reporter_options: [buckets: [0, 1, 5, 10, 25, 50, 100, 250, 500, 1000]]
+      ),
+      distribution("parrhesia.maintenance.purge_expired.duration.ms",
+        event_name: [:parrhesia, :maintenance, :purge_expired, :stop],
+        measurement: :duration,
+        unit: {:native, :millisecond},
+        reporter_options: [buckets: [1, 5, 10, 25, 50, 100, 250, 500, 1000]]
+      ),
+      counter("parrhesia.maintenance.purge_expired.events.count",
+        event_name: [:parrhesia, :maintenance, :purge_expired, :stop],
+        measurement: :purged_events
+      ),
+      distribution("parrhesia.maintenance.partition_retention.duration.ms",
+        event_name: [:parrhesia, :maintenance, :partition_retention, :stop],
+        measurement: :duration,
+        unit: {:native, :millisecond},
+        tags: [:status],
+        tag_values: &status_tag_values/1,
+        reporter_options: [buckets: [1, 5, 10, 25, 50, 100, 250, 500, 1000]]
+      ),
+      counter("parrhesia.maintenance.partition_retention.dropped_partitions.count",
+        event_name: [:parrhesia, :maintenance, :partition_retention, :stop],
+        measurement: :dropped_partitions,
+        tags: [:status],
+        tag_values: &status_tag_values/1
+      ),
      last_value("parrhesia.vm.memory.total.bytes",
        event_name: [:parrhesia, :vm, :memory],
        measurement: :total,
        unit: :byte,
        reporter_options: [prometheus_type: :gauge]
-      )
+      ),
+      last_value("parrhesia.vm.memory.processes.bytes",
+        event_name: [:parrhesia, :vm, :memory],
+        measurement: :processes,
+        unit: :byte,
+        reporter_options: [prometheus_type: :gauge]
+      ),
+      last_value("parrhesia.vm.memory.system.bytes",
+        event_name: [:parrhesia, :vm, :memory],
+        measurement: :system,
+        unit: :byte,
+        reporter_options: [prometheus_type: :gauge]
+      ),
+      last_value("parrhesia.vm.memory.atom.bytes",
+        event_name: [:parrhesia, :vm, :memory],
+        measurement: :atom,
+        unit: :byte,
+        reporter_options: [prometheus_type: :gauge]
+      ),
+      last_value("parrhesia.vm.memory.binary.bytes",
+        event_name: [:parrhesia, :vm, :memory],
+        measurement: :binary,
+        unit: :byte,
+        reporter_options: [prometheus_type: :gauge]
+      ),
+      last_value("parrhesia.vm.memory.ets.bytes",
+        event_name: [:parrhesia, :vm, :memory],
+        measurement: :ets,
+        unit: :byte,
+        reporter_options: [prometheus_type: :gauge]
+      )
    ]
  end
@@ -95,6 +282,22 @@ defmodule Parrhesia.Telemetry do
    :telemetry.execute(event_name, measurements, metadata)
  end

+  @spec emit_process_mailbox_depth(atom(), map()) :: :ok
+  def emit_process_mailbox_depth(process_type, metadata \\ %{})
+      when is_atom(process_type) and is_map(metadata) do
+    case Process.info(self(), :message_queue_len) do
+      {:message_queue_len, depth} ->
+        emit(
+          [:parrhesia, :process, :mailbox],
+          %{depth: depth},
+          Map.put(metadata, :process_type, process_type)
+        )
+
+      nil ->
+        :ok
+    end
+  end
+
  defp periodic_measurements do
    [
      {__MODULE__, :emit_vm_memory, []}
@@ -103,12 +306,119 @@ defmodule Parrhesia.Telemetry do
  @doc false
  def emit_vm_memory do
-    total = :erlang.memory(:total)
-    emit([:parrhesia, :vm, :memory], %{total: total}, %{})
+    emit(
+      [:parrhesia, :vm, :memory],
+      %{
+        total: :erlang.memory(:total),
+        processes: :erlang.memory(:processes),
+        system: :erlang.memory(:system),
+        atom: :erlang.memory(:atom),
+        binary: :erlang.memory(:binary),
+        ets: :erlang.memory(:ets)
+      },
+      %{}
+    )
  end

  defp traffic_class_tag_values(metadata) do
    traffic_class = metadata |> Map.get(:traffic_class, :generic) |> to_string()
    %{traffic_class: traffic_class}
  end
+
+  defp ingest_result_tag_values(metadata) do
+    %{
+      traffic_class: metadata |> Map.get(:traffic_class, :generic) |> to_string(),
+      outcome: metadata |> Map.get(:outcome, :unknown) |> to_string(),
+      reason: metadata |> Map.get(:reason, :unknown) |> to_string()
+    }
+  end
+
+  defp query_stop_tag_values(metadata) do
+    %{
+      traffic_class: metadata |> Map.get(:traffic_class, :generic) |> to_string(),
+      operation: metadata |> Map.get(:operation, :query) |> to_string()
+    }
+  end
+
+  defp query_result_tag_values(metadata) do
+    %{
+      traffic_class: metadata |> Map.get(:traffic_class, :generic) |> to_string(),
+      operation: metadata |> Map.get(:operation, :query) |> to_string(),
+      outcome: metadata |> Map.get(:outcome, :unknown) |> to_string()
+    }
+  end
+
+  defp strategy_tag_values(metadata) do
+    %{strategy: metadata |> Map.get(:strategy, :unknown) |> to_string()}
+  end
+
+  defp listener_tag_values(metadata) do
+    %{listener_id: metadata |> Map.get(:listener_id, :unknown) |> to_string()}
+  end
+
+  defp rate_limit_tag_values(metadata) do
+    %{
+      scope: metadata |> Map.get(:scope, :unknown) |> to_string(),
+      traffic_class: metadata |> Map.get(:traffic_class, :generic) |> to_string()
+    }
+  end
+
+  defp process_tag_values(metadata) do
+    process_type = metadata |> Map.get(:process_type, :unknown) |> to_string()
+    %{process_type: process_type}
+  end
+
+  defp repo_query_tag_values(metadata) do
+    %{repo_role: metadata |> Map.get(:repo_role, :unknown) |> to_string()}
+  end
+
+  defp status_tag_values(metadata) do
+    %{status: metadata |> Map.get(:status, :unknown) |> to_string()}
+  end
+
+  defp attach_repo_query_handlers do
+    :telemetry.detach(@repo_query_handler_id)
+
+    :telemetry.attach_many(
+      @repo_query_handler_id,
+      [[:parrhesia, :repo, :query], [:parrhesia, :read_repo, :query]],
+      &__MODULE__.handle_repo_query_event/4,
+      nil
+    )
+
+    :ok
+  rescue
+    ArgumentError -> :ok
+  end
+
+  @doc false
+  def handle_repo_query_event(event_name, measurements, _metadata, _config) do
+    repo_role =
+      case event_name do
+        [:parrhesia, :read_repo, :query] -> :read
+        [:parrhesia, :repo, :query] -> :write
+      end
+
+    total_time =
+      Map.get(
+        measurements,
+        :total_time,
+        Map.get(measurements, :queue_time, 0) +
+          Map.get(measurements, :query_time, 0) +
+          Map.get(measurements, :decode_time, 0)
+      )
+
+    emit(
+      [:parrhesia, :db, :query],
+      %{
+        count: 1,
+        total_time: total_time,
+        queue_time: Map.get(measurements, :queue_time, 0),
+        query_time: Map.get(measurements, :query_time, 0),
+        decode_time: Map.get(measurements, :decode_time, 0),
+        idle_time: Map.get(measurements, :idle_time, 0)
+      },
+      %{repo_role: repo_role}
+    )
+  end
end
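The repo-query handler above prefers the `:total_time` measurement but falls back to summing the queue, query, and decode components when Ecto omits it. A small standalone sketch of that fallback (hypothetical module name, same map shapes as the handler assumes):

```elixir
defmodule TotalTimeSketch do
  # Prefer :total_time when the adapter reports it; otherwise derive it
  # from the three component timings, defaulting each missing one to 0.
  def total_time(measurements) when is_map(measurements) do
    Map.get(
      measurements,
      :total_time,
      Map.get(measurements, :queue_time, 0) +
        Map.get(measurements, :query_time, 0) +
        Map.get(measurements, :decode_time, 0)
    )
  end
end
```

Because `Map.get/3` only falls through when the key is absent, a reported `:total_time` always wins over the derived sum.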


@@ -9,6 +9,7 @@ defmodule Parrhesia.Web.Connection do
  alias Parrhesia.API.RequestContext
  alias Parrhesia.API.Stream
  alias Parrhesia.Auth.Challenges
+  alias Parrhesia.ConnectionStats
  alias Parrhesia.Negentropy.Sessions
  alias Parrhesia.Policy.ConnectionPolicy
  alias Parrhesia.Policy.EventPolicy
@@ -16,6 +17,8 @@ defmodule Parrhesia.Web.Connection do
  alias Parrhesia.Protocol.Filter
  alias Parrhesia.Subscriptions.Index
  alias Parrhesia.Telemetry
+  alias Parrhesia.Web.EventIngestLimiter
+  alias Parrhesia.Web.IPEventIngestLimiter
  alias Parrhesia.Web.Listener

  @default_max_subscriptions_per_connection 32
@@ -62,11 +65,14 @@ defmodule Parrhesia.Web.Connection do
    drain_scheduled?: false,
    max_frame_bytes: @default_max_frame_bytes,
    max_event_bytes: @default_max_event_bytes,
+    event_ingest_limiter: EventIngestLimiter,
+    remote_ip_event_ingest_limiter: IPEventIngestLimiter,
    max_event_ingest_per_window: @default_event_ingest_rate_limit,
    event_ingest_window_seconds: @default_event_ingest_window_seconds,
    event_ingest_window_started_at_ms: 0,
    event_ingest_count: 0,
-    auth_max_age_seconds: @default_auth_max_age_seconds
+    auth_max_age_seconds: @default_auth_max_age_seconds,
+    track_population?: true

  @type overflow_strategy :: :close | :drop_oldest | :drop_newest
@@ -96,15 +102,19 @@ defmodule Parrhesia.Web.Connection do
          drain_scheduled?: boolean(),
          max_frame_bytes: pos_integer(),
          max_event_bytes: pos_integer(),
+          event_ingest_limiter: GenServer.server() | nil,
+          remote_ip_event_ingest_limiter: GenServer.server() | nil,
          max_event_ingest_per_window: pos_integer(),
          event_ingest_window_seconds: pos_integer(),
          event_ingest_window_started_at_ms: integer(),
          event_ingest_count: non_neg_integer(),
-          auth_max_age_seconds: pos_integer()
+          auth_max_age_seconds: pos_integer(),
+          track_population?: boolean()
        }

  @impl true
  def init(opts) do
+    maybe_configure_exit_trapping(opts)
+
    auth_challenges = auth_challenges(opts)

    state = %__MODULE__{
@@ -122,17 +132,23 @@ defmodule Parrhesia.Web.Connection do
      outbound_drain_batch_size: outbound_drain_batch_size(opts),
      max_frame_bytes: max_frame_bytes(opts),
      max_event_bytes: max_event_bytes(opts),
+      event_ingest_limiter: event_ingest_limiter(opts),
+      remote_ip_event_ingest_limiter: remote_ip_event_ingest_limiter(opts),
      max_event_ingest_per_window: max_event_ingest_per_window(opts),
      event_ingest_window_seconds: event_ingest_window_seconds(opts),
      event_ingest_window_started_at_ms: System.monotonic_time(:millisecond),
-      auth_max_age_seconds: auth_max_age_seconds(opts)
+      auth_max_age_seconds: auth_max_age_seconds(opts),
+      track_population?: track_population?(opts)
    }

+    :ok = maybe_track_connection_open(state)
+    Telemetry.emit_process_mailbox_depth(:connection)
    {:ok, state}
  end

  @impl true
  def handle_in({payload, [opcode: :text]}, %__MODULE__{} = state) do
+    result =
      if byte_size(payload) > state.max_frame_bytes do
        response =
          Protocol.encode_relay({
@@ -151,6 +167,8 @@ defmodule Parrhesia.Web.Connection do
        {:push, {:text, response}, state}
      end
    end

+    emit_connection_mailbox_depth(result)
  end

  @impl true
@@ -159,6 +177,7 @@ defmodule Parrhesia.Web.Connection do
      Protocol.encode_relay({:notice, "invalid: binary websocket frames are not supported"})

    {:push, {:text, response}, state}
+    |> emit_connection_mailbox_depth()
  end

  defp handle_decoded_message({:event, event}, state), do: handle_event_ingest(state, event)
@@ -203,8 +222,10 @@ defmodule Parrhesia.Web.Connection do
      when is_reference(ref) and is_binary(subscription_id) and is_map(event) do
    if current_subscription_ref?(state, subscription_id, ref) do
      handle_fanout_events(state, [{subscription_id, event}])
+      |> emit_connection_mailbox_depth()
    else
      {:ok, state}
+      |> emit_connection_mailbox_depth()
    end
  end
@@ -216,9 +237,12 @@ defmodule Parrhesia.Web.Connection do
    if current_subscription_ref?(state, subscription_id, ref) and
         not subscription_eose_sent?(state, subscription_id) do
      response = Protocol.encode_relay({:eose, subscription_id})

      {:push, {:text, response}, mark_subscription_eose_sent(state, subscription_id)}
+      |> emit_connection_mailbox_depth()
    else
      {:ok, state}
+      |> emit_connection_mailbox_depth()
    end
  end
@@ -234,20 +258,25 @@ defmodule Parrhesia.Web.Connection do
      |> drop_queued_subscription_events(subscription_id)

      response = Protocol.encode_relay({:closed, subscription_id, stream_closed_reason(reason)})

      {:push, {:text, response}, next_state}
+      |> emit_connection_mailbox_depth()
    else
      {:ok, state}
+      |> emit_connection_mailbox_depth()
    end
  end

  def handle_info({:fanout_event, subscription_id, event}, %__MODULE__{} = state)
      when is_binary(subscription_id) and is_map(event) do
    handle_fanout_events(state, [{subscription_id, event}])
+    |> emit_connection_mailbox_depth()
  end

  def handle_info({:fanout_events, fanout_events}, %__MODULE__{} = state)
      when is_list(fanout_events) do
    handle_fanout_events(state, fanout_events)
+    |> emit_connection_mailbox_depth()
  end

  def handle_info(@drain_outbound_queue, %__MODULE__{} = state) do
@@ -255,17 +284,32 @@ defmodule Parrhesia.Web.Connection do
    if frames == [] do
      {:ok, next_state}
+      |> emit_connection_mailbox_depth()
    else
      {:push, frames, next_state}
+      |> emit_connection_mailbox_depth()
    end
  end

+  def handle_info({:EXIT, _from, :shutdown}, %__MODULE__{} = state) do
+    close_with_drained_outbound_frames(state)
+    |> emit_connection_mailbox_depth()
+  end
+
+  def handle_info({:EXIT, _from, {:shutdown, _detail}}, %__MODULE__{} = state) do
+    close_with_drained_outbound_frames(state)
+    |> emit_connection_mailbox_depth()
+  end
+
  def handle_info(_message, %__MODULE__{} = state) do
    {:ok, state}
+    |> emit_connection_mailbox_depth()
  end

  @impl true
  def terminate(_reason, %__MODULE__{} = state) do
+    :ok = maybe_track_subscription_delta(state, -map_size(state.subscriptions))
+    :ok = maybe_track_connection_close(state)
    :ok = maybe_unsubscribe_all_stream_subscriptions(state)
    :ok = maybe_remove_index_owner(state)
    :ok = maybe_clear_auth_challenge(state)
@@ -274,15 +318,32 @@ defmodule Parrhesia.Web.Connection do
  defp handle_event_ingest(%__MODULE__{} = state, event) do
    event_id = Map.get(event, "id", "")
+    traffic_class = traffic_class_for_event(event)

    case maybe_allow_event_ingest(state) do
      {:ok, next_state} ->
-        case authorize_listener_write(next_state, event) do
-          :ok -> publish_event_response(next_state, event)
-          {:error, reason} -> ingest_error_response(state, event_id, reason)
-        end
+        maybe_publish_ingested_event(next_state, state, event, event_id)

      {:error, reason} ->
+        maybe_emit_rate_limit_hit(reason, traffic_class)
        ingest_error_response(state, event_id, reason)
    end
  end

+  defp maybe_publish_ingested_event(next_state, state, event, event_id) do
+    traffic_class = traffic_class_for_event(event)
+
+    with :ok <-
+           maybe_allow_remote_ip_event_ingest(
+             next_state.remote_ip,
+             next_state.remote_ip_event_ingest_limiter
+           ),
+         :ok <- maybe_allow_relay_event_ingest(next_state.event_ingest_limiter),
+         :ok <- authorize_listener_write(next_state, event) do
+      publish_event_response(next_state, event)
+    else
+      {:error, reason} ->
+        maybe_emit_rate_limit_hit(reason, traffic_class)
+        ingest_error_response(state, event_id, reason)
+    end
+  end
@@ -388,6 +449,8 @@ defmodule Parrhesia.Web.Connection do
        )

      {:error, :subscription_limit_reached} ->
+        maybe_emit_rate_limit_hit(:subscription_limit_reached)
+
        response =
          Protocol.encode_relay({
            :closed,
@@ -414,6 +477,7 @@ defmodule Parrhesia.Web.Connection do
            :invalid_until,
            :invalid_limit,
            :invalid_search,
+            :too_many_tag_values,
            :invalid_tag_filter
          ] ->
        Filter.error_message(reason)
@@ -462,6 +526,27 @@ defmodule Parrhesia.Web.Connection do
    restricted_count_notice(state, subscription_id, EventPolicy.error_message(reason))
  end

+  defp handle_count_error(state, subscription_id, reason)
+       when reason in [
+              :invalid_filters,
+              :empty_filters,
+              :too_many_filters,
+              :invalid_filter,
+              :invalid_filter_key,
+              :invalid_ids,
+              :invalid_authors,
+              :invalid_kinds,
+              :invalid_since,
+              :invalid_until,
+              :invalid_limit,
+              :invalid_search,
+              :too_many_tag_values,
+              :invalid_tag_filter
+            ] do
+    response = Protocol.encode_relay({:closed, subscription_id, Filter.error_message(reason)})
+    {:push, {:text, response}, state}
+  end
+
  defp handle_count_error(state, subscription_id, reason) do
    response = Protocol.encode_relay({:closed, subscription_id, inspect(reason)})
    {:push, {:text, response}, state}
@@ -541,6 +626,12 @@ defmodule Parrhesia.Web.Connection do
  defp error_message_for_ingest_failure(:event_rate_limited),
    do: "rate-limited: too many EVENT messages"

+  defp error_message_for_ingest_failure(:ip_event_rate_limited),
+    do: "rate-limited: too many EVENT messages from this IP"
+
+  defp error_message_for_ingest_failure(:relay_event_rate_limited),
+    do: "rate-limited: relay-wide EVENT ingress exceeded"
+
  defp error_message_for_ingest_failure(:event_too_large),
    do: "invalid: event exceeds max event size"
@@ -648,6 +739,7 @@ defmodule Parrhesia.Web.Connection do
        :invalid_until,
        :invalid_limit,
        :invalid_search,
+        :too_many_tag_values,
        :invalid_tag_filter,
        :auth_required,
        :pubkey_not_allowed,
@@ -701,6 +793,7 @@ defmodule Parrhesia.Web.Connection do
             :invalid_until,
             :invalid_limit,
             :invalid_search,
+             :too_many_tag_values,
             :invalid_tag_filter
           ],
      do: Filter.error_message(reason)
@@ -911,22 +1004,38 @@ defmodule Parrhesia.Web.Connection do
    telemetry_metadata = telemetry_metadata_for_fanout_events(fanout_events)

    case enqueue_fanout_events(state, fanout_events) do
-      {:ok, next_state} ->
+      {:ok, next_state, stats} ->
        Telemetry.emit(
          [:parrhesia, :fanout, :stop],
-          %{duration: System.monotonic_time() - started_at},
+          %{
+            duration: System.monotonic_time() - started_at,
+            considered: stats.considered,
+            enqueued: stats.enqueued
+          },
          telemetry_metadata
        )

        {:ok, maybe_schedule_drain(next_state)}

-      {:close, next_state} ->
+      {:close, next_state, stats} ->
        Telemetry.emit(
          [:parrhesia, :connection, :outbound_queue, :overflow],
          %{count: 1},
          telemetry_metadata
        )

+        Telemetry.emit(
+          [:parrhesia, :fanout, :stop],
+          %{
+            duration: System.monotonic_time() - started_at,
+            considered: stats.considered,
+            enqueued: stats.enqueued
+          },
+          telemetry_metadata
+        )
+
+        maybe_emit_rate_limit_hit(:outbound_queue_overflow, telemetry_metadata.traffic_class)
        close_with_outbound_overflow(next_state)
    end
  end
@@ -938,16 +1047,33 @@ defmodule Parrhesia.Web.Connection do
    {:stop, :normal, {1008, message}, [{:text, notice}], state}
  end

+  defp close_with_drained_outbound_frames(state) do
+    {frames, next_state} = drain_all_outbound_frames(state)
+    {:stop, :normal, {1012, "service restart"}, frames, next_state}
+  end
+
  defp enqueue_fanout_events(state, fanout_events) do
-    Enum.reduce_while(fanout_events, {:ok, state}, fn
-      {subscription_id, event}, {:ok, acc} when is_binary(subscription_id) and is_map(event) ->
-        case maybe_enqueue_fanout_event(acc, subscription_id, event) do
-          {:ok, next_acc} -> {:cont, {:ok, next_acc}}
-          {:close, next_acc} -> {:halt, {:close, next_acc}}
-        end
-
-      _invalid_event, {:ok, acc} ->
-        {:cont, {:ok, acc}}
+    initial_stats = %{considered: 0, enqueued: 0}
+
+    Enum.reduce_while(fanout_events, {:ok, state, initial_stats}, fn
+      {subscription_id, event}, {:ok, acc, stats}
+      when is_binary(subscription_id) and is_map(event) ->
+        case maybe_enqueue_fanout_event(acc, subscription_id, event) do
+          {:ok, next_acc, enqueued?} ->
+            next_stats = %{
+              considered: stats.considered + 1,
+              enqueued: stats.enqueued + if(enqueued?, do: 1, else: 0)
+            }
+
+            {:cont, {:ok, next_acc, next_stats}}
+
+          {:close, next_acc} ->
+            next_stats = %{stats | considered: stats.considered + 1}
+            {:halt, {:close, next_acc, next_stats}}
+        end
+
+      _invalid_event, {:ok, acc, stats} ->
+        {:cont, {:ok, acc, stats}}
    end)
  end
@@ -955,7 +1081,7 @@ defmodule Parrhesia.Web.Connection do
    if subscription_matches?(state, subscription_id, event) do
      enqueue_outbound(state, {subscription_id, event}, traffic_class_for_event(event))
    else
-      {:ok, state}
+      {:ok, state, false}
    end
  end
@@ -981,15 +1107,17 @@ defmodule Parrhesia.Web.Connection do
    }

    emit_outbound_queue_depth(next_state, %{traffic_class: traffic_class})
-    {:ok, next_state}
+    {:ok, next_state, true}
  end

  defp enqueue_outbound(
         %__MODULE__{outbound_overflow_strategy: :drop_newest} = state,
         _queue_entry,
         _traffic_class
-       ),
-       do: {:ok, state}
+       ) do
+    emit_outbound_queue_drop(:drop_newest)
+    {:ok, state, false}
+  end

  defp enqueue_outbound(
         %__MODULE__{outbound_overflow_strategy: :drop_oldest} = state,
@@ -1002,7 +1130,8 @@ defmodule Parrhesia.Web.Connection do
next_state = %__MODULE__{state | outbound_queue: next_queue, outbound_queue_size: next_size} next_state = %__MODULE__{state | outbound_queue: next_queue, outbound_queue_size: next_size}
emit_outbound_queue_depth(next_state, %{traffic_class: traffic_class}) emit_outbound_queue_depth(next_state, %{traffic_class: traffic_class})
{:ok, next_state} emit_outbound_queue_drop(:drop_oldest)
{:ok, next_state, true}
end end
defp enqueue_outbound( defp enqueue_outbound(
@@ -1039,6 +1168,25 @@ defmodule Parrhesia.Web.Connection do
    }
    |> maybe_schedule_drain()

    emit_outbound_queue_drain(length(frames))
    emit_outbound_queue_depth(next_state)
    {Enum.reverse(frames), next_state}
  end

  defp drain_all_outbound_frames(%__MODULE__{} = state) do
    {frames, next_queue, remaining_size} =
      pop_frames(state.outbound_queue, state.outbound_queue_size, :infinity, [])

    next_state =
      %__MODULE__{
        state
        | outbound_queue: next_queue,
          outbound_queue_size: remaining_size,
          drain_scheduled?: false
      }

    emit_outbound_queue_drain(length(frames))
    emit_outbound_queue_depth(next_state)
    {Enum.reverse(frames), next_state}
@@ -1047,6 +1195,17 @@ defmodule Parrhesia.Web.Connection do
  defp pop_frames(queue, queue_size, _remaining_batch, acc) when queue_size == 0,
    do: {acc, queue, queue_size}
defp pop_frames(queue, queue_size, :infinity, acc) do
case :queue.out(queue) do
{{:value, {subscription_id, event}}, next_queue} ->
frame = {:text, Protocol.encode_relay({:event, subscription_id, event})}
pop_frames(next_queue, queue_size - 1, :infinity, [frame | acc])
{:empty, _same_queue} ->
{acc, :queue.new(), 0}
end
end
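The `:infinity` clause above empties the whole `:queue`, prepending frames as it goes and leaving the caller's single `Enum.reverse/1` to restore order. A standalone sketch of that drain pattern, with invented entries in place of real relay frames:

```elixir
# Build a queue of three fake {subscription_id, seq} entries.
queue = Enum.reduce(1..3, :queue.new(), fn n, q -> :queue.in({"sub", n}, q) end)

# Pop until empty, accumulating in reverse, then restore order once.
drain = fn drain, q, acc ->
  case :queue.out(q) do
    {{:value, entry}, rest} -> drain.(drain, rest, [entry | acc])
    {:empty, _rest} -> Enum.reverse(acc)
  end
end

frames = drain.(drain, queue, [])
# frames == [{"sub", 1}, {"sub", 2}, {"sub", 3}]
```

Prepending to a list and reversing once is O(n) overall, which is why the clause accumulates `[frame | acc]` instead of appending.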
  defp pop_frames(queue, queue_size, remaining_batch, acc) when remaining_batch <= 0,
    do: {acc, queue, queue_size}
@@ -1095,6 +1254,11 @@ defmodule Parrhesia.Web.Connection do
    end
  end
defp emit_connection_mailbox_depth(result) do
Telemetry.emit_process_mailbox_depth(:connection)
result
end
  defp ensure_subscription_capacity(%__MODULE__{} = state, subscription_id) do
    cond do
      Map.has_key?(state.subscriptions, subscription_id) ->
@@ -1110,12 +1274,26 @@ defmodule Parrhesia.Web.Connection do
  defp put_subscription(%__MODULE__{} = state, subscription_id, subscription) do
    subscriptions = Map.put(state.subscriptions, subscription_id, subscription)
    next_state = %__MODULE__{state | subscriptions: subscriptions}

    if Map.has_key?(state.subscriptions, subscription_id) do
      next_state
    else
      :ok = maybe_track_subscription_delta(next_state, 1)
      next_state
    end
  end

  defp drop_subscription(%__MODULE__{} = state, subscription_id) do
    subscriptions = Map.delete(state.subscriptions, subscription_id)
    next_state = %__MODULE__{state | subscriptions: subscriptions}

    if Map.has_key?(state.subscriptions, subscription_id) do
      :ok = maybe_track_subscription_delta(next_state, -1)
      next_state
    else
      next_state
    end
  end

  defp drop_queued_subscription_events(
@@ -1523,6 +1701,26 @@ defmodule Parrhesia.Web.Connection do
    |> Keyword.get(:max_event_ingest_per_window, @default_event_ingest_rate_limit)
  end
defp event_ingest_limiter(opts) when is_list(opts) do
Keyword.get(opts, :event_ingest_limiter, EventIngestLimiter)
end
defp event_ingest_limiter(opts) when is_map(opts) do
Map.get(opts, :event_ingest_limiter, EventIngestLimiter)
end
defp event_ingest_limiter(_opts), do: EventIngestLimiter
defp remote_ip_event_ingest_limiter(opts) when is_list(opts) do
Keyword.get(opts, :remote_ip_event_ingest_limiter, IPEventIngestLimiter)
end
defp remote_ip_event_ingest_limiter(opts) when is_map(opts) do
Map.get(opts, :remote_ip_event_ingest_limiter, IPEventIngestLimiter)
end
defp remote_ip_event_ingest_limiter(_opts), do: IPEventIngestLimiter
  defp event_ingest_window_seconds(opts) when is_list(opts) do
    opts
    |> Keyword.get(:event_ingest_window_seconds)
@@ -1571,6 +1769,22 @@ defmodule Parrhesia.Web.Connection do
    |> Keyword.get(:auth_max_age_seconds, @default_auth_max_age_seconds)
  end
defp track_population?(opts) when is_list(opts), do: Keyword.get(opts, :track_population?, true)
defp track_population?(opts) when is_map(opts), do: Map.get(opts, :track_population?, true)
defp track_population?(_opts), do: true
defp maybe_configure_exit_trapping(opts) do
if trap_exit?(opts) do
Process.flag(:trap_exit, true)
end
:ok
end
defp trap_exit?(opts) when is_list(opts), do: Keyword.get(opts, :trap_exit?, true)
defp trap_exit?(opts) when is_map(opts), do: Map.get(opts, :trap_exit?, true)
defp trap_exit?(_opts), do: true
  defp request_context(%__MODULE__{} = state, subscription_id \\ nil) do
    %RequestContext{
      authenticated_pubkeys: state.authenticated_pubkeys,
@@ -1622,4 +1836,90 @@ defmodule Parrhesia.Web.Connection do
      {:error, :event_rate_limited}
    end
  end
defp maybe_allow_remote_ip_event_ingest(_remote_ip, nil), do: :ok
defp maybe_allow_remote_ip_event_ingest(remote_ip, server) do
IPEventIngestLimiter.allow(remote_ip, server)
catch
:exit, {:noproc, _details} -> :ok
:exit, {:normal, _details} -> :ok
end
defp maybe_allow_relay_event_ingest(nil), do: :ok
defp maybe_allow_relay_event_ingest(server) do
EventIngestLimiter.allow(server)
catch
:exit, {:noproc, _details} -> :ok
:exit, {:normal, _details} -> :ok
end
defp maybe_track_connection_open(%__MODULE__{track_population?: false}), do: :ok
defp maybe_track_connection_open(%__MODULE__{} = state) do
ConnectionStats.connection_open(listener_id(state))
end
defp maybe_track_connection_close(%__MODULE__{track_population?: false}), do: :ok
defp maybe_track_connection_close(%__MODULE__{} = state) do
ConnectionStats.connection_close(listener_id(state))
end
defp maybe_track_subscription_delta(_state, 0), do: :ok
defp maybe_track_subscription_delta(%__MODULE__{track_population?: false}, _delta), do: :ok
defp maybe_track_subscription_delta(%__MODULE__{} = state, delta) do
ConnectionStats.subscriptions_change(listener_id(state), delta)
end
defp listener_id(%__MODULE__{listener: %{id: id}}), do: id
defp listener_id(_state), do: :unknown
defp emit_outbound_queue_drain(0), do: :ok
defp emit_outbound_queue_drain(count) when is_integer(count) and count > 0 do
Telemetry.emit([:parrhesia, :connection, :outbound_queue, :drain], %{count: count}, %{})
end
defp emit_outbound_queue_drop(strategy) do
Telemetry.emit(
[:parrhesia, :connection, :outbound_queue, :drop],
%{count: 1},
%{strategy: strategy}
)
end
defp maybe_emit_rate_limit_hit(reason, traffic_class \\ :generic)
defp maybe_emit_rate_limit_hit(:event_rate_limited, traffic_class) do
emit_rate_limit_hit(:event_ingest_per_connection, traffic_class)
end
defp maybe_emit_rate_limit_hit(:ip_event_rate_limited, traffic_class) do
emit_rate_limit_hit(:event_ingest_per_ip, traffic_class)
end
defp maybe_emit_rate_limit_hit(:relay_event_rate_limited, traffic_class) do
emit_rate_limit_hit(:event_ingest_relay, traffic_class)
end
defp maybe_emit_rate_limit_hit(:subscription_limit_reached, traffic_class) do
emit_rate_limit_hit(:subscriptions_per_connection, traffic_class)
end
defp maybe_emit_rate_limit_hit(:outbound_queue_overflow, traffic_class) do
emit_rate_limit_hit(:outbound_queue, traffic_class)
end
defp maybe_emit_rate_limit_hit(_reason, _traffic_class), do: :ok
defp emit_rate_limit_hit(scope, traffic_class) do
Telemetry.emit(
[:parrhesia, :rate_limit, :hit],
%{count: 1},
%{scope: scope, traffic_class: traffic_class}
)
end
end


@@ -16,6 +16,7 @@ defmodule Parrhesia.Web.Endpoint do
  @spec reload_listener(Supervisor.supervisor(), atom()) :: :ok | {:error, term()}
  def reload_listener(supervisor \\ __MODULE__, listener_id) when is_atom(listener_id) do
    with :ok <- Supervisor.terminate_child(supervisor, {:listener, listener_id}),
         :ok <- clear_pem_cache(),
         {:ok, _pid} <- Supervisor.restart_child(supervisor, {:listener, listener_id}) do
      :ok
    else
@@ -27,17 +28,44 @@ defmodule Parrhesia.Web.Endpoint do
  @spec reload_all(Supervisor.supervisor()) :: :ok | {:error, term()}
  def reload_all(supervisor \\ __MODULE__) do
    listener_ids =
      supervisor
      |> Supervisor.which_children()
      |> Enum.flat_map(fn
        {{:listener, listener_id}, _pid, _type, _modules} -> [listener_id]
        _other -> []
      end)

    with :ok <- terminate_listeners(supervisor, listener_ids),
         :ok <- clear_pem_cache() do
      restart_listeners(supervisor, listener_ids)
    end
  end
defp terminate_listeners(_supervisor, []), do: :ok
defp terminate_listeners(supervisor, [listener_id | rest]) do
case Supervisor.terminate_child(supervisor, {:listener, listener_id}) do
:ok -> terminate_listeners(supervisor, rest)
{:error, _reason} = error -> error
end
end
defp restart_listeners(_supervisor, []), do: :ok
defp restart_listeners(supervisor, [listener_id | rest]) do
case Supervisor.restart_child(supervisor, {:listener, listener_id}) do
{:ok, _pid} -> restart_listeners(supervisor, rest)
{:error, _reason} = error -> error
end
end
# OTP's ssl module caches PEM file contents by filename. When cert/key
# files are replaced on disk, the cache must be cleared so the restarted
# listener reads the updated files.
defp clear_pem_cache do
:ssl.clear_pem_cache()
:ok
  end

  @impl true


@@ -0,0 +1,140 @@
defmodule Parrhesia.Web.EventIngestLimiter do
@moduledoc """
Relay-wide EVENT ingest rate limiting over a fixed time window.
"""
use GenServer
@default_max_events_per_window 10_000
@default_window_seconds 1
@named_table :parrhesia_event_ingest_limiter
@config_key :config
@spec start_link(keyword()) :: GenServer.on_start()
def start_link(opts \\ []) do
max_events_per_window =
normalize_positive_integer(
Keyword.get(opts, :max_events_per_window),
max_events_per_window()
)
window_ms =
normalize_positive_integer(Keyword.get(opts, :window_seconds), window_seconds()) * 1000
init_arg = %{
max_events_per_window: max_events_per_window,
window_ms: window_ms,
named_table?: Keyword.get(opts, :name, __MODULE__) == __MODULE__
}
case Keyword.get(opts, :name, __MODULE__) do
nil -> GenServer.start_link(__MODULE__, init_arg)
name -> GenServer.start_link(__MODULE__, init_arg, name: name)
end
end
@spec allow(GenServer.server()) :: :ok | {:error, :relay_event_rate_limited}
def allow(server \\ __MODULE__)
def allow(__MODULE__) do
case fetch_named_config() do
{:ok, max_events_per_window, window_ms} ->
allow_counter(@named_table, max_events_per_window, window_ms)
:error ->
:ok
end
end
def allow(server), do: GenServer.call(server, :allow)
@impl true
def init(%{
max_events_per_window: max_events_per_window,
window_ms: window_ms,
named_table?: named_table?
}) do
table = create_table(named_table?)
true = :ets.insert(table, {@config_key, max_events_per_window, window_ms})
{:ok,
%{
table: table,
max_events_per_window: max_events_per_window,
window_ms: window_ms
}}
end
@impl true
def handle_call(:allow, _from, state) do
{:reply, allow_counter(state.table, state.max_events_per_window, state.window_ms), state}
end
defp normalize_positive_integer(value, _default) when is_integer(value) and value > 0, do: value
defp normalize_positive_integer(_value, default), do: default
defp create_table(true) do
:ets.new(@named_table, [
:named_table,
:set,
:public,
{:read_concurrency, true},
{:write_concurrency, true}
])
end
defp create_table(false) do
:ets.new(__MODULE__, [:set, :public, {:read_concurrency, true}, {:write_concurrency, true}])
end
defp fetch_named_config do
case :ets.lookup(@named_table, @config_key) do
[{@config_key, max_events_per_window, window_ms}] -> {:ok, max_events_per_window, window_ms}
_other -> :error
end
rescue
ArgumentError -> :error
end
defp allow_counter(table, max_events_per_window, window_ms) do
window_id = System.monotonic_time(:millisecond) |> div(window_ms)
key = {:window, window_id}
count = :ets.update_counter(table, key, {2, 1}, {key, 0})
if count == 1 do
prune_expired_windows(table, window_id)
end
if count <= max_events_per_window do
:ok
else
{:error, :relay_event_rate_limited}
end
rescue
ArgumentError -> :ok
end
defp prune_expired_windows(table, window_id) do
:ets.select_delete(table, [
{{{:window, :"$1"}, :_}, [{:<, :"$1", window_id}], [true]}
])
end
defp max_events_per_window do
case Application.get_env(:parrhesia, :limits, [])
|> Keyword.get(:relay_max_event_ingest_per_window) do
value when is_integer(value) and value > 0 -> value
_other -> @default_max_events_per_window
end
end
defp window_seconds do
case Application.get_env(:parrhesia, :limits, [])
|> Keyword.get(:relay_event_ingest_window_seconds) do
value when is_integer(value) and value > 0 -> value
_other -> @default_window_seconds
end
end
end
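The limiter above keys its ETS counter on an integer window id derived from the monotonic clock, so every request inside the same `window_ms` slice increments one counter and a fresh slice starts counting from zero. A minimal sketch of that fixed-window arithmetic (the millisecond timestamps below are invented for illustration):

```elixir
window_ms = 1_000
window_id = fn now_ms -> div(now_ms, window_ms) end

# Two requests 650 ms apart share a window id, and therefore a counter key...
same_window? = window_id.(1_707_000_000_250) == window_id.(1_707_000_000_900)

# ...while 200 ms later the id changes and counting restarts at zero.
next_window? = window_id.(1_707_000_000_900) == window_id.(1_707_000_001_100)

# same_window? == true, next_window? == false
```

Fixed windows admit up to 2x the limit across a boundary in the worst case; the trade-off is that one `:ets.update_counter/4` per request is all the bookkeeping needed.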


@@ -0,0 +1,169 @@
defmodule Parrhesia.Web.IPEventIngestLimiter do
@moduledoc """
Per-IP EVENT ingest rate limiting over a fixed time window.
"""
use GenServer
@default_max_events_per_window 1_000
@default_window_seconds 1
@named_table :parrhesia_ip_event_ingest_limiter
@config_key :config
@spec start_link(keyword()) :: GenServer.on_start()
def start_link(opts \\ []) do
max_events_per_window =
normalize_positive_integer(
Keyword.get(opts, :max_events_per_window),
max_events_per_window()
)
window_ms =
normalize_positive_integer(Keyword.get(opts, :window_seconds), window_seconds()) * 1000
init_arg = %{
max_events_per_window: max_events_per_window,
window_ms: window_ms,
named_table?: Keyword.get(opts, :name, __MODULE__) == __MODULE__
}
case Keyword.get(opts, :name, __MODULE__) do
nil -> GenServer.start_link(__MODULE__, init_arg)
name -> GenServer.start_link(__MODULE__, init_arg, name: name)
end
end
@spec allow(tuple() | String.t() | nil, GenServer.server()) ::
:ok | {:error, :ip_event_rate_limited}
def allow(remote_ip, server \\ __MODULE__)
def allow(remote_ip, __MODULE__) do
case normalize_remote_ip(remote_ip) do
nil ->
:ok
normalized_remote_ip ->
case fetch_named_config() do
{:ok, max_events_per_window, window_ms} ->
allow_counter(@named_table, normalized_remote_ip, max_events_per_window, window_ms)
:error ->
:ok
end
end
end
def allow(remote_ip, server), do: GenServer.call(server, {:allow, remote_ip})
@impl true
def init(%{
max_events_per_window: max_events_per_window,
window_ms: window_ms,
named_table?: named_table?
}) do
table = create_table(named_table?)
true = :ets.insert(table, {@config_key, max_events_per_window, window_ms})
{:ok,
%{
table: table,
max_events_per_window: max_events_per_window,
window_ms: window_ms
}}
end
@impl true
def handle_call({:allow, remote_ip}, _from, state) do
reply =
case normalize_remote_ip(remote_ip) do
nil ->
:ok
normalized_remote_ip ->
allow_counter(
state.table,
normalized_remote_ip,
state.max_events_per_window,
state.window_ms
)
end
{:reply, reply, state}
end
defp normalize_positive_integer(value, _default) when is_integer(value) and value > 0, do: value
defp normalize_positive_integer(_value, default), do: default
defp create_table(true) do
:ets.new(@named_table, [
:named_table,
:set,
:public,
{:read_concurrency, true},
{:write_concurrency, true}
])
end
defp create_table(false) do
:ets.new(__MODULE__, [:set, :public, {:read_concurrency, true}, {:write_concurrency, true}])
end
defp fetch_named_config do
case :ets.lookup(@named_table, @config_key) do
[{@config_key, max_events_per_window, window_ms}] -> {:ok, max_events_per_window, window_ms}
_other -> :error
end
rescue
ArgumentError -> :error
end
defp allow_counter(table, remote_ip, max_events_per_window, window_ms) do
window_id = System.monotonic_time(:millisecond) |> div(window_ms)
key = {:window, remote_ip, window_id}
count = :ets.update_counter(table, key, {2, 1}, {key, 0})
if count == 1 do
prune_expired_windows(table, window_id)
end
if count <= max_events_per_window do
:ok
else
{:error, :ip_event_rate_limited}
end
rescue
ArgumentError -> :ok
end
defp prune_expired_windows(table, window_id) do
:ets.select_delete(table, [
{{{:window, :"$1", :"$2"}, :_}, [{:<, :"$2", window_id}], [true]}
])
end
defp normalize_remote_ip({_, _, _, _} = remote_ip), do: :inet.ntoa(remote_ip) |> to_string()
defp normalize_remote_ip({_, _, _, _, _, _, _, _} = remote_ip),
do: :inet.ntoa(remote_ip) |> to_string()
defp normalize_remote_ip(remote_ip) when is_binary(remote_ip) and remote_ip != "", do: remote_ip
defp normalize_remote_ip(_remote_ip), do: nil
defp max_events_per_window do
case Application.get_env(:parrhesia, :limits, [])
|> Keyword.get(:ip_max_event_ingest_per_window) do
value when is_integer(value) and value > 0 -> value
_other -> @default_max_events_per_window
end
end
defp window_seconds do
case Application.get_env(:parrhesia, :limits, [])
|> Keyword.get(:ip_event_ingest_window_seconds) do
value when is_integer(value) and value > 0 -> value
_other -> @default_window_seconds
end
end
end


@@ -21,6 +21,7 @@ defmodule Parrhesia.Web.Listener do
          id: atom(),
          enabled: boolean(),
          bind: %{ip: tuple(), port: pos_integer()},
          max_connections: pos_integer() | :infinity,
          transport: map(),
          proxy: map(),
          network: map(),
@@ -167,12 +168,20 @@ defmodule Parrhesia.Web.Listener do
      _other -> listener.transport.scheme
    end

    thousand_island_options =
      listener.bandit_options
      |> Keyword.get(:thousand_island_options, [])
      |> maybe_put_connection_limit(listener.max_connections)

    [
      ip: listener.bind.ip,
      port: listener.bind.port,
      scheme: scheme,
      plug: {Parrhesia.Web.ListenerPlug, listener: listener}
    ] ++
      TLS.bandit_options(listener.transport.tls) ++
      [thousand_island_options: thousand_island_options] ++
      Keyword.delete(listener.bandit_options, :thousand_island_options)
  end
  defp normalize_listeners(listeners) when is_list(listeners) do
@@ -195,6 +204,7 @@ defmodule Parrhesia.Web.Listener do
    id = normalize_atom(fetch_value(listener, :id), :listener)
    enabled = normalize_boolean(fetch_value(listener, :enabled), true)
    bind = normalize_bind(fetch_value(listener, :bind), listener)
    max_connections = normalize_max_connections(fetch_value(listener, :max_connections), id)
    transport = normalize_transport(fetch_value(listener, :transport))
    proxy = normalize_proxy(fetch_value(listener, :proxy))
    network = normalize_access(fetch_value(listener, :network), %{allow_all?: true})
@@ -207,6 +217,7 @@ defmodule Parrhesia.Web.Listener do
      id: id,
      enabled: enabled,
      bind: bind,
      max_connections: max_connections,
      transport: transport,
      proxy: proxy,
      network: network,
@@ -233,6 +244,14 @@ defmodule Parrhesia.Web.Listener do
    }
  end
defp normalize_max_connections(value, _listener_id) when is_integer(value) and value > 0,
do: value
defp normalize_max_connections(:infinity, _listener_id), do: :infinity
defp normalize_max_connections("infinity", _listener_id), do: :infinity
defp normalize_max_connections(_value, :metrics), do: 1_024
defp normalize_max_connections(_value, _listener_id), do: 20_000
  defp default_bind_ip(listener) do
    normalize_ip(fetch_value(listener, :ip), {0, 0, 0, 0})
  end
@@ -349,6 +368,27 @@ defmodule Parrhesia.Web.Listener do
  defp normalize_bandit_options(options) when is_list(options), do: options
  defp normalize_bandit_options(_options), do: []
defp maybe_put_connection_limit(thousand_island_options, :infinity)
when is_list(thousand_island_options),
do: Keyword.put_new(thousand_island_options, :num_connections, :infinity)
defp maybe_put_connection_limit(thousand_island_options, max_connections)
when is_list(thousand_island_options) and is_integer(max_connections) and
max_connections > 0 do
num_acceptors =
case Keyword.get(thousand_island_options, :num_acceptors, 100) do
value when is_integer(value) and value > 0 -> value
_other -> 100
end
per_acceptor_limit = ceil(max_connections / num_acceptors)
Keyword.put_new(thousand_island_options, :num_connections, per_acceptor_limit)
end
defp maybe_put_connection_limit(thousand_island_options, _max_connections)
when is_list(thousand_island_options),
do: thousand_island_options
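Thousand Island's `:num_connections` limit is applied per acceptor process, which is why `maybe_put_connection_limit/2` divides the listener-wide cap by the acceptor count, rounding up so the effective total never falls below the configured maximum. The arithmetic with the defaults used above:

```elixir
# Defaults from this module: 20_000 connections across 100 acceptors.
max_connections = 20_000
num_acceptors = 100

# ceil/1 rounds up, so uneven divisions still cover the full cap.
per_acceptor_limit = ceil(max_connections / num_acceptors)
# per_acceptor_limit == 200
```

Rounding up means the actual ceiling can slightly exceed `max_connections` when the cap does not divide evenly, a deliberate bias toward accepting rather than refusing connections.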
  defp normalize_access(access, defaults) when is_map(access) do
    %{
      public?:
@@ -516,6 +556,7 @@ defmodule Parrhesia.Web.Listener do
      id: :public,
      enabled: true,
      bind: %{ip: {0, 0, 0, 0}, port: 4413},
      max_connections: 20_000,
      transport: %{scheme: :http, tls: TLS.default_config()},
      proxy: %{trusted_cidrs: [], honor_x_forwarded_for: true},
      network: %{public?: false, private_networks_only?: false, allow_cidrs: [], allow_all?: true},


@@ -35,6 +35,9 @@ defmodule Parrhesia.Web.Management do
      {:error, :stale_event} ->
        send_json(conn, 401, %{"ok" => false, "error" => "stale-auth-event"})
{:error, :replayed_auth_event} ->
send_json(conn, 401, %{"ok" => false, "error" => "replayed-auth-event"})
      {:error, :invalid_method_tag} ->
        send_json(conn, 401, %{"ok" => false, "error" => "auth-method-tag-mismatch"})


@@ -1,12 +1,14 @@
defmodule Parrhesia.Web.Readiness do
  @moduledoc false

  alias Parrhesia.PostgresRepos

  @spec ready?() :: boolean()
  def ready? do
    process_ready?(Parrhesia.Subscriptions.Index) and
      process_ready?(Parrhesia.Auth.Challenges) and
      negentropy_ready?() and
      repos_ready?()
  end

  defp negentropy_ready? do
@@ -29,4 +31,8 @@ defmodule Parrhesia.Web.Readiness do
      nil -> false
    end
  end
defp repos_ready? do
Enum.all?(PostgresRepos.started_repos(), &process_ready?/1)
end
end


@@ -1,32 +1,58 @@
defmodule Parrhesia.Web.RelayInfo do
  @moduledoc """
  NIP-11 relay information document.

  `document/1` builds the JSON-serialisable relay info map served on
  `GET /relay` with `Accept: application/nostr+json`, including supported NIPs,
  limitations, and the relay's advertised public key.
  """

  alias Parrhesia.API.Identity
  alias Parrhesia.Metadata
  alias Parrhesia.NIP43
  alias Parrhesia.Web.Listener

  @spec document(map()) :: map()
  def document(listener) do
    document = %{
      "name" => Metadata.name(),
      "description" => "Nostr/Marmot relay",
      "pubkey" => relay_pubkey(),
      "self" => relay_pubkey(),
      "supported_nips" => supported_nips(),
      "software" => "https://git.teralink.net/self/parrhesia",
      "limitation" => limitations(listener)
    }

    if Metadata.hide_version?() do
      document
    else
      Map.put(document, "version", Metadata.version())
    end
  end

  defp supported_nips do
    base = [1, 9, 11, 13, 17, 40, 42, 44, 45, 50, 59, 62, 70]
with_nip43 =
if NIP43.enabled?() do
base ++ [43]
else
base
end
with_nip66 =
if Parrhesia.NIP66.enabled?() do
with_nip43 ++ [66]
else
with_nip43
end
    with_negentropy =
      if negentropy_enabled?() do
        with_nip66 ++ [77]
      else
        with_nip66
      end

    with_negentropy ++ [86, 98]
@@ -38,7 +64,12 @@ defmodule Parrhesia.Web.RelayInfo do
      "max_subscriptions" =>
        Parrhesia.Config.get([:limits, :max_subscriptions_per_connection], 32),
      "max_filters" => Parrhesia.Config.get([:limits, :max_filters_per_req], 16),
      "max_limit" => Parrhesia.Config.get([:limits, :max_filter_limit], 500),
      "max_event_tags" => Parrhesia.Config.get([:limits, :max_tags_per_event], 256),
      "min_pow_difficulty" => Parrhesia.Config.get([:policies, :min_pow_difficulty], 0),
      "auth_required" => Listener.relay_auth_required?(listener),
      "payment_required" => false,
      "restricted_writes" => restricted_writes?(listener)
    }
  end
@@ -54,4 +85,12 @@ defmodule Parrhesia.Web.RelayInfo do
      {:error, _reason} -> nil
    end
  end
defp restricted_writes?(listener) do
listener.auth.nip42_required or
(listener.baseline_acl.write != [] and
Enum.any?(listener.baseline_acl.write, &(&1.action == :deny))) or
Parrhesia.Config.get([:policies, :auth_required_for_writes], false) or
Parrhesia.Config.get([:policies, :min_pow_difficulty], 0) > 0
end
end
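For illustration, a relay info document assembled this way might serialise like the following (field values here are assumptions for a default configuration with negentropy enabled and NIP-43/NIP-66 disabled, not output captured from a running relay):

```json
{
  "name": "Parrhesia",
  "description": "Nostr/Marmot relay",
  "pubkey": "<hex relay pubkey>",
  "self": "<hex relay pubkey>",
  "supported_nips": [1, 9, 11, 13, 17, 40, 42, 44, 45, 50, 59, 62, 70, 77, 86, 98],
  "software": "https://git.teralink.net/self/parrhesia",
  "version": "0.6.0",
  "limitation": {
    "max_subscriptions": 32,
    "max_filters": 16,
    "max_limit": 500,
    "max_event_tags": 256,
    "min_pow_difficulty": 0,
    "auth_required": false,
    "payment_required": false,
    "restricted_writes": false
  }
}
```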

mix.exs

@@ -4,11 +4,13 @@ defmodule Parrhesia.MixProject do
  def project do
    [
      app: :parrhesia,
      version: "0.6.0",
      elixir: "~> 1.18",
      elixirc_paths: elixirc_paths(Mix.env()),
      start_permanent: Mix.env() == :prod,
      deps: deps(),
      aliases: aliases(),
      docs: docs()
    ]
  end
@@ -20,8 +22,11 @@ defmodule Parrhesia.MixProject do
    ]
  end

  defp elixirc_paths(:test), do: ["lib", "test/support"]
  defp elixirc_paths(_env), do: ["lib"]

  def cli do
    [preferred_envs: [precommit: :test, bench: :test, "bench.update": :test]]
  end

  # Run "mix help deps" to learn about dependencies.
@@ -49,6 +54,7 @@ defmodule Parrhesia.MixProject do
      # Project tooling
      {:credo, "~> 1.7", only: [:dev, :test], runtime: false},
      {:ex_doc, "~> 0.34", only: :dev, runtime: false},
      {:deps_changelog, "~> 0.3"},
      {:igniter, "~> 0.6", only: [:dev, :test]}
    ]
@@ -65,6 +71,7 @@ defmodule Parrhesia.MixProject do
      "test.node_sync_e2e": ["cmd ./scripts/run_node_sync_e2e.sh"],
      "test.node_sync_docker_e2e": ["cmd ./scripts/run_node_sync_docker_e2e.sh"],
      bench: ["cmd ./scripts/run_bench_compare.sh"],
      "bench.update": ["cmd ./scripts/run_bench_update.sh"],
      # cov: ["cmd mix coveralls.lcov"],
      lint: ["format --check-formatted", "credo"],
      precommit: [
@@ -78,4 +85,39 @@ defmodule Parrhesia.MixProject do
      ]
    ]
  end
defp docs do
[
main: "readme",
output: "_build/doc",
extras: [
"README.md",
"docs/LOCAL_API.md",
"docs/SYNC.md",
"docs/ARCH.md",
"docs/CLUSTER.md",
"BENCHMARK.md"
],
groups_for_modules: [
"Embedded API": [
Parrhesia.API.ACL,
Parrhesia.API.Admin,
Parrhesia.API.Auth,
Parrhesia.API.Auth.Context,
Parrhesia.API.Events,
Parrhesia.API.Events.PublishResult,
Parrhesia.API.Identity,
Parrhesia.API.RequestContext,
Parrhesia.API.Stream,
Parrhesia.API.Sync
],
Runtime: [
Parrhesia,
Parrhesia.Release,
Parrhesia.Runtime
]
],
nest_modules_by_prefix: [Parrhesia.API]
]
end
end


@@ -8,9 +8,11 @@
  "db_connection": {:hex, :db_connection, "2.9.0", "a6a97c5c958a2d7091a58a9be40caf41ab496b0701d21e1d1abff3fa27a7f371", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "17d502eacaf61829db98facf6f20808ed33da6ccf495354a41e64fe42f9c509c"},
  "decimal": {:hex, :decimal, "2.3.0", "3ad6255aa77b4a3c4f818171b12d237500e63525c2fd056699967a3e7ea20f62", [:mix], [], "hexpm", "a4d66355cb29cb47c3cf30e71329e58361cfcb37c34235ef3bf1d7bf3773aeac"},
  "deps_changelog": {:hex, :deps_changelog, "0.3.5", "65981997d9bc893b8027a0c03da093a4083328c00b17f562df269c2b61d44073", [:mix], [], "hexpm", "298fcd7794395d8e61dba8d29ce8fcee09f1df4d48adb273a41e8f4a1736491e"},
"earmark_parser": {:hex, :earmark_parser, "1.4.44", "f20830dd6b5c77afe2b063777ddbbff09f9759396500cdbe7523efd58d7a339c", [:mix], [], "hexpm", "4778ac752b4701a5599215f7030989c989ffdc4f6df457c5f36938cc2d2a2750"},
"ecto": {:hex, :ecto, "3.13.5", "9d4a69700183f33bf97208294768e561f5c7f1ecf417e0fa1006e4a91713a834", [:mix], [{:decimal, "~> 2.0", [hex: :decimal, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "df9efebf70cf94142739ba357499661ef5dbb559ef902b68ea1f3c1fabce36de"}, "ecto": {:hex, :ecto, "3.13.5", "9d4a69700183f33bf97208294768e561f5c7f1ecf417e0fa1006e4a91713a834", [:mix], [{:decimal, "~> 2.0", [hex: :decimal, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "df9efebf70cf94142739ba357499661ef5dbb559ef902b68ea1f3c1fabce36de"},
"ecto_sql": {:hex, :ecto_sql, "3.13.5", "2f8282b2ad97bf0f0d3217ea0a6fff320ead9e2f8770f810141189d182dc304e", [:mix], [{:db_connection, "~> 2.4.1 or ~> 2.5", [hex: :db_connection, repo: "hexpm", optional: false]}, {:ecto, "~> 3.13.0", [hex: :ecto, repo: "hexpm", optional: false]}, {:myxql, "~> 0.7", [hex: :myxql, repo: "hexpm", optional: true]}, {:postgrex, "~> 0.19 or ~> 1.0", [hex: :postgrex, repo: "hexpm", optional: true]}, {:tds, "~> 2.1.1 or ~> 2.2", [hex: :tds, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.0 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "aa36751f4e6a2b56ae79efb0e088042e010ff4935fc8684e74c23b1f49e25fdc"}, "ecto_sql": {:hex, :ecto_sql, "3.13.5", "2f8282b2ad97bf0f0d3217ea0a6fff320ead9e2f8770f810141189d182dc304e", [:mix], [{:db_connection, "~> 2.4.1 or ~> 2.5", [hex: :db_connection, repo: "hexpm", optional: false]}, {:ecto, "~> 3.13.0", [hex: :ecto, repo: "hexpm", optional: false]}, {:myxql, "~> 0.7", [hex: :myxql, repo: "hexpm", optional: true]}, {:postgrex, "~> 0.19 or ~> 1.0", [hex: :postgrex, repo: "hexpm", optional: true]}, {:tds, "~> 2.1.1 or ~> 2.2", [hex: :tds, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.0 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "aa36751f4e6a2b56ae79efb0e088042e010ff4935fc8684e74c23b1f49e25fdc"},
"elixir_make": {:hex, :elixir_make, "0.9.0", "6484b3cd8c0cee58f09f05ecaf1a140a8c97670671a6a0e7ab4dc326c3109726", [:mix], [], "hexpm", "db23d4fd8b757462ad02f8aa73431a426fe6671c80b200d9710caf3d1dd0ffdb"}, "elixir_make": {:hex, :elixir_make, "0.9.0", "6484b3cd8c0cee58f09f05ecaf1a140a8c97670671a6a0e7ab4dc326c3109726", [:mix], [], "hexpm", "db23d4fd8b757462ad02f8aa73431a426fe6671c80b200d9710caf3d1dd0ffdb"},
"ex_doc": {:hex, :ex_doc, "0.40.1", "67542e4b6dde74811cfd580e2c0149b78010fd13001fda7cfeb2b2c2ffb1344d", [:mix], [{:earmark_parser, "~> 1.4.44", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "bcef0e2d360d93ac19f01a85d58f91752d930c0a30e2681145feea6bd3516e00"},
"file_system": {:hex, :file_system, "1.1.1", "31864f4685b0148f25bd3fbef2b1228457c0c89024ad67f7a81a3ffbc0bbad3a", [:mix], [], "hexpm", "7a15ff97dfe526aeefb090a7a9d3d03aa907e100e262a0f8f7746b78f8f87a5d"}, "file_system": {:hex, :file_system, "1.1.1", "31864f4685b0148f25bd3fbef2b1228457c0c89024ad67f7a81a3ffbc0bbad3a", [:mix], [], "hexpm", "7a15ff97dfe526aeefb090a7a9d3d03aa907e100e262a0f8f7746b78f8f87a5d"},
"finch": {:hex, :finch, "0.21.0", "b1c3b2d48af02d0c66d2a9ebfb5622be5c5ecd62937cf79a88a7f98d48a8290c", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mint, "~> 1.6.2 or ~> 1.7", [hex: :mint, repo: "hexpm", optional: false]}, {:nimble_options, "~> 0.4 or ~> 1.0", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:nimble_pool, "~> 1.1", [hex: :nimble_pool, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "87dc6e169794cb2570f75841a19da99cfde834249568f2a5b121b809588a4377"}, "finch": {:hex, :finch, "0.21.0", "b1c3b2d48af02d0c66d2a9ebfb5622be5c5ecd62937cf79a88a7f98d48a8290c", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mint, "~> 1.6.2 or ~> 1.7", [hex: :mint, repo: "hexpm", optional: false]}, {:nimble_options, "~> 0.4 or ~> 1.0", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:nimble_pool, "~> 1.1", [hex: :nimble_pool, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "87dc6e169794cb2570f75841a19da99cfde834249568f2a5b121b809588a4377"},
"glob_ex": {:hex, :glob_ex, "0.1.11", "cb50d3f1ef53f6ca04d6252c7fde09fd7a1cf63387714fe96f340a1349e62c93", [:mix], [], "hexpm", "342729363056e3145e61766b416769984c329e4378f1d558b63e341020525de4"}, "glob_ex": {:hex, :glob_ex, "0.1.11", "cb50d3f1ef53f6ca04d6252c7fde09fd7a1cf63387714fe96f340a1349e62c93", [:mix], [], "hexpm", "342729363056e3145e61766b416769984c329e4378f1d558b63e341020525de4"},
@@ -18,9 +20,13 @@
"igniter": {:hex, :igniter, "0.7.4", "b5f9dd512eb1e672f1c141b523142b5b4602fcca231df5b4e362999df4b88e14", [:mix], [{:glob_ex, "~> 0.1.7", [hex: :glob_ex, repo: "hexpm", optional: false]}, {:jason, "~> 1.4", [hex: :jason, repo: "hexpm", optional: false]}, {:owl, "~> 0.11", [hex: :owl, repo: "hexpm", optional: false]}, {:phx_new, "~> 1.7", [hex: :phx_new, repo: "hexpm", optional: true]}, {:req, "~> 0.5", [hex: :req, repo: "hexpm", optional: false]}, {:rewrite, ">= 1.1.1 and < 2.0.0-0", [hex: :rewrite, repo: "hexpm", optional: false]}, {:sourceror, "~> 1.4", [hex: :sourceror, repo: "hexpm", optional: false]}, {:spitfire, ">= 0.1.3 and < 1.0.0-0", [hex: :spitfire, repo: "hexpm", optional: false]}], "hexpm", "971b240ee916a06b1af56381a262d9eeaff9610eddc299d61a213cd7a9d79efd"}, "igniter": {:hex, :igniter, "0.7.4", "b5f9dd512eb1e672f1c141b523142b5b4602fcca231df5b4e362999df4b88e14", [:mix], [{:glob_ex, "~> 0.1.7", [hex: :glob_ex, repo: "hexpm", optional: false]}, {:jason, "~> 1.4", [hex: :jason, repo: "hexpm", optional: false]}, {:owl, "~> 0.11", [hex: :owl, repo: "hexpm", optional: false]}, {:phx_new, "~> 1.7", [hex: :phx_new, repo: "hexpm", optional: true]}, {:req, "~> 0.5", [hex: :req, repo: "hexpm", optional: false]}, {:rewrite, ">= 1.1.1 and < 2.0.0-0", [hex: :rewrite, repo: "hexpm", optional: false]}, {:sourceror, "~> 1.4", [hex: :sourceror, repo: "hexpm", optional: false]}, {:spitfire, ">= 0.1.3 and < 1.0.0-0", [hex: :spitfire, repo: "hexpm", optional: false]}], "hexpm", "971b240ee916a06b1af56381a262d9eeaff9610eddc299d61a213cd7a9d79efd"},
"jason": {:hex, :jason, "1.4.4", "b9226785a9aa77b6857ca22832cffa5d5011a667207eb2a0ad56adb5db443b8a", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "c5eb0cab91f094599f94d55bc63409236a8ec69a21a67814529e8d5f6cc90b3b"}, "jason": {:hex, :jason, "1.4.4", "b9226785a9aa77b6857ca22832cffa5d5011a667207eb2a0ad56adb5db443b8a", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "c5eb0cab91f094599f94d55bc63409236a8ec69a21a67814529e8d5f6cc90b3b"},
"lib_secp256k1": {:hex, :lib_secp256k1, "0.7.1", "53cad778b8da3a29e453a7a477517d99fb5f13f615c8050eb2db8fd1dce7a1db", [:make, :mix], [{:elixir_make, "~> 0.9", [hex: :elixir_make, repo: "hexpm", optional: false]}], "hexpm", "78bdd3661a17448aff5aeec5ca74c8ddbc09b01f0ecfa3ba1aba3e8ae47ab2b3"}, "lib_secp256k1": {:hex, :lib_secp256k1, "0.7.1", "53cad778b8da3a29e453a7a477517d99fb5f13f615c8050eb2db8fd1dce7a1db", [:make, :mix], [{:elixir_make, "~> 0.9", [hex: :elixir_make, repo: "hexpm", optional: false]}], "hexpm", "78bdd3661a17448aff5aeec5ca74c8ddbc09b01f0ecfa3ba1aba3e8ae47ab2b3"},
"makeup": {:hex, :makeup, "1.2.1", "e90ac1c65589ef354378def3ba19d401e739ee7ee06fb47f94c687016e3713d1", [:mix], [{:nimble_parsec, "~> 1.4", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "d36484867b0bae0fea568d10131197a4c2e47056a6fbe84922bf6ba71c8d17ce"},
"makeup_elixir": {:hex, :makeup_elixir, "1.0.1", "e928a4f984e795e41e3abd27bfc09f51db16ab8ba1aebdba2b3a575437efafc2", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 1.2.3 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "7284900d412a3e5cfd97fdaed4f5ed389b8f2b4cb49efc0eb3bd10e2febf9507"},
"makeup_erlang": {:hex, :makeup_erlang, "1.0.3", "4252d5d4098da7415c390e847c814bad3764c94a814a0b4245176215615e1035", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "953297c02582a33411ac6208f2c6e55f0e870df7f80da724ed613f10e6706afd"},
"mime": {:hex, :mime, "2.0.7", "b8d739037be7cd402aee1ba0306edfdef982687ee7e9859bee6198c1e7e2f128", [:mix], [], "hexpm", "6171188e399ee16023ffc5b76ce445eb6d9672e2e241d2df6050f3c771e80ccd"}, "mime": {:hex, :mime, "2.0.7", "b8d739037be7cd402aee1ba0306edfdef982687ee7e9859bee6198c1e7e2f128", [:mix], [], "hexpm", "6171188e399ee16023ffc5b76ce445eb6d9672e2e241d2df6050f3c771e80ccd"},
"mint": {:hex, :mint, "1.7.1", "113fdb2b2f3b59e47c7955971854641c61f378549d73e829e1768de90fc1abf1", [:mix], [{:castore, "~> 0.1.0 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:hpax, "~> 0.1.1 or ~> 0.2.0 or ~> 1.0", [hex: :hpax, repo: "hexpm", optional: false]}], "hexpm", "fceba0a4d0f24301ddee3024ae116df1c3f4bb7a563a731f45fdfeb9d39a231b"}, "mint": {:hex, :mint, "1.7.1", "113fdb2b2f3b59e47c7955971854641c61f378549d73e829e1768de90fc1abf1", [:mix], [{:castore, "~> 0.1.0 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:hpax, "~> 0.1.1 or ~> 0.2.0 or ~> 1.0", [hex: :hpax, repo: "hexpm", optional: false]}], "hexpm", "fceba0a4d0f24301ddee3024ae116df1c3f4bb7a563a731f45fdfeb9d39a231b"},
"nimble_options": {:hex, :nimble_options, "1.1.1", "e3a492d54d85fc3fd7c5baf411d9d2852922f66e69476317787a7b2bb000a61b", [:mix], [], "hexpm", "821b2470ca9442c4b6984882fe9bb0389371b8ddec4d45a9504f00a66f650b44"}, "nimble_options": {:hex, :nimble_options, "1.1.1", "e3a492d54d85fc3fd7c5baf411d9d2852922f66e69476317787a7b2bb000a61b", [:mix], [], "hexpm", "821b2470ca9442c4b6984882fe9bb0389371b8ddec4d45a9504f00a66f650b44"},
"nimble_parsec": {:hex, :nimble_parsec, "1.4.2", "8efba0122db06df95bfaa78f791344a89352ba04baedd3849593bfce4d0dc1c6", [:mix], [], "hexpm", "4b21398942dda052b403bbe1da991ccd03a053668d147d53fb8c4e0efe09c973"},
"nimble_pool": {:hex, :nimble_pool, "1.1.0", "bf9c29fbdcba3564a8b800d1eeb5a3c58f36e1e11d7b7fb2e084a643f645f06b", [:mix], [], "hexpm", "af2e4e6b34197db81f7aad230c1118eac993acc0dae6bc83bac0126d4ae0813a"}, "nimble_pool": {:hex, :nimble_pool, "1.1.0", "bf9c29fbdcba3564a8b800d1eeb5a3c58f36e1e11d7b7fb2e084a643f645f06b", [:mix], [], "hexpm", "af2e4e6b34197db81f7aad230c1118eac993acc0dae6bc83bac0126d4ae0813a"},
"owl": {:hex, :owl, "0.13.0", "26010e066d5992774268f3163506972ddac0a7e77bfe57fa42a250f24d6b876e", [:mix], [{:ucwidth, "~> 0.2", [hex: :ucwidth, repo: "hexpm", optional: true]}], "hexpm", "59bf9d11ce37a4db98f57cb68fbfd61593bf419ec4ed302852b6683d3d2f7475"}, "owl": {:hex, :owl, "0.13.0", "26010e066d5992774268f3163506972ddac0a7e77bfe57fa42a250f24d6b876e", [:mix], [{:ucwidth, "~> 0.2", [hex: :ucwidth, repo: "hexpm", optional: true]}], "hexpm", "59bf9d11ce37a4db98f57cb68fbfd61593bf419ec4ed302852b6683d3d2f7475"},
"plug": {:hex, :plug, "1.19.1", "09bac17ae7a001a68ae393658aa23c7e38782be5c5c00c80be82901262c394c0", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.1.1 or ~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "560a0017a8f6d5d30146916862aaf9300b7280063651dd7e532b8be168511e62"}, "plug": {:hex, :plug, "1.19.1", "09bac17ae7a001a68ae393658aa23c7e38782be5c5c00c80be82901262c394c0", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.1.1 or ~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "560a0017a8f6d5d30146916862aaf9300b7280063651dd7e532b8be168511e62"},


@@ -0,0 +1,27 @@
defmodule Parrhesia.Repo.Migrations.AddNip50FtsAndTrigramSearch do
use Ecto.Migration
def up do
execute("CREATE EXTENSION IF NOT EXISTS pg_trgm")
execute("""
CREATE INDEX events_content_fts_idx
ON events
USING GIN (to_tsvector('simple', content))
WHERE deleted_at IS NULL
""")
execute("""
CREATE INDEX events_content_trgm_idx
ON events
USING GIN (content gin_trgm_ops)
WHERE deleted_at IS NULL
""")
end
def down do
execute("DROP INDEX IF EXISTS events_content_trgm_idx")
execute("DROP INDEX IF EXISTS events_content_fts_idx")
execute("DROP EXTENSION IF EXISTS pg_trgm")
end
end


@@ -0,0 +1,46 @@
defmodule Parrhesia.Repo.Migrations.AddBinaryIdentifierLengthConstraints do
use Ecto.Migration
@constraints [
{"event_ids", "event_ids_id_length_check", "octet_length(id) = 32"},
{"events", "events_id_length_check", "octet_length(id) = 32"},
{"events", "events_pubkey_length_check", "octet_length(pubkey) = 32"},
{"events", "events_sig_length_check", "octet_length(sig) = 64"},
{"event_tags", "event_tags_event_id_length_check", "octet_length(event_id) = 32"},
{"replaceable_event_state", "replaceable_event_state_pubkey_length_check",
"octet_length(pubkey) = 32"},
{"replaceable_event_state", "replaceable_event_state_event_id_length_check",
"octet_length(event_id) = 32"},
{"addressable_event_state", "addressable_event_state_pubkey_length_check",
"octet_length(pubkey) = 32"},
{"addressable_event_state", "addressable_event_state_event_id_length_check",
"octet_length(event_id) = 32"},
{"banned_pubkeys", "banned_pubkeys_pubkey_length_check", "octet_length(pubkey) = 32"},
{"allowed_pubkeys", "allowed_pubkeys_pubkey_length_check", "octet_length(pubkey) = 32"},
{"banned_events", "banned_events_event_id_length_check", "octet_length(event_id) = 32"},
{"group_memberships", "group_memberships_pubkey_length_check", "octet_length(pubkey) = 32"},
{"group_roles", "group_roles_pubkey_length_check", "octet_length(pubkey) = 32"},
{"management_audit_logs", "management_audit_logs_actor_pubkey_length_check",
"actor_pubkey IS NULL OR octet_length(actor_pubkey) = 32"},
{"acl_rules", "acl_rules_principal_length_check", "octet_length(principal) = 32"}
]
def up do
Enum.each(@constraints, fn {table_name, constraint_name, expression} ->
execute("""
ALTER TABLE #{table_name}
ADD CONSTRAINT #{constraint_name}
CHECK (#{expression})
""")
end)
end
def down do
Enum.each(@constraints, fn {table_name, constraint_name, _expression} ->
execute("""
ALTER TABLE #{table_name}
DROP CONSTRAINT #{constraint_name}
""")
end)
end
end


@@ -724,7 +724,11 @@ defmodule NodeSyncE2E.Runner do
%{
"created_at" => System.system_time(:second),
"kind" => 27_235,
"tags" => [
["method", method],
["u", url],
["nonce", "#{System.unique_integer([:positive, :monotonic])}"]
],
"content" => ""
}
end


@@ -10,9 +10,10 @@ usage:
./scripts/run_bench_compare.sh
Runs the same nostr-bench suite against:
1) Parrhesia (Postgres, temporary prod relay via run_e2e_suite.sh)
2) Parrhesia (in-memory storage, temporary prod relay via run_e2e_suite.sh)
3) strfry (ephemeral instance) — optional, skipped if not in PATH
4) nostr-rs-relay (ephemeral sqlite instance) — optional, skipped if not in PATH
Environment:
PARRHESIA_BENCH_RUNS Number of comparison runs (default: 2)
@@ -247,7 +248,7 @@ echo " ${NOSTR_BENCH_VERSION}"
echo
for run in $(seq 1 "$RUNS"); do
echo "[run ${run}/${RUNS}] Parrhesia (Postgres)"
parrhesia_log="$WORK_DIR/parrhesia_${run}.log"
if ! ./scripts/run_nostr_bench.sh all >"$parrhesia_log" 2>&1; then
echo "Parrhesia benchmark failed. Log: $parrhesia_log" >&2
@@ -255,6 +256,14 @@ for run in $(seq 1 "$RUNS"); do
exit 1
fi
echo "[run ${run}/${RUNS}] Parrhesia (Memory)"
parrhesia_memory_log="$WORK_DIR/parrhesia_memory_${run}.log"
if ! PARRHESIA_BENCH_STORAGE_BACKEND=memory ./scripts/run_nostr_bench.sh all >"$parrhesia_memory_log" 2>&1; then
echo "Parrhesia memory benchmark failed. Log: $parrhesia_memory_log" >&2
tail -n 120 "$parrhesia_memory_log" >&2 || true
exit 1
fi
if (( HAS_STRFRY )); then
echo "[run ${run}/${RUNS}] strfry"
strfry_log="$WORK_DIR/strfry_${run}.log"
@@ -364,6 +373,7 @@ function loadRuns(prefix) {
}
const parrhesiaRuns = loadRuns("parrhesia");
const parrhesiaMemoryRuns = loadRuns("parrhesia_memory");
const strfryRuns = hasStrfry ? loadRuns("strfry") : [];
const nostrRsRuns = hasNostrRs ? loadRuns("nostr_rs_relay") : [];
@@ -382,7 +392,10 @@ function summarise(allRuns) {
return out;
}
const summary = {
parrhesia: summarise(parrhesiaRuns),
parrhesiaMemory: summarise(parrhesiaMemoryRuns),
};
if (hasStrfry) summary.strfry = summarise(strfryRuns);
if (hasNostrRs) summary.nostrRsRelay = summarise(nostrRsRuns);
@@ -404,16 +417,22 @@ const metricLabels = [
["req throughput (MiB/s) ↑", "reqSizeMiBS"], ["req throughput (MiB/s) ↑", "reqSizeMiBS"],
]; ];
const headers = ["metric", "parrhesia"]; const headers = ["metric", "parrhesia-pg", "parrhesia-memory"];
if (hasStrfry) headers.push("strfry"); if (hasStrfry) headers.push("strfry");
if (hasNostrRs) headers.push("nostr-rs-relay"); if (hasNostrRs) headers.push("nostr-rs-relay");
headers.push("memory/parrhesia");
if (hasStrfry) headers.push("strfry/parrhesia"); if (hasStrfry) headers.push("strfry/parrhesia");
if (hasNostrRs) headers.push("nostr-rs/parrhesia"); if (hasNostrRs) headers.push("nostr-rs/parrhesia");
const rows = metricLabels.map(([label, key]) => {
const row = [
label,
toFixed(summary.parrhesia[key]),
toFixed(summary.parrhesiaMemory[key]),
];
if (hasStrfry) row.push(toFixed(summary.strfry[key]));
if (hasNostrRs) row.push(toFixed(summary.nostrRsRelay[key]));
row.push(ratioVsParrhesia("parrhesiaMemory", key));
if (hasStrfry) row.push(ratioVsParrhesia("strfry", key));
if (hasNostrRs) row.push(ratioVsParrhesia("nostrRsRelay", key));
return row;
@@ -444,8 +463,10 @@ if (hasStrfry || hasNostrRs) {
console.log("Run details:"); console.log("Run details:");
for (let i = 0; i < runs; i += 1) { for (let i = 0; i < runs; i += 1) {
const p = parrhesiaRuns[i]; const p = parrhesiaRuns[i];
const pm = parrhesiaMemoryRuns[i];
let line = ` run ${i + 1}: ` + let line = ` run ${i + 1}: ` +
`parrhesia(echo_tps=${toFixed(p.echoTps, 0)}, event_tps=${toFixed(p.eventTps, 0)}, req_tps=${toFixed(p.reqTps, 0)}, connect_avg_ms=${toFixed(p.connectAvgMs, 0)})`; `parrhesia-pg(echo_tps=${toFixed(p.echoTps, 0)}, event_tps=${toFixed(p.eventTps, 0)}, req_tps=${toFixed(p.reqTps, 0)}, connect_avg_ms=${toFixed(p.connectAvgMs, 0)})` +
` | parrhesia-memory(echo_tps=${toFixed(pm.echoTps, 0)}, event_tps=${toFixed(pm.eventTps, 0)}, req_tps=${toFixed(pm.reqTps, 0)}, connect_avg_ms=${toFixed(pm.connectAvgMs, 0)})`;
if (hasStrfry) { if (hasStrfry) {
const s = strfryRuns[i]; const s = strfryRuns[i];
line += ` | strfry(echo_tps=${toFixed(s.echoTps, 0)}, event_tps=${toFixed(s.eventTps, 0)}, req_tps=${toFixed(s.reqTps, 0)}, connect_avg_ms=${toFixed(s.connectAvgMs, 0)})`; line += ` | strfry(echo_tps=${toFixed(s.echoTps, 0)}, event_tps=${toFixed(s.eventTps, 0)}, req_tps=${toFixed(s.reqTps, 0)}, connect_avg_ms=${toFixed(s.connectAvgMs, 0)})`;
@@ -456,4 +477,35 @@ for (let i = 0; i < runs; i += 1) {
}
console.log(line);
}
// Structured JSON output for automation (bench:update pipeline)
if (process.env.BENCH_JSON_OUT) {
const jsonSummary = {};
const serverKeys = [
["parrhesia-pg", "parrhesia"],
["parrhesia-memory", "parrhesiaMemory"],
];
if (hasStrfry) serverKeys.push(["strfry", "strfry"]);
if (hasNostrRs) serverKeys.push(["nostr-rs-relay", "nostrRsRelay"]);
for (const [outputKey, summaryKey] of serverKeys) {
const s = summary[summaryKey];
jsonSummary[outputKey] = {
connect_avg_ms: s.connectAvgMs,
connect_max_ms: s.connectMaxMs,
echo_tps: s.echoTps,
echo_mibs: s.echoSizeMiBS,
event_tps: s.eventTps,
event_mibs: s.eventSizeMiBS,
req_tps: s.reqTps,
req_mibs: s.reqSizeMiBS,
};
}
fs.writeFileSync(
process.env.BENCH_JSON_OUT,
JSON.stringify(jsonSummary, null, 2) + "\n",
"utf8"
);
}
NODE
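The table rows above call a `ratioVsParrhesia` helper that is referenced but not shown in this diff. A plausible standalone sketch, assuming it mirrors the `ratio` logic used elsewhere in the pipeline (the `summary` values here are made up for illustration):

```javascript
// Hypothetical sketch of ratioVsParrhesia: format server/baseline ratio
// against the Postgres-backed Parrhesia summary, or "n/a" when undefined.
const summary = {
  parrhesia: { eventTps: 1000 },       // illustrative baseline value
  parrhesiaMemory: { eventTps: 2500 }, // illustrative comparison value
};

function ratioVsParrhesia(serverKey, metricKey) {
  const base = summary.parrhesia[metricKey];
  const other = summary[serverKey]?.[metricKey];
  if (!Number.isFinite(base) || !Number.isFinite(other) || base === 0) return "n/a";
  return (other / base).toFixed(2) + "x";
}

console.log(ratioVsParrhesia("parrhesiaMemory", "eventTps")); // "2.50x"
```

This keeps the "n/a" fallback consistent with how missing baselines are rendered in the comparison table.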

scripts/run_bench_update.sh Executable file

@@ -0,0 +1,329 @@
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$ROOT_DIR"
usage() {
cat <<'EOF'
usage:
./scripts/run_bench_update.sh
Runs the benchmark suite (3 runs by default), then:
1) Appends structured results to bench/history.jsonl
2) Generates bench/chart.svg via gnuplot
3) Updates the comparison table in README.md
Environment:
PARRHESIA_BENCH_RUNS Number of runs (default: 3)
PARRHESIA_BENCH_MACHINE_ID Machine identifier (default: hostname -s)
All PARRHESIA_BENCH_* knobs from run_bench_compare.sh are forwarded.
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
# --- Configuration -----------------------------------------------------------
BENCH_DIR="$ROOT_DIR/bench"
HISTORY_FILE="$BENCH_DIR/history.jsonl"
CHART_FILE="$BENCH_DIR/chart.svg"
GNUPLOT_TEMPLATE="$BENCH_DIR/chart.gnuplot"
MACHINE_ID="${PARRHESIA_BENCH_MACHINE_ID:-$(hostname -s)}"
GIT_TAG="$(git describe --tags --abbrev=0 2>/dev/null || echo 'untagged')"
GIT_COMMIT="$(git rev-parse --short=7 HEAD)"
TIMESTAMP="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
RUNS="${PARRHESIA_BENCH_RUNS:-3}"
mkdir -p "$BENCH_DIR"
WORK_DIR="$(mktemp -d)"
trap 'rm -rf "$WORK_DIR"' EXIT
JSON_OUT="$WORK_DIR/bench_summary.json"
RAW_OUTPUT="$WORK_DIR/bench_output.txt"
# --- Phase 1: Run benchmarks -------------------------------------------------
echo "Running ${RUNS}-run benchmark suite..."
PARRHESIA_BENCH_RUNS="$RUNS" \
BENCH_JSON_OUT="$JSON_OUT" \
./scripts/run_bench_compare.sh 2>&1 | tee "$RAW_OUTPUT"
if [[ ! -f "$JSON_OUT" ]]; then
echo "Benchmark JSON output not found at $JSON_OUT" >&2
exit 1
fi
# --- Phase 2: Append to history ----------------------------------------------
echo "Appending to history..."
node - "$JSON_OUT" "$TIMESTAMP" "$MACHINE_ID" "$GIT_TAG" "$GIT_COMMIT" "$RUNS" "$HISTORY_FILE" <<'NODE'
const fs = require("node:fs");
const [, , jsonOut, timestamp, machineId, gitTag, gitCommit, runsStr, historyFile] = process.argv;
const servers = JSON.parse(fs.readFileSync(jsonOut, "utf8"));
const entry = {
timestamp,
machine_id: machineId,
git_tag: gitTag,
git_commit: gitCommit,
runs: Number(runsStr),
servers,
};
fs.appendFileSync(historyFile, JSON.stringify(entry) + "\n", "utf8");
console.log(" entry: " + gitTag + " (" + gitCommit + ") on " + machineId);
NODE
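Each line appended to bench/history.jsonl is one self-contained JSON record. A round-trip sketch of that shape, with illustrative (not real) benchmark values:

```javascript
// Build one history entry with the same fields Phase 2 appends, then
// parse it back the way the chart phase reads the file line by line.
const entry = {
  timestamp: "2026-03-18T20:00:00Z",
  machine_id: "devbox", // hypothetical machine id
  git_tag: "v0.6.0",
  git_commit: "3e5bf46",
  runs: 3,
  servers: { "parrhesia-pg": { event_tps: 1200.5 } }, // illustrative number
};

const line = JSON.stringify(entry) + "\n"; // one JSONL record
const parsed = JSON.parse(line.trim());    // what loadRuns-style readers see
console.log(parsed.git_tag, parsed.servers["parrhesia-pg"].event_tps);
```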
# --- Phase 3: Generate chart --------------------------------------------------
echo "Generating chart..."
node - "$HISTORY_FILE" "$MACHINE_ID" "$WORK_DIR" <<'NODE'
const fs = require("node:fs");
const path = require("node:path");
const [, , historyFile, machineId, workDir] = process.argv;
if (!fs.existsSync(historyFile)) {
console.log(" no history file, skipping chart generation");
process.exit(0);
}
const lines = fs.readFileSync(historyFile, "utf8")
.split("\n")
.filter(l => l.trim().length > 0)
.map(l => JSON.parse(l));
// Filter to current machine
const entries = lines.filter(e => e.machine_id === machineId);
if (entries.length === 0) {
console.log(" no history entries for machine '" + machineId + "', skipping chart");
process.exit(0);
}
// Sort chronologically, deduplicate by tag (latest wins)
entries.sort((a, b) => a.timestamp.localeCompare(b.timestamp));
const byTag = new Map();
for (const e of entries) {
byTag.set(e.git_tag, e);
}
const deduped = [...byTag.values()];
// Determine which non-parrhesia servers are present
const baselineServerNames = ["strfry", "nostr-rs-relay"];
const presentBaselines = baselineServerNames.filter(srv =>
deduped.some(e => e.servers[srv])
);
// Compute averages for baseline servers (constant horizontal lines)
const baselineAvg = {};
for (const srv of presentBaselines) {
const vals = deduped.filter(e => e.servers[srv]).map(e => e.servers[srv]);
baselineAvg[srv] = {};
for (const metric of Object.keys(vals[0])) {
const valid = vals.map(v => v[metric]).filter(Number.isFinite);
baselineAvg[srv][metric] = valid.length > 0
? valid.reduce((a, b) => a + b, 0) / valid.length
: NaN;
}
}
// Metrics to chart
const chartMetrics = [
{ key: "event_tps", label: "Event Throughput (TPS) — higher is better", file: "event_tps.tsv", ylabel: "TPS" },
{ key: "req_tps", label: "Req Throughput (TPS) — higher is better", file: "req_tps.tsv", ylabel: "TPS" },
{ key: "echo_tps", label: "Echo Throughput (TPS) — higher is better", file: "echo_tps.tsv", ylabel: "TPS" },
{ key: "connect_avg_ms", label: "Connect Avg Latency (ms) — lower is better", file: "connect_avg_ms.tsv", ylabel: "ms" },
];
// Write per-metric TSV files
for (const cm of chartMetrics) {
const header = ["tag", "parrhesia-pg", "parrhesia-memory"];
for (const srv of presentBaselines) header.push(srv);
const rows = [header.join("\t")];
for (const e of deduped) {
const row = [
e.git_tag,
e.servers["parrhesia-pg"]?.[cm.key] ?? "NaN",
e.servers["parrhesia-memory"]?.[cm.key] ?? "NaN",
];
for (const srv of presentBaselines) {
row.push(baselineAvg[srv]?.[cm.key] ?? "NaN");
}
rows.push(row.join("\t"));
}
fs.writeFileSync(path.join(workDir, cm.file), rows.join("\n") + "\n", "utf8");
}
// Generate gnuplot plot commands (handles variable column counts)
const serverLabels = ["parrhesia-pg", "parrhesia-memory"];
for (const srv of presentBaselines) serverLabels.push(srv + " (avg)");
const plotLines = [];
for (const cm of chartMetrics) {
const dataFile = `data_dir."/${cm.file}"`;
plotLines.push(`set title "${cm.label}"`);
plotLines.push(`set ylabel "${cm.ylabel}"`);
const plotParts = [];
// Column 2 = parrhesia-pg, 3 = parrhesia-memory, 4+ = baselines
plotParts.push(`${dataFile} using 0:2:xtic(1) lt 1 title "${serverLabels[0]}"`);
plotParts.push(`'' using 0:3 lt 2 title "${serverLabels[1]}"`);
for (let i = 0; i < presentBaselines.length; i++) {
plotParts.push(`'' using 0:${4 + i} lt ${3 + i} title "${serverLabels[2 + i]}"`);
}
plotLines.push("plot " + plotParts.join(", \\\n "));
plotLines.push("");
}
fs.writeFileSync(
path.join(workDir, "plot_commands.gnuplot"),
plotLines.join("\n") + "\n",
"utf8"
);
console.log(" " + deduped.length + " tag(s), " + presentBaselines.length + " baseline server(s)");
NODE
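The baseline averaging in the chart phase filters out non-finite samples per metric before dividing, so a single failed run cannot poison the horizontal baseline line. The same computation isolated with made-up samples:

```javascript
// NaN-safe mean over one metric across history entries, mirroring the
// baselineAvg computation in the chart-generation phase.
function averageMetric(samples, metric) {
  const valid = samples.map(s => s[metric]).filter(Number.isFinite);
  return valid.length > 0
    ? valid.reduce((a, b) => a + b, 0) / valid.length
    : NaN;
}

// Illustrative samples: the NaN entry is skipped, not averaged in.
const samples = [{ event_tps: 100 }, { event_tps: 300 }, { event_tps: NaN }];
console.log(averageMetric(samples, "event_tps")); // 200
```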
if [[ -f "$WORK_DIR/plot_commands.gnuplot" ]]; then
gnuplot \
-e "data_dir='$WORK_DIR'" \
-e "output_file='$CHART_FILE'" \
"$GNUPLOT_TEMPLATE"
echo " chart written to $CHART_FILE"
else
echo " chart generation skipped (no data for this machine)"
fi
# --- Phase 4: Update README.md -----------------------------------------------
echo "Updating README.md..."
node - "$JSON_OUT" "$ROOT_DIR/README.md" <<'NODE'
const fs = require("node:fs");
const [, , jsonOut, readmePath] = process.argv;
const servers = JSON.parse(fs.readFileSync(jsonOut, "utf8"));
const readme = fs.readFileSync(readmePath, "utf8");
const pg = servers["parrhesia-pg"];
const mem = servers["parrhesia-memory"];
const strfry = servers["strfry"];
const nostrRs = servers["nostr-rs-relay"];
function toFixed(v, d = 2) {
return Number.isFinite(v) ? v.toFixed(d) : "n/a";
}
function ratio(base, other) {
if (!Number.isFinite(base) || !Number.isFinite(other) || base === 0) return "n/a";
return (other / base).toFixed(2) + "x";
}
function boldIf(ratioStr, lowerIsBetter) {
if (ratioStr === "n/a") return ratioStr;
const num = parseFloat(ratioStr);
const better = lowerIsBetter ? num < 1 : num > 1;
return better ? "**" + ratioStr + "**" : ratioStr;
}
const metricRows = [
["connect avg latency (ms) \u2193", "connect_avg_ms", true],
["connect max latency (ms) \u2193", "connect_max_ms", true],
["echo throughput (TPS) \u2191", "echo_tps", false],
["echo throughput (MiB/s) \u2191", "echo_mibs", false],
["event throughput (TPS) \u2191", "event_tps", false],
["event throughput (MiB/s) \u2191", "event_mibs", false],
["req throughput (TPS) \u2191", "req_tps", false],
["req throughput (MiB/s) \u2191", "req_mibs", false],
];
const hasStrfry = !!strfry;
const hasNostrRs = !!nostrRs;
// Build header
const header = ["metric", "parrhesia-pg", "parrhesia-mem"];
if (hasStrfry) header.push("strfry");
if (hasNostrRs) header.push("nostr-rs-relay");
header.push("mem/pg");
if (hasStrfry) header.push("strfry/pg");
if (hasNostrRs) header.push("nostr-rs/pg");
const alignRow = ["---"];
for (let i = 1; i < header.length; i++) alignRow.push("---:");
const rows = metricRows.map(([label, key, lowerIsBetter]) => {
const row = [label, toFixed(pg[key]), toFixed(mem[key])];
if (hasStrfry) row.push(toFixed(strfry[key]));
if (hasNostrRs) row.push(toFixed(nostrRs[key]));
row.push(boldIf(ratio(pg[key], mem[key]), lowerIsBetter));
if (hasStrfry) row.push(boldIf(ratio(pg[key], strfry[key]), lowerIsBetter));
if (hasNostrRs) row.push(boldIf(ratio(pg[key], nostrRs[key]), lowerIsBetter));
return row;
});
const tableLines = [
"| " + header.join(" | ") + " |",
"| " + alignRow.join(" | ") + " |",
...rows.map(r => "| " + r.join(" | ") + " |"),
];
// Replace the first markdown table in the ## Benchmark section
const readmeLines = readme.split("\n");
const benchIdx = readmeLines.findIndex(l => /^## Benchmark/.test(l));
if (benchIdx === -1) {
console.error("Could not find '## Benchmark' section in README.md");
process.exit(1);
}
let tableStart = -1;
let tableEnd = -1;
for (let i = benchIdx + 1; i < readmeLines.length; i++) {
if (readmeLines[i].startsWith("|")) {
if (tableStart === -1) tableStart = i;
tableEnd = i;
} else if (tableStart !== -1) {
break;
}
}
if (tableStart === -1) {
console.error("Could not find markdown table in ## Benchmark section");
process.exit(1);
}
const before = readmeLines.slice(0, tableStart);
const after = readmeLines.slice(tableEnd + 1);
const updated = [...before, ...tableLines, ...after].join("\n");
fs.writeFileSync(readmePath, updated, "utf8");
console.log(" table updated (" + tableLines.length + " rows)");
NODE
# --- Done ---------------------------------------------------------------------
echo
echo "Benchmark update complete. Files changed:"
echo " $HISTORY_FILE"
echo " $CHART_FILE"
echo " $ROOT_DIR/README.md"
echo
echo "Review with: git diff"
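The README table replacement performed by the embedded Node snippet above can be exercised in isolation. A minimal sketch (the function name and sample markdown are illustrative, not part of the script):

```javascript
// Replace the first markdown table found after a matching heading.
// Mirrors the splice logic in the benchmark script above.
function replaceFirstTable(markdown, headingRegex, tableLines) {
  const lines = markdown.split("\n");
  const headIdx = lines.findIndex(l => headingRegex.test(l));
  if (headIdx === -1) throw new Error("heading not found");
  let start = -1, end = -1;
  for (let i = headIdx + 1; i < lines.length; i++) {
    if (lines[i].startsWith("|")) {
      if (start === -1) start = i;
      end = i;
    } else if (start !== -1) {
      break; // first contiguous table block has ended
    }
  }
  if (start === -1) throw new Error("no table after heading");
  return [...lines.slice(0, start), ...tableLines, ...lines.slice(end + 1)].join("\n");
}

const readme = "# Repo\n\n## Benchmark\n\nResults:\n| a | b |\n| - | - |\n| 1 | 2 |\n\nNotes.";
const updated = replaceFirstTable(readme, /^## Benchmark/, ["| x |", "| - |", "| 9 |"]);
console.log(updated.includes("| 9 |") && !updated.includes("| 1 | 2 |")); // true
```

Because only the contiguous run of `|`-prefixed lines is spliced out, prose before and after the table survives unchanged, which is why the script can rewrite the README idempotently on every benchmark run.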

View File

@@ -19,6 +19,8 @@ if [[ "$MIX_ENV" != "test" && "$MIX_ENV" != "prod" ]]; then
 fi
 export MIX_ENV
+SKIP_ECTO="${PARRHESIA_E2E_SKIP_ECTO:-0}"
 SUITE_SLUG="$(printf '%s' "$SUITE_NAME" | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9' '_')"
 SUITE_UPPER="$(printf '%s' "$SUITE_SLUG" | tr '[:lower:]' '[:upper:]')"
 PORT_ENV_VAR="PARRHESIA_${SUITE_UPPER}_E2E_RELAY_PORT"
@@ -56,14 +58,16 @@ if [[ -z "${DATABASE_URL:-}" ]]; then
 fi
 fi
+if [[ "$SKIP_ECTO" != "1" ]]; then
 if [[ "$MIX_ENV" == "test" ]]; then
 PARRHESIA_TEST_HTTP_PORT=0 mix ecto.drop --quiet --force || true
 PARRHESIA_TEST_HTTP_PORT=0 mix ecto.create --quiet
 PARRHESIA_TEST_HTTP_PORT=0 mix ecto.migrate --quiet
 else
 mix ecto.drop --quiet --force || true
 mix ecto.create --quiet
 mix ecto.migrate --quiet
 fi
+fi
 SERVER_LOG="${ROOT_DIR}/.${SUITE_SLUG}-e2e-server.log"
@@ -75,7 +79,7 @@ cleanup() {
 wait "$SERVER_PID" 2>/dev/null || true
 fi
-if [[ "${PARRHESIA_E2E_DROP_DB_ON_EXIT:-0}" == "1" ]]; then
+if [[ "$SKIP_ECTO" != "1" && "${PARRHESIA_E2E_DROP_DB_ON_EXIT:-0}" == "1" ]]; then
 if [[ "$MIX_ENV" == "test" ]]; then
 PARRHESIA_TEST_HTTP_PORT=0 mix ecto.drop --quiet --force || true
 else

View File

@@ -146,7 +146,7 @@ start_node() {
 PARRHESIA_IDENTITY_PATH="$identity_path" \
 PARRHESIA_SYNC_PATH="$sync_path" \
 MIX_ENV=prod \
-mix run --no-halt >"$log_path" 2>&1 &
+mix run --no-compile --no-halt >"$log_path" 2>&1 &
 if [[ "$node_name" == "a" ]]; then
 NODE_A_PID=$!

View File

@@ -13,6 +13,9 @@ usage:
 Runs nostr-bench against a temporary Parrhesia prod server started via
 ./scripts/run_e2e_suite.sh.
+Benchmark target:
+PARRHESIA_BENCH_STORAGE_BACKEND postgres|memory (default: postgres)
 Pool tuning:
 POOL_SIZE optional override for prod pool size
 DB_QUEUE_TARGET_MS optional Repo queue target override
@@ -39,6 +42,10 @@ Default "all" run can be tuned via env vars:
 PARRHESIA_BENCH_REQ_RATE (default: 50)
 PARRHESIA_BENCH_REQ_LIMIT (default: 10)
 PARRHESIA_BENCH_KEEPALIVE_SECONDS (default: 5)
+By default benchmark runs also lift relay limits so the benchmark client, not
+relay-side ceilings, is the bottleneck. Set `PARRHESIA_BENCH_LIFT_LIMITS=0` to
+disable that behavior.
 EOF
 }
@@ -63,11 +70,54 @@ if [[ "$MODE" == "all" && $# -gt 0 ]]; then
 exit 1
 fi
-if [[ -z "${PGDATABASE:-}" ]]; then
-export PGDATABASE="parrhesia_bench_prod_$(date +%s)_$RANDOM"
-fi
-export PARRHESIA_E2E_DROP_DB_ON_EXIT="${PARRHESIA_E2E_DROP_DB_ON_EXIT:-1}"
+BENCH_STORAGE_BACKEND="${PARRHESIA_BENCH_STORAGE_BACKEND:-postgres}"
+case "$BENCH_STORAGE_BACKEND" in
+postgres|memory)
+;;
+*)
+echo "PARRHESIA_BENCH_STORAGE_BACKEND must be postgres or memory, got: ${BENCH_STORAGE_BACKEND}" >&2
+exit 1
+;;
+esac
+export PARRHESIA_STORAGE_BACKEND="$BENCH_STORAGE_BACKEND"
+export PARRHESIA_ENABLE_EXPIRATION_WORKER="${PARRHESIA_ENABLE_EXPIRATION_WORKER:-0}"
+export PARRHESIA_ENABLE_PARTITION_RETENTION_WORKER="${PARRHESIA_ENABLE_PARTITION_RETENTION_WORKER:-0}"
+if [[ "${PARRHESIA_BENCH_LIFT_LIMITS:-1}" == "1" ]]; then
+export PARRHESIA_PUBLIC_MAX_CONNECTIONS="${PARRHESIA_PUBLIC_MAX_CONNECTIONS:-infinity}"
+export PARRHESIA_LIMITS_MAX_FRAME_BYTES="${PARRHESIA_LIMITS_MAX_FRAME_BYTES:-16777216}"
+export PARRHESIA_LIMITS_MAX_EVENT_BYTES="${PARRHESIA_LIMITS_MAX_EVENT_BYTES:-4194304}"
+export PARRHESIA_LIMITS_MAX_FILTERS_PER_REQ="${PARRHESIA_LIMITS_MAX_FILTERS_PER_REQ:-1024}"
+export PARRHESIA_LIMITS_MAX_FILTER_LIMIT="${PARRHESIA_LIMITS_MAX_FILTER_LIMIT:-100000}"
+export PARRHESIA_LIMITS_MAX_TAGS_PER_EVENT="${PARRHESIA_LIMITS_MAX_TAGS_PER_EVENT:-4096}"
+export PARRHESIA_LIMITS_MAX_TAG_VALUES_PER_FILTER="${PARRHESIA_LIMITS_MAX_TAG_VALUES_PER_FILTER:-4096}"
+export PARRHESIA_LIMITS_IP_MAX_EVENT_INGEST_PER_WINDOW="${PARRHESIA_LIMITS_IP_MAX_EVENT_INGEST_PER_WINDOW:-1000000}"
+export PARRHESIA_LIMITS_RELAY_MAX_EVENT_INGEST_PER_WINDOW="${PARRHESIA_LIMITS_RELAY_MAX_EVENT_INGEST_PER_WINDOW:-1000000}"
+export PARRHESIA_LIMITS_MAX_SUBSCRIPTIONS_PER_CONNECTION="${PARRHESIA_LIMITS_MAX_SUBSCRIPTIONS_PER_CONNECTION:-4096}"
+export PARRHESIA_LIMITS_MAX_EVENT_FUTURE_SKEW_SECONDS="${PARRHESIA_LIMITS_MAX_EVENT_FUTURE_SKEW_SECONDS:-31536000}"
+export PARRHESIA_LIMITS_MAX_EVENT_INGEST_PER_WINDOW="${PARRHESIA_LIMITS_MAX_EVENT_INGEST_PER_WINDOW:-1000000}"
+export PARRHESIA_LIMITS_AUTH_MAX_AGE_SECONDS="${PARRHESIA_LIMITS_AUTH_MAX_AGE_SECONDS:-31536000}"
+export PARRHESIA_LIMITS_MAX_OUTBOUND_QUEUE="${PARRHESIA_LIMITS_MAX_OUTBOUND_QUEUE:-65536}"
+export PARRHESIA_LIMITS_OUTBOUND_DRAIN_BATCH_SIZE="${PARRHESIA_LIMITS_OUTBOUND_DRAIN_BATCH_SIZE:-4096}"
+export PARRHESIA_LIMITS_MAX_NEGENTROPY_PAYLOAD_BYTES="${PARRHESIA_LIMITS_MAX_NEGENTROPY_PAYLOAD_BYTES:-1048576}"
+export PARRHESIA_LIMITS_MAX_NEGENTROPY_SESSIONS_PER_CONNECTION="${PARRHESIA_LIMITS_MAX_NEGENTROPY_SESSIONS_PER_CONNECTION:-256}"
+export PARRHESIA_LIMITS_MAX_NEGENTROPY_TOTAL_SESSIONS="${PARRHESIA_LIMITS_MAX_NEGENTROPY_TOTAL_SESSIONS:-100000}"
+export PARRHESIA_LIMITS_MAX_NEGENTROPY_ITEMS_PER_SESSION="${PARRHESIA_LIMITS_MAX_NEGENTROPY_ITEMS_PER_SESSION:-1000000}"
+fi
+if [[ "$BENCH_STORAGE_BACKEND" == "memory" ]]; then
+export PARRHESIA_E2E_SKIP_ECTO="${PARRHESIA_E2E_SKIP_ECTO:-1}"
+export PARRHESIA_E2E_DROP_DB_ON_EXIT=0
+export PARRHESIA_MODERATION_CACHE_ENABLED="${PARRHESIA_MODERATION_CACHE_ENABLED:-0}"
+else
+if [[ -z "${PGDATABASE:-}" ]]; then
+export PGDATABASE="parrhesia_bench_prod_$(date +%s)_$RANDOM"
+fi
+export PARRHESIA_E2E_SKIP_ECTO="${PARRHESIA_E2E_SKIP_ECTO:-0}"
+export PARRHESIA_E2E_DROP_DB_ON_EXIT="${PARRHESIA_E2E_DROP_DB_ON_EXIT:-1}"
+fi
 PARRHESIA_E2E_MIX_ENV="prod" \
 exec ./scripts/run_e2e_suite.sh \

View File

@@ -1,14 +1,10 @@
 defmodule Parrhesia.API.ACLTest do
-use ExUnit.Case, async: false
+use Parrhesia.IntegrationCase, async: false, sandbox: true
-alias Ecto.Adapters.SQL.Sandbox
 alias Parrhesia.API.ACL
 alias Parrhesia.API.RequestContext
-alias Parrhesia.Repo
 setup do
-:ok = Sandbox.checkout(Repo)
 previous_acl = Application.get_env(:parrhesia, :acl, [])
 Application.put_env(

View File

@@ -43,6 +43,21 @@ defmodule Parrhesia.API.AuthTest do
 assert {:ok, _context} = Auth.validate_nip98(header, "POST", url, max_age_seconds: 180)
 end
+test "validate_nip98 rejects replayed auth events" do
+url = "http://example.com/management"
+event = nip98_event("POST", url)
+header = "Nostr " <> Base.encode64(JSON.encode!(event))
+replay_cache =
+start_supervised!({Parrhesia.Auth.Nip98ReplayCache, name: nil})
+assert {:ok, _context} =
+Auth.validate_nip98(header, "POST", url, replay_cache: replay_cache)
+assert {:error, :replayed_auth_event} =
+Auth.validate_nip98(header, "POST", url, replay_cache: replay_cache)
+end
 defp nip98_event(method, url, overrides \\ %{}) do
 now = System.system_time(:second)
@@ -51,7 +66,7 @@ defmodule Parrhesia.API.AuthTest do
 "created_at" => now,
 "kind" => 27_235,
 "tags" => [["method", method], ["u", url]],
-"content" => "",
+"content" => "token-#{System.unique_integer([:positive, :monotonic])}",
 "sig" => String.duplicate("b", 128)
 }
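The replay-rejection tests above exercise a common cache pattern: remember a fingerprint of each accepted auth event for a bounded window and reject any repeat. A minimal standalone sketch in JavaScript (class and method names are illustrative, not the relay's actual Elixir implementation):

```javascript
// Illustrative replay cache: remember accepted auth-event IDs until they
// expire, and reject any ID presented twice within its window.
class ReplayCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.seen = new Map(); // eventId -> expiry timestamp (ms)
  }
  // First presentation of an ID returns "ok"; repeats within the TTL
  // return "replayed_auth_event". Expired entries are accepted again.
  check(eventId, now = Date.now()) {
    const expiry = this.seen.get(eventId);
    if (expiry !== undefined && expiry > now) return "replayed_auth_event";
    this.seen.set(eventId, now + this.ttlMs);
    return "ok";
  }
}

const cache = new ReplayCache(600_000); // e.g. match a 600s auth_max_age window
console.log(cache.check("abc")); // ok
console.log(cache.check("abc")); // replayed_auth_event
```

Pairing the cache with unique event content (the `token-…` change in the test fixtures above) is what makes each signed auth event single-use: identical payloads would otherwise share an ID and collide in the cache by design.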

View File

@@ -1,16 +1,9 @@
 defmodule Parrhesia.API.EventsTest do
-use ExUnit.Case, async: false
+use Parrhesia.IntegrationCase, async: false, sandbox: true
-alias Ecto.Adapters.SQL.Sandbox
 alias Parrhesia.API.Events
 alias Parrhesia.API.RequestContext
 alias Parrhesia.Protocol.EventValidator
-alias Parrhesia.Repo
-setup do
-:ok = Sandbox.checkout(Repo)
-:ok
-end
 test "publish stores valid events through the shared API" do
 event = valid_event()

View File

@@ -1,5 +1,5 @@
 defmodule Parrhesia.API.IdentityTest do
-use ExUnit.Case, async: false
+use Parrhesia.IntegrationCase, async: false
 alias Parrhesia.API.Auth
 alias Parrhesia.API.Identity

View File

@@ -1,17 +1,10 @@
 defmodule Parrhesia.API.StreamTest do
-use ExUnit.Case, async: false
+use Parrhesia.IntegrationCase, async: false, sandbox: true
-alias Ecto.Adapters.SQL.Sandbox
 alias Parrhesia.API.Events
 alias Parrhesia.API.RequestContext
 alias Parrhesia.API.Stream
 alias Parrhesia.Protocol.EventValidator
-alias Parrhesia.Repo
-setup do
-:ok = Sandbox.checkout(Repo)
-:ok
-end
 test "subscribe streams catch-up events followed by eose" do
 event = valid_event()

View File

@@ -1,16 +1,9 @@
 defmodule Parrhesia.API.SyncTest do
-use ExUnit.Case, async: false
+use Parrhesia.IntegrationCase, async: false, sandbox: true
-alias Ecto.Adapters.SQL.Sandbox
 alias Parrhesia.API.Admin
 alias Parrhesia.API.Sync
 alias Parrhesia.API.Sync.Manager
-alias Parrhesia.Repo
-setup do
-:ok = Sandbox.checkout(Repo)
-:ok
-end
 test "put_server stores normalized config and persists it across restart" do
 {manager, path, pid} = start_sync_manager()

View File

@@ -1,12 +1,18 @@
 defmodule Parrhesia.ApplicationTest do
-use ExUnit.Case, async: false
+use Parrhesia.IntegrationCase, async: false
+alias Parrhesia.PostgresRepos
 test "starts the core supervision tree" do
 assert is_pid(Process.whereis(Parrhesia.Supervisor))
 assert is_pid(Process.whereis(Parrhesia.Telemetry))
+assert is_pid(Process.whereis(Parrhesia.ConnectionStats))
 assert is_pid(Process.whereis(Parrhesia.Config))
+assert is_pid(Process.whereis(Parrhesia.Web.EventIngestLimiter))
+assert is_pid(Process.whereis(Parrhesia.Web.IPEventIngestLimiter))
 assert is_pid(Process.whereis(Parrhesia.Storage.Supervisor))
 assert is_pid(Process.whereis(Parrhesia.Subscriptions.Supervisor))
+assert is_pid(Process.whereis(Parrhesia.Fanout.Dispatcher))
 assert is_pid(Process.whereis(Parrhesia.Auth.Supervisor))
 assert is_pid(Process.whereis(Parrhesia.Sync.Supervisor))
 assert is_pid(Process.whereis(Parrhesia.Policy.Supervisor))
@@ -19,8 +25,10 @@ defmodule Parrhesia.ApplicationTest do
 end)
 assert is_pid(Process.whereis(Parrhesia.Auth.Challenges))
+assert is_pid(Process.whereis(Parrhesia.Auth.Nip98ReplayCache))
 assert is_pid(Process.whereis(Parrhesia.API.Identity.Manager))
 assert is_pid(Process.whereis(Parrhesia.API.Sync.Manager))
+assert Enum.all?(PostgresRepos.started_repos(), &is_pid(Process.whereis(&1)))
 if negentropy_enabled?() do
 assert is_pid(Process.whereis(Parrhesia.Negentropy.Sessions))

View File

@@ -36,6 +36,31 @@ defmodule Parrhesia.Auth.Nip98Test do
 Nip98.validate_authorization_header(header, "POST", url, max_age_seconds: 180)
 end
+test "rejects replayed authorization headers" do
+url = "http://example.com/management"
+event = nip98_event("POST", url)
+header = "Nostr " <> Base.encode64(JSON.encode!(event))
+replay_cache =
+start_supervised!({Parrhesia.Auth.Nip98ReplayCache, name: nil})
+assert {:ok, _event} =
+Nip98.validate_authorization_header(
+header,
+"POST",
+url,
+replay_cache: replay_cache
+)
+assert {:error, :replayed_auth_event} =
+Nip98.validate_authorization_header(
+header,
+"POST",
+url,
+replay_cache: replay_cache
+)
+end
 defp nip98_event(method, url, overrides \\ %{}) do
 now = System.system_time(:second)
@@ -44,7 +69,7 @@ defmodule Parrhesia.Auth.Nip98Test do
 "created_at" => now,
 "kind" => 27_235,
 "tags" => [["method", method], ["u", url]],
-"content" => "",
+"content" => "token-#{System.unique_integer([:positive, :monotonic])}",
 "sig" => String.duplicate("b", 128)
 }

View File

@@ -1,23 +1,40 @@
 defmodule Parrhesia.ConfigTest do
 use ExUnit.Case, async: true
+alias Parrhesia.Web.Listener
 test "returns configured relay limits/policies/features" do
+assert Parrhesia.Config.get([:metadata, :name]) == "Parrhesia"
+assert Parrhesia.Config.get([:metadata, :hide_version?]) == true
 assert Parrhesia.Config.get([:limits, :max_frame_bytes]) == 1_048_576
 assert Parrhesia.Config.get([:limits, :max_event_bytes]) == 262_144
 assert Parrhesia.Config.get([:limits, :max_event_future_skew_seconds]) == 900
 assert Parrhesia.Config.get([:limits, :max_event_ingest_per_window]) == 120
+assert Parrhesia.Config.get([:limits, :max_tags_per_event]) == 256
+assert Parrhesia.Config.get([:limits, :max_tag_values_per_filter]) == 128
+assert Parrhesia.Config.get([:limits, :ip_max_event_ingest_per_window]) == 1_000
+assert Parrhesia.Config.get([:limits, :ip_event_ingest_window_seconds]) == 1
+assert Parrhesia.Config.get([:limits, :relay_max_event_ingest_per_window]) == 10_000
+assert Parrhesia.Config.get([:limits, :relay_event_ingest_window_seconds]) == 1
 assert Parrhesia.Config.get([:limits, :event_ingest_window_seconds]) == 1
 assert Parrhesia.Config.get([:limits, :auth_max_age_seconds]) == 600
 assert Parrhesia.Config.get([:limits, :max_outbound_queue]) == 256
 assert Parrhesia.Config.get([:limits, :max_filter_limit]) == 500
+assert Parrhesia.Config.get([:database, :separate_read_pool?]) == false
 assert Parrhesia.Config.get([:relay_url]) == "ws://localhost:4413/relay"
 assert Parrhesia.Config.get([:policies, :auth_required_for_writes]) == false
 assert Parrhesia.Config.get([:policies, :marmot_media_max_imeta_tags_per_event]) == 8
 assert Parrhesia.Config.get([:policies, :marmot_media_reject_mip04_v1]) == true
 assert Parrhesia.Config.get([:policies, :marmot_push_max_trigger_age_seconds]) == 120
+assert Parrhesia.Config.get([:features, :verify_event_signatures_locked?]) == false
 assert Parrhesia.Config.get([:features, :verify_event_signatures]) == false
 assert Parrhesia.Config.get([:features, :nip_50_search]) == true
 assert Parrhesia.Config.get([:features, :marmot_push_notifications]) == false
+assert Application.get_env(:parrhesia, :listeners, %{})
+|> Keyword.get(:public)
+|> then(&Listener.from_opts(listener: &1))
+|> Map.get(:max_connections) == 20_000
 end
 test "returns default for unknown keys" do
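The `Parrhesia.Config.get([...])` calls tested above resolve a nested key path against a configuration tree, falling back to a default for unknown keys. A minimal sketch of that lookup pattern in JavaScript (function name and sample config are illustrative):

```javascript
// Path-based config lookup with a fallback default, mirroring the
// Config.get([...]) path semantics exercised by the test above.
function getIn(obj, path, dflt = undefined) {
  let cur = obj;
  for (const key of path) {
    if (cur === null || typeof cur !== "object" || !(key in cur)) return dflt;
    cur = cur[key];
  }
  return cur;
}

const config = {
  limits: { max_frame_bytes: 1_048_576 },
  features: { nip_50_search: true },
};
console.log(getIn(config, ["limits", "max_frame_bytes"])); // 1048576
console.log(getIn(config, ["limits", "unknown"], 42)); // 42
```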

View File

@@ -1,5 +1,5 @@
 defmodule Parrhesia.E2E.NakCliTest do
-use ExUnit.Case, async: false
+use Parrhesia.IntegrationCase, async: false
 @moduletag :nak_e2e

View File

@@ -1,5 +1,5 @@
 defmodule Parrhesia.Fanout.MultiNodeTest do
-use ExUnit.Case, async: false
+use Parrhesia.IntegrationCase, async: false
 alias Parrhesia.Fanout.MultiNode
 alias Parrhesia.Subscriptions.Index

Some files were not shown because too many files have changed in this diff.