rtr client development

xiuting.xu 2026-04-15 15:43:59 +08:00
parent c1d3112a45
commit 17c1a02c90
27 changed files with 1991 additions and 121 deletions


@ -32,3 +32,4 @@ rustls-pemfile = "2"
rustls-pki-types = "1.14.0"
socket2 = "0.5"
tracing-subscriber = { version = "0.3", features = ["env-filter", "fmt"] }
rpki_rs = { package = "rpki", version = "0.18", features = ["rtr", "crypto"] }

README.md

@ -2,39 +2,88 @@
Default runtime platform: Ubuntu/Linux.
## Table of Contents
- [Protocols and Specifications](#protocols-and-specifications)
- [RTR](#rtr)
- [SLURM](#slurm)
- [CCR](#ccr)
- [RTR Server](#rtr-server)
- [Environment Variables](#environment-variables)
- [Notes](#notes)
- [Quick Start](#quick-start)
- [Docker](#docker)
- [Local run (script recommended)](#local-run-script-recommended)
- [Manual local run (minimal example)](#manual-local-run-minimal-example)
- [CCR Input](#ccr-input)
- [Client Usage](#client-usage)
- [Purpose and selection](#purpose-and-selection)
- [rtr_debug_client (local)](#rtr_debug_client-local)
- [rtr_debug_client (Docker)](#rtr_debug_client-docker)
- [rpki-rs-test-client (Docker)](#rpki-rs-test-client-docker)
- [FRR (Docker, black-box client)](#frr-docker-black-box-client)
- [RTR Debug Client](#rtr-debug-client)
- [Client startup examples](#client-startup-examples)
- [Storage model and boundary constraints (multi-view)](#storage-model-and-boundary-constraints-multi-view)
- [Single write entry point](#single-write-entry-point)
- [Recovery constraints](#recovery-constraints)
- [Boundary regression tests](#boundary-regression-tests)
- [Deploy Layout](#deploy-layout)
- [Quick Start](#quick-start-1)
- [Compose entry points](#compose-entry-points)
- [Role-specific commands](#role-specific-commands)
## Protocols and Specifications
### RTR
- [RFC 6810: The Resource Public Key Infrastructure (RPKI) to Router Protocol](https://www.rfc-editor.org/rfc/rfc6810.html)
- [RFC 8210: The Resource Public Key Infrastructure (RPKI) to Router Protocol, Version 1](https://www.rfc-editor.org/rfc/rfc8210.html)
- [Draft: The Resource Public Key Infrastructure (RPKI) to Router Protocol, Version 2](https://www.ietf.org/archive/id/draft-ietf-sidrops-8210bis-25.html)
### SLURM
- [RFC 8416: Simplified Local Internet Number Resource Management with the RPKI (SLURM)](https://www.rfc-editor.org/rfc/rfc8416.html)
- [Draft: ASPA extensions for SLURM](https://www.ietf.org/archive/id/draft-ietf-sidrops-aspa-slurm-04.html)
### CCR
- [Draft: A Concise Cryptographic Representation of RPKI Signed Objects (CCR)](https://www.ietf.org/archive/id/draft-ietf-sidrops-rpki-ccr-02.html)
## RTR Server
At runtime, the RTR Server scans the `CCR` directory for the latest `.ccr` file as its input source. The current `main` path no longer reads `vrps.txt` / `aspas.txt` / `router-keys.txt`; everything is loaded from the CCR snapshot:
At runtime, the RTR Server scans the `CCR` directory for the latest `.ccr` file as its input source:
- `VRP`
- `VAP / ASPA`
- `VAP (Validated ASPA Payload) / ASPA`
Related implementation:
- [`src/main.rs`](src/main.rs)
- [`src/rtr/ccr.rs`](src/rtr/ccr.rs)
- [`src/source/ccr.rs`](src/source/ccr.rs)
### Environment Variables
| Variable | Description | Example |
| --- | --- | --- |
| `RPKI_RTR_ENABLE_TLS` | Whether to additionally enable the TLS listener. Accepts `true/false`, `1/0`, `yes/no`, `on/off`. | `true` |
| `RPKI_RTR_TCP_ADDR` | TCP listen address. | `0.0.0.0:323` |
| `RPKI_RTR_TLS_ADDR` | TLS listen address. | `0.0.0.0:324` |
| `RPKI_RTR_DB_PATH` | RocksDB path. | `./rtr-db` |
| `RPKI_RTR_CCR_DIR` | CCR directory; the server scans it for the latest `.ccr` file. | `./data` |
| `RPKI_RTR_TLS_CERT_PATH` | TLS server certificate path. | `./certs/server-dns.crt` |
| `RPKI_RTR_TLS_KEY_PATH` | TLS server private key path. | `./certs/server-dns.key` |
| `RPKI_RTR_TLS_CLIENT_CA_PATH` | CA certificate path used to verify router client certificates. | `./certs/client-ca.crt` |
| `RPKI_RTR_MAX_DELTA` | Maximum number of deltas to retain. | `100` |
| `RPKI_RTR_PRUNE_DELTA_BY_SNAPSHOT_SIZE` | Whether to keep pruning the oldest delta while the estimated cumulative delta wire size is not smaller than the snapshot. | `false` |
| `RPKI_RTR_STRICT_CCR_VALIDATION` | Strict handling of invalid VRP / VAP entries in a CCR: `true` rejects the entire CCR, `false` skips invalid entries and logs a warning. | `false` |
| `RPKI_RTR_REFRESH_INTERVAL_SECS` | Interval in seconds for rescanning the CCR directory and reloading the latest `.ccr`. | `300` |
| `RPKI_RTR_MAX_CONNECTIONS` | Maximum number of concurrent RTR client connections. | `512` |
| `RPKI_RTR_NOTIFY_QUEUE_SIZE` | Serial Notify broadcast queue size. | `1024` |
| `RPKI_RTR_TCP_KEEPALIVE_SECS` | TCP keepalive time in seconds; `0` disables it. | `60` |
| `RPKI_RTR_WARN_INSECURE_TCP` | Whether to log an insecurity warning in plain-TCP mode. | `true` |
| `RPKI_RTR_REQUIRE_TLS_SERVER_DNS_NAME_SAN` | Strict mode: refuse to start when the TLS server certificate contains no `subjectAltName dNSName`. | `false` |
| Variable | Description | Default | Example |
| --- | --- | --- | --- |
| `RPKI_RTR_ENABLE_TLS` | Whether to additionally enable the TLS listener. Accepts `true/false`, `1/0`, `yes/no`, `on/off`. | `false` | `true` |
| `RPKI_RTR_TCP_ADDR` | TCP listen address. | `0.0.0.0:323` | `0.0.0.0:323` |
| `RPKI_RTR_TLS_ADDR` | TLS listen address. | `0.0.0.0:324` | `0.0.0.0:324` |
| `RPKI_RTR_DB_PATH` | RocksDB path. | `./rtr-db` | `./rtr-db` |
| `RPKI_RTR_CCR_DIR` | CCR directory; the server scans it for the latest `.ccr` file. | `./data` | `./data` |
| `RPKI_RTR_SLURM_DIR` | SLURM directory; empty or unset disables SLURM. | unset (disabled) | `./slurm` |
| `RPKI_RTR_TLS_CERT_PATH` | TLS server certificate path. | `./certs/server.crt` | `./certs/server-dns.crt` |
| `RPKI_RTR_TLS_KEY_PATH` | TLS server private key path. | `./certs/server.key` | `./certs/server-dns.key` |
| `RPKI_RTR_TLS_CLIENT_CA_PATH` | CA certificate path used to verify router client certificates. | `./certs/client-ca.crt` | `./certs/client-ca.crt` |
| `RPKI_RTR_MAX_DELTA` | Maximum number of deltas to retain. | `100` | `100` |
| `RPKI_RTR_PRUNE_DELTA_BY_SNAPSHOT_SIZE` | Whether to keep pruning the oldest delta while the estimated cumulative delta wire size is not smaller than the snapshot. | `false` | `false` |
| `RPKI_RTR_STRICT_CCR_VALIDATION` | Strict handling of invalid VRP / VAP entries in a CCR: `true` rejects the entire CCR, `false` skips invalid entries and logs a warning. | `false` | `false` |
| `RPKI_RTR_REFRESH_INTERVAL_SECS` | Interval in seconds for rescanning the CCR directory and reloading the latest `.ccr`; must be `>= 1`. | `300` | `300` |
| `RPKI_RTR_MAX_CONNECTIONS` | Maximum number of concurrent RTR client connections. | `512` | `512` |
| `RPKI_RTR_NOTIFY_QUEUE_SIZE` | Serial Notify broadcast queue size. | `1024` | `1024` |
| `RPKI_RTR_TCP_KEEPALIVE_SECS` | TCP keepalive time in seconds; `0` disables it. | `60` | `60` |
| `RPKI_RTR_WARN_INSECURE_TCP` | Whether to log an insecurity warning in plain-TCP mode. | `true` | `true` |
| `RPKI_RTR_REQUIRE_TLS_SERVER_DNS_NAME_SAN` | Strict mode: refuse to start when the TLS server certificate contains no `subjectAltName dNSName`. | `false` | `false` |
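The boolean switches in the table (`RPKI_RTR_ENABLE_TLS`, `RPKI_RTR_WARN_INSECURE_TCP`, and so on) all accept the same spellings. A minimal sketch of such a parser, assuming case-insensitive matching (`parse_env_bool` is a hypothetical name, not the server's actual function):

```rust
// Hypothetical sketch of the boolean parsing described in the table above:
// accepts true/false, 1/0, yes/no, on/off, case-insensitively.
fn parse_env_bool(raw: &str) -> Option<bool> {
    match raw.trim().to_ascii_lowercase().as_str() {
        "true" | "1" | "yes" | "on" => Some(true),
        "false" | "0" | "no" | "off" => Some(false),
        _ => None, // unrecognized value; the caller decides how to fail
    }
}
```

A server would typically fall back to the default column above when the variable is unset, and warn or refuse to start on an unrecognized value.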
### Notes
@ -46,76 +95,130 @@ At runtime, the RTR Server scans the `CCR` directory for the latest `.ccr` file as its input source
- `RPKI_RTR_TCP_KEEPALIVE_SECS=0` disables keepalive; any non-zero value enables keepalive for the entire lifetime of the connection.
- With `RPKI_RTR_PRUNE_DELTA_BY_SNAPSHOT_SIZE=true`, in addition to the fixed-count pruning from `RPKI_RTR_MAX_DELTA`, the server keeps deleting the oldest delta while the estimated cumulative delta wire size is not smaller than the snapshot.
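The two-stage pruning described above can be sketched as follows. This is illustrative only: it assumes deltas are represented by their estimated wire sizes in bytes, and `prune_deltas` is a hypothetical name, not the server's actual code.

```rust
// Illustrative two-stage delta pruning (hypothetical, not the server's code).
// `deltas` holds estimated wire sizes, oldest first.
fn prune_deltas(
    mut deltas: Vec<usize>,
    max_delta: usize,        // RPKI_RTR_MAX_DELTA
    snapshot_size: usize,    // estimated snapshot wire size
    prune_by_snapshot: bool, // RPKI_RTR_PRUNE_DELTA_BY_SNAPSHOT_SIZE
) -> Vec<usize> {
    // Stage 1: fixed-count pruning, always applied.
    while deltas.len() > max_delta {
        deltas.remove(0); // drop the oldest delta
    }
    // Stage 2: optional size-based pruning. Keep dropping the oldest delta
    // while the cumulative delta size is not smaller than the snapshot.
    if prune_by_snapshot {
        while !deltas.is_empty() && deltas.iter().sum::<usize>() >= snapshot_size {
            deltas.remove(0);
        }
    }
    deltas
}
```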
## Startup Examples
## Quick Start
### Bash
### Docker
Plain TCP mode:
```bash
docker compose -f deploy/server/docker-compose.yml up -d --build
docker compose -f deploy/server/docker-compose.yml logs -f rpki-rtr
docker compose -f deploy/server/docker-compose.yml down
```
### Local run (script recommended)
```sh
sh ./scripts/start-rtr-server-tcp.sh
```
TLS / mutual TLS mode:
```sh
sh ./scripts/start-rtr-server-tls.sh
```
### Manual start
#### Plain TCP
```sh
export RPKI_RTR_ENABLE_TLS=false
export RPKI_RTR_TCP_ADDR=0.0.0.0:323
export RPKI_RTR_DB_PATH=./rtr-db
export RPKI_RTR_CCR_DIR=./data
export RPKI_RTR_PRUNE_DELTA_BY_SNAPSHOT_SIZE=false
export RPKI_RTR_STRICT_CCR_VALIDATION=false
export RPKI_RTR_TCP_KEEPALIVE_SECS=60
export RPKI_RTR_WARN_INSECURE_TCP=true
cargo run --bin rpki
```
#### TLS / mutual TLS
```sh
export RPKI_RTR_ENABLE_TLS=true
export RPKI_RTR_TCP_ADDR=0.0.0.0:323
export RPKI_RTR_TLS_ADDR=0.0.0.0:324
export RPKI_RTR_DB_PATH=./rtr-db
export RPKI_RTR_CCR_DIR=./data
export RPKI_RTR_PRUNE_DELTA_BY_SNAPSHOT_SIZE=false
export RPKI_RTR_STRICT_CCR_VALIDATION=false
export RPKI_RTR_TLS_CERT_PATH=./certs/server-dns.crt
export RPKI_RTR_TLS_KEY_PATH=./certs/server-dns.key
export RPKI_RTR_TLS_CLIENT_CA_PATH=./certs/client-ca.crt
export RPKI_RTR_TCP_KEEPALIVE_SECS=60
export RPKI_RTR_WARN_INSECURE_TCP=true
export RPKI_RTR_REQUIRE_TLS_SERVER_DNS_NAME_SAN=true
cargo run --bin rpki
```
Sample scripts:
Script entry points:
- [`scripts/start-rtr-server-tcp.sh`](scripts/start-rtr-server-tcp.sh)
- [`scripts/start-rtr-server-tls.sh`](scripts/start-rtr-server-tls.sh)
- [`scripts/start-rtr-server.sh`](scripts/start-rtr-server.sh)
### Manual local run (minimal example)
Plain TCP:
```sh
export RPKI_RTR_ENABLE_TLS=false
export RPKI_RTR_CCR_DIR=./data
cargo run --bin rpki
```
TLS / mutual TLS:
```sh
export RPKI_RTR_ENABLE_TLS=true
export RPKI_RTR_CCR_DIR=./data
export RPKI_RTR_TLS_CERT_PATH=./certs/server-dns.crt
export RPKI_RTR_TLS_KEY_PATH=./certs/server-dns.key
export RPKI_RTR_TLS_CLIENT_CA_PATH=./certs/client-ca.crt
cargo run --bin rpki
```
## CCR Input
The server scans the directory pointed to by `RPKI_RTR_CCR_DIR` for the latest `.ccr` file and extracts from it:
- `VRP`
- `VAP / ASPA`
- `VAP (Validated ASPA Payload) / ASPA`
Sample test data:
- [`data/20260324T000037Z-sng1.ccr`](data/20260324T000037Z-sng1.ccr)
- [`data/20260324T000138Z-zur1.ccr`](data/20260324T000138Z-zur1.ccr)
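The sample snapshots above are named with a UTC timestamp prefix, so scanning for the latest `.ccr` can be approximated by taking the lexicographic maximum of the `.ccr` file names. A hedged sketch (`latest_ccr_name` is hypothetical, not the repository's actual loader, which may also consult modification times):

```rust
// Hypothetical sketch: pick the newest `.ccr` snapshot from a directory
// listing. With timestamp-prefixed names like `20260324T000138Z-zur1.ccr`,
// the lexicographically greatest `.ccr` name is also the newest one.
fn latest_ccr_name<'a, I>(names: I) -> Option<&'a str>
where
    I: IntoIterator<Item = &'a str>,
{
    names
        .into_iter()
        .filter(|name| name.ends_with(".ccr")) // ignore non-snapshot files
        .max()                                 // lexicographic max == newest
}
```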
## Client Usage
The clients exercise the RTR server from different angles, covering three needs: protocol debugging, automated testing, and black-box interop.
### Purpose and selection
| Client | Main purpose | Typical scenarios |
| --- | --- | --- |
| `rtr_debug_client` | Protocol-level interactive debugging; convenient for manually issuing `reset/serial` and inspecting responses | Development debugging, diagnosing session/PDU issues, quick reproduction |
| `rpki-rs-test-client` | Automated verification built on the `rpki-rs` client API | CI/CD, regression testing, batched step checks |
| `FRR` | Black-box integration with real routing software; verifies interop with production-side clients | Interoperability testing, operations drills, end-to-end verification |
Recommended order:
1. Start with `rtr_debug_client` to quickly confirm the service is reachable and that the protocol version and basic responses are sane.
2. Then use `rpki-rs-test-client` for repeatable, automated step checks.
3. Finally use `FRR` for black-box interop verification to confirm real client behavior.
### rtr_debug_client (local)
```sh
cargo run --bin rtr_debug_client -- 127.0.0.1:323 1 reset
```
Notes:
- Good for manual troubleshooting: quickly switch between TCP/TLS, protocol versions, and request types to compare responses.
- Good for problem isolation: reproduce problematic traffic with minimal parameters when the server logs show anomalies.
### rtr_debug_client (Docker)
```bash
docker compose -f deploy/client/docker-compose.yml up --build
docker compose -f deploy/client/docker-compose.yml logs -f rtr-debug-client
docker compose -f deploy/client/docker-compose.yml down
```
Multi-instance load testing / integration:
```bash
docker compose -f deploy/client/docker-compose.clients.yml up -d
docker compose -f deploy/client/docker-compose.clients.yml logs -f
docker compose -f deploy/client/docker-compose.clients.yml down
```
### rpki-rs-test-client (Docker)
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml up --build
docker compose -f deploy/rpki-rs-client/docker-compose.yml run --rm \
rpki-rs-test-client 127.0.0.1:323 2 reset --steps 1 --assert-min-records 1
docker compose -f deploy/rpki-rs-client/docker-compose.yml down
```
Notes:
- Good for automation: usable as a protocol checker in a pipeline.
- Good for regression checks: parameterized steps (`--steps`) and assertions (such as `--assert-min-records`) verify behavior deterministically.
### FRR (Docker, black-box client)
```bash
docker compose -f deploy/frr/docker-compose.yml up -d
docker exec -it frr-rpki-client vtysh -c "show rpki cache-connection"
docker exec -it frr-rpki-client vtysh -c "show rpki prefix-table"
docker compose -f deploy/frr/docker-compose.yml down
```
Notes:
- Good for real-world interop verification: FRR is closest to production client behavior.
- Good for observing end results: inspect cache connection state and the prefix table directly.
## RTR Debug Client
@ -204,3 +307,62 @@ Core state in RocksDB is stored independently per version:
```sh
cargo test --test test_store_boundary -- --nocapture
```
## Deploy Layout
The `deploy/` directory is split by role into four deployment and test entry points:
- `server/`: containerized deployment of this repository's RTR Server (`src/main.rs`)
- `client/`: containerized deployment of this repository's `rtr_debug_client`
- `rpki-rs-client/`: containerized test client built on the external `rpki-rs` client API
- `frr/`: configuration and compose for FRR as a black-box RTR client
### Quick Start
Shortest path (local container environment):
```bash
docker compose -f deploy/server/docker-compose.yml up -d --build
docker compose -f deploy/server/docker-compose.yml logs -f rpki-rtr
docker compose -f deploy/server/docker-compose.yml down
```
### Compose entry points
| Role | Compose file |
| --- | --- |
| Server | `deploy/server/docker-compose.yml` |
| Debug Client (single instance) | `deploy/client/docker-compose.yml` |
| Debug Client (multi-instance) | `deploy/client/docker-compose.clients.yml` |
| rpki-rs Client | `deploy/rpki-rs-client/docker-compose.yml` |
| FRR Client | `deploy/frr/docker-compose.yml` |
Generic operation template:
```bash
docker compose -f <compose-file> up -d --build
docker compose -f <compose-file> logs -f
docker compose -f <compose-file> down
```
### Role-specific commands
rpki-rs Client (run with overridden defaults):
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml run --rm \
rpki-rs-test-client 127.0.0.1:323 2 reset --steps 1 --assert-min-records 1
```
FRR Client (check connection and prefixes):
```bash
docker exec -it frr-rpki-client vtysh -c "show rpki cache-connection"
docker exec -it frr-rpki-client vtysh -c "show rpki prefix-table"
```
More deployment details:
- `deploy/server/DEPLOYMENT.md`
- `deploy/frr/README.md`
- `deploy/frr/README.zh.md`

deploy/README.md Normal file

@ -0,0 +1,168 @@
# Deploy Layout
The `deploy/` directory is split by role into four deployment and test entry points:
- `server/`: containerized deployment of this repository's RTR Server (`src/main.rs`)
- `client/`: containerized deployment of this repository's `rtr_debug_client`
- `rpki-rs-client/`: containerized test client built on the external `rpki-rs` client API
- `frr/`: configuration and compose for FRR as a black-box RTR client
---
## 1) Server
Paths:
- `deploy/server/Dockerfile`
- `deploy/server/docker-compose.yml`
- `deploy/server/supervisord.conf`
- `deploy/server/DEPLOYMENT.md`
Build the image standalone:
```bash
docker build -f deploy/server/Dockerfile -t rpki-rtr:latest .
```
Start:
```bash
docker compose -f deploy/server/docker-compose.yml up -d --build
```
Stop:
```bash
docker compose -f deploy/server/docker-compose.yml down
```
Logs:
```bash
docker compose -f deploy/server/docker-compose.yml logs -f rpki-rtr
```
---
## 2) Debug Client
Paths:
- `deploy/client/Dockerfile`
- `deploy/client/docker-compose.yml`
- `deploy/client/docker-compose.clients.yml`
Build the image standalone:
```bash
docker build -f deploy/client/Dockerfile -t rpki-rtr-debug-client:latest .
```
Single-instance start (interactive debugging):
```bash
docker compose -f deploy/client/docker-compose.yml up --build
```
Single-instance stop:
```bash
docker compose -f deploy/client/docker-compose.yml down
```
Single-instance logs:
```bash
docker compose -f deploy/client/docker-compose.yml logs -f rtr-debug-client
```
Multi-instance start (5 concurrent clients):
```bash
docker compose -f deploy/client/docker-compose.clients.yml up -d
```
Multi-instance stop:
```bash
docker compose -f deploy/client/docker-compose.clients.yml down
```
Multi-instance logs:
```bash
docker compose -f deploy/client/docker-compose.clients.yml logs -f
```
---
## 3) rpki-rs Client
Paths:
- `deploy/rpki-rs-client/Dockerfile`
- `deploy/rpki-rs-client/docker-compose.yml`
Build the image standalone:
```bash
docker build -f deploy/rpki-rs-client/Dockerfile -t rpki-rs-test-client:latest .
```
Default start (automatic serial test):
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml up --build
```
Run with overridden defaults:
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml run --rm \
rpki-rs-test-client 127.0.0.1:323 2 reset --steps 1 --assert-min-records 1
```
Stop:
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml down
```
Logs:
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml logs -f rpki-rs-test-client
```
---
## 4) FRR Client
Paths:
- `deploy/frr/docker-compose.yml`
- `deploy/frr/daemons.example`
- `deploy/frr/frr.conf.example`
- `deploy/frr/README.md`
- `deploy/frr/README.zh.md`
Start:
```bash
docker compose -f deploy/frr/docker-compose.yml up -d
```
Check the connection:
```bash
docker exec -it frr-rpki-client vtysh -c "show rpki cache-connection"
docker exec -it frr-rpki-client vtysh -c "show rpki prefix-table"
```
Stop:
```bash
docker compose -f deploy/frr/docker-compose.yml down
```
Logs:
```bash
docker compose -f deploy/frr/docker-compose.yml logs -f frr-rpki-client
```


@ -0,0 +1,11 @@
services:
rtr-debug-client:
build:
context: ../..
dockerfile: deploy/client/Dockerfile
image: rpki-rtr-debug-client:latest
network_mode: host
command: ["127.0.0.1:323", "2", "reset", "--keep-after-error", "--summary-only"]
restart: unless-stopped
stdin_open: true
tty: true


@ -1,10 +0,0 @@
version: "3.9"
services:
rtr-debug-client:
build:
context: ..
dockerfile: deploy/Dockerfile.client
image: rpki-rtr-debug-client:latest
stdin_open: true
tty: true

deploy/frr/README.md Normal file

@ -0,0 +1,64 @@
# FRR Minimal RTR Client Config
Chinese documentation: [README.zh.md](./README.zh.md)
This folder provides a minimal FRR setup for black-box interop testing
against this repository's RTR server defaults.
Server defaults in this repo:
- TCP: `0.0.0.0:323`
- TLS: `0.0.0.0:324`
Reference:
- `src/main.rs`
## Files
- `daemons.example`: sample `/etc/frr/daemons`
- `frr.conf.example`: sample `/etc/frr/frr.conf`
## How to apply on an FRR host
1. Copy `daemons.example` to `/etc/frr/daemons`.
2. Copy `frr.conf.example` to `/etc/frr/frr.conf`.
3. Restart FRR:
```bash
sudo systemctl restart frr
```
## Verify
```bash
vtysh -c "show rpki configuration"
vtysh -c "show rpki cache-server"
vtysh -c "show rpki cache-connection"
vtysh -c "show rpki prefix-table"
```
If `show rpki cache-connection` is connected and `show rpki prefix-table`
contains VRPs, the RTR client path is working.
## Docker quick start
From repository root:
```bash
docker compose -f deploy/frr/docker-compose.yml up -d
docker exec -it frr-rpki-client vtysh -c "show rpki cache-connection"
docker exec -it frr-rpki-client vtysh -c "show rpki prefix-table"
```
Stop:
```bash
docker compose -f deploy/frr/docker-compose.yml down
```
## Notes
- This setup targets RTR over TCP (`rpki cache tcp`).
- Keep protocol-level conformance checks in Rust tests and
`src/bin/rtr_debug_client`.
- `network_mode: host` expects your RTR server to be reachable at
`127.0.0.1:323` from the Docker host.

deploy/frr/README.zh.md Normal file

@ -0,0 +1,71 @@
# FRR Minimal RTR Client Config
This directory provides a minimal FRR configuration for black-box interop testing against this repository's RTR Server.
Default RTR listen addresses in this repository:
- TCP: `0.0.0.0:323`
- TLS: `0.0.0.0:324`
Reference implementation:
- `src/main.rs`
## Files
- `daemons.example`: sample `/etc/frr/daemons`
- `frr.conf.example`: sample `/etc/frr/frr.conf`
## How to apply on an FRR host
1. Copy `daemons.example` to `/etc/frr/daemons`.
2. Copy `frr.conf.example` to `/etc/frr/frr.conf`.
3. Restart FRR:
```bash
sudo systemctl restart frr
```
## Verify
```bash
vtysh -c "show rpki configuration"
vtysh -c "show rpki cache-server"
vtysh -c "show rpki cache-connection"
vtysh -c "show rpki prefix-table"
```
When `show rpki cache-connection` reports connected and `show rpki prefix-table` contains VRPs, the RTR client path is working.
## Docker quick start
From the repository root:
```bash
docker compose -f deploy/frr/docker-compose.yml up -d
docker exec -it frr-rpki-client vtysh -c "show rpki cache-connection"
docker exec -it frr-rpki-client vtysh -c "show rpki prefix-table"
```
Stop:
```bash
docker compose -f deploy/frr/docker-compose.yml down
```
## What this covers
- FRR's ability, as an RTR client, to establish a TCP session with this repository's server
- Basic session-state visibility on the FRR side (`cache-server` / `cache-connection`)
- Whether VRPs are delivered and imported successfully (entries in `prefix-table`)
- Prefix-table updates triggered by server-side data changes (observable after replacing `data`)
- Black-box interop regression: confirms functionality from the router-client point of view
## What this does not cover
- Not a replacement for protocol-level unit/integration tests (PDU details, error paths, edge cases)
- Not a replacement for the per-PDU debugging capability of `src/bin/rtr_debug_client`
- The default example targets TCP; TLS/mTLS needs separate setup with your certificates and FRR configuration
## Notes
- The current example targets RTR over TCP (`rpki cache tcp`)
- With `network_mode: host`, `127.0.0.1:323` inside the container points at the Docker host; make sure the local RTR server is reachable


@ -0,0 +1,7 @@
# Minimal FRR daemons config for RPKI testing
zebra=yes
bgpd=yes
# Enable bgpd RPKI module
bgpd_options=" -A 127.0.0.1 -M rpki"


@ -0,0 +1,10 @@
services:
frr-rpki-client:
image: quay.io/frrouting/frr:10.2.2
container_name: frr-rpki-client
restart: unless-stopped
network_mode: host
privileged: true
volumes:
- ./frr/daemons.example:/etc/frr/daemons:ro
- ./frr/frr.conf.example:/etc/frr/frr.conf:ro


@ -0,0 +1,22 @@
frr version 10.2
frr defaults traditional
hostname rpki-lab
service integrated-vtysh-config
!
debug rpki
!
rpki
rpki polling_period 10
rpki timeout 10
rpki retry_interval 10
rpki expire_interval 7200
rpki cache tcp 127.0.0.1 323 preference 1
exit
!
router bgp 65001
bgp router-id 192.0.2.1
!
address-family ipv4 unicast
exit-address-family
!
line vty


@ -0,0 +1,24 @@
FROM rust:1.89-bookworm AS builder
WORKDIR /build
RUN apt-get update \
&& apt-get install -y --no-install-recommends clang libclang-dev pkg-config \
&& rm -rf /var/lib/apt/lists/*
COPY Cargo.toml Cargo.lock ./
COPY src ./src
RUN cargo build --release --bin rpki_rs_test_client
FROM debian:bookworm-slim AS runtime
RUN apt-get update \
&& apt-get install -y --no-install-recommends ca-certificates \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /build/target/release/rpki_rs_test_client /usr/local/bin/rpki_rs_test_client
ENTRYPOINT ["/usr/local/bin/rpki_rs_test_client"]


@ -0,0 +1,11 @@
services:
rpki-rs-test-client:
build:
context: ../..
dockerfile: deploy/rpki-rs-client/Dockerfile
image: rpki-rs-test-client:latest
container_name: rpki-rs-test-client
network_mode: host
command: ["127.0.0.1:323", "2", "serial", "--steps", "2", "--follow"]
stdin_open: true
tty: true


@ -10,9 +10,9 @@ This project runs `src/main.rs` as a long-running server that:
## Files
- `deploy/Dockerfile`
- `deploy/supervisord.conf`
- `deploy/docker-compose.yml`
- `deploy/server/Dockerfile`
- `deploy/server/supervisord.conf`
- `deploy/server/docker-compose.yml`
## Runtime Paths in Container
@ -24,17 +24,17 @@ This project runs `src/main.rs` as a long-running server that:
## Start
```bash
docker compose -f deploy/docker-compose.yml up -d --build
docker compose -f deploy/server/docker-compose.yml up -d --build
```
## Stop
```bash
docker compose -f deploy/docker-compose.yml down
docker compose -f deploy/server/docker-compose.yml down
```
## Logs
```bash
docker compose -f deploy/docker-compose.yml logs -f rpki-rtr
docker compose -f deploy/server/docker-compose.yml logs -f rpki-rtr
```


@ -26,7 +26,7 @@ RUN apt-get update \
WORKDIR /app
COPY --from=builder /build/target/release/rpki /usr/local/bin/rpki
COPY deploy/supervisord.conf /etc/supervisor/conf.d/rpki-rtr.conf
COPY deploy/server/supervisord.conf /etc/supervisor/conf.d/rpki-rtr.conf
RUN mkdir -p /app/data /app/rtr-db /app/certs /app/slurm /var/log/supervisor


@ -3,8 +3,8 @@ version: "3.9"
services:
rpki-rtr:
build:
context: ..
dockerfile: deploy/Dockerfile
context: ../..
dockerfile: deploy/server/Dockerfile
image: rpki-rtr:latest
container_name: rpki-rtr
restart: unless-stopped
@ -21,8 +21,8 @@ services:
RPKI_RTR_STRICT_CCR_VALIDATION: "false"
RPKI_RTR_REFRESH_INTERVAL_SECS: "300"
volumes:
- ../data:/app/data:ro
- ../rtr-db:/app/rtr-db
- ../data:/app/slurm:ro
- ../../data:/app/data:ro
- ../../rtr-db:/app/rtr-db
- ../../data:/app/slurm:ro
# TLS mode example:
# - ../certs:/app/certs:ro
# - ../../certs:/app/certs:ro


@ -0,0 +1,112 @@
# rpki_rs_test_client
`rpki_rs_test_client` is a test tool built on the `rpki-rs` RTR client API; its argument style is aligned with `rtr_debug_client`.
It calls the external crate's client API directly (`Client` / `PayloadTarget`) rather than re-implementing the RTR client state machine.
## Build
```bash
cargo build --bin rpki_rs_test_client
```
## Basic usage
```bash
cargo run --bin rpki_rs_test_client -- <addr> <version> [reset|serial|serial <session_id> <serial>] [options]
```
Defaults:
- `addr`: `127.0.0.1:323`
- `version`: `2`
- `mode`: `reset`
## Common options
- `--steps <n>`: number of `client.step()` calls to run (default `1`)
- `--follow`: keep calling `client.step()` after bootstrap finishes (long-running mode)
- `--print-records`: print the converged payload records
- `--assert-min-records <n>`: assert a lower bound on the number of converged records
- `--assert-substr <text>`: string assertion against the payloads' `Debug` output (repeatable)
TLS options:
- `--tls`
- `--ca-cert <path>`
- `--server-name <name>`
- `--client-cert <path>`
- `--client-key <path>`
## Limitations
- The `rpki-rs v0.18` client API does not support overriding the initial protocol version, so only `version=2` is accepted here.
- `serial` mode (without arguments) is supported: it performs serial updates based on the client's internal state.
- Explicit state injection via `serial <session_id> <serial>` is not supported (passing it fails immediately).
## Examples
TCP connectivity + minimum record-count assertion:
```bash
cargo run --bin rpki_rs_test_client -- \
127.0.0.1:323 \
2 reset \
--steps 1 \
--assert-min-records 1
```
Automatic serial (no sid/serial needed):
```bash
cargo run --bin rpki_rs_test_client -- \
127.0.0.1:323 \
2 serial --steps 2 --follow
```
String assertions against the contents of `mini_data`:
```bash
cargo run --bin rpki_rs_test_client -- \
127.0.0.1:323 \
2 reset \
--assert-substr "10.0.1.0" \
--assert-substr "65003"
```
TLS scenario:
```bash
cargo run --bin rpki_rs_test_client -- \
127.0.0.1:324 \
2 reset \
--tls \
--ca-cert tests/fixtures/tls/client-ca.crt \
--server-name localhost
```
## Docker (deploy)
Build and start (Linux server, `host` network):
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml up --build
```
按需覆盖运行参数(覆盖 compose 默认 `command`
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml run --rm \
rpki-rs-test-client 127.0.0.1:323 2 reset --steps 1 --assert-min-records 1
```
Long-running follow mode (bootstrap first, then continuous serial/notify):
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml run --rm \
rpki-rs-test-client 127.0.0.1:323 2 serial --steps 2 --follow
```
Stop:
```bash
docker compose -f deploy/rpki-rs-client/docker-compose.yml down
```


@ -0,0 +1,563 @@
use std::env;
use std::io;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use rustls::{ClientConfig as RustlsClientConfig, RootCertStore};
use rustls_pki_types::{CertificateDer, PrivateKeyDer, ServerName};
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::net::TcpStream;
use tokio_rustls::TlsConnector;
use rpki_rs::rtr::client::{Client, PayloadError, PayloadTarget};
use rpki_rs::rtr::payload::{Action, Payload, Timing};
const DEFAULT_TIMEOUT_SECS: u64 = 10;
const DEFAULT_STEPS: usize = 1;
trait AsyncStream: AsyncRead + AsyncWrite + Unpin + Send {}
impl<T> AsyncStream for T where T: AsyncRead + AsyncWrite + Unpin + Send {}
type DynStream = Box<dyn AsyncStream>;
#[derive(Debug, Clone)]
struct Config {
addr: String,
version: u8,
mode: QueryMode,
steps: usize,
follow: bool,
transport: TransportConfig,
assert_substr: Vec<String>,
assert_min_records: Option<usize>,
print_records: bool,
}
#[derive(Debug, Clone, Copy)]
enum QueryMode {
Reset,
SerialAuto,
Serial { session_id: u16, serial: u32 },
}
impl Config {
fn from_args() -> io::Result<Self> {
let mut args = env::args().skip(1);
let mut positional = Vec::new();
let mut steps = DEFAULT_STEPS;
let mut follow = false;
let mut transport = TransportConfig::Tcp;
let mut assert_substr = Vec::new();
let mut assert_min_records = None;
let mut print_records = false;
while let Some(arg) = args.next() {
match arg.as_str() {
"--version" => {
let _ = args.next().ok_or_else(|| {
io::Error::new(io::ErrorKind::InvalidInput, "--version requires value")
})?;
// rpki-rs v0.18 client only exposes Client::new without
// initial-version override. Keep this option as reserved.
}
"--steps" => {
let v = args.next().ok_or_else(|| {
io::Error::new(io::ErrorKind::InvalidInput, "--steps requires value")
})?;
steps = parse_usize_arg(&v, "--steps")?;
}
"--follow" => {
follow = true;
}
"--tls" => {
if matches!(transport, TransportConfig::Tcp) {
transport = TransportConfig::Tls(TlsConfig::default());
}
}
"--ca-cert" => {
let v = args.next().ok_or_else(|| {
io::Error::new(io::ErrorKind::InvalidInput, "--ca-cert requires path")
})?;
ensure_tls(&mut transport)?.ca_cert = Some(PathBuf::from(v));
}
"--server-name" => {
let v = args.next().ok_or_else(|| {
io::Error::new(io::ErrorKind::InvalidInput, "--server-name requires value")
})?;
ensure_tls(&mut transport)?.server_name = Some(v);
}
"--client-cert" => {
let v = args.next().ok_or_else(|| {
io::Error::new(io::ErrorKind::InvalidInput, "--client-cert requires path")
})?;
ensure_tls(&mut transport)?.client_cert = Some(PathBuf::from(v));
}
"--client-key" => {
let v = args.next().ok_or_else(|| {
io::Error::new(io::ErrorKind::InvalidInput, "--client-key requires path")
})?;
ensure_tls(&mut transport)?.client_key = Some(PathBuf::from(v));
}
"--assert-substr" => {
let v = args.next().ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidInput,
"--assert-substr requires value",
)
})?;
assert_substr.push(v);
}
"--assert-min-records" => {
let v = args.next().ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidInput,
"--assert-min-records requires value",
)
})?;
assert_min_records = Some(parse_usize_arg(&v, "--assert-min-records")?);
}
"--print-records" => {
print_records = true;
}
"--timeout" => {
let _ = args.next().ok_or_else(|| {
io::Error::new(io::ErrorKind::InvalidInput, "--timeout requires value")
})?;
// This binary relies on rpki-rs client's built-in IO timeout.
}
_ if arg.starts_with("--") => {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
format!("unknown option '{}'", arg),
));
}
_ => positional.push(arg),
}
}
let mut positional = positional.into_iter();
let addr = positional
.next()
.unwrap_or_else(|| "127.0.0.1:323".to_string());
let version = positional
.next()
.map(|v| parse_u8_arg(&v, "version"))
.transpose()?
.unwrap_or(2);
let mode = match positional.next().as_deref() {
None | Some("reset") => QueryMode::Reset,
Some("serial") if positional.clone().next().is_none() => QueryMode::SerialAuto,
Some("serial") => {
let session_id = parse_u16_arg(
&positional.next().ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidInput,
"serial mode requires session_id and serial",
)
})?,
"session_id",
)?;
let serial = parse_u32_arg(
&positional.next().ok_or_else(|| {
io::Error::new(io::ErrorKind::InvalidInput, "serial mode requires serial")
})?,
"serial",
)?;
QueryMode::Serial { session_id, serial }
}
Some(other) => {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
format!("invalid mode '{}', expected 'reset' or 'serial'", other),
));
}
};
let transport = finalize_transport(transport, &addr)?;
Ok(Self {
addr,
version,
mode,
steps,
follow,
transport,
assert_substr,
assert_min_records,
print_records,
})
}
}
#[derive(Debug, Clone)]
enum TransportConfig {
Tcp,
Tls(TlsConfig),
}
#[derive(Debug, Clone, Default)]
struct TlsConfig {
server_name: Option<String>,
ca_cert: Option<PathBuf>,
client_cert: Option<PathBuf>,
client_key: Option<PathBuf>,
}
fn ensure_tls(transport: &mut TransportConfig) -> io::Result<&mut TlsConfig> {
if matches!(transport, TransportConfig::Tcp) {
*transport = TransportConfig::Tls(TlsConfig::default());
}
match transport {
TransportConfig::Tls(cfg) => Ok(cfg),
TransportConfig::Tcp => Err(io::Error::other("tls configuration unavailable")),
}
}
fn finalize_transport(transport: TransportConfig, addr: &str) -> io::Result<TransportConfig> {
match transport {
TransportConfig::Tcp => Ok(TransportConfig::Tcp),
TransportConfig::Tls(mut cfg) => {
let ca_cert = cfg.ca_cert.take().ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidInput,
"TLS mode requires --ca-cert <path>",
)
})?;
match (&cfg.client_cert, &cfg.client_key) {
(Some(_), Some(_)) | (None, None) => {}
_ => {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
"TLS client auth requires both --client-cert and --client-key",
));
}
}
let server_name = cfg
.server_name
.take()
.or_else(|| default_server_name_for_addr(addr))
.ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidInput,
"TLS mode requires --server-name or parseable host",
)
})?;
Ok(TransportConfig::Tls(TlsConfig {
server_name: Some(server_name),
ca_cert: Some(ca_cert),
client_cert: cfg.client_cert,
client_key: cfg.client_key,
}))
}
}
}
#[derive(Debug, Default)]
struct InMemoryTarget {
records: Vec<Payload>,
timing: Option<Timing>,
announced: u64,
withdrawn: u64,
}
impl InMemoryTarget {
fn dump_text(&self) -> String {
self.records
.iter()
.map(|p| format!("{:?}", p))
.collect::<Vec<_>>()
.join("\n")
}
}
impl PayloadTarget for InMemoryTarget {
type Update = Vec<(Action, Payload)>;
fn start(&mut self, reset: bool) -> Self::Update {
if reset {
self.records.clear();
}
Vec::new()
}
fn apply(&mut self, update: Self::Update, timing: Timing) -> Result<(), PayloadError> {
for (action, payload) in update {
match action {
Action::Announce => {
self.announced += 1;
if self.records.iter().any(|p| p == &payload) {
return Err(PayloadError::DuplicateAnnounce);
}
self.records.push(payload);
}
Action::Withdraw => {
self.withdrawn += 1;
if let Some(pos) = self.records.iter().position(|p| p == &payload) {
self.records.swap_remove(pos);
} else {
return Err(PayloadError::UnknownWithdraw);
}
}
}
}
self.timing = Some(timing);
Ok(())
}
}
#[tokio::main]
async fn main() -> io::Result<()> {
let config = Config::from_args()?;
println!("== rpki_rs_test_client ==");
println!("target : {}", config.addr);
println!("version : {}", config.version);
println!(
"mode : {}",
match config.mode {
QueryMode::Reset => "reset".to_string(),
QueryMode::SerialAuto => "serial(auto)".to_string(),
QueryMode::Serial { session_id, serial } => {
format!("serial sid={} serial={}", session_id, serial)
}
}
);
println!("steps : {}", config.steps);
println!("follow : {}", config.follow);
println!(
"timeout : {}s (from rpki-rs client IO timeout)",
DEFAULT_TIMEOUT_SECS
);
if config.version != 2 {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
"rpki-rs v0.18 client API does not expose initial-version override; please use version 2",
));
}
if matches!(config.mode, QueryMode::Serial { .. }) {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
"rpki-rs v0.18 Client::new cannot bootstrap explicit serial state; use reset mode",
));
}
let stream = connect_stream(&config).await?;
let target = InMemoryTarget::default();
let mut client = Client::new(stream, target, None);
let bootstrap_steps = match config.mode {
QueryMode::SerialAuto if config.steps < 2 => 2,
_ => config.steps,
};
if bootstrap_steps != config.steps {
println!(
"steps adjusted : {} -> {} (serial(auto) needs at least 2 steps)",
config.steps, bootstrap_steps
);
}
for idx in 0..bootstrap_steps {
client.step().await.map_err(|err| {
io::Error::new(err.kind(), format!("step {} failed: {}", idx + 1, err))
})?;
println!("[step] bootstrap {} ok", idx + 1);
}
if config.follow {
println!("[follow] enabled, entering continuous step loop");
let mut step_index = bootstrap_steps;
loop {
step_index += 1;
client.step().await.map_err(|err| {
io::Error::new(err.kind(), format!("step {} failed: {}", step_index, err))
})?;
println!("[step] follow {} ok", step_index);
}
}
let negotiated_state = client.state();
let target = client.into_target();
println!("state : {:?}", negotiated_state);
if let Some(timing) = target.timing {
println!(
"timing : refresh={} retry={} expire={}",
timing.refresh, timing.retry, timing.expire
);
}
println!("records : {}", target.records.len());
println!(
"updates : announce={} withdraw={}",
target.announced, target.withdrawn
);
if config.print_records {
println!("-- records --");
for rec in &target.records {
println!("{:?}", rec);
}
}
run_assertions(&config, &target)?;
println!("[assert] passed");
Ok(())
}
fn run_assertions(config: &Config, target: &InMemoryTarget) -> io::Result<()> {
if let Some(min) = config.assert_min_records
&& target.records.len() < min
{
return Err(io::Error::other(format!(
"assertion failed: records {} < {}",
target.records.len(),
min
)));
}
if !config.assert_substr.is_empty() {
let dump = target.dump_text();
for needle in &config.assert_substr {
if !dump.contains(needle) {
return Err(io::Error::other(format!(
"assertion failed: missing substring '{}'",
needle
)));
}
}
}
Ok(())
}
async fn connect_stream(config: &Config) -> io::Result<DynStream> {
match &config.transport {
TransportConfig::Tcp => Ok(Box::new(TcpStream::connect(&config.addr).await?)),
TransportConfig::Tls(tls) => connect_tls_stream(&config.addr, tls).await,
}
}
async fn connect_tls_stream(addr: &str, tls: &TlsConfig) -> io::Result<DynStream> {
let stream = TcpStream::connect(addr).await?;
let connector = build_tls_connector(tls)?;
let server_name_str = tls
.server_name
.as_ref()
.ok_or_else(|| io::Error::new(io::ErrorKind::InvalidInput, "missing TLS server name"))?;
let server_name = ServerName::try_from(server_name_str.clone()).map_err(|err| {
io::Error::new(
io::ErrorKind::InvalidInput,
format!("invalid TLS server name '{}': {}", server_name_str, err),
)
})?;
let tls_stream = connector.connect(server_name, stream).await.map_err(|err| {
io::Error::new(
io::ErrorKind::ConnectionAborted,
format!("TLS handshake failed: {}", err),
)
})?;
Ok(Box::new(tls_stream))
}
fn build_tls_connector(tls: &TlsConfig) -> io::Result<TlsConnector> {
let ca_cert_path = tls
.ca_cert
.as_ref()
.ok_or_else(|| io::Error::new(io::ErrorKind::InvalidInput, "missing TLS CA cert"))?;
let ca_certs = load_certs(ca_cert_path)?;
let mut roots = RootCertStore::empty();
let (added, _) = roots.add_parsable_certificates(ca_certs);
if added == 0 {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
format!("no valid CA certs in {}", ca_cert_path.display()),
));
}
let builder = RustlsClientConfig::builder().with_root_certificates(roots);
let cfg = match (&tls.client_cert, &tls.client_key) {
(Some(cert_path), Some(key_path)) => {
let certs = load_certs(cert_path)?;
let key = load_private_key(key_path)?;
builder.with_client_auth_cert(certs, key).map_err(|err| {
io::Error::new(
io::ErrorKind::InvalidInput,
format!("invalid client cert/key: {}", err),
)
})?
}
(None, None) => builder.with_no_client_auth(),
_ => {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
"client cert and key must be provided together",
));
}
};
Ok(TlsConnector::from(Arc::new(cfg)))
}
fn load_certs(path: &Path) -> io::Result<Vec<CertificateDer<'static>>> {
let mut reader = std::io::BufReader::new(std::fs::File::open(path)?);
let certs = rustls_pemfile::certs(&mut reader)
.collect::<Result<Vec<_>, _>>()
.map_err(|err| io::Error::new(io::ErrorKind::InvalidData, err))?;
if certs.is_empty() {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
format!("no certs found in {}", path.display()),
));
}
Ok(certs)
}
fn load_private_key(path: &Path) -> io::Result<PrivateKeyDer<'static>> {
let mut reader = std::io::BufReader::new(std::fs::File::open(path)?);
rustls_pemfile::private_key(&mut reader)
.map_err(|err| io::Error::new(io::ErrorKind::InvalidData, err))?
.ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidData,
format!("no private key in {}", path.display()),
)
})
}
fn default_server_name_for_addr(addr: &str) -> Option<String> {
if let Some(rest) = addr.strip_prefix('[') {
return rest.split(']').next().map(str::to_string);
}
addr.rsplit_once(':').map(|(host, _)| host.to_string())
}
fn parse_u8_arg(value: &str, name: &str) -> io::Result<u8> {
value.parse::<u8>().map_err(|err| {
io::Error::new(
io::ErrorKind::InvalidInput,
format!("invalid {} '{}': {}", name, value, err),
)
})
}
fn parse_u16_arg(value: &str, name: &str) -> io::Result<u16> {
value.parse::<u16>().map_err(|err| {
io::Error::new(
io::ErrorKind::InvalidInput,
format!("invalid {} '{}': {}", name, value, err),
)
})
}
fn parse_u32_arg(value: &str, name: &str) -> io::Result<u32> {
value.parse::<u32>().map_err(|err| {
io::Error::new(
io::ErrorKind::InvalidInput,
format!("invalid {} '{}': {}", name, value, err),
)
})
}
fn parse_usize_arg(value: &str, name: &str) -> io::Result<usize> {
value.parse::<usize>().map_err(|err| {
io::Error::new(
io::ErrorKind::InvalidInput,
format!("invalid {} '{}': {}", name, value, err),
)
})
}


@ -134,6 +134,12 @@ impl AppConfig {
err
)
})?;
if secs == 0 {
return Err(anyhow!(
"invalid RPKI_RTR_REFRESH_INTERVAL_SECS '{}': must be >= 1",
value
));
}
config.refresh_interval = Duration::from_secs(secs);
}
if let Some(value) = env_var("RPKI_RTR_MAX_CONNECTIONS")? {


@ -292,7 +292,7 @@ impl Timing {
impl Default for Timing {
fn default() -> Self {
Self {
refresh: 60,
retry: 600,
expire: 7200,
}


@ -11,7 +11,9 @@ use tokio::io::AsyncWrite;
use anyhow::bail;
use serde::Serialize;
use std::slice;
use std::sync::Once;
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWriteExt};
use tracing::debug;
pub const HEADER_LEN: usize = 8;
pub const MAX_PDU_LEN: u32 = 65535;
@ -22,6 +24,9 @@ pub const END_OF_DATA_V1_LEN: u32 = 24;
pub const ZERO_16: u16 = 0;
pub const ZERO_8: u8 = 0;
static ROUTER_KEY_RESERVED_ZERO_NONZERO_LOG_ONCE: Once = Once::new();
static ASPA_RESERVED_ZERO_NONZERO_LOG_ONCE: Once = Once::new();
macro_rules! common {
( $type:ident ) => {
#[allow(dead_code)]
@ -1003,6 +1008,15 @@ impl RouterKey {
let asn = Asn::from(u32::from_be_bytes(body[20..24].try_into().unwrap()));
let subject_public_key_info = Arc::<[u8]>::from(body[24..].to_vec());
if header.zero() != 0 {
ROUTER_KEY_RESERVED_ZERO_NONZERO_LOG_ONCE.call_once(|| {
debug!(
"received RouterKey PDU with non-zero reserved field (zero={}); ignoring per protocol compatibility",
header.zero()
);
});
}
let res = Self {
header,
flags: header.flags(),
@ -1015,10 +1029,25 @@ impl RouterKey {
}
pub async fn write<A: AsyncWrite + Unpin>(&self, w: &mut A) -> Result<(), io::Error> {
self.validate()?;
let length = Self::BASE_LEN + self.subject_public_key_info.len();
let length_u32 = u32::try_from(length).map_err(|_| {
io::Error::new(
io::ErrorKind::InvalidData,
"RouterKey PDU length exceeds u32",
)
})?;
if length_u32 > MAX_PDU_LEN {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
format!(
"RouterKey PDU too large: {} octets exceeds max {}",
length_u32, MAX_PDU_LEN
),
));
}
let header = HeaderWithFlags::new(self.header.version(), Self::PDU, self.flags, length_u32);
w.write_all(&[
header.version(),
@ -1028,7 +1057,7 @@ impl RouterKey {
])
.await?;
w.write_all(&length_u32.to_be_bytes()).await?;
w.write_all(self.ski.as_ref()).await?;
w.write_all(&self.asn.into_u32().to_be_bytes()).await?;
w.write_all(&self.subject_public_key_info).await?;
@ -1043,9 +1072,10 @@ impl RouterKey {
subject_public_key_info: Arc<[u8]>,
) -> Self {
let length = Self::BASE_LEN + subject_public_key_info.len();
let wire_length = u32::try_from(length).unwrap_or(u32::MAX);
Self {
header: HeaderWithFlags::new(version, Self::PDU, flags, wire_length),
flags,
ski,
asn,
@ -1072,16 +1102,17 @@ impl RouterKey {
"unexpected PDU type for RouterKey",
));
}
let total_len = usize::try_from(self.header.length()).unwrap_or(0);
if total_len < Self::BASE_LEN {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"RouterKey PDU shorter than fixed wire size",
));
}
if self.header.length() > MAX_PDU_LEN {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"RouterKey reserved zero octet must be zero",
"RouterKey PDU length exceeds MAX_PDU_LEN",
));
}
if self.header.flags().into_u8() & !0x01 != 0 {
@ -1165,6 +1196,15 @@ impl Aspa {
provider_asns.push(u32::from_be_bytes(chunk.try_into().unwrap()));
}
if header.zero() != 0 {
ASPA_RESERVED_ZERO_NONZERO_LOG_ONCE.call_once(|| {
debug!(
"received ASPA PDU with non-zero reserved field (zero={}); ignoring per protocol compatibility",
header.zero()
);
});
}
let res = Self {
header,
customer_asn,
@ -1175,13 +1215,26 @@ impl Aspa {
}
pub async fn write<A: AsyncWrite + Unpin>(&self, w: &mut A) -> Result<(), io::Error> {
self.validate()?;
let length = Self::BASE_LEN + (self.provider_asns.len() * 4);
let length_u32 = u32::try_from(length).map_err(|_| {
io::Error::new(io::ErrorKind::InvalidData, "ASPA PDU length exceeds u32")
})?;
if length_u32 > MAX_PDU_LEN {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
format!(
"ASPA PDU too large: {} octets exceeds max {}",
length_u32, MAX_PDU_LEN
),
));
}
let header = HeaderWithFlags::new(
self.header.version(),
Self::PDU,
self.header.flags(),
length_u32,
);
w.write_all(&[
@ -1192,7 +1245,7 @@ impl Aspa {
])
.await?;
w.write_all(&length_u32.to_be_bytes()).await?;
w.write_all(&self.customer_asn.to_be_bytes()).await?;
for asn in &self.provider_asns {
@ -1203,9 +1256,10 @@ impl Aspa {
}
pub fn new(version: u8, flags: Flags, customer_asn: u32, provider_asns: Vec<u32>) -> Self {
let length = Self::BASE_LEN + (provider_asns.len() * 4);
let wire_length = u32::try_from(length).unwrap_or(u32::MAX);
Self {
header: HeaderWithFlags::new(version, Self::PDU, flags, wire_length),
customer_asn,
provider_asns,
}
@ -1225,18 +1279,18 @@ impl Aspa {
"ASPA PDU shorter than fixed wire size",
));
}
if self.header.length() > MAX_PDU_LEN {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"ASPA PDU length exceeds MAX_PDU_LEN",
));
}
if (total_len - Self::BASE_LEN) % 4 != 0 {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"ASPA provider list length must be a multiple of four octets",
));
}
if self.header.flags().into_u8() & !0x01 != 0 {
return Err(io::Error::new(
io::ErrorKind::InvalidData,


@ -98,13 +98,17 @@ fn apply_slurm_to_payloads_from_dir(
}
fn read_slurm_files(slurm_dir: &str) -> Result<Vec<(String, SlurmFile)>> {
let mut paths = Vec::<PathBuf>::new();
for entry in std::fs::read_dir(slurm_dir)
.map_err(|err| anyhow!("failed to read SLURM directory '{}': {}", slurm_dir, err))?
{
let entry = entry
.map_err(|err| anyhow!("failed to enumerate SLURM directory '{}': {}", slurm_dir, err))?;
let path = entry.path();
if path.is_file() && path.extension().and_then(|ext| ext.to_str()) == Some("slurm") {
paths.push(path);
}
}
paths.sort_by_key(|path| {
path.file_name()


@ -114,7 +114,7 @@ async fn router_key_length_matches_wire_size() {
}
#[tokio::test]
async fn router_key_read_accepts_non_zero_reserved_zero_octet() {
let (mut client, mut server) = duplex(1024);
let mut bytes = vec![1, RouterKey::PDU, 1, 1];
bytes.extend_from_slice(&(8u32 + 20 + 4 + 4).to_be_bytes());
@ -124,9 +124,9 @@ async fn router_key_read_rejects_reserved_zero_octet() {
client.write_all(&bytes).await.unwrap();
let decoded = RouterKey::read(&mut server).await.unwrap();
assert_eq!(decoded.asn(), Asn::from(64496u32));
assert_eq!(decoded.spki(), &[1, 2, 3, 4]);
}
#[tokio::test]
@ -159,6 +159,19 @@ async fn aspa_read_rejects_unsorted_provider_list() {
assert!(err.to_string().contains("strictly increasing"));
}
#[tokio::test]
async fn aspa_read_accepts_non_zero_reserved_zero_octet() {
let (mut client, mut server) = duplex(1024);
let mut bytes = vec![2, Aspa::PDU, 1, 9];
bytes.extend_from_slice(&(16u32).to_be_bytes());
bytes.extend_from_slice(&64496u32.to_be_bytes());
bytes.extend_from_slice(&64497u32.to_be_bytes());
client.write_all(&bytes).await.unwrap();
let _decoded = Aspa::read(&mut server).await.unwrap();
}
#[tokio::test]
async fn aspa_read_rejects_withdraw_with_providers() {
let (mut client, mut server) = duplex(1024);

tests/test_pipeline.rs (new file)

@ -0,0 +1,43 @@
use std::fs;
use std::path::PathBuf;
use rpki::source::pipeline::{PayloadLoadConfig, load_payloads_from_latest_sources};
use tempfile::tempdir;
fn data_dir() -> String {
PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.join("data")
.to_string_lossy()
.to_string()
}
#[test]
fn load_payloads_rejects_entire_slurm_set_when_any_file_is_invalid() {
let slurm_dir = tempdir().expect("create temp slurm dir");
let valid = r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
}
}"#;
fs::write(slurm_dir.path().join("01-valid.slurm"), valid).expect("write valid slurm");
fs::write(slurm_dir.path().join("02-invalid.slurm"), "{").expect("write invalid slurm");
let config = PayloadLoadConfig {
ccr_dir: data_dir(),
slurm_dir: Some(slurm_dir.path().to_string_lossy().to_string()),
strict_ccr_validation: false,
};
let err = load_payloads_from_latest_sources(&config).unwrap_err();
let text = err.to_string();
assert!(text.contains("failed to parse SLURM file"));
assert!(text.contains("02-invalid.slurm"));
}


@ -24,7 +24,47 @@ fn sample_ski_b64() -> String {
STANDARD_NO_PAD.encode(sample_ski())
}
fn log_slurm_input(name: &str, json: &str) {
println!("[{name}] SLURM input:\n{json}");
}
fn log_slurm_ok(name: &str, slurm: &SlurmFile) {
println!(
"[{name}] parsed ok: version={}, prefix_filters={}, bgpsec_filters={}, aspa_filters={}, prefix_assertions={}, bgpsec_assertions={}, aspa_assertions={}",
slurm.version().as_u32(),
slurm.validation_output_filters().prefix_filters.len(),
slurm.validation_output_filters().bgpsec_filters.len(),
slurm.validation_output_filters().aspa_filters.len(),
slurm.locally_added_assertions().prefix_assertions.len(),
slurm.locally_added_assertions().bgpsec_assertions.len(),
slurm.locally_added_assertions().aspa_assertions.len(),
);
}
fn log_slurm_err(name: &str, err: &impl std::fmt::Display) {
println!("[{name}] rejected: {err}");
}
fn log_payload_result(name: &str, payloads: &[Payload]) {
println!("[{name}] payload result count={}", payloads.len());
for payload in payloads {
println!("[{name}] payload: {:?}", payload);
}
}
fn assert_invalid_slurm(json: &str, expected: &str) {
log_slurm_input("invalid_slurm", json);
let err = SlurmFile::from_slice(json.as_bytes()).unwrap_err();
log_slurm_err("invalid_slurm", &err);
let err_text = err.to_string();
assert!(
err_text.contains(expected),
"expected error containing '{expected}', got '{err_text}'"
);
}
#[test]
// Parses a baseline RFC 8416 v1 SLURM file with prefix and BGPsec entries.
fn parses_rfc8416_v1_slurm() {
let ski_b64 = sample_ski_b64();
let router_public_key = STANDARD_NO_PAD.encode(sample_spki());
@ -50,7 +90,9 @@ fn parses_rfc8416_v1_slurm() {
}}"#
);
log_slurm_input("parses_rfc8416_v1_slurm", &json);
let slurm = SlurmFile::from_slice(json.as_bytes()).unwrap();
log_slurm_ok("parses_rfc8416_v1_slurm", &slurm);
assert_eq!(slurm.version(), SlurmVersion::V1);
assert_eq!(slurm.validation_output_filters().prefix_filters.len(), 1);
@ -62,6 +104,7 @@ fn parses_rfc8416_v1_slurm() {
}
#[test]
// Parses a v2 SLURM file carrying the ASPA extension members from the draft.
fn parses_v2_slurm_with_aspa_extensions() {
let json = r#"{
"slurmVersion": 2,
@ -81,7 +124,9 @@ fn parses_v2_slurm_with_aspa_extensions() {
}
}"#;
log_slurm_input("parses_v2_slurm_with_aspa_extensions", json);
let slurm = SlurmFile::from_slice(json.as_bytes()).unwrap();
log_slurm_ok("parses_v2_slurm_with_aspa_extensions", &slurm);
assert_eq!(slurm.version(), SlurmVersion::V2);
assert_eq!(slurm.validation_output_filters().aspa_filters.len(), 1);
@ -89,6 +134,7 @@ fn parses_v2_slurm_with_aspa_extensions() {
}
#[test]
// Rejects ASPA members in a v1 file because they are not part of RFC 8416 v1.
fn rejects_v1_file_with_aspa_members() {
let json = r#"{
"slurmVersion": 1,
@ -103,11 +149,253 @@ fn rejects_v1_file_with_aspa_members() {
}
}"#;
log_slurm_input("rejects_v1_file_with_aspa_members", json);
let err = SlurmFile::from_slice(json.as_bytes()).unwrap_err();
log_slurm_err("rejects_v1_file_with_aspa_members", &err);
assert!(err.to_string().contains("unknown field"));
}
#[test]
// Rejects malformed v1 top-level objects and nested containers that violate RFC 8416 member rules.
fn rejects_invalid_v1_file_structure() {
let cases = [
(
r#"{
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
}
}"#,
"missing field `slurmVersion`",
),
(
r#"{
"slurmVersion": 3,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
}
}"#,
"unsupported slurmVersion 3",
),
(
r#"{
"slurmVersion": 1,
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
}
}"#,
"missing field `validationOutputFilters`",
),
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
}
}"#,
"missing field `locallyAddedAssertions`",
),
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
},
"extra": true
}"#,
"unknown field `extra`",
),
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
}
}"#,
"missing field `prefixFilters`",
),
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
}
}"#,
"missing field `bgpsecFilters`",
),
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
}
}"#,
"unknown field `aspaFilters`",
),
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"bgpsecAssertions": []
}
}"#,
"missing field `prefixAssertions`",
),
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": []
}
}"#,
"missing field `bgpsecAssertions`",
),
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": []
}
}"#,
"unknown field `aspaAssertions`",
),
];
for (json, expected) in cases {
assert_invalid_slurm(json, expected);
}
}
#[test]
// Rejects malformed v1 member objects that contain unknown fields or omit mandatory members.
fn rejects_invalid_v1_member_structure() {
let ski_b64 = sample_ski_b64();
let router_public_key = STANDARD_NO_PAD.encode(sample_spki());
let cases = vec![
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": [
{ "prefix": "192.0.2.0/24", "unexpected": true }
],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
}
}"#
.to_string(),
"unknown field `unexpected`",
),
(
format!(
r#"{{
"slurmVersion": 1,
"validationOutputFilters": {{
"prefixFilters": [],
"bgpsecFilters": [
{{ "asn": 64496, "SKI": "{ski_b64}", "unexpected": true }}
]
}},
"locallyAddedAssertions": {{
"prefixAssertions": [],
"bgpsecAssertions": []
}}
}}"#
),
"unknown field `unexpected`",
),
(
r#"{
"slurmVersion": 1,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [
{ "prefix": "198.51.100.0/24" }
],
"bgpsecAssertions": []
}
}"#
.to_string(),
"missing field `asn`",
),
(
format!(
r#"{{
"slurmVersion": 1,
"validationOutputFilters": {{
"prefixFilters": [],
"bgpsecFilters": []
}},
"locallyAddedAssertions": {{
"prefixAssertions": [],
"bgpsecAssertions": [
{{ "asn": 64501, "SKI": "{ski_b64}", "routerPublicKey": "{router_public_key}", "unexpected": true }}
]
}}
}}"#
),
"unknown field `unexpected`",
),
];
for (json, expected) in cases {
assert_invalid_slurm(&json, expected);
}
}
#[test]
// Rejects non-canonical prefixes and unsorted ASPA provider lists during validation.
fn rejects_non_canonical_prefixes_and_unsorted_aspa_providers() {
let non_canonical = r#"{
"slurmVersion": 1,
@ -122,7 +410,15 @@ fn rejects_non_canonical_prefixes_and_unsorted_aspa_providers() {
"bgpsecAssertions": []
}
}"#;
log_slurm_input(
"rejects_non_canonical_prefixes_and_unsorted_aspa_providers.non_canonical",
non_canonical,
);
let non_canonical_err = SlurmFile::from_slice(non_canonical.as_bytes()).unwrap_err();
log_slurm_err(
"rejects_non_canonical_prefixes_and_unsorted_aspa_providers.non_canonical",
&non_canonical_err,
);
assert!(non_canonical_err.to_string().contains("not canonical"));
let unsorted_aspa = r#"{
@ -140,11 +436,213 @@ fn rejects_non_canonical_prefixes_and_unsorted_aspa_providers() {
]
}
}"#;
log_slurm_input(
"rejects_non_canonical_prefixes_and_unsorted_aspa_providers.unsorted_aspa",
unsorted_aspa,
);
let aspa_err = SlurmFile::from_slice(unsorted_aspa.as_bytes()).unwrap_err();
log_slurm_err(
"rejects_non_canonical_prefixes_and_unsorted_aspa_providers.unsorted_aspa",
&aspa_err,
);
assert!(aspa_err.to_string().contains("strictly increasing"));
}
#[test]
// Rejects malformed v2 top-level objects and nested containers that violate the ASPA SLURM draft.
fn rejects_invalid_v2_file_structure() {
let cases = [
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": []
}
}"#,
"missing field `aspaFilters`",
),
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": []
}
}"#,
"missing field `aspaAssertions`",
),
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": [],
"extra": true
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": []
}
}"#,
"unknown field `extra`",
),
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": [],
"extra": true
}
}"#,
"unknown field `extra`",
),
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": []
},
"extra": true
}"#,
"unknown field `extra`",
),
];
for (json, expected) in cases {
assert_invalid_slurm(json, expected);
}
}
#[test]
// Rejects malformed v2 ASPA member objects that omit required fields or contain unknown fields.
fn rejects_invalid_v2_member_structure() {
let cases = [
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": [
{ "comment": "missing customer" }
]
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": []
}
}"#,
"missing field `customerAsn`",
),
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": [
{ "customerAsn": 64496, "unexpected": true }
]
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": []
}
}"#,
"unknown field `unexpected`",
),
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": [
{ "customerAsn": 64500 }
]
}
}"#,
"missing field `providerAsns`",
),
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": [
{ "providerAsns": [64501] }
]
}
}"#,
"missing field `customerAsn`",
),
(
r#"{
"slurmVersion": 2,
"validationOutputFilters": {
"prefixFilters": [],
"bgpsecFilters": [],
"aspaFilters": []
},
"locallyAddedAssertions": {
"prefixAssertions": [],
"bgpsecAssertions": [],
"aspaAssertions": [
{ "customerAsn": 64500, "providerAsns": [64501], "unexpected": true }
]
}
}"#,
"unknown field `unexpected`",
),
];
for (json, expected) in cases {
assert_invalid_slurm(json, expected);
}
}
#[test]
// Applies filters first, then adds assertions, while removing duplicate payloads.
fn applies_filters_before_assertions_and_excludes_duplicates() {
let ski = Ski::from_bytes(sample_ski());
let spki = sample_spki();
@ -178,7 +676,12 @@ fn applies_filters_before_assertions_and_excludes_duplicates() {
}}
}}"#
);
log_slurm_input("applies_filters_before_assertions_and_excludes_duplicates", &json);
let slurm = SlurmFile::from_slice(json.as_bytes()).unwrap();
log_slurm_ok(
"applies_filters_before_assertions_and_excludes_duplicates",
&slurm,
);
let input = vec![
Payload::RouteOrigin(RouteOrigin::new(
@ -196,6 +699,10 @@ fn applies_filters_before_assertions_and_excludes_duplicates() {
];
let output = slurm.apply(&input);
log_payload_result(
"applies_filters_before_assertions_and_excludes_duplicates",
&output,
);
assert_eq!(output.len(), 4);
assert!(output.iter().any(|payload| matches!(
@ -221,6 +728,7 @@ fn applies_filters_before_assertions_and_excludes_duplicates() {
}
#[test]
// Rejects non-RFC hex SKI encoding and ASPA assertions that self-reference the customer ASN.
fn rejects_hex_encoded_ski_and_aspa_customer_in_providers() {
let ski_hex = hex::encode(sample_ski());
let router_public_key = STANDARD_NO_PAD.encode(sample_spki());
@ -241,7 +749,15 @@ fn rejects_hex_encoded_ski_and_aspa_customer_in_providers() {
}}
}}"#
);
log_slurm_input(
"rejects_hex_encoded_ski_and_aspa_customer_in_providers.invalid_ski",
&invalid_ski,
);
let ski_err = SlurmFile::from_slice(invalid_ski.as_bytes()).unwrap_err();
log_slurm_err(
"rejects_hex_encoded_ski_and_aspa_customer_in_providers.invalid_ski",
&ski_err,
);
let ski_err_text = ski_err.to_string();
assert!(
ski_err_text.contains("invalid SKI base64")
@ -263,13 +779,22 @@ fn rejects_hex_encoded_ski_and_aspa_customer_in_providers() {
]
}
}"#;
log_slurm_input(
"rejects_hex_encoded_ski_and_aspa_customer_in_providers.invalid_aspa",
invalid_aspa,
);
let aspa_err = SlurmFile::from_slice(invalid_aspa.as_bytes()).unwrap_err();
log_slurm_err(
"rejects_hex_encoded_ski_and_aspa_customer_in_providers.invalid_aspa",
&aspa_err,
);
assert!(aspa_err
.to_string()
.contains("providerAsns must not contain customerAsn"));
}
#[test]
// Merges non-overlapping SLURM files and upgrades the merged policy version as needed.
fn merges_multiple_slurm_files_without_conflict() {
let a = r#"{
"slurmVersion": 1,
@ -301,6 +826,9 @@ fn merges_multiple_slurm_files_without_conflict() {
}
}"#;
log_slurm_input("merges_multiple_slurm_files_without_conflict.a", a);
log_slurm_input("merges_multiple_slurm_files_without_conflict.b", b);
let merged = SlurmFile::merge_named(vec![
(
"a.slurm".to_string(),
@ -312,6 +840,7 @@ fn merges_multiple_slurm_files_without_conflict() {
),
])
.unwrap();
log_slurm_ok("merges_multiple_slurm_files_without_conflict.merged", &merged);
assert_eq!(merged.version(), SlurmVersion::V2);
assert_eq!(merged.locally_added_assertions().prefix_assertions.len(), 1);
@ -319,6 +848,7 @@ fn merges_multiple_slurm_files_without_conflict() {
}
#[test]
// Rejects multiple SLURM files whose policy scopes overlap and would conflict when merged.
fn rejects_conflicting_multiple_slurm_files() {
let a = r#"{
"slurmVersion": 1,
@ -348,6 +878,9 @@ fn rejects_conflicting_multiple_slurm_files() {
}
}"#;
log_slurm_input("rejects_conflicting_multiple_slurm_files.a", a);
log_slurm_input("rejects_conflicting_multiple_slurm_files.b", b);
let err = SlurmFile::merge_named(vec![
(
"a.slurm".to_string(),
@ -359,6 +892,7 @@ fn rejects_conflicting_multiple_slurm_files() {
),
])
.unwrap_err();
log_slurm_err("rejects_conflicting_multiple_slurm_files", &err);
assert!(err.to_string().contains("conflicting SLURM files"));
}