This guide covers how to test edgeProxy locally and in deployment environments using the mock backend server.
Mock Backend Server
The tests/mock-backend/ directory contains a lightweight Go HTTP server that simulates real backend services for testing purposes.
Features
- Multi-region simulation: Configure different regions per instance
- Request tracking: Counts requests per backend
- Multiple endpoints: Root, health, info, and latency endpoints
- JSON responses: Structured responses for easy parsing
- Minimal footprint: ~8MB binary, low memory usage
Building the Mock Server
cd tests/mock-backend

# Build for the local platform
go build -o mock-backend main.go

# Cross-compile for Linux (e.g. EC2) deployment
GOOS=linux GOARCH=amd64 go build -o mock-backend-linux-amd64 main.go
Running Locally
Start multiple instances (in separate terminals, or background them with &) to simulate different backends:
./mock-backend -port 9001 -region eu -id mock-eu-1
./mock-backend -port 9002 -region eu -id mock-eu-2
./mock-backend -port 9003 -region us -id mock-us-1
CLI Options
| Flag | Default | Description |
|------|---------|-------------|
| -port | 9001 | TCP port to listen on |
| -region | eu | Region identifier (eu, us, sa, ap) |
| -id | mock-{region}-{port} | Unique backend identifier |
Endpoints
| Endpoint | Description | Response |
|----------|-------------|----------|
| / | Root | Text with backend info |
| /health | Health check | OK - {id} ({region}) |
| /api/info | JSON info | Full backend details |
| /api/latency | Minimal JSON | For latency testing |
Example Response (/api/info)
{
  "backend_id": "mock-eu-1",
  "region": "eu",
  "hostname": "ip-172-31-29-183",
  "port": "9001",
  "request_count": 42,
  "uptime_secs": 3600,
  "timestamp": "2025-12-08T00:11:43Z",
  "message": "Hello from mock backend!"
}
Local Testing Setup
1. Configure routing.db
Add mock backends to your local routing.db:
DELETE FROM backends WHERE id LIKE 'mock-%';
INSERT INTO backends (id, app, region, wg_ip, port, healthy, weight, soft_limit, hard_limit)
VALUES
('mock-eu-1', 'test', 'eu', '127.0.0.1', 9001, 1, 2, 100, 150),
('mock-eu-2', 'test', 'eu', '127.0.0.1', 9002, 1, 2, 100, 150),
('mock-us-1', 'test', 'us', '127.0.0.1', 9003, 1, 2, 100, 150);
2. Start Mock Backends
./tests/mock-backend/mock-backend -port 9001 -region eu -id mock-eu-1 &
./tests/mock-backend/mock-backend -port 9002 -region eu -id mock-eu-2 &
./tests/mock-backend/mock-backend -port 9003 -region us -id mock-us-1 &
3. Run edgeProxy
EDGEPROXY_REGION=eu \
EDGEPROXY_LISTEN_ADDR=0.0.0.0:8080 \
cargo run --release
4. Test Requests
curl http://localhost:8080/api/info
for i in {1..10}; do
curl -s http://localhost:8080/api/info | grep backend_id
done
curl http://localhost:8080/health
EC2 Deployment Testing
1. Deploy Mock Server to EC2
cd tests/mock-backend
GOOS=linux GOARCH=amd64 go build -o mock-backend-linux-amd64 main.go
scp -i ~/.ssh/edgeproxy-key.pem mock-backend-linux-amd64 ubuntu@<EC2-IP>:/tmp/
ssh -i ~/.ssh/edgeproxy-key.pem ubuntu@<EC2-IP>
sudo mv /tmp/mock-backend-linux-amd64 /opt/edgeproxy/mock-backend
sudo chmod +x /opt/edgeproxy/mock-backend
2. Start Mock Backends on EC2
cd /opt/edgeproxy
nohup ./mock-backend -port 9001 -region eu -id mock-eu-1 > /tmp/mock-9001.log 2>&1 &
nohup ./mock-backend -port 9002 -region eu -id mock-eu-2 > /tmp/mock-9002.log 2>&1 &
nohup ./mock-backend -port 9003 -region us -id mock-us-1 > /tmp/mock-9003.log 2>&1 &
ps aux | grep mock-backend
curl localhost:9001/health
curl localhost:9002/health
curl localhost:9003/health
3. Configure the Routing Database
sqlite3 /opt/edgeproxy/routing.db "
DELETE FROM backends WHERE id LIKE 'mock-%';
INSERT INTO backends (id, app, region, wg_ip, port, healthy, weight, soft_limit, hard_limit)
VALUES
('mock-eu-1', 'test', 'eu', '127.0.0.1', 9001, 1, 2, 100, 150),
('mock-eu-2', 'test', 'eu', '127.0.0.1', 9002, 1, 2, 100, 150),
('mock-us-1', 'test', 'us', '127.0.0.1', 9003, 1, 2, 100, 150);
SELECT id, region, port, healthy FROM backends WHERE deleted=0;
"
Backend Fields Explained
| Field | Type | Description | Example |
|-------|------|-------------|---------|
| id | TEXT | Unique identifier for the backend. Used in logs and client affinity. | mock-eu-1 |
| app | TEXT | Application name. Groups backends serving the same app. | test |
| region | TEXT | Geographic region code. Used for geo-routing decisions. Valid: eu, us, sa, ap. | eu |
| wg_ip | TEXT | Backend IP address. Use 127.0.0.1 for local testing, WireGuard IPs (10.50.x.x) in production. | 127.0.0.1 |
| port | INTEGER | TCP port the backend listens on. | 9001 |
| healthy | INTEGER | Health status. 1 = healthy (receives traffic), 0 = unhealthy (excluded from routing). | 1 |
| weight | INTEGER | Relative weight for load balancing. Higher weight = more traffic. Range: 1-10. | 2 |
| soft_limit | INTEGER | Comfortable connection count. Above this, the backend is considered "loaded" and less preferred. | 100 |
| hard_limit | INTEGER | Maximum connections. At or above this limit, the backend is excluded from new connections. | 150 |
Example Data Breakdown
('mock-eu-1', 'test', 'eu', '127.0.0.1', 9001, 1, 2, 100, 150)
| Value | Field | Meaning |
|-------|-------|---------|
| mock-eu-1 | id | Backend identifier, first EU mock server |
| test | app | Application name for testing |
| eu | region | Located in the Europe region |
| 127.0.0.1 | wg_ip | Localhost (same machine as the proxy) |
| 9001 | port | Listening on port 9001 |
| 1 | healthy | Backend is healthy and active |
| 2 | weight | Medium priority (scale 1-10) |
| 100 | soft_limit | Comfortable with up to 100 connections |
| 150 | hard_limit | Maximum 150 connections allowed |
Load Balancer Scoring
The proxy uses these fields to calculate a score for each backend:
score = geo_score * 100 + (connections / soft_limit) / weight
- geo_score: 0 (same country), 1 (same region), 2 (local POP region), 3 (global fallback)
- connections: Current active connections (from metrics)
- soft_limit: Divides load factor
- weight: Higher weight reduces the score (more preferred)
Lowest score wins. Backends with healthy=0 or at hard_limit are excluded.
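For illustration, here is a minimal Rust sketch of this scoring rule. The struct and function names are hypothetical; the real implementation lives in domain::services::load_balancer and may differ in detail:

```rust
/// Illustrative view of the fields documented above (not the real entity).
struct BackendView {
    healthy: bool,
    weight: u32,     // 1-10; higher is more preferred
    soft_limit: u32, // comfortable connection count
    hard_limit: u32, // exclusion threshold
}

/// Returns None when the backend must be excluded from routing.
fn score(b: &BackendView, geo_score: u32, connections: u32) -> Option<f64> {
    if !b.healthy || connections >= b.hard_limit {
        return None; // unhealthy or saturated backends get no new connections
    }
    let load = connections as f64 / b.soft_limit as f64;
    // score = geo_score * 100 + (connections / soft_limit) / weight
    Some(geo_score as f64 * 100.0 + load / b.weight as f64)
}
```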
4. Test from External Client
curl http://<EC2-PUBLIC-IP>:8080/api/info
curl http://<EC2-PUBLIC-IP>:8080/health
for i in {1..5}; do
curl -s http://<EC2-PUBLIC-IP>:8080/api/info
echo ""
done
Testing Scenarios
Client Affinity
Client affinity (sticky sessions) binds clients to the same backend:
for i in {1..5}; do
curl -s http://localhost:8080/api/info | grep backend_id
done
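Conceptually, affinity is a TTL-bound mapping from client key to backend id. Here is a minimal sketch assuming a plain map; the real implementation is adapters::outbound::dashmap_binding_repo, and these names are illustrative:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct Binding { backend_id: String, expires_at: Instant }

struct BindingRepo { ttl: Duration, map: HashMap<String, Binding> }

impl BindingRepo {
    /// Return the bound backend if the binding is still fresh; drop it otherwise.
    fn lookup(&mut self, client_key: &str) -> Option<String> {
        let fresh = self.map.get(client_key)
            .filter(|b| b.expires_at > Instant::now())
            .map(|b| b.backend_id.clone());
        if fresh.is_none() {
            self.map.remove(client_key); // expired or absent: next request rebinds
        }
        fresh
    }

    fn bind(&mut self, client_key: String, backend_id: String) {
        let expires_at = Instant::now() + self.ttl;
        self.map.insert(client_key, Binding { backend_id, expires_at });
    }
}
```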
Load Distribution
Client affinity pins each client to a single backend, so distribution only shows up across different client IPs. Check how many requests each backend has served:
curl localhost:9001/api/info | grep request_count
curl localhost:9002/api/info | grep request_count
curl localhost:9003/api/info | grep request_count
Backend Health
Test health-based routing by stopping a backend; new requests should fail over to the remaining healthy backends:
pkill -f 'mock-backend.*9001'
curl http://localhost:8080/api/info
Geo-Routing
The proxy routes clients to backends in their region:
- Configure backends in multiple regions
- Test from different geographic locations
- Observe routing decisions in proxy logs
Monitoring During Tests
edgeProxy Logs
sudo journalctl -u edgeproxy -f
Mock Backend Logs
tail -f /tmp/mock-9001.log
tail -f /tmp/mock-9002.log
tail -f /tmp/mock-9003.log
Request Distribution
echo "mock-eu-1: $(curl -s localhost:9001/api/info | grep -o '"request_count":[0-9]*')"
echo "mock-eu-2: $(curl -s localhost:9002/api/info | grep -o '"request_count":[0-9]*')"
echo "mock-us-1: $(curl -s localhost:9003/api/info | grep -o '"request_count":[0-9]*')"
Cleanup
Local
pkill -f mock-backend
EC2
sudo pkill -f mock-backend
sudo fuser -k 9001/tcp 9002/tcp 9003/tcp
Troubleshooting
Mock Backend Won't Start
sudo ss -tlnp | grep 9001
sudo fuser -k 9001/tcp
Proxy Can't Connect to Backend
- Verify backend is running:
curl localhost:9001/health
- Check routing.db configuration
- Verify wg_ip matches (use 127.0.0.1 for local testing)
- Check firewall rules on EC2
Requests Timeout
- Check edgeProxy is running:
sudo systemctl status edgeproxy
- Verify backend health in routing.db
- Check connection limits aren't exceeded
Unit Tests
edgeProxy has comprehensive unit test coverage following the Hexagonal Architecture pattern with Sans-IO design. All tests are written in Rust using the built-in test framework.
Test Summary
| Metric | Value |
|--------|-------|
| Total Tests | 875 |
| Line Coverage | 98.71% |
| Region Coverage | 98.71% |
| Function Coverage | 99.58% |
| Files with 100% | 22 |
Coverage Evolution
The project achieved significant coverage improvements through systematic testing:
| Phase | Coverage | Tests | Key Improvements |
|-------|----------|-------|------------------|
| Initial (stable) | 94.43% | 780 | Basic unit tests |
| Refactoring | 94.92% | 782 | Sans-IO pattern adoption |
| Nightly build | 98.32% | 782 | coverage(off) for I/O |
| Edge case tests | 98.50% | 784 | Circuit breaker, metrics |
| TLS & pool | 98.89% | 786 | TLS, connection pool |
| Replication v0.4.0 | 98.71% | 875 | Merkle tree, mDNS, delta sync |
Sans-IO Architecture Benefits
The Sans-IO pattern separates pure business logic from I/O operations:
┌─────────────────────────────────────────────────────────────────────┐
│ TESTABLE (100% covered) │
│ ┌──────────────────────────────────────────────────────────────┐ │
│ │ Pure Functions: process_message(), pick_backend(), etc. │ │
│ │ - No network calls │ │
│ │ - No database access │ │
│ │ - Returns actions to execute │ │
│ └──────────────────────────────────────────────────────────────┘ │
├─────────────────────────────────────────────────────────────────────┤
│ I/O WRAPPERS (excluded) │
│ ┌──────────────────────────────────────────────────────────────┐ │
│ │ Async handlers: start(), run(), handle_connection() │ │
│ │ - Marked with #[cfg_attr(coverage_nightly, coverage(off))] │ │
│ │ - Thin wrappers that execute actions │ │
│ └──────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
This approach ensures:
- All business logic is testable without mocking network
- 100% coverage of decision-making code
- Clear separation between logic and I/O
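A minimal sketch of the pattern (names are illustrative, not the real API): the pure core turns input bytes into actions, and a thin wrapper, excluded from coverage, executes them:

```rust
/// Actions the pure core asks the I/O layer to perform.
enum Action {
    Respond(Vec<u8>),
    Close,
}

/// Pure and fully testable: bytes in, actions out. No sockets, no DB.
fn process_message(input: &[u8]) -> Action {
    if input.is_empty() {
        Action::Close
    } else {
        Action::Respond(input.to_vec())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn empty_input_closes_connection() {
        assert!(matches!(process_message(b""), Action::Close));
    }
}
```

The corresponding async handler would read from the socket, call process_message(), and execute the returned action, carrying the #[cfg_attr(coverage_nightly, coverage(off))] attribute.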
Running Tests
# Run the full suite
cargo test

# Show test output
cargo test -- --nocapture

# Run a single module or layer
cargo test domain::services::load_balancer
cargo test infrastructure::

# Control parallelism (1 thread = fully serial)
cargo test -- --test-threads=4
cargo test -- --test-threads=1
Tests by Module
Inbound Adapters
| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| adapters::inbound::api_server | 38 | 99.57% | Auto-Discovery API, registration, heartbeat |
| adapters::inbound::dns_server | 44 | 97.80% | DNS server, geo-routing resolution |
| adapters::inbound::tcp_server | 27 | 96.23% | TCP connections, proxy logic |
| adapters::inbound::tls_server | 29 | 94.18% | TLS termination, certificates |
Outbound Adapters
| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| adapters::outbound::dashmap_metrics_store | 20 | 100.00% | Connection metrics, RTT tracking |
| adapters::outbound::dashmap_binding_repo | 21 | 100.00% | Client affinity, TTL, GC |
| adapters::outbound::replication_backend_repo | 28 | 99.85% | Distributed SQLite replication |
| adapters::outbound::sqlite_backend_repo | 20 | 99.26% | SQLite backend storage |
| adapters::outbound::prometheus_metrics_store | 19 | 98.70% | Prometheus metrics export |
| adapters::outbound::maxmind_geo_resolver | 18 | 95.86% | GeoIP resolution |
| adapters::outbound::postgres_backend_repo | 19 | 88.31% | PostgreSQL backend (stub) |
Domain Layer
| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| domain::entities | 12 | 100.00% | Backend, Binding, ClientKey |
| domain::value_objects | 26 | 96.40% | RegionCode, country mapping |
| domain::services::load_balancer | 25 | 98.78% | Scoring algorithm, geo-routing |
Application Layer
| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| application::proxy_service | 26 | 99.43% | Use case orchestration |
| config | 24 | 100.00% | Configuration loading |
Infrastructure Layer
| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| infrastructure::circuit_breaker | 22 | 98.30% | Circuit breaker pattern |
| infrastructure::config_watcher | 17 | 95.30% | Hot reload configuration |
| infrastructure::rate_limiter | 14 | 93.55% | Token bucket rate limiting |
| infrastructure::health_checker | 17 | 92.00% | Active health checks |
| infrastructure::connection_pool | 17 | 93.71% | TCP connection pooling |
| infrastructure::shutdown | 11 | 93.65% | Graceful shutdown |
Replication Layer (v0.4.0)
| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| replication::types | 45 | 98.81% | HLC timestamps, ChangeSet, NodeId |
| replication::config | 12 | 99.34% | Replication configuration |
| replication::sync | 38 | 97.63% | Change detection, LWW conflict resolution |
| replication::gossip | 42 | 98.80% | SWIM protocol, cluster membership |
| replication::transport | 48 | 98.55% | QUIC transport, Sans-IO message encoding |
| replication::agent | 18 | 99.77% | Replication orchestration |
| replication::merkle | 40 | 98.92% | Merkle tree anti-entropy |
| replication::mdns | 25 | 99.02% | mDNS auto-discovery |
Tests by Layer (Hexagonal Architecture)
(Diagram: distribution of tests across the hexagonal layers; per-layer figures appear in the Coverage by Layer table below.)
Infrastructure Components Test Details
Circuit Breaker Tests (22 tests)
cargo test infrastructure::circuit_breaker
| Test | Description |
|------|-------------|
| test_circuit_breaker_new | Initial state is Closed |
| test_circuit_breaker_default | Default configuration |
| test_allow_when_closed | Requests pass in Closed state |
| test_record_success_in_closed | Success tracking |
| test_record_failure_in_closed | Failure tracking |
| test_transitions_to_open | Opens after threshold failures |
| test_deny_when_open | Blocks requests in Open state |
| test_circuit_transitions_to_half_open | Timeout triggers Half-Open |
| test_half_open_allows_limited | Limited requests in Half-Open |
| test_half_open_to_closed | Recovers to Closed on success |
| test_half_open_to_open | Returns to Open on failure |
| test_failure_window_resets | Window resets on success |
| test_get_metrics | Metrics retrieval |
| test_concurrent_record | Thread-safe operations |
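As a sketch of what the transition tests above verify (the real API in infrastructure::circuit_breaker may differ):

```rust
#[derive(PartialEq, Debug)]
enum State { Closed, Open }

struct Breaker { state: State, failures: u32, threshold: u32 }

impl Breaker {
    fn record_failure(&mut self) {
        self.failures += 1;
        if self.state == State::Closed && self.failures >= self.threshold {
            self.state = State::Open; // opens after threshold failures
        }
    }

    fn allow(&self) -> bool {
        self.state == State::Closed
    }
}

#[test]
fn transitions_to_open_after_threshold() {
    let mut b = Breaker { state: State::Closed, failures: 0, threshold: 3 };
    for _ in 0..3 { b.record_failure(); }
    assert!(!b.allow()); // requests are denied once Open
}
```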
Rate Limiter Tests (14 tests)
cargo test infrastructure::rate_limiter
| Test | Description |
|------|-------------|
| test_rate_limit_config_default | Default: 100 req/s, burst 10 |
| test_rate_limiter_new | Creates with config |
| test_check_allows_initial_burst | Burst requests allowed |
| test_check_different_clients_isolated | Per-IP isolation |
| test_remaining | Token count tracking |
| test_clear_client | Reset individual client |
| test_clear_all | Reset all clients |
| test_check_with_cost | Variable cost requests |
| test_cleanup_removes_stale | GC removes old entries |
| test_refill_over_time | Token replenishment |
| test_concurrent_access | Thread-safe operations |
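The refill behavior exercised by test_refill_over_time follows the standard token-bucket rule; here is a sketch using the documented defaults (100 req/s, burst 10), with names that are assumptions rather than the real API:

```rust
use std::time::Instant;

struct Bucket {
    tokens: f64,   // current tokens, capped at burst
    burst: f64,    // documented default: 10
    rate: f64,     // documented default: 100 tokens/sec
    last: Instant, // last refill time
}

impl Bucket {
    /// Refill based on elapsed time, then try to spend `cost` tokens.
    fn check(&mut self, cost: f64) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        self.tokens = (self.tokens + elapsed * self.rate).min(self.burst);
        if self.tokens >= cost {
            self.tokens -= cost;
            true
        } else {
            false // over the limit until tokens refill
        }
    }
}
```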
Health Checker Tests (17 tests)
cargo test infrastructure::health_checker
| Test | Description |
|------|-------------|
| test_health_checker_new | Creates with config |
| test_health_check_config_default | Default intervals |
| test_health_status_default | Initial unknown state |
| test_tcp_check_success | TCP probe success |
| test_tcp_check_failure | TCP probe failure |
| test_tcp_check_timeout | TCP timeout handling |
| test_update_status_becomes_healthy | Threshold transitions |
| test_update_status_becomes_unhealthy | Failure transitions |
| test_on_health_change_callback | Change notifications |
| test_check_backend_success | Backend check OK |
| test_check_backend_failure | Backend check fail |
Connection Pool Tests (17 tests)
cargo test infrastructure::connection_pool
| Test | Description |
|------|-------------|
| test_connection_pool_new | Pool creation |
| test_pool_config_default | Default: 10 max, 60s idle |
| test_acquire_creates_connection | New connection on empty pool |
| test_release_returns_connection | Connection reuse |
| test_pool_exhausted | Max connections error |
| test_acquire_timeout | Connection timeout |
| test_discard_closes_connection | Explicit discard |
| test_stats | Pool statistics |
| test_pooled_connection_is_expired | Lifetime check |
| test_pooled_connection_is_idle_expired | Idle timeout check |
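The two expiry tests at the bottom of the table check lifetime and idle timeouts; a sketch of those checks (field names are assumptions, not the real connection_pool API):

```rust
use std::time::{Duration, Instant};

struct PooledConn {
    created: Instant,   // connection lifetime starts here
    last_used: Instant, // updated on every release back to the pool
}

impl PooledConn {
    fn is_expired(&self, max_lifetime: Duration) -> bool {
        self.created.elapsed() > max_lifetime
    }

    fn is_idle_expired(&self, idle_timeout: Duration) -> bool {
        self.last_used.elapsed() > idle_timeout // default 60s per the table above
    }
}
```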
Graceful Shutdown Tests (11 tests)
cargo test infrastructure::shutdown
| Test | Description |
|------|-------------|
| test_shutdown_controller_new | Controller creation |
| test_connection_guard | RAII guard creation |
| test_connection_tracking | Active count tracking |
| test_multiple_connection_guards | Concurrent guards |
| test_shutdown_initiates_once | Single shutdown |
| test_subscribe_receives_shutdown | Broadcast notification |
| test_wait_for_drain_immediate | No connections case |
| test_wait_for_drain_with_connections | Waits for drain |
| test_wait_for_drain_timeout | Timeout behavior |
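The guard tests revolve around RAII connection counting: the active count rises when a guard is created and falls when it drops, letting drain wait for zero. A sketch with illustrative names (the real type is in infrastructure::shutdown):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};

struct ConnectionGuard {
    active: Arc<AtomicUsize>,
}

impl ConnectionGuard {
    fn new(active: Arc<AtomicUsize>) -> Self {
        active.fetch_add(1, Ordering::SeqCst); // one more in-flight connection
        Self { active }
    }
}

impl Drop for ConnectionGuard {
    fn drop(&mut self) {
        self.active.fetch_sub(1, Ordering::SeqCst); // drain completes at zero
    }
}
```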
Config Watcher Tests (17 tests)
cargo test infrastructure::config_watcher
| Test | Description |
|------|-------------|
| test_config_watcher_new | Watcher creation |
| test_watch_file | File monitoring |
| test_watch_nonexistent_file | Error handling |
| test_unwatch_file | Remove from watch |
| test_set_and_get | Config values |
| test_get_or | Default values |
| test_subscribe_value_change | Change notifications |
| test_no_change_on_same_value | No spurious events |
| test_check_files_detects_modification | File change detection |
| test_hot_value_get_set | HotValue wrapper |
Code Coverage
edgeProxy uses cargo-llvm-cov for code coverage measurement with LLVM instrumentation.
Installation
cargo install cargo-llvm-cov
rustup component add llvm-tools-preview
rustup toolchain install nightly
rustup run nightly rustup component add llvm-tools-preview
Running Coverage
# Coverage with stable Rust (~94%: I/O wrappers are counted)
cargo llvm-cov

# Coverage with nightly (enables coverage(off) on I/O wrappers)
rustup run nightly cargo llvm-cov
rustup run nightly cargo llvm-cov --summary-only

# HTML and lcov reports
rustup run nightly cargo llvm-cov --html
rustup run nightly cargo llvm-cov --lcov --output-path lcov.info
open target/llvm-cov/html/index.html
Important: Use rustup run nightly to enable #[coverage(off)] attributes. With stable Rust, I/O code will be included in coverage metrics, resulting in ~94% coverage instead of ~99%.
Coverage Results
Final Coverage: 98.71% (7,159 lines, 98.98% line coverage)
Note: Coverage measured with cargo +nightly llvm-cov to enable coverage(off) attributes on I/O code.
Coverage by Layer
| Layer | Regions | Coverage | Status |
|-------|---------|----------|--------|
| Domain | 761 | 99.47% | ✓ Excellent |
| Application | 706 | 99.72% | ✓ Excellent |
| Inbound Adapters | 2,100 | 98.90% | ✓ Excellent |
| Outbound Adapters | 1,450 | 98.62% | ✓ Excellent |
| Infrastructure | 455 | 95.30% | ✓ Very Good |
| Replication | 7,159 | 98.98% | ✓ Excellent |
| Config | 286 | 100.00% | ✓ Complete |
Detailed Coverage by File
Core Components (100% Coverage)
| File | Lines | Coverage |
|------|-------|----------|
| config.rs | 286 | 100.00% |
| domain/entities.rs | 130 | 100.00% |
| adapters/outbound/dashmap_metrics_store.rs | 224 | 100.00% |
| adapters/outbound/dashmap_binding_repo.rs | 287 | 100.00% |
Inbound Adapters
| File | Lines | Covered | Coverage |
|------|-------|---------|----------|
| adapters/inbound/api_server.rs | 928 | 924 | 99.57% |
| adapters/inbound/dns_server.rs | 774 | 757 | 97.80% |
| adapters/inbound/tcp_server.rs | 849 | 817 | 96.23% |
| adapters/inbound/tls_server.rs | 996 | 938 | 94.18% |
Outbound Adapters
| File | Lines | Covered | Coverage |
|------|-------|---------|----------|
| adapters/outbound/replication_backend_repo.rs | 677 | 676 | 99.85% |
| adapters/outbound/sqlite_backend_repo.rs | 404 | 401 | 99.26% |
| adapters/outbound/prometheus_metrics_store.rs | 307 | 303 | 98.70% |
| adapters/outbound/maxmind_geo_resolver.rs | 145 | 139 | 95.86% |
| adapters/outbound/postgres_backend_repo.rs | 231 | 204 | 88.31% |
Infrastructure Layer
| File | Regions | Coverage |
|------|---------|----------|
| infrastructure/circuit_breaker.rs | 353 | 98.30% |
| infrastructure/config_watcher.rs | 744 | 95.30% |
| infrastructure/rate_limiter.rs | 589 | 93.55% |
| infrastructure/health_checker.rs | 950 | 92.00% |
| infrastructure/connection_pool.rs | 1224 | 93.71% |
| infrastructure/shutdown.rs | 488 | 93.65% |
Replication Layer (v0.4.0)
| File | Regions | Coverage |
|------|---------|----------|
| replication/types.rs | 1186 | 98.81% |
| replication/config.rs | 303 | 99.34% |
| replication/sync.rs | 1875 | 97.28% |
| replication/gossip.rs | 2350 | 98.80% |
| replication/transport.rs | 1668 | 98.55% |
| replication/agent.rs | 981 | 99.77% |
| replication/merkle.rs | 1030 | 98.92% |
| replication/mdns.rs | 619 | 99.02% |
Coverage Exclusions (Sans-IO Pattern)
The Sans-IO pattern separates pure business logic from I/O operations. Code that performs actual I/O is excluded from coverage using #[cfg_attr(coverage_nightly, coverage(off))]:
| Code | Reason |
|------|--------|
| main.rs | Entry point, composition root |
| handle_packet() (dns_server) | Network I/O dependent |
| proxy_bidirectional() (tcp_server) | Real TCP socket operations |
| start(), run() (servers) | Async event loops with network I/O |
| start_event_loop(), start_flush_loop() (agent) | Background async loops |
| request() (transport) | QUIC network operations |
| release(), acquire(), clear() (connection_pool) | Async connection management |
| handle_connection() (transport) | QUIC connection handling |
| start(), execute_actions() (gossip) | UDP gossip I/O |
| start(), start_discovery() (mdns) | mDNS network I/O |
| SkipServerVerification impl | TLS callback (cannot unit test) |
| Test modules (#[cfg(test)]) | Test code is not production code |
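For reference, the attribute is applied like this. The function body is a placeholder, not the real server loop; cargo-llvm-cov sets the coverage_nightly cfg on nightly builds, and the crate root would also gate the coverage_attribute feature on that cfg:

```rust
// Thin I/O wrapper: excluded from coverage on nightly builds only.
#[cfg_attr(coverage_nightly, coverage(off))]
async fn start(listener: tokio::net::TcpListener) {
    loop {
        // Accept connections and execute actions produced by the pure core.
        let _ = listener.accept().await;
    }
}
```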
Remaining Uncovered Regions (162 total)
The 162 uncovered regions fall into these categories:
| Category | Regions | Reason |
|----------|---------|--------|
| Database errors | 20 | DB connection failures (unreachable paths) |
| Network I/O | 45 | Async network operations excluded |
| CAS retry loops | 25 | Atomic compare-and-swap retries |
| Tracing calls | 18 | tracing::warn!() in error branches |
| TLS/QUIC callbacks | 30 | Crypto callbacks (cannot unit test) |
| Signal handlers | 10 | OS signal handling |
| mDNS callbacks | 14 | mDNS event handling |
These represent edge cases that require:
- External system failures (DB, network)
- Specific concurrent conditions (CAS retries)
- TLS/QUIC handshake callbacks
- OS-level signal handling
All business logic is 100% covered - only I/O wrappers and unreachable error paths remain.
Testing Philosophy
edgeProxy follows these testing principles:
- Domain logic is pure and fully tested: LoadBalancer scoring algorithm has no external dependencies
- Adapters test through interfaces: Mock implementations of traits for unit testing
- Integration tests use real components: Mock backend server for E2E testing
- Network code has coverage exclusions: I/O-bound code is tested via integration tests
- Infrastructure is modular: Each component can be tested in isolation
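A sketch of the second principle, testing an adapter through its port trait with a mock (trait and method names here are illustrative, not the real ports):

```rust
trait BackendRepo {
    fn healthy_backends(&self, region: &str) -> Vec<String>;
}

/// Test double: canned data, no database, no network.
struct MockRepo;

impl BackendRepo for MockRepo {
    fn healthy_backends(&self, _region: &str) -> Vec<String> {
        vec!["mock-eu-1".into(), "mock-eu-2".into()]
    }
}

fn pick_first(repo: &dyn BackendRepo, region: &str) -> Option<String> {
    repo.healthy_backends(region).into_iter().next()
}

#[test]
fn picks_from_mock_repo() {
    assert_eq!(pick_first(&MockRepo, "eu").as_deref(), Some("mock-eu-1"));
}
```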
Continuous Integration
test:
  script:
    - cargo test
    - rustup run nightly cargo llvm-cov --fail-under-lines 98

coverage:
  script:
    - rustup run nightly cargo llvm-cov --html
  artifacts:
    paths:
      - target/llvm-cov/html/
The --fail-under-lines 98 flag ensures coverage doesn't drop below 98% in CI.
New Tests Added (v0.3.1)
| Module | Test | Description |
|--------|------|-------------|
| circuit_breaker | test_allow_request_when_already_half_open | Tests idempotent HalfOpen transition |
| circuit_breaker | test_record_success_when_open | Tests success recording in Open state |
| prometheus_metrics_store | test_global_metrics | Tests aggregated global metrics |
| prometheus_metrics_store | test_concurrent_decrement | Tests concurrent counter operations |
| types | test_hlc_compare_same_time_different_counter | Tests HLC counter tiebreaker |
| types | test_hlc_compare_same_time_same_counter | Tests HLC equality case |
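The two HLC tests encode the comparison order: wall-clock time first, then the logical counter as tiebreaker. A sketch of that ordering (field names are assumptions; the real type lives in replication::types):

```rust
// Derived Ord compares fields in declaration order: wall time,
// then counter, then node id as the final total-order tiebreaker.
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Hlc {
    wall_ms: u64,
    counter: u32,
    node_id: u64,
}

#[test]
fn counter_breaks_wall_clock_ties() {
    let a = Hlc { wall_ms: 100, counter: 1, node_id: 1 };
    let b = Hlc { wall_ms: 100, counter: 2, node_id: 1 };
    assert!(a < b); // same wall time: the higher counter wins under LWW
}
```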
New Tests Added (v0.4.0)
Merkle Tree Tests (40 tests)
| Test | Description |
|------|-------------|
| test_merkle_tree_new | Tree creation and initialization |
| test_merkle_tree_insert | Single row insertion |
| test_merkle_tree_update | Row update changes hash |
| test_merkle_tree_remove | Row removal |
| test_merkle_tree_root_hash | Root hash calculation |
| test_merkle_tree_diff | Detecting differences between trees |
| test_merkle_tree_diff_at_depth | Multi-level diff traversal |
| test_merkle_tree_get_hash | Hash retrieval at depth |
| test_merkle_tree_leaves | Leaf node access |
| test_merkle_message_serialization | Message roundtrip |
| test_merkle_message_range_response | Range response message |
| test_merkle_message_data_request | Data request message |
| test_hash_to_prefix_at_depth_zero | Prefix calculation at depth 0 |
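The diff tests above rest on a simple idea: equal root hashes mean the replicas agree, and only mismatched subtrees need to be walked. A sketch of that comparison (illustrative only; replication::merkle may be organized differently):

```rust
/// Anti-entropy starts with a single root comparison.
fn needs_sync(local_root: &[u8; 32], remote_root: &[u8; 32]) -> bool {
    local_root != remote_root
}

/// When roots differ, recurse only into children whose hashes mismatch.
fn diff_children(local: &[[u8; 32]; 16], remote: &[[u8; 32]; 16]) -> Vec<usize> {
    (0..16).filter(|&i| local[i] != remote[i]).collect()
}
```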
mDNS Discovery Tests (25 tests)
| Test | Description |
|------|-------------|
| test_mdns_discovery_new | Discovery service creation |
| test_mdns_config_service_type | Service type configuration |
| test_discovered_peer_debug | DiscoveredPeer debug format |
| test_gossip_addr | Gossip address accessor |
| test_transport_addr | Transport address accessor |
| test_notify_discovered_logs_valid_peer | Valid peer notification logging |
| test_notify_discovered_different_cluster | Cross-cluster peer filtering |
| test_notify_discovered_channel_closed | Channel error handling |
Delta Sync Tests
| Test | Description |
|------|-------------|
| test_field_op_serialization | FieldOp enum roundtrip |
| test_delta_data_new | DeltaData creation |
| test_change_data_full | Full row change data |
| test_change_data_delta | Delta change data |
| test_apply_delta_change | Delta application to row |
Transport Sans-IO Tests (48 tests)
| Test | Description |
|------|-------------|
| test_encode_message_* | Message encoding for all types |
| test_decode_message_* | Message decoding with validation |
| test_validate_broadcast | Broadcast checksum validation |
| test_create_sync_request | SyncRequest message creation |
| test_extract_sync_response | Response changeset extraction |
| test_message_type_name | Message type string conversion |
| test_count_broadcast_changes | Change counting |
| test_get_broadcast_seq | Sequence number extraction |
Configuration Tests (v0.4.0)
Configuration tests validate all environment variable combinations and service configurations.
Running Configuration Tests
task test:config-all
task test:config-default
task test:config-all-services
task test:config-regions
task test:config-api
task test:config-tls
task test:config-replication
task test:config-binding
task test:config-debug
task test:config-cleanup
Test Scenarios
| Test | Environment Variables | Expected Result |
|------|-----------------------|-----------------|
| Default | LISTEN_ADDR, DB_PATH, REGION | TCP proxy on 8080 |
| All Services | All vars enabled | 6 ports listening |
| Regions | REGION=sa/us/eu/ap | Each region starts |
| API | API_ENABLED=true | 6 endpoints working |
| TLS | TLS_ENABLED=true | Self-signed cert |
| Replication | REPLICATION_ENABLED=true | Gossip + Transport |
| Binding | BINDING_TTL_SECS=300 | Custom TTL works |
| Debug | DEBUG=1 | Debug logging |
API Endpoint Tests
| Endpoint | Method | Test | Expected |
|----------|--------|------|----------|
| /health | GET | Health check | {"status":"ok"} |
| /api/v1/register | POST | Register backend | {"registered":true} |
| /api/v1/backends | GET | List backends | Array of backends |
| /api/v1/backends/:id | GET | Get backend | Backend details |
| /api/v1/heartbeat/:id | POST | Update heartbeat | {"status":"ok"} |
| /api/v1/backends/:id | DELETE | Remove backend | {"deregistered":true} |
AWS Deployment Tests (v0.4.0)
Production deployment tests on AWS Ireland (eu-west-1).
Deployment Details
| Property | Value |
|----------|-------|
| Instance | 34.240.78.199 |
| Region | eu-west-1 (Ireland) |
| Instance Type | t3.micro |
| OS | Ubuntu 22.04 |
| Binary | /opt/edgeproxy/edge-proxy |
| Service | systemd (edgeproxy.service) |
Service Configuration
[Service]
Environment=EDGEPROXY_LISTEN_ADDR=0.0.0.0:8080
Environment=EDGEPROXY_DB_PATH=/opt/edgeproxy/routing.db
Environment=EDGEPROXY_REGION=eu
Environment=EDGEPROXY_DB_RELOAD_SECS=5
Environment=EDGEPROXY_BINDING_TTL_SECS=600
Environment=EDGEPROXY_BINDING_GC_INTERVAL_SECS=60
Environment=EDGEPROXY_TLS_ENABLED=true
Environment=EDGEPROXY_TLS_LISTEN_ADDR=0.0.0.0:8443
Environment=EDGEPROXY_API_ENABLED=true
Environment=EDGEPROXY_API_LISTEN_ADDR=0.0.0.0:8081
Environment=EDGEPROXY_HEARTBEAT_TTL_SECS=60
Environment=EDGEPROXY_DNS_ENABLED=true
Environment=EDGEPROXY_DNS_LISTEN_ADDR=0.0.0.0:5353
Environment=EDGEPROXY_DNS_DOMAIN=internal
Environment=EDGEPROXY_REPLICATION_ENABLED=true
Environment=EDGEPROXY_REPLICATION_NODE_ID=pop-eu-ireland-1
Environment=EDGEPROXY_REPLICATION_GOSSIP_ADDR=0.0.0.0:4001
Environment=EDGEPROXY_REPLICATION_TRANSPORT_ADDR=0.0.0.0:4002
Environment=EDGEPROXY_REPLICATION_DB_PATH=/opt/edgeproxy/state.db
Environment=EDGEPROXY_REPLICATION_CLUSTER_NAME=edgeproxy-prod
Test Results (2025-12-08)
Port Connectivity
| Service | Port | Protocol | Status |
|---------|------|----------|--------|
| TCP Proxy | 8080 | TCP | OK |
| TLS Server | 8443 | TCP | OK |
| API Server | 8081 | TCP | OK |
| DNS Server | 5353 | UDP | OK |
| Gossip | 4001 | UDP | OK |
| Transport | 4002 | UDP | OK |
API Endpoint Tests
| Endpoint | Status | Response |
|----------|--------|----------|
| GET /health | OK | {"status":"ok","version":"0.2.0"} |
| POST /api/v1/register | OK | {"registered":true} |
| GET /api/v1/backends | OK | Lists registered backends |
| GET /api/v1/backends/:id | OK | Returns backend details |
| POST /api/v1/heartbeat/:id | OK | {"status":"ok"} |
| DELETE /api/v1/backends/:id | OK | {"deregistered":true} |
TLS Certificate
subject=CN = rcgen self signed cert
issuer=CN = rcgen self signed cert
Replication State
- State DB: /opt/edgeproxy/state.db (36KB)
- Node ID: pop-eu-ireland-1
- Cluster: edgeproxy-prod
Running Tests on AWS
ssh -i .keys/edgeproxy-hub.pem ubuntu@34.240.78.199
sudo systemctl status edgeproxy
sudo journalctl -u edgeproxy -f
curl http://127.0.0.1:8081/health | jq .
curl http://34.240.78.199:8081/health
Security Group Rules
| Port | Protocol | Source | Description |
|------|----------|--------|-------------|
| 22 | TCP | Your IP | SSH |
| 8080 | TCP | 0.0.0.0/0 | TCP Proxy |
| 8081 | TCP | 0.0.0.0/0 | API Server |
| 8443 | TCP | 0.0.0.0/0 | TLS Server |
| 5353 | UDP | 0.0.0.0/0 | DNS Server |
| 4001 | UDP | VPC CIDR | Gossip (internal) |
| 4002 | UDP | VPC CIDR | Transport (internal) |
Latency Results (Brazil to Ireland)
| Test | Latency |
|------|---------|
| API Health Check | ~408ms |
| 100 requests | 42.8s total (~428ms avg) |
Note: This latency is expected given the geographic distance (Brazil to Ireland is roughly 9,000 km).