
Testing

This guide covers how to test edgeProxy locally and in deployment environments using the mock backend server.

Mock Backend Server

The tests/mock-backend/ directory contains a lightweight Go HTTP server that simulates real backend services for testing purposes.

Features

  • Multi-region simulation: Configure different regions per instance
  • Request tracking: Counts requests per backend
  • Multiple endpoints: Root, health, info, and latency endpoints
  • JSON responses: Structured responses for easy parsing
  • Minimal footprint: ~8MB binary, low memory usage

Building the Mock Server

# Native build (for local testing)
cd tests/mock-backend
go build -o mock-backend main.go

# Cross-compile for Linux AMD64 (for EC2/cloud deployment)
GOOS=linux GOARCH=amd64 go build -o mock-backend-linux-amd64 main.go

Running Locally

Start multiple instances to simulate different backends:

# Terminal 1: EU backend 1
./mock-backend -port 9001 -region eu -id mock-eu-1

# Terminal 2: EU backend 2
./mock-backend -port 9002 -region eu -id mock-eu-2

# Terminal 3: US backend
./mock-backend -port 9003 -region us -id mock-us-1

CLI Options

| Flag | Default | Description |
|------|---------|-------------|
| -port | 9001 | TCP port to listen on |
| -region | eu | Region identifier (eu, us, sa, ap) |
| -id | mock-{region}-{port} | Unique backend identifier |

Endpoints

| Endpoint | Description | Response |
|----------|-------------|----------|
| / | Root | Text with backend info |
| /health | Health check | OK - {id} ({region}) |
| /api/info | JSON info | Full backend details |
| /api/latency | Minimal JSON | For latency testing |

Example Response (/api/info)

{
  "backend_id": "mock-eu-1",
  "region": "eu",
  "hostname": "ip-172-31-29-183",
  "port": "9001",
  "request_count": 42,
  "uptime_secs": 3600,
  "timestamp": "2025-12-08T00:11:43Z",
  "message": "Hello from mock backend!"
}

Local Testing Setup

1. Configure routing.db

Add mock backends to your local routing.db:

-- Clear existing test backends
DELETE FROM backends WHERE id LIKE 'mock-%';

-- Add mock backends
INSERT INTO backends (id, app, region, wg_ip, port, healthy, weight, soft_limit, hard_limit)
VALUES
('mock-eu-1', 'test', 'eu', '127.0.0.1', 9001, 1, 2, 100, 150),
('mock-eu-2', 'test', 'eu', '127.0.0.1', 9002, 1, 2, 100, 150),
('mock-us-1', 'test', 'us', '127.0.0.1', 9003, 1, 2, 100, 150);

2. Start Mock Backends

# Start all 3 backends
./tests/mock-backend/mock-backend -port 9001 -region eu -id mock-eu-1 &
./tests/mock-backend/mock-backend -port 9002 -region eu -id mock-eu-2 &
./tests/mock-backend/mock-backend -port 9003 -region us -id mock-us-1 &

3. Run edgeProxy

EDGEPROXY_REGION=eu \
EDGEPROXY_LISTEN_ADDR=0.0.0.0:8080 \
cargo run --release

4. Test Requests

# Simple test
curl http://localhost:8080/api/info

# Multiple requests (observe load balancing)
for i in {1..10}; do
  curl -s http://localhost:8080/api/info | grep backend_id
done

# Health check
curl http://localhost:8080/health

EC2 Deployment Testing

1. Deploy Mock Server to EC2

# Build for Linux
cd tests/mock-backend
GOOS=linux GOARCH=amd64 go build -o mock-backend-linux-amd64 main.go

# Copy to EC2
scp -i ~/.ssh/edgeproxy-key.pem mock-backend-linux-amd64 ubuntu@<EC2-IP>:/tmp/

# SSH and setup
ssh -i ~/.ssh/edgeproxy-key.pem ubuntu@<EC2-IP>
sudo mv /tmp/mock-backend-linux-amd64 /opt/edgeproxy/mock-backend
sudo chmod +x /opt/edgeproxy/mock-backend

2. Start Mock Backends on EC2

# Start 3 instances
cd /opt/edgeproxy
nohup ./mock-backend -port 9001 -region eu -id mock-eu-1 > /tmp/mock-9001.log 2>&1 &
nohup ./mock-backend -port 9002 -region eu -id mock-eu-2 > /tmp/mock-9002.log 2>&1 &
nohup ./mock-backend -port 9003 -region us -id mock-us-1 > /tmp/mock-9003.log 2>&1 &

# Verify
ps aux | grep mock-backend
curl localhost:9001/health
curl localhost:9002/health
curl localhost:9003/health

3. Configure routing.db on EC2

sqlite3 /opt/edgeproxy/routing.db "
DELETE FROM backends WHERE id LIKE 'mock-%';
INSERT INTO backends (id, app, region, wg_ip, port, healthy, weight, soft_limit, hard_limit)
VALUES
('mock-eu-1', 'test', 'eu', '127.0.0.1', 9001, 1, 2, 100, 150),
('mock-eu-2', 'test', 'eu', '127.0.0.1', 9002, 1, 2, 100, 150),
('mock-us-1', 'test', 'us', '127.0.0.1', 9003, 1, 2, 100, 150);
SELECT id, region, port, healthy FROM backends WHERE deleted=0;
"

Backend Fields Explained

| Field | Type | Description | Example |
|-------|------|-------------|---------|
| id | TEXT | Unique identifier for the backend. Used in logs and client affinity. | mock-eu-1 |
| app | TEXT | Application name. Groups backends serving the same app. | test |
| region | TEXT | Geographic region code. Used for geo-routing decisions. Valid: eu, us, sa, ap. | eu |
| wg_ip | TEXT | Backend IP address. Use 127.0.0.1 for local testing, WireGuard IPs (10.50.x.x) in production. | 127.0.0.1 |
| port | INTEGER | TCP port the backend listens on. | 9001 |
| healthy | INTEGER | Health status. 1 = healthy (receives traffic), 0 = unhealthy (excluded from routing). | 1 |
| weight | INTEGER | Relative weight for load balancing. Higher weight = more traffic. Range: 1-10. | 2 |
| soft_limit | INTEGER | Comfortable connection count. Above this, the backend is considered "loaded" and less preferred. | 100 |
| hard_limit | INTEGER | Maximum connections. At or above this limit, the backend is excluded from new connections. | 150 |

Example Data Breakdown

('mock-eu-1', 'test', 'eu', '127.0.0.1', 9001, 1, 2, 100, 150)
| Value | Field | Meaning |
|-------|-------|---------|
| mock-eu-1 | id | Backend identifier, first EU mock server |
| test | app | Application name for testing |
| eu | region | Located in the Europe region |
| 127.0.0.1 | wg_ip | Localhost (same machine as the proxy) |
| 9001 | port | Listening on port 9001 |
| 1 | healthy | Backend is healthy and active |
| 2 | weight | Medium priority (scale 1-10) |
| 100 | soft_limit | Comfortable with up to 100 connections |
| 150 | hard_limit | Maximum 150 connections allowed |

Load Balancer Scoring

The proxy uses these fields to calculate a score for each backend:

score = geo_score * 100 + (connections / soft_limit) / weight

  • geo_score: 0 (same country), 1 (same region), 2 (local POP region), 3 (global fallback)
  • connections: current active connections (from metrics)
  • soft_limit: normalizes the load factor; a higher limit tolerates more connections before the score degrades
  • weight: higher weight reduces the score (more preferred)

Lowest score wins. Backends with healthy=0 or at hard_limit are excluded.
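
The formula above can be sketched in a few lines. This is an illustrative reimplementation, not edgeProxy's actual `load_balancer` code; the struct and function names are assumptions:

```rust
// Illustrative sketch of the scoring formula described above.
// Field names mirror the routing.db schema; this is not the actual
// edgeProxy implementation.
struct Backend {
    healthy: bool,
    weight: f64,     // 1-10, higher = more preferred
    soft_limit: f64, // comfortable connection count
    hard_limit: u32, // exclusion threshold
}

/// Returns None when the backend must be excluded (unhealthy or at hard_limit).
fn score(b: &Backend, geo_score: f64, connections: u32) -> Option<f64> {
    if !b.healthy || connections >= b.hard_limit {
        return None;
    }
    // score = geo_score * 100 + (connections / soft_limit) / weight
    Some(geo_score * 100.0 + (connections as f64 / b.soft_limit) / b.weight)
}

fn main() {
    let b = Backend { healthy: true, weight: 2.0, soft_limit: 100.0, hard_limit: 150 };
    // Same-region backend (geo_score = 1) with 50 active connections:
    // 1 * 100 + (50 / 100) / 2 = 100.25
    assert_eq!(score(&b, 1.0, 50), Some(100.25));
    // At hard_limit the backend is excluded.
    assert_eq!(score(&b, 1.0, 150), None);
}
```

Note how the `* 100` multiplier makes geography dominate: a loaded same-region backend still beats an idle backend one geo tier away.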

4. Test from External Client

# From your local machine
curl http://<EC2-PUBLIC-IP>:8080/api/info
curl http://<EC2-PUBLIC-IP>:8080/health

# Multiple requests to see load balancing
for i in {1..5}; do
  curl -s http://<EC2-PUBLIC-IP>:8080/api/info
  echo ""
done

Testing Scenarios

Client Affinity

Client affinity (sticky sessions) binds clients to the same backend:

# All requests from same IP go to same backend
for i in {1..5}; do
  curl -s http://localhost:8080/api/info | grep backend_id
done
# Expected: All show the same backend_id

Load Distribution

To test load distribution, simulate different clients:

# Use different source IPs or wait for TTL expiration
# Check request_count on each backend
curl localhost:9001/api/info | grep request_count
curl localhost:9002/api/info | grep request_count
curl localhost:9003/api/info | grep request_count

Backend Health

Test health-based routing by stopping a backend:

# Stop mock-eu-1
pkill -f 'mock-backend.*9001'

# Requests should now go to healthy backends
curl http://localhost:8080/api/info
# Expected: Routes to mock-eu-2 or mock-us-1

Geo-Routing

The proxy routes clients to backends in their region:

  1. Configure backends in multiple regions
  2. Test from different geographic locations
  3. Observe routing decisions in proxy logs

Monitoring During Tests

edgeProxy Logs

# On EC2
sudo journalctl -u edgeproxy -f

# Look for:
# - Backend selection logs
# - Connection counts
# - GeoIP resolution

Mock Backend Logs

# Check individual backend logs
tail -f /tmp/mock-9001.log
tail -f /tmp/mock-9002.log
tail -f /tmp/mock-9003.log

Request Distribution

# Quick check of request distribution
echo "mock-eu-1: $(curl -s localhost:9001/api/info | grep -o '"request_count":[0-9]*')"
echo "mock-eu-2: $(curl -s localhost:9002/api/info | grep -o '"request_count":[0-9]*')"
echo "mock-us-1: $(curl -s localhost:9003/api/info | grep -o '"request_count":[0-9]*')"

Cleanup

Local

# Kill all mock backends
pkill -f mock-backend

EC2

# Kill mock backends
sudo pkill -f mock-backend

# Or kill by port
sudo fuser -k 9001/tcp 9002/tcp 9003/tcp

Troubleshooting

Mock Backend Won't Start

# Check if port is in use
sudo ss -tlnp | grep 9001

# Kill existing process
sudo fuser -k 9001/tcp

Proxy Can't Connect to Backend

  1. Verify backend is running: curl localhost:9001/health
  2. Check routing.db configuration
  3. Verify wg_ip matches (use 127.0.0.1 for local testing)
  4. Check firewall rules on EC2

Requests Timeout

  1. Check edgeProxy is running: sudo systemctl status edgeproxy
  2. Verify backend health in routing.db
  3. Check connection limits aren't exceeded

Unit Tests

edgeProxy has comprehensive unit test coverage following the Hexagonal Architecture pattern with Sans-IO design. All tests are written in Rust using the built-in test framework.

Test Summary

| Metric | Value |
|--------|-------|
| Total Tests | 875 |
| Line Coverage | 98.71% |
| Region Coverage | 98.71% |
| Function Coverage | 99.58% |
| Files with 100% | 22 |

Coverage Evolution

The project achieved significant coverage improvements through systematic testing:

| Phase | Coverage | Tests | Key Improvements |
|-------|----------|-------|------------------|
| Initial (stable) | 94.43% | 780 | Basic unit tests |
| Refactoring | 94.92% | 782 | Sans-IO pattern adoption |
| Nightly build | 98.32% | 782 | coverage(off) for I/O |
| Edge case tests | 98.50% | 784 | Circuit breaker, metrics |
| TLS & pool | 98.89% | 786 | TLS, connection pool |
| Replication v0.4.0 | 98.71% | 875 | Merkle tree, mDNS, delta sync |

Sans-IO Architecture Benefits

The Sans-IO pattern separates pure business logic from I/O operations:

┌────────────────────────────────────────────────────────────────┐
│ TESTABLE (100% covered)                                        │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Pure Functions: process_message(), pick_backend(), etc.    │ │
│ │ - No network calls                                         │ │
│ │ - No database access                                       │ │
│ │ - Returns actions to execute                               │ │
│ └────────────────────────────────────────────────────────────┘ │
├────────────────────────────────────────────────────────────────┤
│ I/O WRAPPERS (excluded)                                        │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Async handlers: start(), run(), handle_connection()        │ │
│ │ - Marked with #[cfg_attr(coverage_nightly, coverage(off))] │ │
│ │ - Thin wrappers that execute actions                       │ │
│ └────────────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────────┘

This approach ensures:

  • All business logic is testable without mocking network
  • 100% coverage of decision-making code
  • Clear separation between logic and I/O
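
The split above can be sketched in a few lines. The names here (Action, process_message) are illustrative, not edgeProxy's actual API:

```rust
// Sketch of the Sans-IO shape described above: the pure function decides,
// the caller performs the I/O. Names are illustrative.
#[derive(Debug, PartialEq)]
enum Action {
    Reply(String),
    Drop,
}

/// Pure: no sockets, no database -- input in, actions out.
/// Fully unit-testable without mocking any network.
fn process_message(msg: &str) -> Action {
    match msg {
        "ping" => Action::Reply("pong".to_string()),
        _ => Action::Drop,
    }
}

fn main() {
    assert_eq!(process_message("ping"), Action::Reply("pong".into()));
    assert_eq!(process_message("junk"), Action::Drop);
    // A thin async wrapper (excluded from coverage) would read from the
    // socket, call process_message(), and write any Reply back.
}
```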

Running Tests

# Run all tests
cargo test

# Run tests with output
cargo test -- --nocapture

# Run tests for a specific module
cargo test domain::services::load_balancer

# Run infrastructure tests only
cargo test infrastructure::

# Run tests in parallel (default)
cargo test -- --test-threads=4

# Run single-threaded (for debugging)
cargo test -- --test-threads=1

Tests by Module

Inbound Adapters

| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| adapters::inbound::api_server | 38 | 99.57% | Auto-Discovery API, registration, heartbeat |
| adapters::inbound::dns_server | 44 | 97.80% | DNS server, geo-routing resolution |
| adapters::inbound::tcp_server | 27 | 96.23% | TCP connections, proxy logic |
| adapters::inbound::tls_server | 29 | 94.18% | TLS termination, certificates |

Outbound Adapters

| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| adapters::outbound::dashmap_metrics_store | 20 | 100.00% | Connection metrics, RTT tracking |
| adapters::outbound::dashmap_binding_repo | 21 | 100.00% | Client affinity, TTL, GC |
| adapters::outbound::replication_backend_repo | 28 | 99.85% | Distributed SQLite replication |
| adapters::outbound::sqlite_backend_repo | 20 | 99.26% | SQLite backend storage |
| adapters::outbound::prometheus_metrics_store | 19 | 98.70% | Prometheus metrics export |
| adapters::outbound::maxmind_geo_resolver | 18 | 95.86% | GeoIP resolution |
| adapters::outbound::postgres_backend_repo | 19 | 88.31% | PostgreSQL backend (stub) |

Domain Layer

| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| domain::entities | 12 | 100.00% | Backend, Binding, ClientKey |
| domain::value_objects | 26 | 96.40% | RegionCode, country mapping |
| domain::services::load_balancer | 25 | 98.78% | Scoring algorithm, geo-routing |

Application Layer

| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| application::proxy_service | 26 | 99.43% | Use case orchestration |
| config | 24 | 100.00% | Configuration loading |

Infrastructure Layer

| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| infrastructure::circuit_breaker | 22 | 98.30% | Circuit breaker pattern |
| infrastructure::config_watcher | 17 | 95.30% | Hot reload configuration |
| infrastructure::rate_limiter | 14 | 93.55% | Token bucket rate limiting |
| infrastructure::health_checker | 17 | 92.00% | Active health checks |
| infrastructure::connection_pool | 17 | 93.71% | TCP connection pooling |
| infrastructure::shutdown | 11 | 93.65% | Graceful shutdown |

Replication Layer (v0.4.0)

| Module | Tests | Coverage | Description |
|--------|-------|----------|-------------|
| replication::types | 45 | 98.81% | HLC timestamps, ChangeSet, NodeId |
| replication::config | 12 | 99.34% | Replication configuration |
| replication::sync | 38 | 97.63% | Change detection, LWW conflict resolution |
| replication::gossip | 42 | 98.80% | SWIM protocol, cluster membership |
| replication::transport | 48 | 98.55% | QUIC transport, Sans-IO message encoding |
| replication::agent | 18 | 99.77% | Replication orchestration |
| replication::merkle | 40 | 98.92% | Merkle tree anti-entropy |
| replication::mdns | 25 | 99.02% | mDNS auto-discovery |

Tests by Layer (Hexagonal Architecture)


Infrastructure Components Test Details

Circuit Breaker Tests (22 tests)

cargo test infrastructure::circuit_breaker

| Test | Description |
|------|-------------|
| test_circuit_breaker_new | Initial state is Closed |
| test_circuit_breaker_default | Default configuration |
| test_allow_when_closed | Requests pass in Closed state |
| test_record_success_in_closed | Success tracking |
| test_record_failure_in_closed | Failure tracking |
| test_transitions_to_open | Opens after threshold failures |
| test_deny_when_open | Blocks requests in Open state |
| test_circuit_transitions_to_half_open | Timeout triggers Half-Open |
| test_half_open_allows_limited | Limited requests in Half-Open |
| test_half_open_to_closed | Recovers to Closed on success |
| test_half_open_to_open | Returns to Open on failure |
| test_failure_window_resets | Window resets on success |
| test_get_metrics | Metrics retrieval |
| test_concurrent_record | Thread-safe operations |
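
The state transitions these tests exercise follow the classic Closed → Open → Half-Open machine. A minimal sketch (thresholds and names are illustrative, not edgeProxy's actual types):

```rust
// Minimal circuit-breaker state machine matching the transitions the
// tests above exercise. Illustrative only.
#[derive(Debug, PartialEq, Clone, Copy)]
enum State { Closed, Open, HalfOpen }

struct Breaker {
    state: State,
    failures: u32,
    threshold: u32,
}

impl Breaker {
    fn new(threshold: u32) -> Self {
        Breaker { state: State::Closed, failures: 0, threshold }
    }
    fn record_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.threshold {
            self.state = State::Open; // opens after threshold failures
        }
    }
    fn record_success(&mut self) {
        self.failures = 0; // failure window resets on success
        if self.state == State::HalfOpen {
            self.state = State::Closed; // recovers to Closed
        }
    }
    fn allow(&self) -> bool {
        self.state != State::Open // requests blocked only while Open
    }
}

fn main() {
    let mut b = Breaker::new(3);
    assert!(b.allow()); // requests pass in Closed state
    for _ in 0..3 { b.record_failure(); }
    assert_eq!(b.state, State::Open);
    assert!(!b.allow()); // blocked while Open
    b.state = State::HalfOpen; // in the real breaker, a timeout triggers this
    b.record_success();
    assert_eq!(b.state, State::Closed);
}
```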

Rate Limiter Tests (14 tests)

cargo test infrastructure::rate_limiter

| Test | Description |
|------|-------------|
| test_rate_limit_config_default | Default: 100 req/s, burst 10 |
| test_rate_limiter_new | Creates with config |
| test_check_allows_initial_burst | Burst requests allowed |
| test_check_different_clients_isolated | Per-IP isolation |
| test_remaining | Token count tracking |
| test_clear_client | Reset individual client |
| test_clear_all | Reset all clients |
| test_check_with_cost | Variable cost requests |
| test_cleanup_removes_stale | GC removes old entries |
| test_refill_over_time | Token replenishment |
| test_concurrent_access | Thread-safe operations |
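
The burst and refill behaviors these tests cover come from the token-bucket algorithm. A sketch of the core arithmetic (the real limiter keys buckets per client IP; names here are illustrative):

```rust
// Token-bucket sketch: burst capacity, replenished at a fixed rate.
struct Bucket {
    tokens: f64,
    capacity: f64,    // burst size
    refill_rate: f64, // tokens per second
}

impl Bucket {
    fn new(capacity: f64, refill_rate: f64) -> Self {
        Bucket { tokens: capacity, capacity, refill_rate }
    }
    /// Advance time by `secs`, replenishing tokens up to capacity.
    fn tick(&mut self, secs: f64) {
        self.tokens = (self.tokens + secs * self.refill_rate).min(self.capacity);
    }
    /// Try to take `cost` tokens; false means the request is rate-limited.
    fn check(&mut self, cost: f64) -> bool {
        if self.tokens >= cost {
            self.tokens -= cost;
            true
        } else {
            false
        }
    }
}

fn main() {
    // burst of 10, refilled at 100 tokens/sec (the documented defaults)
    let mut b = Bucket::new(10.0, 100.0);
    for _ in 0..10 { assert!(b.check(1.0)); } // initial burst allowed
    assert!(!b.check(1.0));                   // burst exhausted
    b.tick(0.05);                             // 50ms later: ~5 tokens back
    assert!(b.check(1.0));
}
```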

Health Checker Tests (17 tests)

cargo test infrastructure::health_checker

| Test | Description |
|------|-------------|
| test_health_checker_new | Creates with config |
| test_health_check_config_default | Default intervals |
| test_health_status_default | Initial unknown state |
| test_tcp_check_success | TCP probe success |
| test_tcp_check_failure | TCP probe failure |
| test_tcp_check_timeout | TCP timeout handling |
| test_update_status_becomes_healthy | Threshold transitions |
| test_update_status_becomes_unhealthy | Failure transitions |
| test_on_health_change_callback | Change notifications |
| test_check_backend_success | Backend check OK |
| test_check_backend_failure | Backend check fail |

Connection Pool Tests (17 tests)

cargo test infrastructure::connection_pool

| Test | Description |
|------|-------------|
| test_connection_pool_new | Pool creation |
| test_pool_config_default | Default: 10 max, 60s idle |
| test_acquire_creates_connection | New connection on empty pool |
| test_release_returns_connection | Connection reuse |
| test_pool_exhausted | Max connections error |
| test_acquire_timeout | Connection timeout |
| test_discard_closes_connection | Explicit discard |
| test_stats | Pool statistics |
| test_pooled_connection_is_expired | Lifetime check |
| test_pooled_connection_is_idle_expired | Idle timeout check |
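
The two expiry checks at the end of that list distinguish total lifetime from idle time. A simplified sketch (the real pool tracks Instants per connection; plain Durations keep this self-contained, and all names are illustrative):

```rust
// Sketch of pooled-connection expiry: a connection is discarded once it
// exceeds its max lifetime, or once it has sat idle past the idle timeout.
use std::time::Duration;

struct PooledConn {
    age: Duration,      // time since the connection was created
    idle_for: Duration, // time since it was last used
}

impl PooledConn {
    fn is_expired(&self, max_lifetime: Duration) -> bool {
        self.age >= max_lifetime
    }
    fn is_idle_expired(&self, max_idle: Duration) -> bool {
        self.idle_for >= max_idle
    }
}

fn main() {
    let c = PooledConn { age: Duration::from_secs(120), idle_for: Duration::from_secs(5) };
    assert!(c.is_expired(Duration::from_secs(90)));       // past lifetime: discard
    assert!(!c.is_idle_expired(Duration::from_secs(60))); // recently used: keep
}
```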

Graceful Shutdown Tests (11 tests)

cargo test infrastructure::shutdown

| Test | Description |
|------|-------------|
| test_shutdown_controller_new | Controller creation |
| test_connection_guard | RAII guard creation |
| test_connection_tracking | Active count tracking |
| test_multiple_connection_guards | Concurrent guards |
| test_shutdown_initiates_once | Single shutdown |
| test_subscribe_receives_shutdown | Broadcast notification |
| test_wait_for_drain_immediate | No connections case |
| test_wait_for_drain_with_connections | Waits for drain |
| test_wait_for_drain_timeout | Timeout behavior |
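
The RAII connection-guard idea these tests exercise can be sketched with a shared atomic counter: each live connection holds a guard, and drain completes when the count reaches zero. Types and names here are illustrative, not edgeProxy's actual shutdown module:

```rust
// Sketch of RAII connection tracking for graceful shutdown.
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};

struct Tracker {
    active: AtomicUsize,
}

impl Tracker {
    fn active_count(&self) -> usize {
        self.active.load(Ordering::SeqCst)
    }
}

struct Guard {
    tracker: Arc<Tracker>,
}

/// Take a guard for the lifetime of one connection.
fn guard(tracker: &Arc<Tracker>) -> Guard {
    tracker.active.fetch_add(1, Ordering::SeqCst);
    Guard { tracker: Arc::clone(tracker) }
}

impl Drop for Guard {
    fn drop(&mut self) {
        // Connection finished: decrement so wait_for_drain can complete.
        self.tracker.active.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    let t = Arc::new(Tracker { active: AtomicUsize::new(0) });
    {
        let _g1 = guard(&t);
        let _g2 = guard(&t);
        assert_eq!(t.active_count(), 2); // two tracked connections
    }
    assert_eq!(t.active_count(), 0); // guards dropped => drained
}
```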

Config Watcher Tests (17 tests)

cargo test infrastructure::config_watcher

| Test | Description |
|------|-------------|
| test_config_watcher_new | Watcher creation |
| test_watch_file | File monitoring |
| test_watch_nonexistent_file | Error handling |
| test_unwatch_file | Remove from watch |
| test_set_and_get | Config values |
| test_get_or | Default values |
| test_subscribe_value_change | Change notifications |
| test_no_change_on_same_value | No spurious events |
| test_check_files_detects_modification | File change detection |
| test_hot_value_get_set | HotValue wrapper |

Code Coverage

Coverage Tools

edgeProxy uses cargo-llvm-cov for code coverage measurement with LLVM instrumentation.

Installation

# Install cargo-llvm-cov
cargo install cargo-llvm-cov

# Install LLVM tools (required for coverage)
rustup component add llvm-tools-preview

# Install nightly toolchain (for coverage(off) support)
rustup toolchain install nightly
rustup run nightly rustup component add llvm-tools-preview

Running Coverage

# Basic coverage report (stable Rust - includes I/O code)
cargo llvm-cov

# Coverage with nightly (RECOMMENDED - excludes I/O code marked with coverage(off))
rustup run nightly cargo llvm-cov

# Summary only
rustup run nightly cargo llvm-cov --summary-only

# Coverage with HTML report
rustup run nightly cargo llvm-cov --html

# Coverage with LCOV output
rustup run nightly cargo llvm-cov --lcov --output-path lcov.info

# Open HTML report
open target/llvm-cov/html/index.html

Important: Use rustup run nightly to enable #[coverage(off)] attributes. With stable Rust, I/O code will be included in coverage metrics, resulting in ~94% coverage instead of ~99%.

Coverage Results

Final Coverage: 98.71% (7,159 lines, 98.98% line coverage)

Note: Coverage measured with cargo +nightly llvm-cov to enable coverage(off) attributes on I/O code.

Coverage by Layer

| Layer | Regions | Coverage | Status |
|-------|---------|----------|--------|
| Domain | 761 | 99.47% | ✓ Excellent |
| Application | 706 | 99.72% | ✓ Excellent |
| Inbound Adapters | 2,100 | 98.90% | ✓ Excellent |
| Outbound Adapters | 1,450 | 98.62% | ✓ Excellent |
| Infrastructure | 455 | 95.30% | ✓ Very Good |
| Replication | 7,159 | 98.98% | ✓ Excellent |
| Config | 286 | 100.00% | ✓ Complete |

Detailed Coverage by File

Core Components (100% Coverage)

| File | Lines | Coverage |
|------|-------|----------|
| config.rs | 286 | 100.00% |
| domain/entities.rs | 130 | 100.00% |
| adapters/outbound/dashmap_metrics_store.rs | 224 | 100.00% |
| adapters/outbound/dashmap_binding_repo.rs | 287 | 100.00% |

Inbound Adapters

| File | Lines | Covered | Coverage |
|------|-------|---------|----------|
| adapters/inbound/api_server.rs | 928 | 924 | 99.57% |
| adapters/inbound/dns_server.rs | 774 | 757 | 97.80% |
| adapters/inbound/tcp_server.rs | 849 | 817 | 96.23% |
| adapters/inbound/tls_server.rs | 996 | 938 | 94.18% |

Outbound Adapters

| File | Lines | Covered | Coverage |
|------|-------|---------|----------|
| adapters/outbound/replication_backend_repo.rs | 677 | 676 | 99.85% |
| adapters/outbound/sqlite_backend_repo.rs | 404 | 401 | 99.26% |
| adapters/outbound/prometheus_metrics_store.rs | 307 | 303 | 98.70% |
| adapters/outbound/maxmind_geo_resolver.rs | 145 | 139 | 95.86% |
| adapters/outbound/postgres_backend_repo.rs | 231 | 204 | 88.31% |

Infrastructure Layer

| File | Regions | Coverage |
|------|---------|----------|
| infrastructure/circuit_breaker.rs | 353 | 98.30% |
| infrastructure/config_watcher.rs | 744 | 95.30% |
| infrastructure/rate_limiter.rs | 589 | 93.55% |
| infrastructure/health_checker.rs | 950 | 92.00% |
| infrastructure/connection_pool.rs | 1224 | 93.71% |
| infrastructure/shutdown.rs | 488 | 93.65% |

Replication Layer (v0.4.0)

| File | Regions | Coverage |
|------|---------|----------|
| replication/types.rs | 1186 | 98.81% |
| replication/config.rs | 303 | 99.34% |
| replication/sync.rs | 1875 | 97.28% |
| replication/gossip.rs | 2350 | 98.80% |
| replication/transport.rs | 1668 | 98.55% |
| replication/agent.rs | 981 | 99.77% |
| replication/merkle.rs | 1030 | 98.92% |
| replication/mdns.rs | 619 | 99.02% |

Coverage Exclusions (Sans-IO Pattern)

The Sans-IO pattern separates pure business logic from I/O operations. Code that performs actual I/O is excluded from coverage using #[cfg_attr(coverage_nightly, coverage(off))]:

| Code | Reason |
|------|--------|
| main.rs | Entry point, composition root |
| handle_packet() (dns_server) | Network I/O dependent |
| proxy_bidirectional() (tcp_server) | Real TCP socket operations |
| start(), run() (servers) | Async event loops with network I/O |
| start_event_loop(), start_flush_loop() (agent) | Background async loops |
| request() (transport) | QUIC network operations |
| release(), acquire(), clear() (connection_pool) | Async connection management |
| handle_connection() (transport) | QUIC connection handling |
| start(), execute_actions() (gossip) | UDP gossip I/O |
| start(), start_discovery() (mdns) | mDNS network I/O |
| SkipServerVerification impl | TLS callback (cannot unit test) |
| Test modules (#[cfg(test)]) | Test code is not production code |
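
The exclusion pattern looks like this in practice: the pure decision function stays measurable, while the I/O wrapper opts out under nightly's coverage instrumentation. The function names below are illustrative, not code from edgeProxy:

```rust
// Pure decision logic: always measured by coverage.
fn pick_reply(healthy: bool) -> &'static str {
    if healthy { "OK" } else { "UNAVAILABLE" }
}

// On nightly with coverage instrumentation, this wrapper is excluded from
// the report; on stable the cfg is false and the attribute is not applied.
#[cfg_attr(coverage_nightly, coverage(off))]
fn respond(healthy: bool) {
    // ...a real wrapper would write pick_reply(healthy) to a socket here...
    let _ = pick_reply(healthy);
}

fn main() {
    assert_eq!(pick_reply(true), "OK");
    assert_eq!(pick_reply(false), "UNAVAILABLE");
    respond(true);
}
```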

Remaining Uncovered Regions (162 total)

The 162 uncovered regions fall into these categories:

| Category | Regions | Reason |
|----------|---------|--------|
| Database errors | 20 | DB connection failures (unreachable paths) |
| Network I/O | 45 | Async network operations excluded |
| CAS retry loops | 25 | Atomic compare-and-swap retries |
| Tracing calls | 18 | tracing::warn!() in error branches |
| TLS/QUIC callbacks | 30 | Crypto callbacks (cannot unit test) |
| Signal handlers | 10 | OS signal handling |
| mDNS callbacks | 14 | mDNS event handling |

These represent edge cases that require:

  • External system failures (DB, network)
  • Specific concurrent conditions (CAS retries)
  • TLS/QUIC handshake callbacks
  • OS-level signal handling

All business logic is 100% covered - only I/O wrappers and unreachable error paths remain.

Testing Philosophy

edgeProxy follows these testing principles:

  1. Domain logic is pure and fully tested: LoadBalancer scoring algorithm has no external dependencies
  2. Adapters test through interfaces: Mock implementations of traits for unit testing
  3. Integration tests use real components: Mock backend server for E2E testing
  4. Network code has coverage exclusions: I/O-bound code is tested via integration tests
  5. Infrastructure is modular: Each component can be tested in isolation

Continuous Integration

# Example CI configuration for coverage
test:
  script:
    - cargo test
    - rustup run nightly cargo llvm-cov --fail-under-lines 98

coverage:
  script:
    - rustup run nightly cargo llvm-cov --html
  artifacts:
    paths:
      - target/llvm-cov/html/

The --fail-under-lines 98 flag ensures coverage doesn't drop below 98% in CI.

New Tests Added (v0.3.1)

| Module | Test | Description |
|--------|------|-------------|
| circuit_breaker | test_allow_request_when_already_half_open | Tests idempotent HalfOpen transition |
| circuit_breaker | test_record_success_when_open | Tests success recording in Open state |
| prometheus_metrics_store | test_global_metrics | Tests aggregated global metrics |
| prometheus_metrics_store | test_concurrent_decrement | Tests concurrent counter operations |
| types | test_hlc_compare_same_time_different_counter | Tests HLC counter tiebreaker |
| types | test_hlc_compare_same_time_same_counter | Tests HLC equality case |
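
The two HLC comparison cases can be sketched as lexicographic ordering: wall time first, then the logical counter as tiebreaker, then node id for a total order. The struct shape below is a toy illustration, not edgeProxy's actual HLC type:

```rust
// Hybrid-logical-clock ordering sketch. Derived Ord compares fields in
// declaration order, which gives exactly the tiebreaking described above.
use std::cmp::Ordering;

#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Hlc {
    wall_ms: u64, // physical timestamp (milliseconds)
    counter: u32, // logical counter, breaks same-millisecond ties
    node_id: u16, // final tiebreaker for a total order across nodes
}

fn main() {
    let a = Hlc { wall_ms: 100, counter: 1, node_id: 1 };
    let b = Hlc { wall_ms: 100, counter: 2, node_id: 1 };
    // Same wall time: the counter decides (the tiebreaker test case).
    assert_eq!(a.cmp(&b), Ordering::Less);
    // Same time and counter on the same node: equal (the equality case).
    let c = Hlc { wall_ms: 100, counter: 1, node_id: 1 };
    assert_eq!(a.cmp(&c), Ordering::Equal);
}
```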

New Tests Added (v0.4.0)

Merkle Tree Tests (40 tests)

| Test | Description |
|------|-------------|
| test_merkle_tree_new | Tree creation and initialization |
| test_merkle_tree_insert | Single row insertion |
| test_merkle_tree_update | Row update changes hash |
| test_merkle_tree_remove | Row removal |
| test_merkle_tree_root_hash | Root hash calculation |
| test_merkle_tree_diff | Detecting differences between trees |
| test_merkle_tree_diff_at_depth | Multi-level diff traversal |
| test_merkle_tree_get_hash | Hash retrieval at depth |
| test_merkle_tree_leaves | Leaf node access |
| test_merkle_message_serialization | Message roundtrip |
| test_merkle_message_range_response | Range response message |
| test_merkle_message_data_request | Data request message |
| test_hash_to_prefix_at_depth_zero | Prefix calculation at depth 0 |
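
The anti-entropy idea behind these tests: two nodes first compare a single root hash, and only when the roots differ do they descend to find the divergent rows. A toy illustration using std's DefaultHasher (edgeProxy's real tree is layered by hash prefix and far richer):

```rust
// Merkle-style anti-entropy sketch: root comparison, then leaf diff.
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Root hash over all (key, value) leaves in sorted order.
fn root(rows: &BTreeMap<&str, &str>) -> u64 {
    let mut h = DefaultHasher::new();
    for (k, v) in rows {
        (k, v).hash(&mut h);
    }
    h.finish()
}

fn main() {
    let mut a = BTreeMap::from([("mock-eu-1", "healthy"), ("mock-us-1", "healthy")]);
    let b = a.clone();
    assert_eq!(root(&a), root(&b)); // identical trees: nothing to sync

    a.insert("mock-eu-1", "unhealthy"); // one row diverges
    assert_ne!(root(&a), root(&b));     // root mismatch triggers a diff walk

    // Leaf comparison pinpoints the changed key:
    let changed: Vec<_> = a.iter()
        .filter(|(k, v)| b.get(*k) != Some(v))
        .map(|(k, _)| *k)
        .collect();
    assert_eq!(changed, vec!["mock-eu-1"]);
}
```

The payoff is bandwidth: in the common case where replicas agree, a single hash exchange settles it.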

mDNS Discovery Tests (25 tests)

| Test | Description |
|------|-------------|
| test_mdns_discovery_new | Discovery service creation |
| test_mdns_config_service_type | Service type configuration |
| test_discovered_peer_debug | DiscoveredPeer debug format |
| test_gossip_addr | Gossip address accessor |
| test_transport_addr | Transport address accessor |
| test_notify_discovered_logs_valid_peer | Valid peer notification logging |
| test_notify_discovered_different_cluster | Cross-cluster peer filtering |
| test_notify_discovered_channel_closed | Channel error handling |

Delta Sync Tests

| Test | Description |
|------|-------------|
| test_field_op_serialization | FieldOp enum roundtrip |
| test_delta_data_new | DeltaData creation |
| test_change_data_full | Full row change data |
| test_change_data_delta | Delta change data |
| test_apply_delta_change | Delta application to row |

Transport Sans-IO Tests (48 tests)

| Test | Description |
|------|-------------|
| test_encode_message_* | Message encoding for all types |
| test_decode_message_* | Message decoding with validation |
| test_validate_broadcast | Broadcast checksum validation |
| test_create_sync_request | SyncRequest message creation |
| test_extract_sync_response | Response changeset extraction |
| test_message_type_name | Message type string conversion |
| test_count_broadcast_changes | Change counting |
| test_get_broadcast_seq | Sequence number extraction |
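
These encode/decode tests are possible without any network because framing is a pure byte transformation. A toy length-prefixed codec shows the shape (this is not edgeProxy's actual wire format, which runs over QUIC):

```rust
// Sans-IO framing sketch: encode and decode are pure functions over bytes,
// so malformed or truncated input is a testable return value, not a panic.
fn encode(msg: &[u8]) -> Vec<u8> {
    let mut out = (msg.len() as u32).to_be_bytes().to_vec(); // 4-byte length prefix
    out.extend_from_slice(msg);
    out
}

/// Returns (payload, remaining bytes), or None if the frame is incomplete.
fn decode(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    if buf.len() < 4 {
        return None;
    }
    let len = u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if buf.len() < 4 + len {
        return None;
    }
    Some((&buf[4..4 + len], &buf[4 + len..]))
}

fn main() {
    let frame = encode(b"sync-request");
    let (payload, rest) = decode(&frame).unwrap();
    assert_eq!(payload, &b"sync-request"[..]);
    assert!(rest.is_empty());
    // A truncated buffer decodes to None -- no I/O, no panic.
    assert_eq!(decode(&frame[..3]), None);
}
```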

Configuration Tests (v0.4.0)

Configuration tests validate all environment variable combinations and service configurations.

Running Configuration Tests

# Run all configuration tests
task test:config-all

# Run individual tests
task test:config-default # TCP only (minimal config)
task test:config-all-services # All services enabled
task test:config-regions # Test sa, us, eu, ap regions
task test:config-api # API endpoints
task test:config-tls # TLS with self-signed cert
task test:config-replication # Replication (gossip + transport)
task test:config-binding # Binding TTL configuration
task test:config-debug # Debug mode

# Cleanup between tests
task test:config-cleanup

Test Scenarios

| Test | Environment Variables | Expected Result |
|------|-----------------------|-----------------|
| Default | LISTEN_ADDR, DB_PATH, REGION | TCP proxy on 8080 |
| All Services | All vars enabled | 6 ports listening |
| Regions | REGION=sa/us/eu/ap | Each region starts |
| API | API_ENABLED=true | 6 endpoints working |
| TLS | TLS_ENABLED=true | Self-signed cert |
| Replication | REPLICATION_ENABLED=true | Gossip + Transport |
| Binding | BINDING_TTL_SECS=300 | Custom TTL works |
| Debug | DEBUG=1 | Debug logging |

API Endpoint Tests

| Endpoint | Method | Test | Expected |
|----------|--------|------|----------|
| /health | GET | Health check | {"status":"ok"} |
| /api/v1/register | POST | Register backend | {"registered":true} |
| /api/v1/backends | GET | List backends | Array of backends |
| /api/v1/backends/:id | GET | Get backend | Backend details |
| /api/v1/heartbeat/:id | POST | Update heartbeat | {"status":"ok"} |
| /api/v1/backends/:id | DELETE | Remove backend | {"deregistered":true} |

AWS Deployment Tests (v0.4.0)

Production deployment tests on AWS Ireland (eu-west-1).

Deployment Details

| Property | Value |
|----------|-------|
| Instance | 34.240.78.199 |
| Region | eu-west-1 (Ireland) |
| Instance Type | t3.micro |
| OS | Ubuntu 22.04 |
| Binary | /opt/edgeproxy/edge-proxy |
| Service | systemd (edgeproxy.service) |

Service Configuration

[Service]
Environment=EDGEPROXY_LISTEN_ADDR=0.0.0.0:8080
Environment=EDGEPROXY_DB_PATH=/opt/edgeproxy/routing.db
Environment=EDGEPROXY_REGION=eu
Environment=EDGEPROXY_DB_RELOAD_SECS=5
Environment=EDGEPROXY_BINDING_TTL_SECS=600
Environment=EDGEPROXY_BINDING_GC_INTERVAL_SECS=60
Environment=EDGEPROXY_TLS_ENABLED=true
Environment=EDGEPROXY_TLS_LISTEN_ADDR=0.0.0.0:8443
Environment=EDGEPROXY_API_ENABLED=true
Environment=EDGEPROXY_API_LISTEN_ADDR=0.0.0.0:8081
Environment=EDGEPROXY_HEARTBEAT_TTL_SECS=60
Environment=EDGEPROXY_DNS_ENABLED=true
Environment=EDGEPROXY_DNS_LISTEN_ADDR=0.0.0.0:5353
Environment=EDGEPROXY_DNS_DOMAIN=internal
Environment=EDGEPROXY_REPLICATION_ENABLED=true
Environment=EDGEPROXY_REPLICATION_NODE_ID=pop-eu-ireland-1
Environment=EDGEPROXY_REPLICATION_GOSSIP_ADDR=0.0.0.0:4001
Environment=EDGEPROXY_REPLICATION_TRANSPORT_ADDR=0.0.0.0:4002
Environment=EDGEPROXY_REPLICATION_DB_PATH=/opt/edgeproxy/state.db
Environment=EDGEPROXY_REPLICATION_CLUSTER_NAME=edgeproxy-prod

Test Results (2025-12-08)

Port Connectivity

| Service | Port | Protocol | Status |
|---------|------|----------|--------|
| TCP Proxy | 8080 | TCP | OK |
| TLS Server | 8443 | TCP | OK |
| API Server | 8081 | TCP | OK |
| DNS Server | 5353 | UDP | OK |
| Gossip | 4001 | UDP | OK |
| Transport | 4002 | UDP | OK |

API Endpoint Tests

| Endpoint | Status | Response |
|----------|--------|----------|
| GET /health | OK | {"status":"ok","version":"0.2.0"} |
| POST /api/v1/register | OK | {"registered":true} |
| GET /api/v1/backends | OK | Lists registered backends |
| GET /api/v1/backends/:id | OK | Returns backend details |
| POST /api/v1/heartbeat/:id | OK | {"status":"ok"} |
| DELETE /api/v1/backends/:id | OK | {"deregistered":true} |

TLS Certificate

subject=CN = rcgen self signed cert
issuer=CN = rcgen self signed cert

Replication State

  • State DB: /opt/edgeproxy/state.db (36KB)
  • Node ID: pop-eu-ireland-1
  • Cluster: edgeproxy-prod

Running Tests on AWS

# SSH to instance
ssh -i .keys/edgeproxy-hub.pem ubuntu@34.240.78.199

# Check service status
sudo systemctl status edgeproxy

# View logs
sudo journalctl -u edgeproxy -f

# Test API locally
curl http://127.0.0.1:8081/health | jq .

# Test from external (requires Security Group rules)
curl http://34.240.78.199:8081/health

Security Group Rules

| Port | Protocol | Source | Description |
|------|----------|--------|-------------|
| 22 | TCP | Your IP | SSH |
| 8080 | TCP | 0.0.0.0/0 | TCP Proxy |
| 8081 | TCP | 0.0.0.0/0 | API Server |
| 8443 | TCP | 0.0.0.0/0 | TLS Server |
| 5353 | UDP | 0.0.0.0/0 | DNS Server |
| 4001 | UDP | VPC CIDR | Gossip (internal) |
| 4002 | UDP | VPC CIDR | Transport (internal) |

Latency Results (Brazil to Ireland)

| Test | Latency |
|------|---------|
| API Health Check | ~408ms |
| 100 requests | 42.8s total (~428ms avg) |

Note: This latency is expected given the geographic distance (Brazil to Ireland, ~9,000 km).