# Production Deployment Guide
This guide explains how to build and deploy the Orbital Simulator in production environments.
## Building for Production

### 1. Rust API Server

Build the optimized API server:

```bash
# Build without GUI dependencies (recommended for servers)
cargo build --release --bin api_server --no-default-features

# Or build with GUI support (if system libraries are available)
cargo build --release --bin api_server --features gui
```

The compiled binary will be at `target/release/api_server`.
### 2. Web Frontend

Build the optimized web frontend:

```bash
cd web
npm install
npm run build
```

The built files will be in `web/dist/`.
### 3. Desktop GUI (Optional)

Build the desktop application:

```bash
# Install Tauri CLI if not already installed
cargo install tauri-cli

# Build the desktop app
cargo tauri build --features gui
```

The built application will be in `src-tauri/target/release/bundle/`.
## Deployment Options

### Option 1: Single Server Deployment

Deploy both API and web frontend on the same server:

```bash
# 1. Copy the API server binary
cp target/release/api_server /opt/orbital_simulator/

# 2. Copy the web files
cp -r web/dist /opt/orbital_simulator/static

# 3. Create systemd service
sudo tee /etc/systemd/system/orbital-simulator.service > /dev/null <<EOF
[Unit]
Description=Orbital Simulator API Server
After=network.target

[Service]
Type=simple
User=orbital
WorkingDirectory=/opt/orbital_simulator
ExecStart=/opt/orbital_simulator/api_server
Restart=always
RestartSec=10
Environment=STATIC_DIR=/opt/orbital_simulator/static

[Install]
WantedBy=multi-user.target
EOF

# 4. Start the service
sudo systemctl enable orbital-simulator
sudo systemctl start orbital-simulator
```
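The unit file above runs the server as an `orbital` user; if that account does not exist yet, create it before starting the service. Afterwards, a quick check against the health endpoint described later in this guide (default port 4395) confirms the deployment:

```bash
# Create a dedicated, unprivileged service account referenced by the unit above
sudo useradd --system --no-create-home --shell /usr/sbin/nologin orbital
sudo chown -R orbital: /opt/orbital_simulator

# Confirm the service is running and answering on the default port
sudo systemctl status orbital-simulator --no-pager
curl -f http://localhost:4395/api/health
```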
### Option 2: Separate API and Web Servers

#### API Server (Backend)

```bash
# Deploy API server only
cp target/release/api_server /opt/orbital_simulator/
```

Configure reverse proxy (nginx example):

```nginx
server {
    listen 80;
    server_name api.yourdomain.com;

    location / {
        proxy_pass http://localhost:4395;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
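If the server block above is saved as a separate site file (the filename below is only an example), enable it and reload nginx:

```bash
# Validate the configuration before reloading so a typo cannot take nginx down
sudo ln -s /etc/nginx/sites-available/orbital-api /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```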
#### Web Frontend (Static Hosting)

Deploy `web/dist/` to any static hosting service:

- Nginx: Copy files to web root
- Apache: Copy files to document root
- CDN: Upload to AWS S3, Cloudflare, etc.

Update the API endpoint in the frontend if needed.
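How the endpoint is configured depends on the frontend tooling; as a sketch, if the build reads a Vite-style environment variable (the name `VITE_API_URL` is an assumption, not a value taken from this project), the frontend could be rebuilt against the separately hosted API like this:

```bash
# Hypothetical: point the build at the deployed API before building
cd web
VITE_API_URL=https://api.yourdomain.com npm run build
```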
### Option 3: Docker Deployment

Create a `Dockerfile`:

```dockerfile
# Multi-stage build
FROM rust:1.70 as rust-builder
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
COPY src ./src
COPY build.rs ./
RUN cargo build --release --bin api_server --no-default-features

FROM node:18 as web-builder
WORKDIR /app
COPY web/package*.json ./
RUN npm install
COPY web ./
RUN npm run build

FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=rust-builder /app/target/release/api_server ./
COPY --from=web-builder /app/dist ./static
EXPOSE 4395
CMD ["./api_server"]
```
Build and run:

```bash
docker build -t orbital-simulator .
docker run -p 4395:4395 orbital-simulator
```
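For a long-running container, the same image can be started detached with a restart policy and the environment variables described in the next section (the values here are only examples):

```bash
# Run detached, restart on failure, and override a couple of defaults
docker run -d \
  --name orbital-simulator \
  --restart unless-stopped \
  -p 4395:4395 \
  -e LOG_LEVEL=info \
  -e MAX_SIMULATIONS=50 \
  orbital-simulator
```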
## Configuration

### Environment Variables

- `PORT`: Server port (default: 4395)
- `HOST`: Server host (default: 0.0.0.0)
- `STATIC_DIR`: Directory for static files (default: `web/dist`)
- `LOG_LEVEL`: Logging level (debug, info, warn, error)
- `MAX_SIMULATIONS`: Maximum concurrent simulations (default: 10)
### Example Production Configuration

```bash
export PORT=8080
export HOST=0.0.0.0
export LOG_LEVEL=info
export MAX_SIMULATIONS=50
export STATIC_DIR=/var/www/orbital-simulator
```
## Performance Tuning

### Rust API Server

- Enable release optimizations in `Cargo.toml`:

  ```toml
  [profile.release]
  opt-level = 3
  lto = true
  codegen-units = 1
  panic = 'abort'
  ```

- Limit concurrent simulations to prevent resource exhaustion
- Use connection pooling for database connections (if added)
### Web Frontend

- Enable gzip compression in your web server (a quick check follows this list)
- Set appropriate cache headers for static assets
- Use a CDN for global distribution
- Implement lazy loading for large datasets
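Once the frontend is live, the compression and cache-header points above can be spot-checked from the command line (the hostname and asset path are placeholders):

```bash
# Perform a GET, dump the response headers, and look for compression and caching
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" https://yourdomain.com/assets/index.js \
  | grep -iE "content-encoding|cache-control"
```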
## Monitoring and Logging

### Metrics to Monitor
- API response times
- Active simulation count
- Memory usage
- CPU usage
- WebSocket connection count
### Logging

The application uses the `log` crate. Configure logging level:

```bash
RUST_LOG=info ./api_server
```

Log levels:

- `debug`: Detailed debugging information
- `info`: General operational information
- `warn`: Warning messages
- `error`: Error messages only
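If the binary uses env_logger-style filtering for `RUST_LOG` (an assumption; both `env_logger` and `tracing`'s EnvFilter accept this syntax, and the target name `api_server` is assumed to match the crate name), the level can also be set per module rather than globally:

```bash
# Verbose logging for the application itself, warnings only for dependencies
RUST_LOG=api_server=debug,warn ./api_server
```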
### Health Checks

The API server provides health check endpoints:

- `GET /api/health`: Basic health check
- `GET /api/metrics`: System metrics (if enabled)
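For example, from the host running the server (assuming the default port):

```bash
# Basic liveness probe; -f makes curl exit non-zero on an HTTP error status
curl -f http://localhost:4395/api/health

# System metrics, if the metrics endpoint is enabled
curl http://localhost:4395/api/metrics
```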
## Security Considerations
- Use HTTPS in production (see the sketch after this list)
- Configure CORS appropriately for your domain
- Implement rate limiting to prevent abuse
- Sanitize user inputs in configuration uploads
- Run with minimal privileges (non-root user)
- Keep dependencies updated regularly
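On the HTTPS point above, one common approach with the nginx setup from Option 2 is a Let's Encrypt certificate obtained via certbot, assuming certbot and its nginx plugin are installed and DNS already points at the server:

```bash
# Obtain a certificate and let certbot adjust the nginx server block for TLS
sudo certbot --nginx -d api.yourdomain.com
```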
## Backup and Recovery

### Data to Backup
- Configuration files
- Simulation results (if persisted)
- User data (if user accounts are added)
### Recovery Procedures
- Restore application binary
- Restore configuration files
- Restart services
- Verify functionality
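A minimal sketch of these steps for the single-server layout from Option 1 (paths and the archive name are examples):

```bash
# Back up the install directory and the systemd unit with a dated archive name
sudo tar -czf orbital-backup-$(date +%F).tar.gz \
  /opt/orbital_simulator \
  /etc/systemd/system/orbital-simulator.service

# Recover by unpacking over the original locations and restarting the service
sudo tar -xzf orbital-backup-2025-01-01.tar.gz -C /
sudo systemctl daemon-reload
sudo systemctl restart orbital-simulator
```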
## Scaling

### Horizontal Scaling

- Deploy multiple API server instances behind a load balancer (see the sketch after this list)
- Use Redis for session storage (if sessions are added)
- Implement proper service discovery
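As a sketch of the first point, two instances of the container image from Option 3 can listen on different host ports, with the load balancer listing both as upstream targets:

```bash
# Two API instances on different host ports behind a load balancer
docker run -d --name orbital-api-1 -p 4395:4395 orbital-simulator
docker run -d --name orbital-api-2 -p 4396:4395 orbital-simulator
```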
### Vertical Scaling
- Increase server resources (CPU, RAM)
- Optimize simulation algorithms
- Use faster storage (SSD)
## Troubleshooting

### Common Issues

- Port already in use: Change the `PORT` environment variable
- Static files not found: Check the `STATIC_DIR` path
- High memory usage: Limit concurrent simulations
- Slow performance: Enable release optimizations
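A few commands that help narrow these issues down (the service name and port follow the earlier examples):

```bash
# Which process is holding the port?
sudo lsof -i :4395

# Follow logs from the systemd deployment in Option 1
journalctl -u orbital-simulator -f

# Quick memory/CPU snapshot of the server process
ps -C api_server -o pid,rss,%cpu,cmd
```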
### Debug Mode

Run with debug logging:

```bash
RUST_LOG=debug ./api_server
```
### Performance Profiling

Use `perf` or `valgrind` to profile the application:

```bash
perf record --call-graph=dwarf ./api_server
perf report
```
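If a visual call graph is easier to read, `cargo-flamegraph` wraps `perf` and renders an SVG flame graph (it must be installed first, and profiling generally benefits from debug symbols in the release build):

```bash
cargo install flamegraph
cargo flamegraph --bin api_server
```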
## Support
For issues and questions:
- Check the logs for error messages
- Verify configuration settings
- Test with minimal configuration
- Consult the main README.md for additional information