Documentation Index
Fetch the complete documentation index at: https://docs.osmedeus.org/llms.txt
Use this file to discover all available pages before exploring further.
Setup (30 seconds)
# Enable cloud feature
osmedeus config set cloud.enabled true
# Set credentials (pick your provider)
osmedeus cloud config set providers.aws.access_key_id ${AWS_ACCESS_KEY_ID}
osmedeus cloud config set providers.aws.secret_access_key ${AWS_SECRET_ACCESS_KEY}
osmedeus cloud config set providers.aws.region ap-southeast-1
osmedeus cloud config set defaults.provider aws
# SSH keys
osmedeus cloud config set ssh.private_key_path ~/.ssh/id_rsa
osmedeus cloud config set ssh.public_key_path ~/.ssh/id_rsa.pub
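# If no key pair exists yet, generate one first (plain OpenSSH; the path is an assumption matching the config above)
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""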
# Clean the setup scripts first
osmedeus cloud config set setup.commands.clear ""
# Worker setup commands
osmedeus cloud config set setup.commands.add "curl -fsSL https://www.osmedeus.org/install.sh | bash"
osmedeus cloud config set setup.commands.add "osmedeus install base --preset"
Configuration
osmedeus cloud config list # View all settings
osmedeus cloud config set <key> <value> # Set a value
osmedeus cloud config set <key>.add <value> # Append to list
osmedeus cloud config clean # Reset to defaults
# Provider credentials
osmedeus cloud config set providers.<provider>.<key> <value>
# Instance type
osmedeus cloud config set providers.aws.instance_type t3.large
# Spot instances (70-80% cheaper)
osmedeus cloud config set providers.aws.use_spot true
# Cost limits
osmedeus cloud config set limits.max_hourly_spend 1.00
osmedeus cloud config set limits.max_total_spend 10.00
osmedeus cloud config set limits.max_instances 10
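The same key pattern extends to other providers; a sketch (the token field name below is an assumption; check osmedeus cloud config list for the exact fields your provider expects):
osmedeus cloud config set providers.digitalocean.token ${DO_API_TOKEN}
osmedeus cloud config set defaults.provider digitalocean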
Infrastructure
osmedeus cloud create --provider aws -n 3 # Create instances
osmedeus cloud list # List active infra
osmedeus cloud destroy <infra-id> # Destroy by ID
osmedeus cloud destroy all --force # Destroy everything
osmedeus cloud setup --reuse-with "1.2.3.4,5.6.7.8" # Setup existing machines
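A typical lifecycle, composed from the commands above:
osmedeus cloud create --provider aws -n 2   # provision two workers
osmedeus cloud list                         # note the infra ID from the output
osmedeus cloud destroy <infra-id>           # tear down when finished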
Workflow Mode
# Basic
osmedeus cloud run -f fast -t example.com
osmedeus cloud run -m enum-subdomain -t example.com --timeout 30m
# Multiple instances
osmedeus cloud run -f general -t example.com --instances 3 --provider aws
# Multiple targets distributed across workers
osmedeus cloud run -f fast -T targets.txt --instances 5
osmedeus cloud run -f fast -T targets.txt --chunk-size 10 # 10 targets per worker
osmedeus cloud run -f fast -T targets.txt --chunk-count 3 # Split into 3 chunks
# Reuse existing infrastructure
osmedeus cloud run -f fast -t example.com --reuse
osmedeus cloud run -f fast -t example.com --reuse-with "1.2.3.4,5.6.7.8"
# Sync results back + auto-destroy
osmedeus cloud run -f fast -t example.com --sync-back --auto-destroy
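Putting it together: distribute a target list, cap the runtime, pull results back, and destroy the workers afterwards:
osmedeus cloud run -f fast -T targets.txt --instances 3 --chunk-size 10 --timeout 2h --sync-back --auto-destroy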
Custom Command Mode
Run arbitrary commands on cloud instances (mutually exclusive with -f/-m):
# Single command
osmedeus cloud run --custom-cmd "nmap -sV {{Target}}" -t example.com
# Multiple sequential commands
osmedeus cloud run \
--custom-cmd "subfinder -d {{Target}} -o /tmp/osm-custom/subs.txt" \
--custom-cmd "cat /tmp/osm-custom/subs.txt | httpx -o /tmp/osm-custom/live.txt" \
-t example.com
# Post-commands (run only if all custom-cmds succeed)
osmedeus cloud run \
--custom-cmd "nuclei -u {{Target}} -o /tmp/osm-custom/results.txt" \
--custom-post-cmd "cat /tmp/osm-custom/results.txt | notify" \
-t example.com
# Sync results back
osmedeus cloud run \
--custom-cmd "nmap -sV {{Target}} -oA /tmp/osm-custom/scan" \
--sync-path "/tmp/osm-custom/" \
--sync-dest "./my-results" \
-t example.com
# Distribute targets across workers
osmedeus cloud run \
--custom-cmd "cat {{Target}} | httpx -o /tmp/osm-custom/live.txt" \
--sync-path "/tmp/osm-custom/live.txt" \
-T targets.txt --instances 5 --auto-destroy
Template Variables
| Variable | Description |
|---|---|
| {{Target}} | Target string or chunk file path (with -T) |
| {{public_ip}} | Worker’s public IP |
| {{private_ip}} | Worker’s private IP |
| {{worker_name}} | Resource name |
| {{worker_id}} | Cloud resource ID |
| {{infra_id}} | Infrastructure ID |
| {{provider}} | Provider name |
| {{ssh_user}} | SSH username |
| {{index}} | Worker index (0, 1, 2, …) |
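An illustrative sketch combining several variables (file names are arbitrary); with -T, {{Target}} expands to each worker's chunk file:
osmedeus cloud run \
--custom-cmd "echo 'worker {{worker_name}} ({{public_ip}}) index {{index}}' > /tmp/osm-custom/meta.txt" \
--custom-cmd "httpx -l {{Target}} -o /tmp/osm-custom/live-{{index}}.txt" \
--sync-path "/tmp/osm-custom/" \
-T targets.txt --instances 3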
Behavior
- Commands run in /tmp/osm-custom/ on the remote
- Custom-cmds run sequentially per worker, in parallel across workers
- First failure stops remaining commands and skips post-cmds for that worker
- Sync downloads to: <sync-dest>/<worker_name>-<ip>/<remote_path>
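For example, with --sync-path /tmp/osm-custom/ and the default sync destination, a worker's results land under a path like (worker name and IP are illustrative):
./osm-sync-back/<worker_name>-1.2.3.4/tmp/osm-custom/live.txt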
Flags Reference
| Flag | Short | Description |
|---|---|---|
| --flow | -f | Flow workflow name |
| --module | -m | Module workflow name |
| --target | -t | Single target |
| --target-file | -T | File containing targets |
| --provider | -p | Cloud provider |
| --instances | -n | Number of instances |
| --timeout | | Scan timeout (e.g., 2h, 30m) |
| --auto-destroy | | Destroy infrastructure after completion |
| --reuse | | Auto-discover existing infrastructure |
| --reuse-with | | Reuse specific IPs (comma-separated) |
| --sync-back | | Download workflow results (workflow mode) |
| --verbose-setup | | Show full setup command output |
| --ansible | | Use Ansible playbook for setup |
| --chunk-size | | Targets per worker chunk |
| --chunk-count | | Split targets into N chunks |
| --custom-cmd | | Custom command (repeatable) |
| --custom-post-cmd | | Post-command (repeatable) |
| --sync-path | | Remote path to download (repeatable) |
| --sync-dest | | Local sync directory (default: ./osm-sync-back) |
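For example, a run that provisions via the Ansible playbook, prints the full setup output, and caps the scan at two hours:
osmedeus cloud run -f general -t example.com --ansible --verbose-setup --timeout 2h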
Cost Reference
| Provider | Instance | vCPU | RAM | Hourly |
|---|---|---|---|---|
| Hetzner | cx22 | 2 | 4 GB | ~$0.007 |
| Linode | g6-standard-2 | 2 | 4 GB | $0.018 |
| DigitalOcean | s-2vcpu-4gb | 2 | 4 GB | $0.02232 |
| AWS | t3.medium | 2 | 4 GB | $0.0416 |
| GCP | n1-standard-2 | 2 | 7.5 GB | $0.095 |
| Azure | Standard_B2s | 2 | 4 GB | $0.042 |
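Rough budgeting example from the table: 5 Hetzner cx22 workers for 2 hours cost about 5 × 2 × $0.007 ≈ $0.07, while the same run on AWS t3.medium is about 5 × 2 × $0.0416 ≈ $0.42 on-demand (before any spot discount).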