Osmedeus supports 8 step types for different execution needs.

Overview

| Type | Description | Primary Use |
|------|-------------|-------------|
| `bash` | Execute shell commands | Run tools, file operations |
| `function` | Run utility functions | Conditions, logging, file checks |
| `foreach` | Iterate over file lines | Process lists |
| `parallel-steps` | Run steps concurrently | Parallel tool execution |
| `remote-bash` | Per-step Docker/SSH | Mixed environments |
| `http` | Make HTTP requests | API calls, webhooks |
| `llm` | AI-powered processing | Analysis, summarization |
| `fragment-step` | Execute a reusable fragment | Code reuse, DRY patterns |

bash

Execute shell commands.

Basic Command

- name: run-subfinder
  type: bash
  command: subfinder -d {{target}} -o {{Output}}/subs.txt

Multiple Commands (Sequential)

- name: setup
  type: bash
  commands:
    - mkdir -p {{Output}}/scans
    - echo "Starting scan for {{target}}"
    - date > {{Output}}/start-time.txt

Parallel Commands

- name: run-tools
  type: bash
  parallel_commands:
    - subfinder -d {{target}} -o {{Output}}/subfinder.txt
    - amass enum -passive -d {{target}} -o {{Output}}/amass.txt
    - assetfinder {{target}} > {{Output}}/assetfinder.txt

Structured Arguments

- name: nuclei-scan
  type: bash
  command: nuclei
  input_args:
    - name: target-list
      flag: -l
      value: "{{Output}}/live.txt"
  output_args:
    - name: output
      flag: -o
      value: "{{Output}}/nuclei.txt"
  config_args:
    - name: templates
      flag: -t
      value: "{{Data}}/templates/cves"
  speed_args:
    - name: rate-limit
      flag: -rl
      value: "150"

Save Output to File

- name: scan
  type: bash
  command: nmap -sV {{target}}
  std_file: "{{Output}}/nmap-output.txt"

function

Execute utility functions via the Goja JavaScript VM.

Single Function

- name: log-start
  type: function
  function: log_info("Starting scan for {{target}}")

Multiple Functions

- name: check-files
  type: function
  functions:
    - log_info("Checking prerequisites")
    - fileExists("{{Output}}/targets.txt")
    - log_info("Ready to proceed")

Parallel Functions

- name: parallel-checks
  type: function
  parallel_functions:
    - fileLength("{{Output}}/subs.txt")
    - fileLength("{{Output}}/urls.txt")
    - fileLength("{{Output}}/live.txt")

Use in Conditions

- name: run-if-exists
  type: bash
  pre_condition: 'fileExists("{{Output}}/targets.txt")'
  command: nuclei -l {{Output}}/targets.txt

foreach

Iterate over lines in a file with parallel execution using a worker pool.

Basic Loop

- name: probe-subdomains
  type: foreach
  input: "{{Output}}/subdomains.txt"
  variable: subdomain
  threads: 10
  step:
    name: httpx-probe
    type: bash
    command: echo [[subdomain]] | httpx -silent >> {{Output}}/live.txt

With Nested Variables

- name: scan-hosts
  type: foreach
  input: "{{Output}}/hosts.txt"
  variable: host
  threads: 5
  step:
    name: nuclei-scan
    type: bash
    command: nuclei -u [[host]] -t {{templates}} -o {{Output}}/nuclei-[[host]].txt

Bounded Concurrency

The foreach executor uses a worker pool pattern:
┌───────────────────────────────────────────────────┐
│                  Foreach Executor                  │
│                                                    │
│   Input File: subdomains.txt (1000 lines)         │
│                      │                             │
│                      ▼                             │
│   ┌────────────────────────────────────────────┐  │
│   │            Worker Pool (threads: 10)        │  │
│   │  ┌────┐ ┌────┐ ┌────┐ ... ┌────┐          │  │
│   │  │ W1 │ │ W2 │ │ W3 │     │ W10│          │  │
│   │  └────┘ └────┘ └────┘     └────┘          │  │
│   └────────────────────────────────────────────┘  │
│                      │                             │
│                      ▼                             │
│   Results collected after all items processed      │
│                                                    │
└───────────────────────────────────────────────────┘
  • Workers pull items from a shared queue
  • At most `threads` items are processed concurrently
  • Memory-efficient: goroutines are not spawned for all items upfront
  • Graceful cancellation on context timeout

Fields

| Field | Required | Description |
|-------|----------|-------------|
| `input` | Yes | Path to file with items (one per line) |
| `variable` | Yes | Variable name for the current item |
| `threads` | No | Maximum concurrent iterations (default: 1) |
| `step` | Yes | Step to execute for each item |

Variable Syntax

Use [[variable]] (double brackets) for loop variables to avoid conflicts with {{...}} template variables:
step:
  name: scan
  type: bash
  # [[subdomain]] - replaced per iteration
  # {{Output}} - resolved once before loop
  command: nuclei -u [[subdomain]] -o {{Output}}/result-[[subdomain]].txt

Nested Foreach

Foreach steps can contain other foreach steps:
- name: scan-ports-per-host
  type: foreach
  input: "{{Output}}/hosts.txt"
  variable: host
  threads: 5
  step:
    name: scan-ports
    type: foreach
    input: "{{Output}}/ports.txt"
    variable: port
    threads: 10
    step:
      name: probe
      type: bash
      command: nc -zv [[host]] [[port]]

parallel-steps

Run multiple steps concurrently.
- name: parallel-recon
  type: parallel-steps
  parallel_steps:
    - name: subfinder
      type: bash
      command: subfinder -d {{target}} -o {{Output}}/subfinder.txt

    - name: amass
      type: bash
      command: amass enum -passive -d {{target}} -o {{Output}}/amass.txt

    - name: findomain
      type: bash
      command: findomain -t {{target}} -o {{Output}}/findomain.txt
Nested steps can be any type:
- name: parallel-checks
  type: parallel-steps
  parallel_steps:
    - name: check-dns
      type: bash
      command: dig {{target}}

    - name: log-check
      type: function
      function: log_info("Parallel check running")

    - name: probe-hosts
      type: foreach
      input: "{{Output}}/subs.txt"
      variable: sub
      threads: 5
      step:
        type: bash
        command: echo [[sub]] | httpx

remote-bash

Execute commands in Docker or over SSH without configuring a module-level runner.

Docker Execution

- name: docker-nuclei
  type: remote-bash
  step_runner: docker
  step_runner_config:
    image: projectdiscovery/nuclei:latest
    volumes:
      - "{{Output}}:/output"
    environment:
      - "API_KEY={{api_key}}"
  command: nuclei -u {{target}} -o /output/nuclei.txt

SSH Execution

- name: ssh-nmap
  type: remote-bash
  step_runner: ssh
  step_runner_config:
    host: "{{ssh_host}}"
    port: 22
    user: "{{ssh_user}}"
    key_file: ~/.ssh/scanner_key
  command: nmap -sV {{target}} -oN /tmp/nmap.txt
  step_remote_file: /tmp/nmap.txt
  host_output_file: "{{Output}}/nmap-result.txt"

Fields

| Field | Required | Description |
|-------|----------|-------------|
| `step_runner` | Yes | `docker` or `ssh` |
| `step_runner_config` | Yes | Runner configuration |
| `command` | Yes | Command to execute |
| `step_remote_file` | No | Remote file to copy back |
| `host_output_file` | No | Local destination for the remote file |

http

Make HTTP requests with automatic retries and connection pooling.

Supported Methods

| Method | Description |
|--------|-------------|
| GET | Retrieve data (default) |
| POST | Create/submit data |
| PUT | Update/replace a resource |
| PATCH | Partially update a resource |
| DELETE | Remove a resource |

GET Request

- name: fetch-api
  type: http
  url: "https://api.example.com/data/{{target}}"
  method: GET
  headers:
    Authorization: "Bearer {{api_token}}"
  exports:
    api_response: "{{fetch_api_http_resp.response_body}}"
    status: "{{fetch_api_http_resp.status_code}}"

POST Request

- name: submit-scan
  type: http
  url: "https://scanner.example.com/api/scan"
  method: POST
  headers:
    Content-Type: application/json
  request_body: |
    {
      "target": "{{target}}",
      "scan_type": "full"
    }

PUT Request

- name: update-config
  type: http
  url: "https://api.example.com/config/{{target}}"
  method: PUT
  headers:
    Content-Type: application/json
    Authorization: "Bearer {{api_token}}"
  request_body: '{"enabled": true}'

PATCH Request

- name: patch-status
  type: http
  url: "https://api.example.com/scan/{{scan_id}}"
  method: PATCH
  headers:
    Content-Type: application/json
  request_body: '{"status": "completed"}'

DELETE Request

- name: remove-entry
  type: http
  url: "https://api.example.com/entries/{{entry_id}}"
  method: DELETE
  headers:
    Authorization: "Bearer {{api_token}}"

Auto-Exported Variables

After HTTP step execution, variables are exported with the pattern <step_name>_http_resp:
# Access as: {{step_name_http_resp.field}}
# Fields available:
#   status_code      - HTTP status code (int)
#   response_body    - Response body (string)
#   response_headers - Response headers (map)
#   content_length   - Response size in bytes (int)
#   response_time_ms - Request duration in ms (int)
#   error            - Error message if failed (string or null)
#   message          - Status message (string)
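
A sketch of consuming these exports in a later step, assuming a preceding `http` step named `fetch-api` (the exact expression syntax accepted by `pre_condition` may vary):
- name: save-response
  type: bash
  # Only persist the body if the request succeeded
  pre_condition: '{{fetch_api_http_resp.status_code}} == 200'
  command: echo '{{fetch_api_http_resp.response_body}}' > {{Output}}/api-data.json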

HTTP Features

  • Connection Pooling: Reuses connections for efficiency
  • Automatic Retries: Retries on network errors and 5xx responses (up to 3 attempts)
  • Timeout: Configurable via step timeout field (default: 30s)
  • Template Support: Headers and request body support {{variable}} interpolation
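
Since pooling and retries are built in, the main knob worth setting per step is the timeout. A minimal sketch using the common `timeout` step field:
- name: slow-api-call
  type: http
  url: "https://api.example.com/report/{{target}}"
  method: GET
  timeout: 2m   # overrides the 30s default; the retry behavior still applies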

llm

AI-powered processing using LLM APIs (OpenAI-compatible).

Chat Completion

- name: analyze-findings
  type: llm
  messages:
    - role: system
      content: You are a security analyst. Analyze the findings and provide a summary.
    - role: user
      content: |
        Analyze these vulnerability findings:
        {{readFile("{{Output}}/vulnerabilities.txt")}}
  exports:
    analysis: "{{analyze_findings_content}}"

Message Roles

| Role | Description |
|------|-------------|
| `system` | System prompt defining behavior |
| `user` | User input/question |
| `assistant` | Model’s previous response |
| `tool` | Tool/function call result |

With Tool Calling

Define tools the LLM can invoke (OpenAI-compatible function calling):
- name: intelligent-scan
  type: llm
  messages:
    - role: system
      content: You are a security scanner assistant.
    - role: user
      content: Analyze {{target}} and suggest next steps.
  tools:
    - type: function
      function:
        name: run_scan
        description: Execute a security scan
        parameters:
          type: object
          properties:
            scan_type:
              type: string
              enum: [port, vuln, web]
            target:
              type: string
              description: Target to scan
          required:
            - scan_type
            - target
  tool_choice: auto  # auto, none, or {"type": "function", "function": {"name": "run_scan"}}
Tool calls are available in exports as {{step_name_llm_resp.tool_calls}}.

Embeddings

Generate vector embeddings for text:
- name: generate-embeddings
  type: llm
  is_embedding: true
  embedding_input:
    - "{{readFile('{{Output}}/finding1.txt')}}"
    - "{{readFile('{{Output}}/finding2.txt')}}"
  exports:
    embeddings: "{{generate_embeddings_llm_resp.embeddings}}"

Multimodal Content (Vision)

Include images in messages:
- name: analyze-screenshot
  type: llm
  messages:
    - role: user
      content:
        - type: text
          text: "Analyze this application screenshot for security issues"
        - type: image_url
          image_url:
            url: "{{Output}}/screenshot.png"

Structured Output (JSON Schema)

Force structured JSON responses:
- name: structured-analysis
  type: llm
  messages:
    - role: system
      content: You are a security analyst.
    - role: user
      content: Analyze {{target}} and return structured findings.
  llm_config:
    response_format:
      type: json_schema
      json_schema:
        name: security_findings
        schema:
          type: object
          properties:
            severity:
              type: string
              enum: [critical, high, medium, low, info]
            findings:
              type: array
              items:
                type: object
                properties:
                  title:
                    type: string
                  description:
                    type: string
          required:
            - severity
            - findings

Configuration Override

Override global LLM settings per step:
- name: custom-llm
  type: llm
  llm_config:
    model: gpt-4-turbo
    max_tokens: 4000
    temperature: 0.3
    top_p: 0.95
    timeout: 5m
    max_retries: 5
    custom_headers:
      X-Custom-Header: value
  messages:
    - role: user
      content: Analyze {{target}}

Auto-Exported Variables

After LLM step execution:
# Access as: {{step_name_llm_resp.field}} or {{step_name_content}}
# Fields available in step_name_llm_resp:
#   id             - Response ID
#   model          - Model used
#   content        - Response text (first choice)
#   finish_reason  - Why generation stopped
#   role           - Message role
#   tool_calls     - Tool calls if any (array)
#   all_contents   - All choices if n > 1
#   usage          - Token usage (prompt_tokens, completion_tokens, total_tokens)
#   embeddings     - Embedding vectors (for embedding mode)
#
# Shorthand: {{step_name_content}} directly contains the response text
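
A sketch of chaining the shorthand export into a later step, assuming an earlier `llm` step named `analyze-findings` (multi-line responses may need careful shell quoting):
- name: save-analysis
  type: bash
  command: echo "{{analyze_findings_content}}" > {{Output}}/analysis.md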

Provider Rotation

If multiple LLM providers are configured, the executor automatically:
  • Rotates to next provider on rate limits or errors
  • Retries up to max_retries * provider_count times
  • Records rate limit metrics for monitoring

fragment-step

Execute a pre-defined fragment (reusable step collection) inline.

Basic Usage

- name: run-subdomain-enum
  type: fragment-step
  fragment_name: subdomain-enum

With Overrides

- name: run-enum-custom
  type: fragment-step
  fragment_name: subdomain-enum
  override:
    threads: "20"
    timeout: "10m"
    Target: "custom.example.com"

Fields

| Field | Required | Description |
|-------|----------|-------------|
| `fragment_name` | Yes | Name of the included fragment to execute |
| `override` | No | Map of step fields or template variables to override |
Note: The fragment must be included in the workflow’s includes section with a matching fragment_name.
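
A hypothetical sketch of that `includes` declaration alongside the step (the `path` field name is an assumption; check your workflow schema):
includes:
  - fragment_name: subdomain-enum
    path: fragments/subdomain-enum.yaml   # hypothetical location field

steps:
  - name: run-subdomain-enum
    type: fragment-step
    fragment_name: subdomain-enum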

Common Step Fields

All steps support these fields:
- name: step-name              # Required: unique name
  type: bash                   # Required: step type

  depends_on:                  # Step dependencies (DAG execution)
    - previous-step-1
    - previous-step-2

  pre_condition: 'expr'        # Skip if false
  timeout: 30m                 # Step timeout (e.g., 30s, 30m, 1h, 1d)
  log: "{{Output}}/step.log"   # Log file path

  exports:                     # Export values
    var_name: "{{value}}"

  on_success:                  # Success handlers
    - action: log
      message: "Done"

  on_error:                    # Error handlers
    - action: continue

  decision:                    # Conditional routing
    switch: "{{value}}"
    cases:
      "match": { goto: other-step }
    default: { goto: fallback }

Field Reference

| Field | Description |
|-------|-------------|
| `name` | Required. Unique step identifier |
| `type` | Required. Step type (`bash`, `function`, `foreach`, etc.) |
| `depends_on` | Array of step names this step depends on (enables parallel execution) |
| `pre_condition` | Expression to evaluate; the step is skipped if false |
| `timeout` | Maximum execution time (default varies by step type) |
| `log` | Path to write the step output log |
| `exports` | Map of variable names to values to export |
| `on_success` | Actions to execute on successful completion |
| `on_error` | Actions to execute on failure |
| `decision` | Conditional routing based on variable values |
