Overview
The `llm` step type enables:
- Chat completions for analysis
- Tool/function calling
- Text embeddings
- Multimodal content (images)
- Structured outputs
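As a rough illustration of how these capabilities fit together, a minimal `llm` step might look like the sketch below. The field names (`steps`, `name`, `type`, `prompt`) are assumptions about the workflow schema, not confirmed syntax.

```yaml
# Hypothetical minimal llm step; field names are assumptions.
steps:
  - name: summarize-findings
    type: llm
    prompt: "Summarize the scan results in three bullet points."
```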
Configuration
Settings
In `osm-settings.yaml`:
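A hedged sketch of what the LLM block in `osm-settings.yaml` could contain; every key name here is an assumption, and `${OPENAI_API_KEY}`-style interpolation may not be supported:

```yaml
# Hypothetical settings block; key names are assumptions.
llm:
  provider: openai
  model: gpt-4o-mini
  base_url: https://api.openai.com/v1
  api_key: ${OPENAI_API_KEY}   # read from the environment
```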
Environment Variables
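`OPENAI_API_KEY` is the conventional variable read by OpenAI-compatible clients; whether this tool reads it (or defines its own names) is an assumption:

```shell
# Conventional key variable for OpenAI-compatible clients (assumption that
# this tool reads it). "sk-..." is a placeholder, not a real key.
export OPENAI_API_KEY="sk-..."
```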
Chat Completion
Basic Usage
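A basic chat-completion step might be sketched as follows. The `messages` layout mirrors the standard OpenAI chat format; the step-level keys and the `{{scan_result}}` template variable are assumptions:

```yaml
# Hypothetical chat completion step.
- name: analyze-output
  type: llm
  messages:
    - role: system
      content: "You are a security analyst. Be concise."
    - role: user
      content: "Explain this scan output: {{scan_result}}"
```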
Message Roles
| Role | Description |
|---|---|
| system | System instructions |
| user | User input |
| assistant | Previous AI response |
| tool | Tool call result |
Multi-turn Conversation
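Multi-turn context is carried by replaying earlier turns in the `messages` list, which is standard chat-completion behavior; the surrounding keys are assumptions:

```yaml
# Hypothetical multi-turn message list; earlier turns give the model context.
messages:
  - role: system
    content: "Answer concisely."
  - role: user
    content: "Which ports are open on the target?"
  - role: assistant
    content: "Ports 22, 80, and 443 are open."
  - role: user
    content: "Which of those is most commonly attacked?"
```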
Tool Calling
Define Tools
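Tool definitions in OpenAI-compatible APIs follow a JSON-Schema shape; the sketch below uses that standard shape, but the `lookup_cve` tool and its wiring into a step are hypothetical:

```yaml
# Tool definition modeled on the OpenAI function-calling schema;
# the lookup_cve tool itself is hypothetical.
tools:
  - type: function
    function:
      name: lookup_cve
      description: "Look up details for a CVE identifier."
      parameters:
        type: object
        properties:
          cve_id:
            type: string
            description: "A CVE identifier string."
        required: [cve_id]
```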
Handle Tool Calls
Tool calls are exported so the workflow can handle them.
Embeddings
Generate Embeddings
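An embeddings step could look like the sketch below; the `mode` switch and the other field names are assumptions, though `text-embedding-3-small` is a real OpenAI embedding model:

```yaml
# Hypothetical embeddings step; field names are assumptions.
- name: embed-report
  type: llm
  mode: embeddings
  model: text-embedding-3-small
  input: "{{report_text}}"
```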
Use with Files
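Feeding a file produced by an earlier step into the embedding call might be sketched like this; the `input_file` key is purely an assumption:

```yaml
# Hypothetical: embed the contents of a file produced by a previous step.
- name: embed-file
  type: llm
  mode: embeddings
  input_file: output/report.txt
```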
Structured Output
JSON Schema
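Structured output in OpenAI-compatible APIs is requested via a `response_format` block carrying a JSON Schema; that part of the sketch follows the standard shape, while the step keys and schema contents are illustrative:

```yaml
# response_format follows the OpenAI structured-output shape;
# the step keys and schema contents are illustrative.
- name: extract-findings
  type: llm
  prompt: "Extract findings from: {{scan_result}}"
  response_format:
    type: json_schema
    json_schema:
      name: findings
      schema:
        type: object
        properties:
          severity: { type: string }
          summary: { type: string }
        required: [severity, summary]
```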
Configuration Override
Per-Step Config
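A per-step override presumably shadows the global settings for that step only; the key names below are assumptions:

```yaml
# Hypothetical per-step override of the global LLM settings.
- name: deep-analysis
  type: llm
  model: gpt-4o        # overrides the default model for this step
  temperature: 0.2     # lower temperature for more deterministic output
  prompt: "Review this finding in depth: {{finding}}"
```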
Extra Parameters
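Provider-specific parameters such as `top_p` or `max_tokens` would typically pass through to the API untouched; the `extra_params` map is an assumed name:

```yaml
# Hypothetical pass-through of provider parameters; extra_params is assumed.
- name: tuned-step
  type: llm
  prompt: "Classify this finding: {{finding}}"
  extra_params:
    top_p: 0.9
    max_tokens: 1024
```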
Multimodal Content
Image Analysis
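Multimodal messages in the OpenAI-compatible format mix `text` and `image_url` content parts; that message shape is standard, while the screenshot path and surrounding wiring are hypothetical:

```yaml
# Message shape modeled on the OpenAI multimodal format;
# the screenshot path is a placeholder.
messages:
  - role: user
    content:
      - type: text
        text: "Describe the login form in this screenshot."
      - type: image_url
        image_url:
          url: "file://screenshots/login.png"
```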
API Endpoint
The LLM endpoints follow the OpenAI-compatible API format.
Providers
OpenAI
Anthropic
Ollama (Local)
Azure OpenAI
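Hedged sketches of the provider settings follow. The base URLs are real conventions (OpenAI, Anthropic, Ollama's default port 11434, Azure's per-resource endpoint), but the key names are assumptions:

```yaml
# Hypothetical provider blocks; uncomment one. Key names are assumptions.
llm:
  provider: openai
  base_url: https://api.openai.com/v1

# provider: anthropic
# base_url: https://api.anthropic.com

# provider: ollama            # local, default port 11434
# base_url: http://localhost:11434

# provider: azure
# base_url: https://<your-resource>.openai.azure.com
```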
Use Cases
Vulnerability Analysis
Report Generation
Intelligent Filtering
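As one hedged example of these use cases, a triage step might combine a system prompt with scanner output; all names here are illustrative:

```yaml
# Hypothetical vulnerability-triage step; all names are illustrative.
- name: triage-findings
  type: llm
  messages:
    - role: system
      content: "You are a vulnerability triage assistant. Rank by severity."
    - role: user
      content: "Findings to rank: {{scanner_output}}"
```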
Workflow Functions
Use LLM functions directly in function steps without the full `llm` step type.
llm_invoke
A simple LLM call with a direct message.
llm_invoke_custom
An LLM call with a custom POST body template. Use `{{message}}` as a placeholder.
llm_conversations
A multi-turn conversation using the `role:content` format.
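The three function names come from this page, but the call syntax below is an assumption about how function steps invoke them:

```yaml
# Function names are from this page; the call syntax is an assumption.
- name: quick-llm-calls
  type: function
  functions:
    - llm_invoke("Summarize {{scan_result}} in one sentence")
    - llm_invoke_custom('{"messages": [{"role": "user", "content": "{{message}}"}]}', "Hello")
    - llm_conversations("system:Answer briefly", "user:Summarize the last scan")
```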
Best Practices
- Use system prompts for consistent behavior
- Limit context size - summarize large inputs
- Handle errors - LLM calls can fail
- Cache responses when possible
- Use structured output for parsing
- Consider local models for sensitive data
Next Steps
- Step Types - LLM step details
- API Overview - LLM endpoints
- Configuration - LLM settings