Iterative Retrieval-Generation

Iter-RetGen enhances Minos LLMs by integrating retrieval-augmented generation (RAG) into an iterative process. Unlike traditional RAG, where retrieval and generation are loosely coupled, Iter-RetGen actively involves the LLM in refining retrieval queries and processing retrieved knowledge holistically.

This iterative synergy enables AI agents to dynamically incorporate external knowledge, reason over complex trading strategies, and mitigate hallucinations, making it well suited to tasks that require real-time data and multi-step reasoning to execute trades.

Iter-RetGen workflow

  1. Initial Generation: When a user submits a trading strategy request, the Minos LLM produces an initial output (e.g., a partial answer or hypothesis) based on the query and its parametric knowledge.

  2. Context-Aware Retrieval: The initial output informs a retrieval step, where the system fetches relevant external knowledge (on-chain data) using the output as context to improve relevance.

  3. Iterative Refinement: The retrieved knowledge is fed back into the LLM, which generates an improved output. This cycle repeats, with each iteration refining retrieval queries and outputs until a satisfactory result is achieved (see the sketch after this list).

  4. Generation-Augmented Retrieval Adaptation: The LLM can further adapt the retriever by fine-tuning its relevance modeling based on generated outputs, enhancing future retrieval accuracy.
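
The following is a minimal sketch of this loop in Python. It is illustrative only: `llm_generate`, `retrieve_onchain_context`, and `is_satisfactory` are hypothetical stand-ins for Minos's actual LLM, on-chain retriever, and stopping check, none of which are specified on this page.

```python
from dataclasses import dataclass, field

@dataclass
class IterRetGenResult:
    answer: str
    iterations: int
    retrieved: list[str] = field(default_factory=list)

def llm_generate(query: str, context: list[str]) -> str:
    # Toy stand-in for the Minos LLM call. A real implementation would
    # prompt the model with the query plus any retrieved context.
    if not context:
        return f"draft answer to: {query}"
    return f"refined answer to: {query} (grounded in {len(context)} passages)"

def retrieve_onchain_context(query: str, draft: str, k: int = 5) -> list[str]:
    # Toy stand-in for the retriever. A real implementation would embed the
    # query together with the current draft and fetch the top-k on-chain records.
    return [f"on-chain record {i} for '{query}'" for i in range(k)]

def is_satisfactory(draft: str) -> bool:
    # Toy stopping check. A real implementation might use a self-critique
    # prompt or a model confidence score.
    return draft.startswith("refined")

def iter_retgen(query: str, max_iters: int = 3) -> IterRetGenResult:
    # Step 1: initial generation from parametric knowledge only.
    draft = llm_generate(query, context=[])
    retrieved: list[str] = []
    iterations = 0
    while iterations < max_iters:
        iterations += 1
        # Step 2: context-aware retrieval -- the draft steers what is fetched.
        retrieved = retrieve_onchain_context(query, draft)
        # Step 3: iterative refinement -- regenerate with retrieved knowledge.
        draft = llm_generate(query, context=retrieved)
        if is_satisfactory(draft):
            break
    return IterRetGenResult(answer=draft, iterations=iterations, retrieved=retrieved)

if __name__ == "__main__":
    result = iter_retgen("rotate 20% of the vault into SOL on a momentum signal")
    print(result.answer, f"({result.iterations} iteration(s))")
```

Step 4 is not part of the inner loop above; in a setup like this it would plausibly run offline, for example by logging which retrieved passages survive into accepted outputs and using them as positive training pairs when fine-tuning the retriever's relevance model. That mechanism is an assumption here, not a documented Minos feature.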