
🏛️ Core Architecture


The Asynchronus system is architected as a robust, cloud-native environment, leveraging partnerships with Nvidia, Google, and AWS to ensure high performance, scalability, and cutting-edge AI capabilities. Our design focuses on a clear flow from user intent to autonomous execution, orchestrated by a central agent and powered by multiple AI models and specialized sub-agents.

Let's break down this flow:

1. Input & Task Initiation (Left)

The process begins when a task enters the system. This can originate from various sources:

  • User Input (Single Prompt): A user interacting via Shell, providing instructions in natural language.

  • Information Retrieval: A request or trigger that calls for fetching specific data.

  • DeFi Transaction Execution: A command to perform an on-chain action like a swap or stake.

  • Data Analysis: A task requiring data processing and insight generation.

These inputs are primarily directed towards our central intelligence and planning components.
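
To make the shape of an incoming task concrete, here is a minimal sketch of how such inputs could be represented before they reach the planning layer. The names (`Task`, `TaskKind`, `from_prompt`) are illustrative assumptions, not the actual Asynchronus API.

```python
# Hypothetical representation of an incoming task; not the real Asynchronus API.
from dataclasses import dataclass, field
from enum import Enum, auto


class TaskKind(Enum):
    USER_PROMPT = auto()      # natural-language instruction from Shell
    INFO_RETRIEVAL = auto()   # a request or trigger that needs specific data
    DEFI_EXECUTION = auto()   # on-chain action such as a swap or stake
    DATA_ANALYSIS = auto()    # data processing and insight generation


@dataclass
class Task:
    kind: TaskKind
    payload: str                                  # raw prompt or structured request
    metadata: dict = field(default_factory=dict)


def from_prompt(prompt: str) -> Task:
    """Wrap a raw Shell prompt so downstream components see a uniform Task."""
    return Task(kind=TaskKind.USER_PROMPT, payload=prompt)


task = from_prompt("Swap 1 ETH for USDC at the best available rate")
print(task.kind.name, "->", task.payload)
```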

2. Model Hub (Multi-Model LLMs)

This is the AI brain of Asynchronus. Rather than relying on a single Large Language Model, it acts as a hub that integrates and leverages multiple LLMs (such as Grok3, Perplexity, and OpenAI).

  • Function: It processes incoming tasks, understands intent, enriches data, makes preliminary decisions, and communicates with both the Main Agent and the Sub-Agent System. It selects the best AI model or combination of models for the specific task at hand.
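
As an illustration of the hub idea, the sketch below routes a task to one of several models based on its kind. The `ModelHub` class, the routing table, and the stubbed model clients are assumptions for demonstration, not the production selection logic.

```python
# Illustrative routing only: the real Model Hub's selection logic is not shown here.
class ModelHub:
    def __init__(self, models: dict):
        self.models = models  # maps a model name to a callable LLM client

    def select(self, task_kind: str):
        """Pick the model best suited to the task kind (toy heuristic)."""
        routing = {
            "DATA_ANALYSIS": "perplexity",   # retrieval-heavy work
            "DEFI_EXECUTION": "grok3",       # fast structured decisions
        }
        return self.models[routing.get(task_kind, "openai")]  # default fallback

    def process(self, task_kind: str, payload: str) -> str:
        return self.select(task_kind)(payload)


# Stub clients so the sketch runs without real API keys.
hub = ModelHub({
    "openai":     lambda p: f"[openai] plan for: {p}",
    "grok3":      lambda p: f"[grok3] decision for: {p}",
    "perplexity": lambda p: f"[perplexity] research for: {p}",
})
print(hub.process("DEFI_EXECUTION", "swap 1 ETH for USDC"))
```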

3. Main Asynchronus Agent (Task Planning & Coordination)

Acting as the central orchestrator or 'conductor', the Main Agent receives processed intent and data from the Model Hub.

  • Function: Its primary role is planning and coordination. It breaks down complex goals into a sequence of steps, determines which Sub-Agents are needed, and manages the overall workflow to ensure the user's objective is met efficiently and logically. It interacts closely with the Model Hub for continuous intelligence support.
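
A hedged sketch of this planning step: the Main Agent decomposes a goal into ordered steps and tags each with the Sub-Agent expected to run it. `Step` and `plan_swap` are hypothetical names used only for illustration.

```python
# Hypothetical planner sketch; Step and plan_swap are illustrative names only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Step:
    agent: str                          # which Sub-Agent runs this step
    action: str                         # what it should do
    depends_on: Optional[int] = None    # index of a prerequisite step, if any


def plan_swap(goal: str) -> list:
    """Toy decomposition of a swap request into agent-level steps."""
    return [
        Step(agent="Analyze",  action=f"estimate the best route for: {goal}"),
        Step(agent="Security", action="check the target contracts", depends_on=0),
        Step(agent="Swap",     action="execute the approved route", depends_on=1),
    ]


for i, step in enumerate(plan_swap("swap 1 ETH for USDC")):
    print(i, step.agent, "->", step.action)
```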

4. Sub-Agent System (The Specialists)

This layer consists of our specialized workers – the individual Asynchronus Agents (Analyze, Bridge, Swap, Security, etc.). They receive instructions and context from the Model Hub and Main Agent.

  • Function: Each Sub-Agent performs its specific function – be it Information Retrieval, Transaction Execution, Yield Optimization, Bridging, or Analytics. They execute these tasks, interact with blockchains or data sources, and report results back or pass data to other agents or the final interface.
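
The sketch below assumes a shared Sub-Agent interface so the Main Agent can dispatch to any specialist uniformly. The `SubAgent` protocol and the two example agents are illustrative, not the shipped implementations.

```python
# Assumed common interface; the real agents are not shown here.
from typing import Protocol


class SubAgent(Protocol):
    name: str

    def run(self, instruction: str, context: dict) -> dict:
        """Execute one specialised task and return a result for the next stage."""
        ...


class AnalyzeAgent:
    name = "Analyze"

    def run(self, instruction: str, context: dict) -> dict:
        # A real agent would query market data sources; this is a stub.
        return {"agent": self.name, "finding": f"analysed: {instruction}"}


class SwapAgent:
    name = "Swap"

    def run(self, instruction: str, context: dict) -> dict:
        # A real agent would sign and submit an on-chain transaction.
        return {"agent": self.name, "tx": f"simulated swap: {instruction}"}


registry = {agent.name: agent for agent in (AnalyzeAgent(), SwapAgent())}
print(registry["Swap"].run("1 ETH -> USDC", context={}))
```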

5. Asynchronus Computer (Transparent Interface)

This represents the final execution environment and output layer. It receives coordinated plans from the Main Agent and executed tasks/data from the Sub-Agent System.

  • Function: It acts as a transparent interface, potentially handling:

    • Final aggregation of results.

    • Presentation of information back to the user (via Shell).

    • Interaction with the On-Chain Proving Layer and Trusted Execution Environment (TEE) to ensure actions are verifiable and secure.

    • Providing a clear, understandable view of the autonomous operations performed.
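
A minimal sketch of the aggregation side of this layer, assuming results arrive as plain dictionaries: it collects Sub-Agent outputs, attaches a placeholder integrity digest standing in for the On-Chain Proving Layer / TEE attestation, and renders a summary for Shell. None of these names come from the Asynchronus codebase.

```python
# Placeholder aggregation: the digest stands in for real on-chain/TEE proofs.
import hashlib
import json


def aggregate(results: list) -> dict:
    """Bundle Sub-Agent results and attach a placeholder verification record."""
    body = {"results": results}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "proof": {"type": "placeholder-digest", "value": digest}}


def render_for_shell(report: dict) -> str:
    """Produce the transparent, human-readable view returned to Shell."""
    lines = [f"- {r['agent']}: {list(r.values())[-1]}" for r in report["results"]]
    lines.append(f"verification: {report['proof']['value'][:16]}...")
    return "\n".join(lines)


report = aggregate([
    {"agent": "Analyze", "finding": "best route identified"},
    {"agent": "Swap", "tx": "simulated swap: 1 ETH -> USDC"},
])
print(render_for_shell(report))
```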

In Summary:

This architecture allows Asynchronus to take a simple user prompt, understand its deep intent using multiple AIs, plan a complex, multi-step execution strategy, dispatch tasks to specialized agents, and finally execute and present the results securely and transparently, all within a high-performance cloud environment.
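
Putting the hypothetical pieces from the sketches above together (assuming they are defined in the same module), the end-to-end flow reads as a short pipeline:

```python
# Reuses from_prompt, hub, plan_swap, registry, aggregate and render_for_shell
# from the sketches above; all of them are hypothetical stand-ins.
def handle(prompt: str) -> str:
    task = from_prompt(prompt)                           # 1. input & task initiation
    intent = hub.process(task.kind.name, task.payload)   # 2. Model Hub
    steps = plan_swap(intent)                            # 3. Main Agent planning
    results = [registry[s.agent].run(s.action, {})       # 4. Sub-Agent execution
               for s in steps if s.agent in registry]
    return render_for_shell(aggregate(results))          # 5. transparent interface


print(handle("Swap 1 ETH for USDC at the best available rate"))
```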

The Asynchronus Core Architecture, illustrating the flow of tasks from input through planning, intelligence, specialized execution, and the final interface.