AI

The AI component centers on using an LLM to interpret user intents and execute actions. We also aim to provide an intuitive onboarding experience for new users, with interactive guidance from the AI.

Current architecture

The current AI architecture

Our backend receives users’ requests and immediately converts them into executable code that is then sent to the frontend. For example, if a user requests, "Swap 50% of my USDC.e to wS," the system translates that into code similar to:

// Take 50% of the user's current USDC.e balance
const inputAmount = 0.5 * GET_BALANCE("USDC.e");
// Swap that amount of USDC.e into wS and capture the amount received
const outputAmount = SWAP("USDC.e", "wS", inputAmount);
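
The snippet below is a minimal sketch of how such generated code could be executed on the frontend, assuming the agent's primitives are injected as function parameters. The Primitives interface and the runGenerated helper are illustrative assumptions rather than the actual implementation; only GET_BALANCE and SWAP come from the example above.

// Hypothetical primitives made available to generated code.
// Signatures are assumptions (shown synchronous to match the example above).
interface Primitives {
  GET_BALANCE(token: string): number;                              // wallet balance of `token`
  SWAP(tokenIn: string, tokenOut: string, amount: number): number; // amount of `tokenOut` received
}

// One possible executor: wrap the generated snippet in a function whose
// parameters are the primitive names, then invoke it with the real implementations.
function runGenerated(code: string, p: Primitives): void {
  const run = new Function("GET_BALANCE", "SWAP", code);
  run(p.GET_BALANCE, p.SWAP);
}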

This approach is unconventional compared with more mainstream methods such as tool calling, yet it offers several advantages:

  • Sonic Speed: Even for actions that require multiple steps, the entire request is processed in a single operation. This minimizes wait time after the initial processing, providing a seamless user experience.

  • Flexibility: Because the agent executes the code directly, it is not confined by the limitations of predefined tool calls. This is particularly beneficial for sophisticated tasks that demand precise calculations or data manipulation, or that involve a long chain of actions where certain steps must occur sequentially (see the sketch after this list).
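
To illustrate the Flexibility point, consider a chained request such as "Swap half of my USDC.e to wS, and if I receive less than 100 wS, swap half of it back." The request, the threshold, and the exact code are illustrative assumptions; only the GET_BALANCE and SWAP primitives and the USDC.e / wS tokens come from the earlier example. A tool-calling loop would typically need several round trips here, whereas a single generated snippet expresses the whole chain:

// Hypothetical generated code for the chained request above
const half = 0.5 * GET_BALANCE("USDC.e");
const received = SWAP("USDC.e", "wS", half);
// The second swap depends on the first swap's output, so the steps must run in order
if (received < 100) {
  SWAP("wS", "USDC.e", 0.5 * received);
}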

Future architecture

We are actively developing a new framework aimed at further enhancing agent performance and allowing delegated tasks. The goals for this next-generation architecture are to maintain speed and precision while incorporating richer context.
