Dynaseal - Secure LLM Agent Framework

Dynaseal Paper

Download Dynaseal Paper

Why Dynaseal?

Imagine a future where LLM agents are ubiquitous on edge devices. Today, those devices typically reach large models through a long-lived, easily extracted API-key, which invites abuse such as runaway billing or server overload. Dynaseal addresses this by introducing a dynamic token system that limits the scope and lifetime of API access, ensuring secure, controlled communication between edge devices and large-model providers.

Project Overview

Dynaseal is a server-side framework designed for future edge-based LLM agents. It employs a dynamic token mechanism, similar to OSS temporary credentials, to restrict which models and parameters an edge agent may use and how long its token remains valid. Edge devices communicate with the large-model provider directly, and a callback notifies the backend once each response completes.

System Design

The architecture consists of three main components: LLM Server, Backend, and Client.

  1. Backend Initialization: Requests LLM API-key from the LLM Server.
  2. Client Initialization: Requests authentication from the Backend, which returns a dynamic key specifying the permitted model, token limit, and other essential information.
  3. Client Request: Uses the dynamic key to request models from the LLM Server.
  4. LLM Server Response: Unpacks the dynamic key, verifies identity, and generates a response.
  5. Callback Notification: The LLM Server notifies the Backend of the client's request and response.
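The five steps above can be sketched in-process as follows. Every function name, key format, and callback field here is illustrative (the framework does not fix these), and the token is treated as opaque; its actual JWT structure is described in the next section.

```python
# Hypothetical walk-through of the Dynaseal flow; names and fields are
# illustrative only, not part of Dynaseal's real API.

def backend_init() -> str:
    # 1. Backend requests an API-key from the LLM Server.
    return "llm-api-key-123"  # placeholder value

def client_init(backend_api_key: str) -> str:
    # 2. Backend authenticates the client and returns a dynamic key that
    #    scopes the model, token budget, and lifetime of the request.
    return f"dynamic-key:{backend_api_key}:model=some-model:max_tokens=512"

def llm_server_handle(dynamic_key: str, prompt: str) -> tuple[str, dict]:
    # 3-4. LLM Server unpacks the dynamic key, verifies it, and responds.
    if not dynamic_key.startswith("dynamic-key:"):
        raise ValueError("invalid dynamic key")
    response = f"echo: {prompt}"  # stand-in for a real model response
    # 5. Payload the LLM Server would POST back to the Backend's callback URL.
    callback = {"event": "response_complete", "prompt": prompt, "response": response}
    return response, callback

api_key = backend_init()
dyn_key = client_init(api_key)
response, callback = llm_server_handle(dyn_key, "hello")
```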

Dynamic Key Design

The dynamic key is designed as a JWT (`header.payload.signature`). The payload contains essential information such as the API-key, model, maximum tokens, expiration time, and event ID. The signature is computed with the key the Backend registered with the LLM Server, so the LLM Server can verify that the dynamic key is genuine and untampered.
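A minimal stdlib-only sketch of issuing and verifying such a key, assuming an HS256-style HMAC signature over a shared key registered by the Backend (the signing key, claim names, and values are all hypothetical):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared key the Backend registered with the LLM Server.
SIGNING_KEY = b"backend-registered-secret"

def b64url(data: bytes) -> str:
    # JWT-style base64url encoding without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_dynamic_key(api_key: str, model: str, max_tokens: int,
                     ttl_s: int, event_id: str) -> str:
    # Backend side: pack the scoped claims and sign header.payload.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({
        "api_key": api_key,
        "model": model,
        "max_tokens": max_tokens,
        "exp": int(time.time()) + ttl_s,  # expiration time
        "event_id": event_id,
    }).encode())
    sig = b64url(hmac.new(SIGNING_KEY, f"{header}.{payload}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_dynamic_key(token: str) -> dict:
    # LLM Server side: recompute the signature and check expiry.
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(SIGNING_KEY, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    claims = json.loads(b64url_decode(payload))
    if claims["exp"] < time.time():
        raise ValueError("dynamic key expired")
    return claims
```

A production deployment would more likely use an established JWT library and rotate the registered key, but the shape of the token is the same.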

Implementation

The project is divided into three main folders:

  • LLM Server: The large-model backend, which verifies dynamic keys and serves model responses.
  • Backend: The business backend that authenticates clients and distributes dynamic keys.
  • Client: The edge device running the agent.

Getting Started

To start the project:

  1. Launch the LLM Server.
  2. Launch the Backend.
  3. Run the Client to verify successful invocation.

Future Work

Some features yet to be implemented include:

  • Storing callback requests in the database.
  • LLM Server and Backend registration.
  • And more...