asterai Documentation
Overview
Create, integrate and deploy your AI agents with asterai - a platform for developing and deploying modular AI agents and fully featured AI applications.
What is asterai?
asterai is a platform for developing and deploying modular AI agents and fully featured AI applications.
asterai lets you create an AI application such as an agent or assistant and easily add integrations and features to it in a modular way. Program custom features and logic with plugins, or click to instantly add existing functionality from the plugin marketplace.
With asterai, you can write plugins in different programming languages, such as AssemblyScript, a TypeScript-like language, and deploy them from your terminal into your agent, instantly activating them.
This allows you to expose application functionality to LLMs (Large Language Models, such as OpenAI's GPT-4) as well as to other plugins.
From within a plugin, you have access to a wide range of tools out of the box, such as AI search, LLM calls, HTTP requests, WebSocket connections, and more, allowing you to seamlessly integrate a powerful AI stack into your application.
asterai lets you focus on your app, saving you time by giving you all the managed AI infrastructure and tools you need in a simple way.
Key Features of asterai
- Plugin-based, modular AI agent building framework
- AI search with a managed knowledge base and vector DBs
- SDK for building plugins
- SDK for querying agents: consume both structured data and natural language
- REST API endpoints for querying agents from any client
- CLI for deploying plugins
- Cloud console for managing agents
What can I do with asterai?
Lots of cool things!
- make natural language an additional input type for your application
- integrate AI search to make your app more efficient
- easily build custom, reliable chatbots, assistants and AI agents
- create system workflows capable of handling natural language
- get instant access to AI tools without worrying about infrastructure or scaling
Example use cases
Knowledge Base and AI search
Upload files to a knowledge base and connect it to your agent so that it can answer questions specific to your application. Your agent can also perform authenticated actions for your users, such as CRUD operations that would normally be performed via buttons.
Or, with the knowledge base, return structured results to build an AI search engine rather than an assistant or agent. You can also do both, for different parts of your app, with the same knowledge base. That is, your search bar and chatbot can use the same underlying technology.
Function Calls and Structured Outputs
Query your application from any environment using our REST API or our SDKs, and consume structured outputs as well as natural language, allowing you to show front-end widgets and trigger front-end function calls alongside an AI assistant's response.
For example, create an agent with a Weather plugin. The plugin can respond with an object containing the city name and temperature, allowing your front-end to consume that object to show a weather "widget" along with the natural language provided by the LLM.
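On the front-end, consuming that structured output is then plain application code. Here is a minimal TypeScript sketch, assuming a hypothetical WeatherReport shape with city and temperature fields (these names are illustrative, not part of the asterai API):
// Hypothetical shape of the Weather plugin's structured output.
interface WeatherReport {
  city: string;
  temperature: number;
}

// Render a simple widget alongside the assistant's natural language reply.
function renderWeatherWidget(report: WeatherReport): string {
  return `<div class="weather-widget">${report.city}: ${report.temperature}°C</div>`;
}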
Querying an asterai agent
With the JavaScript & TypeScript client library
You can use our TypeScript library to easily query an asterai agent from a web client or a web server with Node, Bun, Deno, etc. For a full interactive example in React, check out yeet-the-cube.
Examples
Query an agent and obtain a full text response back
import { AsteraiClient } from "@asterai/client";
const client = new AsteraiClient({
  appId: ASTERAI_APP_ID,
  queryKey: ASTERAI_PUBLIC_QUERY_KEY,
});
const response = await client.query({
  query: "what's the weather like in NY?"
});
console.log(await response.text());
Query an agent and obtain a response back token by token
import { AsteraiClient } from "@asterai/client";
const client = new AsteraiClient({
  appId: ASTERAI_APP_ID,
  queryKey: ASTERAI_PUBLIC_QUERY_KEY,
});
const response = await client.query({
  query: "what's the weather like in NY?"
});
let llmResponse = "";
response.onToken(token => {
  llmResponse += token;
});
response.onEnd(() => {
  // The full response has been received.
  console.log(llmResponse);
});
With the HTTP API
To query an app, use the HTTP request below:
POST https://api.asterai.io/app/:app_id/query/:query_mode
HTTP Response format
Currently, the only supported query mode is sse, where the response is a stream of Server-Sent Events (SSE) containing app response events.
Each SSE data event line will begin with one of these prefixes:
| SSE data event line prefix | Description |
|---|---|
| "llm-token: " | An LLM response token |
| "plugin-output: " | Plugin output message serialized with protobuf |
Authorization
Note that you must set the Authorization
header to an app query key.
Find our more about app query keys below.
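For example, here is a minimal TypeScript sketch that queries the sse endpoint with fetch and collects the LLM tokens. It reuses the ASTERAI_APP_ID and ASTERAI_PUBLIC_QUERY_KEY placeholders from the client library examples, assumes a runtime where the response body is async-iterable (such as Node 18+), and simplifies the SSE framing (it expects standard "data:" lines whose payload starts with the documented prefixes, and does not handle events split across chunks):
const res = await fetch(`https://api.asterai.io/app/${ASTERAI_APP_ID}/query/sse`, {
  method: "POST",
  // The Authorization header is set to an app query key.
  headers: { Authorization: ASTERAI_PUBLIC_QUERY_KEY },
  // The body is the raw user query text.
  body: "what's the weather like in NY?",
});

const decoder = new TextDecoder();
let llmResponse = "";
// Read the SSE stream and keep only lines carrying LLM tokens.
for await (const chunk of res.body!) {
  for (const line of decoder.decode(chunk, { stream: true }).split("\n")) {
    const match = line.match(/^data:\s*llm-token: (.*)$/);
    if (match) {
      llmResponse += match[1];
    }
  }
}
console.log(llmResponse);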
Query keys
To query your app, you need a query key. You can generate a new public query key via the dashboard.
There are two types of keys: public keys and user keys.
Public query key
When querying an app with a public key, there is no associated user data.
This is used when you want everyone to have the same experience in your app, or when the end user is not logged in.
Note that "public" in this context simply means that there is no associated user for the key. Whether your app is open to the public is your choice. If your app accesses internal or sensitive systems even without a user being logged in, the "public query key" should not be leaked, for security reasons. This depends on the design of your app and plugins.
User query keys
When querying an app with a user key, there is an associated user ID which allows plugins to work with user-specific features, such as accessing user key-value storage.
It is possible to generate a user query key using the following endpoint:
GET https://api.asterai.io/app/:app_id/query/key/user/:app_user_id
Response example:
{"key":"00000000-0000-0000-0000-000000000000"}
Where :app_id is your app's ID and :app_user_id is the ID of your app's user, as a unique string (maximum of 64 characters).
Note that this is an authenticated request, so you must set the Authorization header to your API key, which can be found on your account page in the dashboard.
Once you have the user query key, you can query your app with it by setting the Authorization header in the regular app query endpoint (POST /app/:app_id/query).
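Putting this together, a minimal TypeScript sketch might look like the following. ASTERAI_APP_ID, APP_USER_ID and ASTERAI_API_KEY are placeholders, and the key generation step would normally run on your server, since it uses your account API key:
// 1. Generate a query key for this user (authenticated with your account API key).
const keyRes = await fetch(
  `https://api.asterai.io/app/${ASTERAI_APP_ID}/query/key/user/${APP_USER_ID}`,
  { headers: { Authorization: ASTERAI_API_KEY } },
);
const { key } = await keyRes.json();

// 2. Query the app with the user key in the Authorization header.
const queryRes = await fetch(`https://api.asterai.io/app/${ASTERAI_APP_ID}/query/sse`, {
  method: "POST",
  headers: { Authorization: key },
  body: "what did I order last week?",
});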
Client libraries for other languages
If you'd like to show your interest in a client library for a specific programming language, join us on Discord and let us know!
Cloud Console & CLI
this section is a work in progress
The cloud console can be accessed from the asterai website. It lets you create and manage applications.
The CLI tool, which can be installed on your computer via NPM, is used to deploy plugins into your applications.
Once a plugin has been added to an application, you can access the cloud console from asterai's website and turn that plugin on or off, or delete the plugin. Other features, such as Vector DBs, can also be managed from the cloud console.
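For reference, a typical CLI workflow, summarised from the commands used throughout these docs (project and app names are placeholders), looks like this:
npm install -g @asterai/cli          # install the CLI
asterai auth <your_api_key>          # authenticate the CLI with your API key
asterai init my-plugin               # initialise a new plugin project
asterai codegen                      # generate types from the plugin manifest
asterai build                        # compile the plugin to WebAssembly
asterai deploy --app <your_app_id>   # deploy the plugin to your app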
Plugins
this section is a work in progress
Plugins are the core feature of asterai. A plugin is a WebAssembly module, which can be written in AssemblyScript, a TypeScript-like language.
Plugins have access to a range of functionality such as LLMs and Vector DBs. They can also make HTTP calls and establish and manage WebSocket connections.
Therefore, a plugin can act as a bridge connecting AI to application logic.
Plugin manifest
The plugin manifest is a Protobuf (.proto) file used to expose functionality from your plugin to both an LLM and other plugins.
A plugin manifest for a burger ordering plugin could look something like this:
syntax = "proto3";
service BurgerShop {
  // Orders a burger to be delivered to a specific address.
  rpc orderBurger(Order) returns (OrderResult);
}

message Order {
  // The address to deliver the burger to.
  string address = 1;
}

message OrderResult {
  string system_message = 1;
}
The manifest file describes the name of your plugin through the service keyword in protobuf, the exported plugin functions through the rpc keyword, and the message types exchanged as function inputs and outputs.
The manifest also produces convenience types for each message declared in it.
To generate these types from the protobuf manifest, run asterai codegen from your plugin's directory; the generated types can then be imported from your AssemblyScript code.
Plugin module
The plugin module is the AssemblyScript file that handles requests.
It compiles to a WebAssembly module with asterai build and can be deployed to your app with asterai deploy --app <your_app_id>.
Following from the burger example above, the plugin module could look something like this:
import { Log } from "@asterai/sdk";
import { Order } from "./generated/Order";
import { OrderResult } from "./generated/OrderResult";

export function orderBurger(order: Order): OrderResult {
  Log.info("a burger is being ordered!");
  // TODO: make http call to burger API.
  return new OrderResult(`burger delivered to ${order.address}`);
}
Notice how the function receives structured data and outputs structured data, as defined by the types in the plugin manifest. This allows for seamless interoperability between reliable code and LLMs.
Marketplace
this section is a work in progress
The asterai marketplace will be a platform where users can list their own plugins and consume plugins made by other asterai users.
Hello World
this section is a work in progress
This step-by-step guide describes how to create a new asterai application that simply exposes a hardcoded name to the LLM via a plugin. When you ask the LLM who to say hello to, it will have access to the name within the plugin.
Think of this like sending a message to a pre-defined person in a contacts list, except that in this example plugin the actual HTTP request for sending the DM is not made.
- Sign into the asterai console
- Create a new application
- Install the asterai CLI:
npm install -g @asterai/cli
- Generate a new API key in asterai, and authenticate the CLI with it:
asterai auth <your_api_key>
- Initialise a new project called hello-world:
asterai init hello-world
- Install dependencies in the new project:
cd hello-world
npm i
- Inspect the contents of plugin.asterai.proto, the plugin manifest file
- Inspect the contents of plugin.ts, the plugin file
- Modify the contents of plugin.asterai.proto so that it has a function called sayHello instead of orderBurger
- Modify the function comment to "say hello to a pre-defined person"
- This function call does not require any arguments, therefore update the input and output types to Empty, defined as message Empty {} (a sketch of the resulting manifest is shown after this list)
- Run asterai codegen. Here, this has no effect because there are no arguments (the messages are empty), but it is good practice to always run codegen after modifying the manifest file to ensure the types are up to date
- Modify the plugin.ts file to use the new function name, sayHello
- Modify the return statement to return "said hello to Alice"
- Get your app ID from the cloud console
- Deploy the plugin:
asterai deploy --app <your_app_id>
- Refresh the page in the cloud console. Now you should see the new, active plugin that was just deployed.
- In the playground section, ask "who should I say hello to?". The app should reply with "Alice".
- You can also make a POST request to https://api.asterai.io/app/:your_app_id/query with a raw text body containing the user query, setting the Authorization header to a query key. The response will be an SSE stream of tokens.
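For reference, after the steps above the plugin manifest might look roughly like this (a sketch following the burger example; the HelloWorld service name is illustrative and your generated template may differ):
syntax = "proto3";

service HelloWorld {
  // Says hello to a pre-defined person.
  rpc sayHello(Empty) returns (Empty);
}

message Empty {}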
Hopefully this hello world example illustrates how asterai can be used to connect AI to applications. Within plugins, it is possible to access LLMs and Vector DBs, make HTTP calls, and even use WebSockets.