🍯 Demo | 🦚Features | 📍Roadmap | 🛠️Contribute | 🏃Run Locally | 🌺Open Core
Waggle Dance is an experimental application focused on achieving user-specified goals. It provides a friendly but opinionated user interface for building agent-based systems. The project focuses on explainability, observability, concurrent generation, and exploration. It is currently in pre-alpha, and its development philosophy prefers experimentation over stability, as goal-solving and agent systems are rapidly evolving.
Waggle Dance takes a goal and passes it to a Planner Agent which streams an execution graph for sub-tasks. Each sub-task is executed as concurrently as possible by Execution Agents. To reduce poor results and hallucinations, sub-results are reviewed by Criticism Agents. Eventually, the Human in the loop (you!) will be able to chat with individual Agents and provide course-corrections if needed.
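The plan → execute → review loop described above can be sketched as follows. This is a minimal, hypothetical illustration: the `Planner`, `Executor`, and `Critic` interfaces and `runGoal` are invented for this example and are not Waggle Dance's actual API.

```typescript
// Hypothetical sketch of the plan -> execute -> review loop.
// These interfaces are illustrative, not the project's real types.

type SubTask = { id: string; description: string };

interface Planner {
  plan(goal: string): SubTask[]; // the Planner Agent emits sub-tasks
}

interface Executor {
  execute(task: SubTask): Promise<string>; // an Execution Agent runs one sub-task
}

interface Critic {
  review(task: SubTask, result: string): boolean; // a Criticism Agent accepts or rejects
}

async function runGoal(
  goal: string,
  planner: Planner,
  executor: Executor,
  critic: Critic,
): Promise<Map<string, string>> {
  const results = new Map<string, string>();
  for (const task of planner.plan(goal)) {
    let result = await executor.execute(task);
    // Sub-results are reviewed to reduce poor results and hallucinations;
    // here we simply retry once if the critic rejects.
    if (!critic.review(task, result)) {
      result = await executor.execute(task);
    }
    results.set(task.id, result);
  }
  return results;
}
```

In the real system the planner streams its graph and execution is concurrent; this sketch runs sub-tasks sequentially to keep the review step clear.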
It was originally inspired by Auto-GPT and has concurrency features similar to those found in gpt-researcher. Core tenets of the project include speed, accuracy, observability, and simplicity. Additionally, since many other agentic systems are written in Python, this project acts as a small counter-balance and is accessible to the large number of JavaScript developers.
An (unstable) API is also available via tRPC, as well as an API implemented within Next.js. The client side is mostly responsible for orchestrating and rendering the agent executions, while the API and server side execute the agents and store the results. This architecture is likely to be adjusted in the future.
- LLMs go brrr… Highly concurrent execution graph: sub-task branches without dependencies on each other can run concurrently.
- Adversarial agents that review results.
- Vector database for long-term memory.
- Explainable results and responsive UI: Graph visualizer, sub-task (agent) results, agent logs and events.
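The concurrent-execution-graph feature can be illustrated with a small sketch: tasks whose dependencies are all complete are launched together as one wave. This is a simplified illustration under assumed types, not the project's actual scheduler.

```typescript
// Illustrative sketch of concurrent execution over a dependency graph.
// Independent branches run in the same wave via Promise.all.

type Task = { id: string; dependsOn: string[] };

async function executeGraph(
  tasks: Task[],
  run: (t: Task) => Promise<string>,
): Promise<Map<string, string>> {
  const done = new Map<string, string>();
  let pending = [...tasks];
  while (pending.length > 0) {
    // Every task whose dependencies are complete forms one concurrent wave.
    const ready = pending.filter((t) => t.dependsOn.every((d) => done.has(d)));
    if (ready.length === 0) throw new Error("cycle or unmet dependency");
    const results = await Promise.all(ready.map(run));
    ready.forEach((t, i) => done.set(t.id, results[i]));
    pending = pending.filter((t) => !done.has(t.id));
  }
  return done;
}
```

For example, with tasks `b` and `c` both depending only on `a`, the second wave runs `b` and `c` concurrently before a final task that depends on both.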
Typescript ﹒ Langchain.js ﹒ T3 ﹒ Prisma ﹒ tRPC ﹒ Pinecone ﹒ Postgres ﹒ OpenAI API ﹒ MUI Joy
Live Project Roadmap Board ﹒ 🛠️Contribute
- Implement Graph of Thoughts architecture
- Human-in-the-loop (e.g. chat to provide course-corrections, chat freely with agents/chains)
- Loop detection (in-chain review)
- Support for Local LLMs and other LLM APIs such as LLaMa-2, Azure Private OpenAI, Claude, etc.
- Recalled skills a la Voyager/PolyGPT
- Agent data connections, e.g. GitHub, Google Drive, Databases, etc.
- Execution notifications (e.g. Slack, Email, etc.)
- Further execution methods and blends (e.g. Tree of thought, ongoing research)
Future
- Desktop and mobile apps
- Migrate from the Next.js Pages Router structure to the App Router structure
- Consider removing langchain
- Improved architecture for running agents
- Templates and sharing
Waggle Dance can be deployed using Docker or manually using Node.js. Configuration of `.env` vars is required. Docker support is coming soon:

```
docker-compose build && docker-compose up
```
- Node.js LTS
- pnpm
- Turbo: `pnpm add turbo --global`, or use `npx turbo` in place of `turbo` below.
- Copy `.env.example` to `.env` and configure the environment variables.
The T3 stack includes Prisma. Currently we are using Postgres. The database is used as the source of truth for the state of an app deployment: sessions, accounts, any saved goals/results, etc. Note that this is distinct from the user's uploaded documents, though it may store metadata about them.
```
pnpm db:generate
pnpm db:push
```

`db:generate` creates the local typings and DB info from the schema.prisma file (`./packages/db/prisma/schema.prisma`). `db:push` pushes the schema to the database provider (PostgreSQL by default). Run these commands on first install and whenever you make changes to the schema.
This is a T3 stack. You can check the boilerplate documentation for more details. To start the dev server, run:

```
turbo dev
```
Make sure you install the recommended extensions in the solution, particularly ESLint.
Linting is run on each build and can fail builds.
To get a full list of linting errors, run:

```
turbo lint
```

Some of these may be auto-fixable with:

```
turbo lint:fix
```

For the rest, you will need to open the associated file and fix the errors yourself. Limit `ts-ignore` to extreme cases.
As a best practice, run `turbo lint` before starting a feature and after finishing one, and fix any errors before sending a PR.
- Devs: CONTRIBUTING.md
- If you are not technical, you can still help by improving documentation, adding examples, or sharing your user stories with our community; any help or contribution is welcome!
- GPT best practices
- Jerry Liu (LLama Index) on state & history of Agentic AI, context management
- Join the discord
- Using AI Agents to Solve Complex Problems
- Examples of Prompt Based Apps
- Another Example of a Prompt Based App
- Python Notebook/Cookbook for Tinkering/Exploring
- Constitutional AI in RLHF
- Understand different types of memory and vector database techniques
- Interaction Nets
- https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting
- https://github.com/ysymyth/tree-of-thought-llm
- Everything in Helpful Docs above
- Maintainers and Contributors of Langchain.js
- Maintainers and Contributors of AutoGPT, AgentGPT
- big-AGI
- more...
The applications, packages, libraries, and the entire monorepo are freely available under the MIT license. The development process is open, and everyone is welcome to join. In the future, we may choose to develop extensions that are licensed for commercial use.