Documentation Index Fetch the complete documentation index at: https://0g.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Framework Integrations
The 0G AI SDK integrates seamlessly with popular AI frameworks and libraries, bringing decentralized compute to your existing workflows.
Available Providers
Choose your preferred AI framework to get started with 0G decentralized compute. Click on any provider to view the integration progress:
LangChain: The most popular framework for building LLM applications with chains, agents, and memory. Status: 🔄 In Progress - View PR
Vercel AI SDK: React-first AI SDK for building conversational UIs with streaming and type safety. Status: 🔄 In Progress - View PR
OpenRouter: Unified API for accessing multiple AI models with intelligent routing and fallbacks. Status: 🔄 In Progress - View PR
LlamaIndex: Data framework for building RAG applications with advanced document processing. Status: 🔄 In Progress - View PR
Integration Status
All integrations are currently under active development. Each provider card above links directly to the GitHub pull request where you can:
📋 Track Progress: See the current status of the integration
💬 Join Discussion: Participate in technical discussions
🔍 Review Code: Examine the implementation details
📝 Provide Feedback: Share your thoughts and suggestions
Supported Models
| Model | Provider Address | Best For | Framework Support |
| --- | --- | --- | --- |
| llama-3.3-70b-instruct | 0xf07240Efa67755B5311bc75784a061eDB47165Dd | General AI tasks, conversations, content generation | All frameworks |
| deepseek-r1-70b | 0x3feE5a4dd5FDb8a32dDA97Bed899830605dBD9D3 | Complex reasoning, analysis, code generation | All frameworks |
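For use in code, the table above can be captured as a small lookup. This is a convenience sketch (the `ZG_MODELS` and `provider_address` names are illustrative, not part of any SDK); the addresses are the ones documented above:

```python
# Model registry mirroring the Supported Models table above.
ZG_MODELS = {
    "llama-3.3-70b-instruct": "0xf07240Efa67755B5311bc75784a061eDB47165Dd",
    "deepseek-r1-70b": "0x3feE5a4dd5FDb8a32dDA97Bed899830605dBD9D3",
}

def provider_address(model: str) -> str:
    """Look up the on-chain provider address for a supported model."""
    try:
        return ZG_MODELS[model]
    except KeyError:
        raise ValueError(
            f"Unsupported model: {model!r}; choose from {sorted(ZG_MODELS)}"
        )
```

Keeping the mapping in one place makes it easy to switch models without copying addresses around.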
LangChain Integration
LangChain 0G Provider: Official LangChain integration for 0G decentralized compute
Installation
```bash
pip install langchain-nebula
```
Basic Usage
```python
from langchain_0g import ZGChat
from langchain.schema import HumanMessage, SystemMessage

# Initialize with 0G provider
llm = ZGChat(
    provider_address="0xf07240Efa67755B5311bc75784a061eDB47165Dd",  # llama-3.3-70b-instruct
    private_key="your-private-key",
    temperature=0.7,
    max_tokens=1000,
)

# Simple chat
response = llm.invoke([
    SystemMessage(content="You are a helpful AI assistant."),
    HumanMessage(content="Explain quantum computing in simple terms."),
])
print(response.content)
```
Advanced LangChain Features
Chains
Agents
Memory & Retrieval
```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_0g import ZGChat

# Create a prompt template
prompt = PromptTemplate(
    input_variables=["topic", "audience"],
    template="Explain {topic} to a {audience} audience in a clear and engaging way.",
)

# Create a chain backed by the 0G LLM
chain = LLMChain(
    llm=ZGChat(
        provider_address="0xf07240Efa67755B5311bc75784a061eDB47165Dd",
        private_key="your-private-key",
    ),
    prompt=prompt,
)

# Run the chain
result = chain.run(topic="blockchain technology", audience="beginner")
print(result)
```
Vercel AI SDK Integration
Vercel AI SDK 0G Provider: Official Vercel AI SDK integration for 0G decentralized compute
Installation
```bash
npm install @ai-sdk/nebula
```
Basic Usage
```typescript
import { createZG } from '@ai-sdk/0g';
import { generateText, streamText } from 'ai';

// Initialize 0G provider
const zg = createZG({
  providerAddress: '0xf07240Efa67755B5311bc75784a061eDB47165Dd', // llama-3.3-70b-instruct
  privateKey: 'your-private-key',
});

// Generate text
const { text } = await generateText({
  model: zg('llama-3.3-70b-instruct'),
  prompt: 'Explain the benefits of decentralized AI compute.',
});

console.log(text);
```
Streaming with React
React Streaming
API Route
Tool Usage
```tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```
OpenRouter Integration
OpenRouter 0G Provider: OpenRouter integration bringing 0G models to the OpenRouter ecosystem
Installation
```bash
npm install @openrouter/nebula-sdk-provider
```
Usage
```typescript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { generateText } from 'ai';

// Configure OpenRouter with the 0G provider
const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
  providers: {
    '0g': {
      providerAddress: '0xf07240Efa67755B5311bc75784a061eDB47165Dd',
      privateKey: process.env.ZG_PRIVATE_KEY,
    },
  },
});

// Use 0G models through OpenRouter
const { text } = await generateText({
  model: openrouter('0g/llama-3.3-70b-instruct'),
  prompt: 'Explain the advantages of decentralized AI infrastructure.',
});

console.log(text);
```
Model Routing
```typescript
// Route between different 0G models based on task complexity
// (continues the openrouter setup from the previous snippet)
const routeModel = (taskComplexity: 'simple' | 'complex') => {
  return taskComplexity === 'complex'
    ? openrouter('0g/deepseek-r1-70b') // Complex reasoning
    : openrouter('0g/llama-3.3-70b-instruct'); // General tasks
};

// Simple task
const simpleResult = await generateText({
  model: routeModel('simple'),
  prompt: 'Write a brief summary of renewable energy.',
});

// Complex task
const complexResult = await generateText({
  model: routeModel('complex'),
  prompt: 'Analyze the economic implications of transitioning to renewable energy, considering supply chain, job market, and policy factors.',
});
```
LlamaIndex Integration
LlamaIndex 0G Provider: LlamaIndex integration for RAG applications with 0G decentralized compute
Installation
```bash
pip install llama-index-llms-nebula
```
Basic RAG Setup
```python
from llama_index.llms.zg import ZG
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.settings import Settings

# Configure 0G LLM
Settings.llm = ZG(
    provider_address="0xf07240Efa67755B5311bc75784a061eDB47165Dd",  # llama-3.3-70b-instruct
    private_key="your-private-key",
    temperature=0.1,
)

# Load documents
documents = SimpleDirectoryReader("./data").load_data()

# Create index
index = VectorStoreIndex.from_documents(documents)

# Create query engine
query_engine = index.as_query_engine()

# Query the documents
response = query_engine.query("What are the key findings in the research papers?")
print(response)
```
Advanced RAG with 0G
Multi-Modal RAG
Agent-based RAG
Custom Retrieval
```python
import chromadb
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.core.settings import Settings
from llama_index.llms.zg import ZG
from llama_index.vector_stores.chroma import ChromaVectorStore

# Setup vector store
chroma_client = chromadb.PersistentClient()
chroma_collection = chroma_client.create_collection("documents")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Configure with 0G for reasoning-heavy tasks
Settings.llm = ZG(
    provider_address="0x3feE5a4dd5FDb8a32dDA97Bed899830605dBD9D3",  # deepseek-r1-70b
    private_key="your-private-key",
    temperature=0.2,
)

# Load documents (as in the basic RAG setup)
documents = SimpleDirectoryReader("./data").load_data()

# Create index with custom storage
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)

# Advanced querying
query_engine = index.as_query_engine(
    similarity_top_k=5,
    response_mode="tree_summarize",
)
response = query_engine.query(
    "Analyze the trends across all documents and provide strategic recommendations."
)
print(response)
```
Integration Benefits
Decentralized Advantages
No Vendor Lock-in: Use familiar frameworks while avoiding dependency on centralized AI providers
Cost Efficiency: Competitive pricing through a decentralized compute marketplace
Censorship Resistance: The decentralized network ensures availability and resistance to censorship
Privacy & Security: TEE (Trusted Execution Environment) verification for secure computation
Framework-Specific Benefits
| Framework | Key Benefits | Use Cases |
| --- | --- | --- |
| LangChain | Seamless chain/agent integration, extensive ecosystem | Complex workflows, multi-step reasoning |
| Vercel AI SDK | React streaming, edge deployment, type safety | Real-time chat, web applications |
| OpenRouter | Model routing, fallback strategies, unified API | Production applications, model comparison |
| LlamaIndex | RAG optimization, document processing, vector search | Knowledge bases, document analysis |
Getting Started
1. Choose your framework based on your use case and existing stack
2. Install the appropriate 0G provider using the installation commands above
3. Configure with your private key and preferred model provider address
4. Start building with decentralized AI compute!
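The examples on this page pass `private_key` inline for brevity; in practice, keep credentials out of source code. A minimal configuration helper, assuming environment variable names of our own choosing (`ZG_PRIVATE_KEY` and `ZG_PROVIDER_ADDRESS` are illustrative, not an SDK convention):

```python
import os

# Default to the llama-3.3-70b-instruct provider address from the table above.
DEFAULT_PROVIDER = "0xf07240Efa67755B5311bc75784a061eDB47165Dd"

def load_zg_config() -> dict:
    """Build provider kwargs from the environment instead of hardcoding secrets."""
    private_key = os.environ.get("ZG_PRIVATE_KEY")
    if not private_key:
        raise RuntimeError("Set ZG_PRIVATE_KEY before initializing the 0G provider")
    return {
        "provider_address": os.environ.get("ZG_PROVIDER_ADDRESS", DEFAULT_PROVIDER),
        "private_key": private_key,
    }
```

The returned dict can then be splatted into whichever provider constructor you use, e.g. `ZGChat(**load_zg_config())`.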
Migration Guide
From OpenAI to 0G
Before (OpenAI)

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const completion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Hello!' }],
  model: 'gpt-4o',
});
```

After (0G via Vercel AI SDK)
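The 0G side mirrors the Vercel AI SDK example earlier on this page. This is a sketch under the assumption that `createZG` from `@ai-sdk/0g` works as shown in the Basic Usage section (the integration is still in progress, so the exact package name may change):

```typescript
import { createZG } from '@ai-sdk/0g';
import { generateText } from 'ai';

// Same chat-completion call, now routed to a 0G provider.
const zg = createZG({
  providerAddress: '0xf07240Efa67755B5311bc75784a061eDB47165Dd', // llama-3.3-70b-instruct
  privateKey: process.env.ZG_PRIVATE_KEY,
});

const { text } = await generateText({
  model: zg('llama-3.3-70b-instruct'),
  messages: [{ role: 'user', content: 'Hello!' }],
});
```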
From Anthropic to 0G
Before (Anthropic)

```python
import anthropic

client = anthropic.Anthropic(
    api_key="your-api-key",
)

message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1000,
    messages=[{"role": "user", "content": "Hello!"}],
)
```

After (0G via LangChain)
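The 0G equivalent via LangChain follows the Basic Usage example earlier on this page. A sketch assuming `ZGChat` from `langchain_0g` behaves as documented there (the integration is still in progress):

```python
from langchain_0g import ZGChat
from langchain.schema import HumanMessage

# Same single-turn message, now served by a 0G provider.
llm = ZGChat(
    provider_address="0xf07240Efa67755B5311bc75784a061eDB47165Dd",  # llama-3.3-70b-instruct
    private_key="your-private-key",
    max_tokens=1000,
)

message = llm.invoke([HumanMessage(content="Hello!")])
print(message.content)
```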
GitHub Discussions: Join framework-specific discussions in each integration repository
Discord: Connect with the 0G community for integration support
Documentation: Comprehensive guides for each framework integration
Examples: Production-ready examples in each integration repository
Ready to integrate 0G with your favorite framework? Check out the specific integration repositories linked above for detailed setup guides and examples!