Create Chat

Create a new chat instance to interact with AI models.

Constructor

new Chat(config: ChatConfig)

Parameters

config (ChatConfig, required)
Configuration object for the chat instance.

config.apiKey (string, required)
Your API key for authentication.

config.model (string, default: "gpt-3.5-turbo")
The AI model to use for chat completions.

config.maxTokens (number, default: 1000)
Maximum number of tokens in the response.

config.temperature (number, default: 0.7)
Controls randomness in responses (0–1).

config.timeout (number, default: 30000)
Request timeout in milliseconds.

config.baseURL (string, optional)
Custom base URL for the API endpoint.
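A minimal sketch of constructing a Chat instance with these options; the import path for Chat is an assumption based on the example below, and the values are placeholders:

import { Chat } from '@src/index'; // assumed export path, mirroring the createAgent example below

const chat = new Chat({
  apiKey: 'your-api-key',          // required
  model: 'llama-3.3-70b-instruct', // overrides the default model
  maxTokens: 1000,
  temperature: 0.7,
  timeout: 30000
});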

Response

Returns a Chat instance that can be used to send messages and stream responses.
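For illustration only, a sketch of how the returned instance might be used; sendMessage and streamMessage are placeholder names standing in for the SDK's messaging and streaming methods, not confirmed API:

// Hypothetical usage; sendMessage and streamMessage are placeholder method names.
const reply = await chat.sendMessage('Hello, world!');
console.log(reply);

// Streaming variant (also hypothetical), consuming chunks as they arrive.
for await (const chunk of chat.streamMessage('Tell me about 0G')) {
  process.stdout.write(chunk);
}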

Example

import { createAgent } from '@src/index';

const agent = await createAgent({
  name: 'My Assistant',
  providerAddress: '0xf07240Efa67755B5311bc75784a061eDB47165Dd', // llama-3.3-70b-instruct
  memoryBucket: 'my-agent-memory',
  privateKey: 'your-private-key',
  maxTokens: 2000,
  temperature: 0.8
});

Available Models

The 0G AI SDK connects to models running on the 0G decentralized compute network:
Model: llama-3.3-70b-instruct
Provider Address: 0xf07240Efa67755B5311bc75784a061eDB47165Dd
Description: State-of-the-art 70B parameter model for general AI tasks
Verification: TEE (TeeML)

Model: deepseek-r1-70b
Provider Address: 0x3feE5a4dd5FDb8a32dDA97Bed899830605dBD9D3
Description: Advanced reasoning model optimized for complex problem solving
Verification: TEE (TeeML)
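As in the example above, the providerAddress field is how a model is selected; a minimal sketch using the deepseek-r1-70b address from this table (the other field values are placeholders):

import { createAgent } from '@src/index';

// Select deepseek-r1-70b via its provider address from the table above.
const reasoningAgent = await createAgent({
  name: 'Reasoning Assistant',
  providerAddress: '0x3feE5a4dd5FDb8a32dDA97Bed899830605dBD9D3', // deepseek-r1-70b
  memoryBucket: 'my-agent-memory',
  privateKey: 'your-private-key'
});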

Error Handling

The constructor will throw an error if:
  • No API key is provided and ZG_API_KEY environment variable is not set
  • Invalid configuration parameters are passed

For example:
try {
  const chat = new Chat({
    apiKey: 'invalid-key',
    model: 'llama-3.3-70b-instruct'
  });
} catch (error) {
  console.error('Failed to create chat:', error.message);
}
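When no apiKey is passed, the conditions above imply the constructor falls back to the ZG_API_KEY environment variable; a hedged sketch of that pattern (whether the typings allow omitting apiKey is an assumption, since the parameter list marks it as required):

// Assumes ZG_API_KEY is set in the environment, e.g. export ZG_API_KEY=...
// Per the error conditions above, construction throws if it is not set.
try {
  const chat = new Chat({
    model: 'llama-3.3-70b-instruct'
  });
} catch (error) {
  console.error('Failed to create chat:', (error as Error).message);
}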

Next Steps