Tutorial

How to Add AI Chatbots to Your App with Wibe Chat

Anila Jacob · 2026-03-29 · 10 min read

Every app is adding AI chatbots in 2026. Customer support bots, sales assistants, educational tutors, code helpers — if your app has chat, users expect an AI assistant.

The problem? Wiring up an LLM to a real-time chat system is surprisingly fiddly. You need message context, streaming responses, function calling, and error handling, and the bot still has to feel like a natural participant in the conversation.

Wibe Chat has native bot support that handles most of this. Here's how to set it up.

What We're Building

An AI support bot that lives inside a Wibe Chat channel, responds in real-time, streams responses word-by-word, can call functions (check orders, book appointments), and has access to conversation history.

Architecture

1. User sends a message in a Wibe Chat channel
2. Wibe Chat webhook fires, hitting your server
3. Your server sends the message + history to an LLM
4. The LLM generates a response
5. Your server sends the response back through the Wibe Chat API
6. The response appears as a bot message

Wibe Chat handles steps 1, 2, 5, and 6. You handle 3 and 4.

Step 1: Create a Bot User

curl -X POST https://api.wibechat.com/v1/users \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"id": "ai-support-bot", "name": "Wibe Assistant", "role": "bot"}'

Step 2: Set Up the Webhook

Configure in your dashboard: Webhook URL pointing to your server, triggered on message.created events.
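The handler in the next step assumes the webhook body carries a message object and a channel object. The exact schema comes from Wibe Chat's webhook reference; a hedged TypeScript sketch of that assumed shape, with a guard you can run before processing, looks like this (the interface and guard names are ours, not Wibe Chat's):

```typescript
// Assumed message.created payload shape; verify against Wibe Chat's webhook docs
interface WibeWebhookPayload {
  message: { id: string; text: string; user: { id: string; role: string } }
  channel: { id: string }
}

// Narrowing guard so malformed or unexpected events are rejected early
function isMessageCreated(body: unknown): body is WibeWebhookPayload {
  const b = body as WibeWebhookPayload
  return Boolean(
    b &&
    b.message && typeof b.message.text === 'string' &&
    b.message.user && typeof b.message.user.role === 'string' &&
    b.channel && typeof b.channel.id === 'string'
  )
}
```

Running the guard at the top of your webhook route lets you return 400 early instead of crashing mid-handler on an unexpected event type.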

Step 3: Handle Messages

import express from 'express'
import Anthropic from '@anthropic-ai/sdk'
import { WibeChatServer } from '@wibechat/server'

const app = express()
app.use(express.json()) // parse the webhook's JSON body

const anthropic = new Anthropic()
const wibe = new WibeChatServer({ apiKey: 'YOUR_SERVER_KEY' })

app.post('/api/wibechat/webhook', async (req, res) => {
  const { message, channel } = req.body

  // Ignore the bot's own messages to avoid an infinite reply loop
  if (message.user.role === 'bot') return res.status(200).send('ok')

  try {
    // Pull recent channel history so the bot has conversational context
    const history = await wibe.messages.list({ channelId: channel.id, limit: 20 })

    // Map Wibe Chat messages onto the LLM's role/content format
    const messages = history.map(msg => ({
      role: msg.user.role === 'bot' ? 'assistant' : 'user',
      content: msg.text,
    }))

    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      system: 'You are a helpful customer support assistant. Be concise and friendly.',
      messages,
    })

    // Post the reply into the channel as the bot user
    await wibe.messages.send({
      channelId: channel.id,
      userId: 'ai-support-bot',
      text: response.content[0].text,
    })
  } catch (err) {
    console.error('Bot reply failed:', err)
  }

  res.status(200).send('ok')
})

app.listen(3000)

Step 4: Add Function Calling

Want the bot to check order status or book appointments? Add tools to your LLM call:

const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages,
  tools: [
    {
      name: 'check_order_status',
      description: 'Check the status of a customer order',
      input_schema: {
        type: 'object',
        properties: { order_id: { type: 'string' } },
        required: ['order_id']
      }
    }
  ]
})
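Declaring a tool only tells the model it may request it; your server still has to execute the call and send the result back. A sketch of that dispatch step, following Anthropic's tool_use / tool_result convention (the order lookup is a stand-in for your real order system):

```typescript
// Stand-in order lookup; replace with your real data source
async function checkOrderStatus(orderId: string): Promise<string> {
  return `Order ${orderId} is out for delivery`
}

// Walk the model's content blocks and turn tool_use requests into tool_result
// blocks. Per Anthropic's API, these go back in a follow-up `user` message so
// the model can produce its final text answer.
async function runRequestedTools(content: Array<any>) {
  const results = []
  for (const block of content) {
    if (block.type === 'tool_use' && block.name === 'check_order_status') {
      results.push({
        type: 'tool_result',
        tool_use_id: block.id,
        content: await checkOrderStatus(block.input.order_id),
      })
    }
  }
  return results
}
```

In the webhook handler, check whether `response.stop_reason` is `'tool_use'`; if so, run the requested tools, append the results as a user message, and call the model again for the final reply.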

Step 5: Enable Streaming

For the typing effect, use streaming:

// Open a streaming message in the channel; it updates live as we append
const streamingMessage = await wibe.messages.startStream({
  channelId: channel.id,
  userId: 'ai-support-bot',
})

// llmStream: an async iterable of text chunks from your LLM provider
for await (const chunk of llmStream) {
  await streamingMessage.append(chunk.text)
}

await streamingMessage.complete()

Users see the bot's response appear word-by-word in real-time.
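The llmStream above is whatever async iterable of text chunks your LLM SDK exposes; with the Anthropic SDK that would come from its streaming helper. One small optimization worth adding (our own sketch, not a Wibe Chat feature) is batching tiny deltas so you call append less often:

```typescript
// With the Anthropic SDK, a text-chunk source might come from its streaming helper:
//   const stream = anthropic.messages.stream({ model, max_tokens: 1024, messages })
//   stream.on('text', text => { /* feed `text` into your chunk source */ })

// Group small text deltas into larger chunks before appending, cutting API round-trips
async function* batchChunks(source: AsyncIterable<string>, minLen = 24): AsyncGenerator<string> {
  let buf = ''
  for await (const piece of source) {
    buf += piece
    if (buf.length >= minLen) {
      yield buf
      buf = ''
    }
  }
  if (buf) yield buf // flush whatever is left at the end of the stream
}
```

The trade-off is latency versus API volume: a larger minLen means fewer append calls but a slightly chunkier typing effect.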

Use Cases

Customer support bot — Answer common questions, check orders, escalate to humans. Can reduce ticket volume by 40-60%.

Sales assistant — Engage visitors, answer product questions, qualify leads, book demos.

Educational tutor — Answer student questions, explain concepts, quiz students.

Code helper — Assist with API questions, generate examples, troubleshoot integrations.

Frequently Asked Questions

Which LLMs are supported?

Any LLM with an API. The webhook architecture means your server calls whatever model you prefer — OpenAI, Anthropic Claude, Google Gemini, Mistral, Llama, or a custom model.

Can the bot join group chats?

Yes. The bot is a regular user in the channel. It can be added to any channel — 1:1, group, or public.

How do I handle rate limits?

Implement a queue on your server. If the LLM rate-limits you, queue the response and process it when capacity frees up. Wibe Chat shows the bot as 'thinking' until the response arrives.
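One minimal way to do that queueing is a serial promise chain that retries with backoff when a job signals a rate limit. A sketch, with assumptions to adapt: the 429 status check and the backoff constants are ours, not part of any SDK:

```typescript
// Serial task queue: each enqueued job runs after the previous one finishes,
// retrying with exponential backoff when the job throws a rate-limit error.
class ReplyQueue {
  private tail: Promise<void> = Promise.resolve()

  enqueue<T>(job: () => Promise<T>, maxRetries = 3): Promise<T> {
    const run = this.tail.then(async () => {
      for (let attempt = 0; ; attempt++) {
        try {
          return await job()
        } catch (err: any) {
          // Assumed error shape: rate-limited calls carry status 429
          if (err?.status !== 429 || attempt >= maxRetries) throw err
          await new Promise(r => setTimeout(r, 500 * 2 ** attempt))
        }
      }
    })
    // Keep the chain alive even if this job ultimately fails
    this.tail = run.then(() => undefined, () => undefined)
    return run
  }
}
```

In the webhook handler you would wrap the LLM call in `queue.enqueue(() => anthropic.messages.create(...))` so replies go out in order even under throttling.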

How much does this cost?

The Wibe Chat side is included in your plan — no extra charge for bot users. Your main cost is the LLM API. Claude Sonnet costs about $3 per million input tokens, so most apps spend $10-50/month on LLM costs.
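A quick back-of-envelope with your own traffic numbers (the conversation counts below are made up for illustration; the $3/M input and $15/M output rates reflect Claude Sonnet's published pricing at the time of writing):

```typescript
// Rough monthly LLM cost: tokens in and out, priced per million
function estimateMonthlyCost(
  conversations: number,
  inputTokensPerConvo: number,
  outputTokensPerConvo: number,
  inputPricePerM = 3,
  outputPricePerM = 15,
): number {
  const inputCost = (conversations * inputTokensPerConvo / 1_000_000) * inputPricePerM
  const outputCost = (conversations * outputTokensPerConvo / 1_000_000) * outputPricePerM
  return inputCost + outputCost
}

// e.g. 1,000 support conversations/month, ~2,000 input and ~500 output tokens each
estimateMonthlyCost(1000, 2000, 500) // → 13.5 dollars/month
```

Input tokens dominate because each reply re-sends the conversation history, so trimming the `limit: 20` history window is the easiest cost lever.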

Start building with Wibe Chat

Get started for free — no credit card required.


Anila Jacob

Product Analyst

Expert in real-time communication infrastructure and developer experiences.
