Custom API Endpoints
Connect your chat interface to external AI services, Langflow flows, self-hosted agents, or custom APIs.
When to Use Custom API Endpoints
Use custom API endpoints when you:
- Have a DataStax Langflow flow you want to use
- Run a self-hosted Langflow instance (on your own server)
- Use LM Studio to run LLMs locally on your computer
- Have a self-hosted AI agent or custom solution
- Want to connect to a custom local API or external service
- Already have an AI configuration elsewhere and want to use it with Axie Studio's chat interface
Important: Self-Hosted = Local = Needs to be Public!
If you're using a self-hosted service (like LM Studio, self-hosted Langflow, or a local API), remember:
- Self-hosted means it's running on YOUR computer or server (localhost)
- Axie Studio's servers cannot access your localhost directly
- You MUST expose your local service to the internet using a tunneling service
Before connecting, make sure your local service is publicly accessible! See Exposing Local Services to the Internet below.
Note: If you want to use Axie Studio's built-in Knowledge Base with OpenAI, Cloudflare Workers AI, or DeepSeek, use the Knowledge Base option instead. See Knowledge Base documentation.
How It Works
When a customer sends a message through your chat interface:
- The system automatically detects the API type from your endpoint URL
- Formats the request correctly for that API type
- Sends the message with proper authentication
- Parses the response and displays it to the customer
All of this happens automatically - you just provide the endpoint URL and API key!
Supported Integrations
DataStax Langflow
Connect to your DataStax Langflow flows hosted on DataStax's platform.
Endpoint Format:
https://your-langflow-instance.com/api/v1/run/your-flow-id
Request Format:
POST https://your-langflow-instance.com/api/v1/run/your-flow-id
Headers:
Authorization: Bearer your-langflow-api-key
Content-Type: application/json
Accept: application/json
Body:
{
"input_value": "Customer's message here",
"output_type": "chat",
"input_type": "chat",
"session_id": "unique-session-id"
}
Response Parsing: The system automatically extracts the response from:
- outputs[0].outputs[0].results.message.text
- outputs[0].outputs[0].artifacts.message
- Or other common Langflow response paths
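The path lookup above can be sketched in Python. This is a hypothetical helper, not Axie Studio's actual parser; it tries the two paths listed above in order:

```python
def extract_langflow_text(response):
    """Try the common Langflow response paths and return the first hit."""
    outputs = response.get("outputs") or [{}]
    first = (outputs[0].get("outputs") or [{}])[0]
    # Path 1: outputs[0].outputs[0].results.message.text
    text = first.get("results", {}).get("message", {}).get("text")
    if text is not None:
        return text
    # Path 2: outputs[0].outputs[0].artifacts.message
    return first.get("artifacts", {}).get("message")

sample = {"outputs": [{"outputs": [{"results": {"message": {"text": "Hello!"}}}]}]}
print(extract_langflow_text(sample))  # -> Hello!
```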
LM Studio
LM Studio is a desktop application that lets you run Large Language Models (LLMs) locally on your computer. It provides an OpenAI-compatible API server that you can connect to Axie Studio.
What is LM Studio?
- Desktop app for running LLMs on your computer
- Provides an OpenAI-compatible API on localhost
- Perfect for privacy-sensitive use cases (data never leaves your computer)
- Free to use with your own models
Endpoint Format:
http://localhost:1234/v1/chat/completions (default port, local only)
https://your-tunnel-url.com/v1/chat/completions (after exposing with tunnel)
Request Format (OpenAI-Compatible):
POST http://localhost:1234/v1/chat/completions
Headers:
Content-Type: application/json
Accept: application/json
Body:
{
"model": "your-model-name",
"messages": [
{ "role": "user", "content": "Customer's message here" }
],
"temperature": 0.7
}
Response Format: LM Studio returns OpenAI-compatible responses:
{
"choices": [
{
"message": {
"content": "AI response text"
}
}
]
}
Important: Since LM Studio runs on localhost, you must expose it to the internet using a tunneling service (see Exposing Local Services below).
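A minimal local sketch of this exchange, assuming a placeholder model name. No network is involved; it only builds the request body and reads the OpenAI-compatible response shape shown above:

```python
def build_openai_payload(message, model="your-model-name"):
    """Build the OpenAI-compatible chat completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "temperature": 0.7,
    }

def extract_openai_text(response):
    """LM Studio mirrors OpenAI's shape: choices[0].message.content."""
    return response["choices"][0]["message"]["content"]

payload = build_openai_payload("Customer's message here")
reply = extract_openai_text({"choices": [{"message": {"content": "AI response text"}}]})
print(reply)  # -> AI response text
```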
Self-Hosted Langflow
Connect to your own Langflow instance running on your infrastructure.
Endpoint Format:
https://your-domain.com/api/v1/run/your-flow-id (publicly accessible)
http://localhost:7860/api/v1/run/your-flow-id (local only - needs tunnel!)
If using localhost: You MUST expose it to the internet first! See Exposing Local Services below.
Request Format: Same as DataStax Langflow format above.
Authentication:
- Use your Langflow API key in the Authorization: Bearer header
- For local instances, you may need to configure CORS settings
Self-Hosted AI Agents
Connect to your own custom AI agents or AI services.
Endpoint Format:
https://your-agent-api.com/chat (publicly accessible)
http://localhost:8000/api/chat (local only - needs tunnel!)
If using localhost: You MUST expose it to the internet first! See Exposing Local Services below.
Request Format (Generic):
POST https://your-agent-api.com/chat
Headers:
x-api-key: your-api-key
Content-Type: application/json
Accept: application/json
Body:
{
"message": "Customer's message here",
"session_id": "unique-session-id"
}
Response Format: Your API should return JSON with one of these fields:
message, response, output, or text
The system will automatically try these fields to extract the response.
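That fallback can be sketched as follows. The field order is taken from this section; the real extractor may differ:

```python
# Fields to try, in order, when parsing a generic API response
GENERIC_FIELDS = ("message", "response", "output", "text")

def extract_generic_text(response):
    """Return the first non-empty string field found, or None."""
    for field in GENERIC_FIELDS:
        value = response.get(field)
        if isinstance(value, str) and value:
            return value
    return None

print(extract_generic_text({"output": "Hi there"}))  # -> Hi there
print(extract_generic_text({"status": "ok"}))        # -> None
```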
Custom Local APIs
Connect to any REST API endpoint, including local development servers.
Use Cases:
- Local development and testing
- Internal company APIs
- Custom integrations
- Legacy systems
Endpoint Format:
https://your-api.com/endpoint (publicly accessible)
http://localhost:3000/api/chat (local only - needs tunnel!)
If using localhost: You MUST expose it to the internet first! See Exposing Local Services below.
Request Format:
POST https://your-api.com/endpoint
Headers:
x-api-key: your-api-key (or Authorization: Bearer, depending on your API)
Content-Type: application/json
Accept: application/json
Body:
{
"message": "Customer's message here",
"session_id": "unique-session-id"
}
Exposing Local Services to the Internet
Critical: If your API is running on localhost or 127.0.0.1, Axie Studio cannot access it directly. You must expose it to the internet first!
Why? Axie Studio's servers are on the internet. They can't reach services running on your local computer. You need a "tunnel" that creates a public URL pointing to your localhost.
Option 1: Cloudflare Tunnel (Recommended)
Cloudflare Tunnel (formerly Argo Tunnel) is a free, secure way to expose local services.
Steps:
- Install cloudflared, then run: cloudflared tunnel --url http://localhost:1234
- Cloudflare gives you a public URL: https://random-subdomain.trycloudflare.com
- Use this URL in Axie Studio instead of localhost:1234
Benefits:
- Free
- Secure (HTTPS automatically)
- No account required for basic use
- Works great for development
Example:
# Start your LM Studio server on localhost:1234
# Then run:
cloudflared tunnel --url http://localhost:1234
# You'll get a URL like:
# https://abc123-def456.trycloudflare.com
# Use this in Axie Studio!
Option 2: ngrok
ngrok is a popular tunneling service that makes localhost publicly accessible.
Steps:
- Sign up for a free ngrok account at ngrok.com
- Install ngrok: npm install -g ngrok, or download it from the website
- Run: ngrok http 1234 (replace 1234 with your port)
- Copy the HTTPS URL ngrok provides (e.g., https://abc123.ngrok.io)
- Use this URL in Axie Studio
Benefits:
- Free tier available
- Easy to use
- Web interface to monitor requests
- Custom domains available (paid plans)
Example:
# Start your local API on port 1234
# Then run:
ngrok http 1234
# You'll see:
# Forwarding https://abc123.ngrok.io -> http://localhost:1234
# Use the HTTPS URL in Axie Studio!
Option 3: Other Tunneling Services
There are other services that can expose localhost to the internet:
- LocalTunnel - Free, open-source alternative
- Serveo - SSH-based tunneling (no installation needed)
- Bore - Simple tunneling tool
- Pagekite - Commercial tunneling service
Choose based on:
- Ease of setup
- Security requirements
- Cost (many have free tiers)
- Performance needs
Quick Setup Example (LM Studio + ngrok)
- Start LM Studio:
  - Open LM Studio
  - Load a model
  - Start the local server (usually on port 1234)
- Expose with ngrok: ngrok http 1234
- Copy the HTTPS URL: Forwarding https://abc123.ngrok.io -> http://localhost:1234
- Use in Axie Studio:
  - API Endpoint: https://abc123.ngrok.io/v1/chat/completions
  - API Key: lm-studio (or leave empty if not required)
- Test the connection!
Pro Tip: The tunnel URL changes each time you restart ngrok (on free tier). For production, consider:
- Using Cloudflare Tunnel with a custom domain
- Setting up a permanent tunnel
- Deploying your service to a cloud provider instead
Automatic API Type Detection
The system automatically detects the API type from your endpoint URL:
- DataStax Langflow: URLs containing astra, datastax, or langflow
  - Uses Bearer token authentication
  - Formats the request in Langflow format
- OpenAI / OpenAI-Compatible: URLs containing openai.com, api.openai.com, or /v1/chat/completions
  - Includes LM Studio (OpenAI-compatible API)
  - Uses Bearer token authentication (or no auth for LM Studio)
  - Formats the request in OpenAI chat completions format
- Anthropic: URLs containing anthropic.com or claude
  - Uses Bearer token authentication
  - Formats the request in Anthropic messages format
- Cohere: URLs containing cohere.ai or cohere.com
  - Uses Bearer token authentication
  - Formats the request in Cohere generation format
- Generic: Any other URL
  - Uses x-api-key header authentication
  - Formats the request in generic format (message and session_id)
LM Studio Detection: If your endpoint URL contains /v1/chat/completions (like LM Studio's OpenAI-compatible API), the system automatically formats requests in the OpenAI chat completions format!
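A hypothetical sketch of this URL-substring detection; the real matcher's precedence may differ:

```python
def detect_api_type(url):
    """Guess the API type from substrings in the endpoint URL."""
    u = url.lower()
    if any(s in u for s in ("astra", "datastax", "langflow")):
        return "langflow"
    if any(s in u for s in ("openai.com", "/v1/chat/completions")):
        return "openai"
    if "anthropic.com" in u or "claude" in u:
        return "anthropic"
    if "cohere.ai" in u or "cohere.com" in u:
        return "cohere"
    return "generic"  # falls back to x-api-key auth and generic body

print(detect_api_type("https://abc123.ngrok.io/v1/chat/completions"))   # -> openai
print(detect_api_type("https://my-langflow.example.com/api/v1/run/f"))  # -> langflow
print(detect_api_type("https://my-agent.example.com/chat"))             # -> generic
```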
Request Structure by Provider
DataStax Langflow / Self-Hosted Langflow
POST https://your-langflow-instance.com/api/v1/run/your-flow-id
Headers:
Authorization: Bearer your-api-key
Content-Type: application/json
Accept: application/json
Body:
{
"input_value": "Customer's message here",
"output_type": "chat",
"input_type": "chat",
"session_id": "unique-session-id"
}
Generic API / Self-Hosted Agents
POST https://your-custom-api.com/chat
Headers:
x-api-key: your-api-key
Content-Type: application/json
Accept: application/json
Body:
{
"message": "Customer's message here",
"session_id": "unique-session-id"
}
Response Parsing
The system automatically parses responses from each provider:
- DataStax Langflow: Extracts from outputs[0].outputs[0].results.message.text or similar paths
- Generic APIs: Tries common response fields in order: response, output, text, message, result
Authentication Methods
Bearer Token (Langflow, OpenAI, Anthropic, Cohere)
Authorization: Bearer your-api-key
Used automatically for:
- DataStax Langflow
- Self-hosted Langflow
- OpenAI endpoints
- Anthropic endpoints
- Cohere endpoints
API Key Header (Generic APIs)
x-api-key: your-api-key
Used automatically for:
- Generic custom APIs
- Self-hosted agents
- Local APIs
Query Parameter (Optional)
Some APIs require authentication via query parameter:
https://api.com/endpoint?api_key=your-key
This is handled automatically if your API requires it.
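If your API does need a query-parameter key, the URL rewrite looks like this. A standard-library sketch, not Axie Studio's internal code:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_api_key(url, key):
    """Append (or overwrite) an api_key query parameter on the URL."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["api_key"] = key
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_api_key("https://api.com/endpoint", "your-key"))
# -> https://api.com/endpoint?api_key=your-key
```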
Session Management
- Each conversation gets a unique session_id
- Reuse the same session_id to maintain conversation history
- If not provided, a new session is created automatically
- Format: api_timestamp_randomstring (e.g., api_1234567890_abc123)
For conversation history:
- Pass the same session_id in each request
- Your API can use this to maintain context
- Axie Studio automatically manages session IDs
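The stated ID format can be sketched like this (a hypothetical generator; the real one may differ in suffix length and alphabet):

```python
import random
import string
import time

def new_session_id():
    """Build an ID in the documented shape: api_timestamp_randomstring."""
    ts = int(time.time())
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
    return f"api_{ts}_{suffix}"

print(new_session_id())  # e.g. api_1234567890_abc123
```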
Timeout & Error Handling
- Request Timeout: 30 seconds
- Error Handling: Returns clear error messages if the API fails
- Connection Errors: Automatically handled with user-friendly messages
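A standard-library sketch of this behavior, using a hypothetical post_json helper with the 30-second default timeout; the actual error messages Axie Studio shows will differ:

```python
import json
import urllib.request

def post_json(url, payload, timeout=30.0):
    """POST JSON and return (data, error); errors become friendly strings."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp), None
    except OSError as exc:  # URLError, timeouts, and connection errors
        reason = getattr(exc, "reason", exc)
        return None, f"Could not reach the API: {reason}"

data, err = post_json("http://localhost:1/chat", {"message": "hi"}, timeout=2)
print(err)  # connection refused -> a friendly error string
```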
Setting Up a Custom API Endpoint
- Go to Dashboard -> Chat Interfaces
- Create a new interface or edit an existing one
- In the API Connection section:
- Enter your API Endpoint URL
- Enter your API Key
- Click "Test Connection" to verify it works
- Save your interface
Testing Your Connection
Always test your connection before saving! The "Test Connection" button will:
- Send a test message to your API
- Verify authentication works
- Check response format
- Show you the actual response
If the test fails, check:
- Is your endpoint URL correct?
- Is your API key valid?
- Is your API accessible (not blocked by firewall)?
- Does your API return the expected response format?
Common Use Cases
Connecting to DataStax Langflow
- Get your Langflow flow ID from DataStax
- Get your Langflow API key
- Enter endpoint: https://your-instance.data-stax.com/api/v1/run/your-flow-id
- Enter API key: Your Langflow API key
- Test and save!
Connecting to LM Studio
- Start LM Studio:
  - Open the LM Studio desktop app
  - Load a model
  - Start the local server (default: port 1234)
- Expose to internet:
  - Use Cloudflare Tunnel: cloudflared tunnel --url http://localhost:1234
  - Or use ngrok: ngrok http 1234
  - Copy the public URL you get
- Configure in Axie Studio:
  - API Endpoint: https://your-tunnel-url.com/v1/chat/completions
  - API Key: lm-studio (or leave empty)
  - Model: Use the model identifier from LM Studio
- Test and save!
Connecting to Self-Hosted Langflow
If your Langflow is publicly accessible:
- Make sure your Langflow instance is accessible on the internet
- Get your flow ID from Langflow
- Enter endpoint: https://your-domain.com/api/v1/run/your-flow-id
- Enter API key: Your Langflow API key
- Test and save!
If your Langflow is on localhost:
- Start your Langflow instance (usually on port 7860)
- Expose it to internet using a tunnel (see Exposing Local Services above)
- Get your flow ID from Langflow
- Enter endpoint: https://your-tunnel-url.com/api/v1/run/your-flow-id
- Enter API key: Your Langflow API key
- Test and save!
Connecting to a Local Development Server
- Make sure your local server is running (e.g., http://localhost:8000)
- Expose it to the internet using a tunnel:
  - Cloudflare Tunnel: cloudflared tunnel --url http://localhost:8000
  - Or ngrok: ngrok http 8000
- Copy the public URL you get
- Enter endpoint: Your tunnel URL (e.g., https://abc123.ngrok.io/api/chat)
- Enter API key: Your local API key (if required)
- Test and save!
Troubleshooting
"Connection Failed" Error
If using localhost:
- ⚠️ Did you expose your local service to the internet? Axie Studio cannot access localhost directly!
- Make sure you're using a tunnel URL (ngrok, Cloudflare Tunnel, etc.), not localhost:1234
- Verify your tunnel is still running
- Check that your local service is actually running
General checks:
- Check if your endpoint URL is correct
- Verify your API key is valid
- Make sure your API is accessible (not behind a firewall)
- Check if your API requires CORS configuration
- For localhost services: Make sure the tunnel is active and the URL is correct
"Invalid Response Format" Error
- Make sure your API returns JSON
- Check that your response includes one of: message, response, output, text
- Verify the response structure matches expected format
"Timeout" Error
- Your API took longer than 30 seconds to respond
- Check your API's performance
- Consider optimizing your API response time
Next Steps
- Learn about Knowledge Base → Axie Studio's built-in RAG solution
- View API documentation → Using Axie Studio's API
- Set up a custom domain → Use your own domain for your chat interface