MiMo API Docs
OpenAI Compatible API

Use the OpenAI-compatible endpoint to interact with MiMo models via the standard Chat Completions format.

Endpoint

POST https://api.mimo-v2.com/v1/chat/completions

Authentication

Authenticate requests using either of the following headers:

| Header | Format |
| --- | --- |
| api-key | &lt;your-api-key&gt; |
| Authorization | Bearer &lt;your-api-key&gt; |

You can generate API keys from Settings → API Keys in your MiMo dashboard.
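Both authentication forms reduce to a single HTTP header. A minimal sketch of the two equivalent header dicts (the key value is a placeholder; substitute a key generated in your dashboard):

```python
API_KEY = "YOUR_API_KEY"  # placeholder: generate a real key in the dashboard

# Option 1: the api-key header
headers_api_key = {
    "Content-Type": "application/json",
    "api-key": API_KEY,
}

# Option 2: the standard Bearer token header
headers_bearer = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
```

Use one or the other on a request; there is no need to send both.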

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Yes | Model ID. Options: mimo-v2-pro, mimo-v2-omni, mimo-v2-flash |
| messages | array | Yes | Array of message objects with role and content |
| max_completion_tokens | integer | No | Maximum tokens to generate (default varies by model) |
| temperature | number | No | Sampling temperature, 0 to 2 (default: 1.0) |
| top_p | number | No | Nucleus sampling threshold, 0 to 1 (default: 0.95) |
| stream | boolean | No | Enable streaming output (default: false) |
| stop | string/array | No | Stop sequences |
| frequency_penalty | number | No | Frequency penalty, -2 to 2 (default: 0) |
| presence_penalty | number | No | Presence penalty, -2 to 2 (default: 0) |
| tools | array | No | List of tool/function definitions |
| tool_choice | string/object | No | Tool selection strategy: auto, none, or a specific tool |
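Putting several of these parameters together, a request body can be built as a plain dict and serialized to JSON. A sketch with illustrative values (the prompt and parameter choices here are examples, not recommendations):

```python
import json

# Illustrative request body; model choice and parameter values are examples only.
payload = {
    "model": "mimo-v2-flash",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize SSE in one sentence."},
    ],
    "max_completion_tokens": 256,
    "temperature": 0.7,
    "top_p": 0.95,
    "stop": ["\n\n"],
    "stream": False,
}

# Serialized JSON string, suitable as the POST body.
body = json.dumps(payload)
```

Omitted optional parameters fall back to the defaults in the table above.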

Message Object

| Field | Type | Description |
| --- | --- | --- |
| role | string | One of: system, user, assistant, tool |
| content | string/array | Message content (text or a multimodal content array) |
| reasoning_content | string | (Optional) The model's thinking/reasoning content |
| tool_calls | array | (Optional) Tool calls made by the assistant |
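The content field accepts either a plain string or a content array. The exact multimodal schema is not spelled out on this page; assuming it follows the OpenAI-style content-parts format, a user message combining text and an image might look like the sketch below (the URL is a placeholder):

```python
# Assumed OpenAI-style content parts; verify the exact schema in the Guides.
image_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
    ],
}

# A text-only message can simply use a string for content.
text_message = {"role": "user", "content": "Hello, who are you?"}
```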

Example Request

curl https://api.mimo-v2.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{
    "model": "mimo-v2-pro",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello, who are you?"}
    ],
    "max_completion_tokens": 1024,
    "temperature": 0.7
  }'

Response Format

{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1711234567,
  "model": "mimo-v2-pro",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I am MiMo...",
        "reasoning_content": "The user asked me to introduce myself..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 100,
    "total_tokens": 150
  }
}

Response Fields

| Field | Description |
| --- | --- |
| id | Unique identifier for the completion |
| object | Always chat.completion |
| created | Unix timestamp of when the response was created |
| model | The model used for the completion |
| choices | Array of completion choices |
| choices[].message.content | The generated text response |
| choices[].message.reasoning_content | The model's internal reasoning (when available) |
| choices[].finish_reason | Why the model stopped: stop, length, or tool_calls |
| usage | Token usage statistics |
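Once the JSON body is parsed, these fields can be read out directly. A sketch using the sample response above (note the .get() for reasoning_content, since it is not always present):

```python
# The example response from above, as a parsed Python dict.
response = {
    "id": "chatcmpl-xxx",
    "object": "chat.completion",
    "created": 1711234567,
    "model": "mimo-v2-pro",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! I am MiMo...",
                "reasoning_content": "The user asked me to introduce myself...",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 50, "completion_tokens": 100, "total_tokens": 150},
}

choice = response["choices"][0]
answer = choice["message"]["content"]
# reasoning_content is optional, so read it defensively.
reasoning = choice["message"].get("reasoning_content")
finish_reason = choice["finish_reason"]  # "stop", "length", or "tool_calls"
total_tokens = response["usage"]["total_tokens"]
```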

Streaming Response

When stream is set to true, the API returns Server-Sent Events (SSE). Each event contains a partial response chunk.

Streaming Request Example

curl https://api.mimo-v2.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{
    "model": "mimo-v2-pro",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "stream": true
  }'

Streaming Event Format

Each SSE event is prefixed with data: and contains a JSON chunk. The stream ends with a data: [DONE] event.

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]

In streaming mode, reasoning_content may appear in early delta chunks before the main content begins.
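The event stream above can be consumed line by line: strip the data: prefix, stop at the [DONE] sentinel, and concatenate the delta content fields. A minimal parser sketch, fed the example events from this section:

```python
import json

def accumulate_sse(lines):
    """Collect assistant text from chat.completion.chunk SSE lines.

    `lines` is an iterable of raw SSE lines such as 'data: {...}'.
    Returns the concatenated content once 'data: [DONE]' is seen.
    """
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank lines between events
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        # A delta may carry role, reasoning_content, or content.
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# The example events from this section.
events = [
    'data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    'data: [DONE]',
]

text = accumulate_sse(events)  # "Hello!"
```

The same accumulation pattern applies to reasoning_content deltas; collect them in a separate buffer if you want to display the reasoning apart from the final answer.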
