AI in Insomnia

Explore the AI features available in Insomnia and learn how they enhance automation, collaboration, and productivity.

As of Insomnia 12, AI features are free to use, though this may change in future releases.

Insomnia 12 introduces a suite of AI-driven capabilities that make API development faster, smarter, and more collaborative. These are summarized in the AI features overview below.

Activate these features by enabling a Large Language Model (LLM) in Preferences > AI Settings.
Choose one of the following providers:

  • Local
  • Claude
  • OpenAI
  • Gemini

Local models keep processing fully on your machine for privacy and control, but may run slower and produce less-refined responses than hosted models.

AI features overview

Once an LLM is activated, Insomnia unlocks multiple AI features that are designed to automate repetitive workflows, generate content dynamically, and enhance collaboration.

Auto-generate Mock Servers from natural language
  • Description: Creates a mock server from a prompt, OpenAPI definition, or live URL response. Automatically scaffolds routes, responses, and configurations.
  • Product context: Available when creating Self-hosted mock servers. See Mock Servers (a usage sketch follows this overview).

Suggest comments and grouping for Commits
  • Description: Analyzes staged Git changes and suggests logical commit groupings and draft messages.
  • Product context: Available in the Git Sync interface. See Version control in Insomnia.

MCP Client operations
  • Description: Connect to MCP Servers that expose callable tools, prompts, and structured resources via JSON-RPC.
  • Product context: Manage connections under MCP Servers in Insomnia. See MCP clients in Insomnia.
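
To give a feel for the result, here is a minimal sketch that exercises a route on a generated mock server. The base URL and the /users route are illustrative assumptions; the actual address and routes depend on the mock you generate.

```typescript
// Hypothetical example: after Insomnia scaffolds a mock server from a
// prompt such as "a users API with list and detail routes", its routes
// respond like any HTTP endpoint. Host, port, and path are assumptions.
const base = "http://localhost:9080"; // assumed self-hosted mock address

async function checkMockRoute(): Promise<void> {
  const res = await fetch(`${base}/users`); // scaffolded list route
  console.log(res.status, await res.json()); // mocked response body
}

checkMockRoute().catch(console.error);
```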

Get started

Activate features by choosing and configuring an LLM in Preferences > AI Settings:

  1. Click Preferences.
  2. Select the AI Settings tab.
  3. In the provider list, choose an LLM type.
  4. Enter your credentials or select a local model.
  5. Click Activate.

After activation, you can toggle Auto-generate Mock Servers and Suggest commit comments from the AI Features panel.

Note: Local LLMs require a .gguf file placed in the /Insomnia/llms/ directory.

Credentials for hosted LLM providers are stored securely on your local system by the Insomnia app and are never synced across accounts or devices.

MCP Servers and Clients

Model Context Protocol (MCP) Servers expose domain-specific operations through a JSON-RPC interface. For example:

  • tools/call
  • resources/read
  • prompts/get
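
As a concrete sketch, the JSON-RPC 2.0 messages behind these operations look roughly like the following. The tool name, resource URI, prompt name, and arguments are illustrative assumptions, not values defined by Insomnia or any particular MCP Server.

```typescript
// Sketch of the JSON-RPC 2.0 messages behind these operations. The tool
// name, resource URI, prompt name, and arguments are illustrative
// assumptions, not values defined by Insomnia or any particular server.
const toolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "searchDocs", arguments: { query: "rate limits" } },
};

const resourceRead = {
  jsonrpc: "2.0",
  id: 2,
  method: "resources/read",
  params: { uri: "file:///reports/latest.md" },
};

const promptGet = {
  jsonrpc: "2.0",
  id: 3,
  method: "prompts/get",
  params: { name: "summarize", arguments: { style: "brief" } },
};

console.log(JSON.stringify([toolCall, resourceRead, promptGet], null, 2));
```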

When you connect Insomnia to an MCP Server, Insomnia creates an MCP Client that acts like a synchronized request group. The client stays updated with the tools, prompts, and resources published by the server. While you're offline, the client shows cached data until you resync.

Use MCP Clients to:
  • Discover and execute callable tools
  • Retrieve structured resources
  • Explore and test AI-driven prompts
  • Resync data from the server as it changes

Transports available: HTTP and STDIO.
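
As a rough illustration of the HTTP transport, the sketch below POSTs a JSON-RPC tools/list request to an MCP Server endpoint. The endpoint URL is an assumption, and real servers typically also expect an initialize handshake per the MCP specification; Insomnia manages those connection details for you when you add a server.

```typescript
// Minimal sketch of the HTTP transport: a JSON-RPC request POSTed to an
// MCP Server endpoint. The URL is a placeholder assumption; real servers
// typically also expect an initialize handshake and session headers.
async function listTools(endpoint: string): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Streamable HTTP servers may answer with JSON or an event stream.
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
  });
  return res.json();
}

listTools("http://localhost:3000/mcp").then(console.log).catch(console.error);
```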

AI-driven Git commits

Insomnia’s Suggest comments and grouping for Commits feature analyzes staged changes and helps maintain consistent, meaningful Git histories. For Git concepts and workflows, go to Version control in Insomnia.

To use commit suggestions:
  1. Open the Git Sync interface.
  2. Click Suggest comments and grouping for Commits.
  3. Review the suggested commit groups and messages.
  4. (Optional) To edit a message inline, double-click the message.
  5. Drag and drop files between commit groups, or exclude files.
  6. Click Commit or Commit & Push.

Frequently asked questions

Can I deactivate individual AI features?

Yes. Go to Preferences > AI Settings and deactivate the toggles for Auto-generate Mock Servers and Suggest commit comments.
To stop using an LLM entirely, click Deactivate under the provider configuration.

Why are the AI feature toggles unavailable?

You must first configure and activate an LLM under Preferences > AI Settings.
If AI is deactivated at the instance level, the feature toggles remain unavailable in the UI.

Can administrators control AI features across an organization?

Enterprise administrators can activate or deactivate AI features at the instance level from Insomnia Admin > AI Settings.
When deactivated, the desktop app shows an explanatory message.
When activated, each user must still activate a model before the toggles become available. This setting is available only for Enterprise plans.

Which transports can I use to connect to MCP Servers?

Both HTTP and STDIO transports are supported for connecting to MCP Servers.

Can MCP Clients be synced or shared?

Not yet. MCP Clients are stored locally and currently do not support Git Sync or import/export.

Why do local LLMs produce weaker commit suggestions?

Local LLMs with fewer than 10 billion parameters may produce less accurate or inconsistent commit suggestions.
Smaller models have limited context understanding and token capacity compared to hosted providers.
