My MCP Servers


What it is

Flows MCP Servers let you turn any combination of business-app actions (HubSpot, Slack, Visma, Google Sheets, Jira, …) into one or more governed MCP servers that an AI agent or AI client (Claude, Cursor, ChatGPT, your own app) can call as tools.

Instead of giving an AI agent raw API keys and hoping it behaves, you assemble a curated toolbox in Flows: pick the apps, pick the exact tools (actions), lock down which fields the AI can fill, share it with the right people, and watch every call in the execution log.

Core Concepts

MCP Server

A named, shareable bundle of apps & tools exposed at a single URL (e.g. https://mcp.flows.visma.com/abcdef12-345).

MCP Server Access Control

Access is secured by your Visma Flows sign-in — your AI client will be prompted to authenticate before it can use this server.

Each server has:

  • A name and description (shown to the AI client).

  • A status: active, draft, or inactive. Only active servers respond to AI clients.

  • AI instructions — server-level system guidance sent on initialize.

  • Tags — workspace-level labels for organization and filtering.

  • Sharing scope — private, specific team members, or the whole team.

Tool (same as an "Action" in a Flow)

A single capability inside a server, e.g. "Send Slack message" or "Create HubSpot contact". Each tool is bound to:

  • An app (the integration it belongs to).

  • A connection (the credentials/account it runs as).

  • A set of inputs with per-field rules.

  • Optional AI guidance — a natural-language hint appended to the tool description ("when to use this").

Connection

A stored, reusable link to an external account (e.g. "HubSpot — sales@acme.com"). Connections live in the Connections section and can be reused across many servers and Flows. A connection is either connected or disconnected; disconnecting instantly disables every tool that depends on it.
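The connected/disconnected behavior can be sketched in a few lines. This is an illustrative model, not the Flows implementation; the `Connection` and `Tool` shapes are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Connection:
    """A stored link to an external account (hypothetical shape)."""
    name: str
    connected: bool = True


@dataclass
class Tool:
    """A single capability bound to one connection (hypothetical shape)."""
    name: str
    connection: Connection

    @property
    def available(self) -> bool:
        # A tool can only respond while its connection is live.
        return self.connection.connected


def disconnect(conn: Connection) -> None:
    # Flipping one flag disables every tool that depends on this connection.
    conn.connected = False
```

Because many tools can share one connection, a single disconnect takes all of them offline at once, which is the point: revocation is one switch, not a per-tool cleanup.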

Input modes

For every field on a tool, you choose:

  • AI — the AI client decides the value at call time. You can add an aiHint to steer it.

  • Fixed — you set the value once; the AI cannot change it (great for channel = #alerts, account_id = 123, from_email = noreply@…).
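The two modes boil down to a merge at call time: fixed values always win, AI values come from the client's arguments. A minimal sketch, assuming a per-field rule shape (`mode`, `value`) that is hypothetical and not the real Flows schema:

```python
def resolve_inputs(field_rules: dict, ai_args: dict) -> dict:
    """Merge AI-supplied arguments with operator-fixed values.

    field_rules maps each field to {"mode": "ai" | "fixed", ...};
    fixed fields carry a "value" the AI can never override.
    """
    resolved = {}
    for name, rule in field_rules.items():
        if rule["mode"] == "fixed":
            resolved[name] = rule["value"]      # AI input is ignored here
        else:
            resolved[name] = ai_args.get(name)  # AI decides at call time
    return resolved


rules = {
    "channel": {"mode": "fixed", "value": "#alerts"},
    "text": {"mode": "ai", "aiHint": "one-line summary of the event"},
}

# Even if the AI tries channel="#general", the fixed value wins.
call = resolve_inputs(rules, {"channel": "#general", "text": "Deploy finished"})
```

This is why fixed fields act as a guardrail against prompt injection: a hijacked prompt can change what the AI *asks for*, but not what the server actually sends.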

Execution log

Every tool call is recorded: which server, which tool, which connection, the arguments the AI passed, the response, timestamp, and outcome. This is your audit trail and your debugger.
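A log entry like the one described above can be modeled as a simple record; the field names below mirror the list in the text but the exact schema is an assumption:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ExecutionRecord:
    """One tool call, as captured in the execution log (hypothetical shape)."""
    server: str
    tool: str
    connection: str
    arguments: dict
    response: str
    outcome: str          # e.g. "success" or "error"
    timestamp: datetime


log: list[ExecutionRecord] = []


def record_call(server, tool, connection, arguments, response, outcome):
    entry = ExecutionRecord(server, tool, connection, arguments, response,
                            outcome, datetime.now(timezone.utc))
    log.append(entry)
    return entry


def failures(entries):
    # Debugger view: only the calls that went wrong.
    return [e for e in entries if e.outcome == "error"]
```

Filtering by outcome, tool, or connection is what turns the raw log into an audit trail you can actually answer questions with.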

The setup Flow

The MCP Server Builder is a 3-step wizard:

  1. Tools — Pick apps, then pick the actions you want to expose as tools. Each picked app needs a Connection (pick an existing one or create a new one inline).

  2. Configure — For each tool, decide which inputs are AI-controlled vs fixed, add per-field hints, add per-tool "when to use" guidance.

  3. Review & activate — Set name, description, server-level AI instructions, tags, and sharing. Hit Save and activate → the server goes live and you land on My MCP Servers.

Drafts are saved at any step. Deactivating a live server keeps it as inactive (not draft) so the configuration stays intact.
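The status rules above imply a small state machine: drafts and inactive servers can be activated, and a live server deactivates to inactive, never back to draft, so its configuration survives. A sketch of those transitions as I read them from this page:

```python
# (status, action) -> next status, inferred from the docs above.
TRANSITIONS = {
    ("draft", "activate"): "active",
    ("inactive", "activate"): "active",
    ("active", "deactivate"): "inactive",   # never back to "draft"
}


def next_status(status: str, action: str) -> str:
    try:
        return TRANSITIONS[(status, action)]
    except KeyError:
        raise ValueError(f"cannot {action} a {status} server")
```

Note that only `active` servers respond to AI clients; `draft` and `inactive` are both dark, differing only in how they got there.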

Connecting an AI Client

After activation, the server detail page shows:

  • The MCP URL (copy button).

  • A Connect dialog with snippets for Claude Desktop, Cursor, and generic JSON config.

  • A Share dialog to grant access to specific teammates or the whole team.

  • An Executions panel with the live call log.
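For the generic JSON option, most MCP clients accept a shape along these lines; the key names (`mcpServers`, `url`) and the server alias `flows` are illustrative, and the Connect dialog's own snippet for your client is authoritative:

```json
{
  "mcpServers": {
    "flows": {
      "url": "https://mcp.flows.visma.com/abcdef12-345"
    }
  }
}
```

On first use the client will open the Visma Flows sign-in described above; no API key goes into this file.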

Permissions are assembled in Flows, not in the agent. The AI never sees an API key, never holds an OAuth token, and can never reach beyond the tools you exposed.

Why control access in Flows vs in the AI client


Centralized governance

Permissions live in one auditable place instead of being scattered across every agent, prompt, or client config — change a scope once and every agent that uses the flow inherits it instantly.

Principle of least privilege, enforced

You can expose only the specific tools and fields an agent actually needs (e.g. "create invoice" but not "delete customer"), instead of handing the agent a broad API token that can do anything the user can do.

Separation of duties

The person designing the flow (ops/admin) is not necessarily the person building the agent (developer/PM). Flows let a non-technical owner curate safe capabilities; the agent author just consumes them.

Credential isolation

Real API keys and OAuth tokens never touch the agent, the model provider, or the prompt. If the agent is compromised, jailbroken, or its logs leak, your Slack/Salesforce/DB credentials are not exposed.

Revocation without redeployment

Disable a connection or a single tool in Flows and every downstream agent loses that capability immediately — no need to rotate keys, redeploy agents, or push new system prompts.

Per-field guardrails

Fixing values (e.g. "always post to #alerts", "always use account_id=123") at the flow level prevents prompt injection or hallucination from redirecting actions to the wrong channel, account, or recipient.

Auditability and compliance

Execution logs give you a tamper-evident record of what the agent did, which tool it called, with what arguments, and what the result was — essential for SOC2, GDPR, HIPAA, and incident forensics. Permissions baked into an agent typically leave no structured trail.

Debuggability

When something goes wrong ("why did it email the wrong customer?"), you can replay the exact tool call, inputs, and output instead of trying to reconstruct it from model traces.

Usage visibility & cost control

Logs surface which tools are actually used, how often, and by whom — making it easy to spot runaway loops, unused integrations, or abusive patterns before they become a bill or an incident.

Safer iteration

You can tighten or loosen a permission, add an AI-guided field, or swap a connection and observe the impact in logs — without touching agent code or risking a regression in production prompts.

Multi-agent reuse

One well-scoped flow can power many agents (support bot, internal copilot, Zapier-style automation) with consistent behavior, instead of each agent re-implementing — and re-misconfiguring — the same integration.

Vendor & model portability

Because permissions and logs are decoupled from the agent, you can swap the underlying LLM (GPT → Claude → Gemini) or client (Cursor → Claude Desktop → custom app) without re-auditing security.

© 2026 Visma