Strategies to Control the Controller
Explore the opportunities and risks of AI using the new Model Context Protocol (MCP), which enables AI to coordinate across tools, perform complex tasks, and report results—while raising important security and operational considerations.
https://delivery-p155402-e1860468.adobeaemcloud.com/adobe/assets/urn:aaid:aem:7eaef054-beca-4f09-8b89-6d0a29fdd584/as/Blog-Security-2026-01-27.avif
business man hand outstretched with lightbulb and connecting icons
2026-01-29T00:00:00.000Z
3
Robert Thomas
Principal Security Consultant
Robert Thomas

New technologies are advancing at a tremendous pace. No area of information technology shows this more than AI. With the recent adoption of a standard Model Context Protocol (MCP) by many of the major players in AI (e.g., OpenAI, Google, Microsoft) it becomes possible to configure agents to perform complex structured tasks, report results, and coordinate across multiple tools and processes seamlessly with the AI acting as the controller.

This new functionality is exciting and brings tremendous opportunities. It also creates and empowers new risks, risks which should be carefully considered. The strength of the model – the ability to perform tasks and assess results in realtime – opens the process to dangers that might not pose similar risks with human operators. A human would know through lateral knowledge that data was unreliable or odd, for example, or that certain combinations of tasks, performed in a particular order, could lead to unexpected outcomes. This is a factor in any unmonitored process; it is likewise a factor with MCP.

First, let’s take a step back. What is MCP? Simply put, it’s a standardized framework for passing commands and information back and forth between a server (the AI) and a given tool. Such a tool can be another AI, a database, the API for another program, SaaS systems, or anything else that has an MCP connector. MCP gives the AI “actor” the ability to call any service it needs from a catalog of known resources. What risks are exposed by this structure?

Risks

  1. Prompt Injection/Context Poisoning
    Anything run by an AI ‘controller’ is subject to prompt injection. Inputs need to be isolated, validated, and screened for tampering before being sent to the AI. AI models are designed primarily to be helpful, so they can and will execute anything asked of them unless specifically told not to do so. Since restricting compliance is challenging once the command is issued, easier to prevent the command from reaching the AI at all. That being said, such a process is never as easy to implement as it is to state. Hidden or obscured instructions, or even poorly written descriptions, can mislead AI models to provide accidentally or intentionally incorrect or even harmful responses to certain requests. Prompt quality, access control, and data governance are key here to preventing misuse and potential errors.
  2. Credential Compromise or Token Theft
    In order to perform its functions, an MCP server has to maintain a storehouse of credentials – accounts and/or authentication tokens – for the services that it utilizes and can call. These credentials must be protected like any other such sensitive information to prevent abuse.
  3. Misconfigured Access for the MCP Server
    Since the MCP server has the ability to do so many things, care must be taken to ensure that its own permission levels are only what is needed to perform its functions. If the server is over-privileged or granted excessive access it can create potential issues by granting unintended abilities like deleting files or databases, changing account permissions, and so forth. This is true even in Proof-of-Concept rollouts, which have a tendency to be granted “all or nothing” permissions; such POC implementations have a nasty habit of rolling into Production without anyone remembering that they were configured this way for ease of demonstrating functionality.
  4. Session Hijacking and Replay Attacks
    In any MCP interaction, sessions have to be established with the tools being used. An attacker with access to the data stream can hijack the token or session ID in use in the same way that such attacks can be performed against other networking services and devices. It is imperative to ensure that networks are secured against unknown devices and monitored for unknown activity.

These risks are all manageable, but they should be known and planned for in order to ensure that they are governed. Basic network and process hygiene can go a long way towards ensuring that such risks are managed. Ensure that networks are secure and that identity and access management processes are in place and enforced. Ensure that use cases – especially for new technologies – are clearly defined and scoped before implementation. Ensure that data governance is performed; source information for AI should be carefully vetted against known legal, regulatory, and internal security requirements in advance before being made available. With these basic protections in place, MCP Servers can take a place among the most valuable tools available for process automation.

To learn more about how ePlus can help you with your AI Security journey, please visit https://eplus.com/solutions/security or click here to contact an ePlus AI Security expert today.

Blog
Security
3