Hey there, everyone, and welcome to the latest installment of “Hank shares his AI journey.” 🙂 Artificial Intelligence (AI) continues to be all the rage, and coming back from Cisco Live in San Diego, I was excited to dive into the world of agentic AI.
With announcements like Cisco’s own agentic AI solution, AI Canvas, as well as discussions with partners and other engineers about this next phase of AI possibilities, my curiosity was piqued: What does this all mean for us network engineers? Moreover, how can we start to experiment with and learn about agentic AI?
I began my exploration of the topic of agentic AI, reading and watching a range of content to gain a deeper understanding of the subject. I won’t delve into a detailed definition in this blog, but here are the basics of how I think about it:
Agentic AI is a vision for a world where AI doesn’t just answer the questions we ask, but starts to work more independently. Driven by the goals we set, and using access to the tools and systems we provide, an agentic AI solution can monitor the current state of the network and take actions to ensure our network operates exactly as intended.
Sounds pretty darn futuristic, right? Let’s dive into the technical aspects of how it works. Roll up your sleeves, get into the lab, and let’s learn some new things.
What are AI “tools?”
The first thing I wanted to explore and better understand was the concept of “tools” within this agentic framework. As you may recall, the LLM (large language model) that powers AI systems is essentially an algorithm trained on vast amounts of data. An LLM can “understand” your questions and instructions. On its own, however, the LLM is limited to the data it was trained on. It can’t even search the web for current movie showtimes without some “tool” allowing it to perform a web search.
From the very early days of the GenAI buzz, developers have been building and adding “tools” into AI applications. Initially, the creation of these tools was ad hoc and varied depending on the developer, LLM, programming language, and the tool’s purpose. But recently, a new framework for building AI tools has generated a lot of excitement and is starting to become a new “standard” for tool development.
This framework is called the Model Context Protocol (MCP). Originally developed by Anthropic, the company behind Claude, MCP allows any developer to build tools, called “MCP servers,” and any AI platform can act as an “MCP client” to use those tools. It’s important to remember that we’re still in the very early days of AI and agentic AI; however, at the moment, MCP appears to be the approach for tool building. So I figured I’d dig in and work out how MCP works by building my own very basic NetAI agent.
I’m far from the first networking engineer to want to dive into this space, so I started by reading a couple of very helpful blog posts by my friend Kareem Iskander, Head of Technical Advocacy in Learn with Cisco.
These gave me a jumpstart on the key topics, and Kareem was helpful enough to provide some example code for creating an MCP server. I was ready to explore more on my own.
Creating a local NetAI playground lab
There is no shortage of AI tools and platforms today. There’s ChatGPT, Claude, Mistral, Gemini, and so many more. Indeed, I use many of them regularly for various AI tasks. However, for experimenting with agentic AI and AI tools, I wanted something that was 100% local and didn’t depend on a cloud-connected service.
A primary reason for this was that I wanted to ensure all of my AI interactions remained entirely on my computer and within my network. I knew I’d be experimenting in a completely new area of development. I was also going to send data about “my network” to the LLM for processing. And while I’d be using non-production lab systems for all the testing, I still didn’t like the idea of leveraging cloud-based AI systems. I would feel freer to learn and make mistakes if I knew the risk was low. Yes, low… Nothing is completely risk-free.
Luckily, this wasn’t the first time I’d considered local LLM work, and I had a couple of possible options ready to go. The first is Ollama, a powerful open-source engine for running LLMs locally, or at least on your own server. The second is LMStudio, and while not itself open source, it has an open-source foundation, and it’s free to use for both personal and “at work” experimentation with AI models. When I read a recent blog by LMStudio about MCP support now being included, I decided to give it a try for my experimentation.


LMStudio is a client for running LLMs, but it isn’t an LLM itself. It provides access to a large number of LLMs available for download and running. With so many LLM options available, it can be overwhelming when you get started. The key requirement for this blog post and demonstration is that you need a model that has been trained for “tool use.” Not all models are. And furthermore, not all “tool-using” models actually work with tools. For this demonstration, I’m using the google/gemma-2-9b model. It’s an “open model” built using the same research and tooling behind Gemini.
The next thing I needed for my experimentation was an initial idea for a tool to build. After some thought, I decided a good “hello world” for my new NetAI project would be a way for the AI to send and process “show commands” on a network device. I chose pyATS as my NetDevOps library of choice for this project. In addition to being a library that I’m very familiar with, it has the benefit of automatic output processing into JSON through the library of parsers included in pyATS. Within just a couple of minutes, I was able to generate a basic Python function to send a show command to a network device and return the output as a starting point.
Here’s that code:
def send_show_command(
command: str,
device_name: str,
username: str,
password: str,
ip_address: str,
ssh_port: int = 22,
network_os: Elective[str] = "ios",
) -> Elective[Dict[str, Any]]:
# Construction a dictionary for the system configuration that may be loaded by PyATS
device_dict = {
"units": {
device_name: {
"os": network_os,
"credentials": {
"default": {"username": username, "password": password}
},
"connections": {
"ssh": {"protocol": "ssh", "ip": ip_address, "port": ssh_port}
},
}
}
}
testbed = load(device_dict)
system = testbed.units[device_name]
system.join()
output = system.parse(command)
system.disconnect()
return output
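Before converting this into a tool, it’s easy to sanity-check the function on its own. Here’s a minimal sketch of such a test; the device name, address, and credentials are illustrative lab-only values:

# Quick standalone test of send_show_command.
# The device details below are illustrative lab values, not production data.
if __name__ == "__main__":
    result = send_show_command(
        command="show version",
        device_name="router01",
        username="admin",
        password="cisco123",
        ip_address="10.10.20.171",
    )
    print(result)  # parsed, structured output courtesy of the pyATS parsers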
Between Kareem’s blog posts and the getting-started guide for FastMCP 2.0, I found it was frighteningly easy to convert my function into an MCP server/tool. I just needed to add five lines of code.
from fastmcp import FastMCP

mcp = FastMCP("NetAI Hello World")

@mcp.tool()
def send_show_command(...):
    ...

if __name__ == "__main__":
    mcp.run()
Well… it was ALMOST that easy. I did have to make a few adjustments to the above basics to get it to run successfully. You can see the full working copy of the code in my newly created NetAI-Learning project on GitHub.
As for those few adjustments, the changes I made were:
- A nice, detailed docstring for the function behind the tool. MCP clients use the details from the docstring to understand how and why to use the tool.
- After some experimentation, I opted to use “http” transport for the MCP server rather than the default and more common “STDIO.” The reason I went this way was to prepare for the next phase of my experimentation, when my pyATS MCP server will likely run within the network lab environment itself, rather than on my laptop. STDIO requires the MCP client and server to run on the same host system. The sketch just below shows what that change looks like.
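Here’s a minimal sketch of the transport change, assuming FastMCP 2.x; the host and port values are simply the ones from my lab run:

if __name__ == "__main__":
    # Streamable HTTP instead of the default STDIO transport, so the
    # MCP client can reach the server over the network
    mcp.run(transport="http", host="127.0.0.1", port=8002)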
So I fired up the MCP server, hoping that there wouldn’t be any errors. (Okay, to be honest, it took a couple of iterations in development to get it working without errors… but I’m doing this blog post “cooking show style,” where the boring work along the way is hidden. 😉)
python netai-mcp-hello-world.py

╭─ FastMCP 2.0 ──────────────────────────────────────────────╮
│                                                            │
│         [FastMCP 2.0 ASCII art logo]                       │
│                                                            │
│    🖥️ Server name:      FastMCP                            │
│    📦 Transport:        Streamable-HTTP                    │
│    🔗 Server URL:       http://127.0.0.1:8002/mcp/         │
│                                                            │
│    📚 Docs:             https://gofastmcp.com              │
│    🚀 Deploy:           https://fastmcp.cloud              │
│                                                            │
│    🏎️ FastMCP version:  2.10.5                             │
│    🤝 MCP version:      1.11.0                             │
│                                                            │
╰────────────────────────────────────────────────────────────╯

[07/18/25 14:03:53] INFO  Starting MCP server 'FastMCP' with transport 'http' on http://127.0.0.1:8002/mcp/  server.py:1448
INFO:     Started server process [63417]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8002 (Press CTRL+C to quit)
The next step was to configure LMStudio to act as the MCP client and connect to the server so it has access to the new “send_show_command” tool. While not standardized, most MCP clients use a very common JSON configuration to define the servers. LMStudio is one of these clients.
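As a rough sketch, a server entry for an HTTP-transport MCP server can look like the following; the server name “netai-pyats” is an illustrative placeholder, and the URL matches the one the FastMCP banner printed above (see the mcp-server-config.json in my GitHub project for the exact file I used):

{
  "mcpServers": {
    "netai-pyats": {
      "url": "http://127.0.0.1:8002/mcp/"
    }
  }
}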


Wait… if you’re wondering, “Where’s the network, Hank? What device are you sending the ‘show commands’ to?” No worries, my inquisitive friend: I created a very simple Cisco Modeling Labs (CML) topology with a couple of IOL devices configured for direct SSH access using the PATty feature.


Let’s see it in action!
Okay, I’m sure you’re ready to see it in action. I know I sure was as I was building it. So let’s do it!
To start, I instructed the LLM on how to connect to my network devices in the initial message.


I did this because the pyATS tool needs the address and credential information for the devices. In the future, I’d like to look at MCP servers for different source-of-truth options like NetBox and Vault so it can “look them up” as needed. But for now, we’ll start simple. That initial message went something like this.
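(The wording below is a paraphrase, and the hostnames, addresses, and credentials are illustrative lab-only values:)

You have access to a tool that can run show commands on my network devices.
My devices are router01 (10.10.20.171) and switch01 (10.10.20.172), both
reachable over SSH on port 22 with username "admin" and password "cisco123",
and both running IOS.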
First question: Let’s ask about software version information.


You can see the details of the tool call by diving into the input/output screen.


This is pretty cool, but what exactly is happening here? Let’s walk through the steps involved.
- The LLM client starts and queries the configured MCP servers to discover the tools available.
- I send a “prompt” to the LLM to consider.
- The LLM processes my prompt. It “considers” the different tools available and whether they might be relevant as part of building a response to the prompt.
- The LLM determines that the “send_show_command” tool is relevant to the prompt and builds a proper payload to call the tool. (There’s an example payload just after this list.)
- The LLM invokes the tool with the proper arguments from the prompt.
- The MCP server processes the tool call request from the LLM and returns the result.
- The LLM takes the returned results, together with the original prompt/question, as the new input to use to generate the response.
- The LLM generates and returns a response to the query.
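To make that tool call step concrete: MCP messages are JSON-RPC 2.0, so the request the client sends to the MCP server looks roughly like this (the arguments reuse the illustrative lab values from earlier):

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "send_show_command",
    "arguments": {
      "command": "show version",
      "device_name": "router01",
      "username": "admin",
      "password": "cisco123",
      "ip_address": "10.10.20.171",
      "ssh_port": 22,
      "network_os": "ios"
    }
  }
}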
This isn’t all that different from what you might do if you were asked the same question.
- You’d consider the question, “What software version is router01 running?”
- You’d think about the different ways you could get the information needed to answer the question. Your “tools,” so to speak.
- You’d decide on a tool and use it to gather the information you needed. Probably SSH to the router and run “show version.”
- You’d review the returned output from the command.
- You’d then respond to whoever asked you the question with the proper answer.
Hopefully, this helps demystify a bit of how these “AI agents” work under the hood.
How about one more example? Perhaps something a bit more complex than simply “show version.” Let’s see if the NetAI agent can help identify which switch port a host is connected to, by describing the basic process involved.
Here’s the question (sorry, prompt) that I submit to the LLM:
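(Paraphrased, with an illustrative host address:)

A host with IP address 10.10.20.50 is connected to my network somewhere.
Which switch port is it connected to? You may need to gather information
from more than one device to answer this.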


What we should notice about this prompt is that it requires the LLM to send and process show commands from two different network devices. Just like in the first example, I do NOT tell the LLM which commands to run. I only ask for the information I need. There is no “tool” that knows the IOS commands. That knowledge is part of the LLM’s training data.
Let’s see how it does with this prompt:


And look at that: it was able to handle the multi-step task to answer my question. The LLM even explained which commands it was going to run and how it was going to use the output. And if you scroll back up to the CML network diagram, you’ll see that it correctly identifies interface Ethernet0/2 as the switch port to which the host was connected.
So what’s next, Hank?
Hopefully, you found this exploration of agentic AI tool creation and experimentation as interesting as I have. And maybe you’re starting to see the possibilities for your own daily use. If you’d like to try some of this out on your own, you can find everything you need in my netai-learning GitHub project.
- The mcp-pyats code for the MCP server. You’ll find both the simple “hello world” example and a more developed work-in-progress tool that I’m adding additional features to. Feel free to use either.
- The CML topology I used for this blog post. Though any network that’s SSH reachable will work.
- The mcp-server-config.json file that you can reference for configuring LMStudio.
- A “System Prompt Library” where I’ve included the system prompts for both a basic “Mr. Packets” network assistant and the agentic AI tool. These aren’t required for experimenting with NetAI use cases, but system prompts can be helpful to ensure you get the results you’re after with an LLM.
A couple of “gotchas” I wanted to share that I encountered during this learning process, which I hope might save you some time:
First, not all LLMs that claim to be “trained for tool use” will work with MCP servers and tools. Or at least not with the ones I’ve been building and testing. Specifically, I struggled with Llama 3.1 and Phi 4. Both seemed to indicate they were “tool users,” but they failed to call my tools. At first, I thought this was due to my code, but once I switched to Gemma 2, things worked immediately. (I also tested with Qwen3 and had good results.)
Second, once you add the MCP server to LMStudio’s “mcp.json” configuration file, LMStudio initiates a connection and maintains an active session. This means that if you stop and restart the MCP server code, the session is broken, giving you an error in LMStudio on your next prompt submission. To fix this issue, you’ll need to either close and restart LMStudio or edit the “mcp.json” file to delete the server, save it, and then re-add it. (There’s a bug filed with LMStudio on this problem. Hopefully, they’ll fix it in an upcoming release, but for now, it does make development a bit annoying.)
As for me, I’ll continue exploring the concept of NetAI and how AI agents and tools can make our lives as network engineers more productive. I’ll be back here with my next blog once I have something new and interesting to share.
In the meantime, how are you experimenting with agentic AI? Are you excited about the potential? Any recommendations for an LLM that works well with network engineering data? Let me know in the comments below. Talk to you all soon!
Sign up for Cisco U. | Join the Cisco Learning Network today for free.
Learn with Cisco
X | Threads | Facebook | LinkedIn | Instagram | YouTube
Use #CiscoU and #CiscoCert to join the conversation.