Alright, my friends, I’m back with another post based on my learnings and exploration of AI and how it’ll fit into our work as network engineers. In today’s post, I want to share the first (of what will likely be many) “nerd knobs” that I think we all should be aware of, and how they will affect our use of AI and AI tools. I can already sense the excitement in the room. After all, there’s not much a network engineer likes more than tweaking a nerd knob in the network to fine-tune performance. And that’s exactly what we’ll be doing here: fine-tuning our AI tools to help us be more effective.
First up, the requisite disclaimer or two.
- There are SO MANY nerd knobs in AI. (Shocker, I know.) So, if you all like this kind of blog post, I’d be happy to come back in other posts where we look at other “knobs” and settings in AI and how they work. Well, I’d be happy to come back once I understand them, at least. 🙂
- Changing any of the settings in your AI tools can have dramatic effects on the results. This includes increasing the resource consumption of the AI model, as well as increasing hallucinations and decreasing the accuracy of the information that comes back from your prompts. Consider yourselves warned. As with all things AI, go forth and explore and experiment. But do so in a safe lab environment.
For today’s experiment, I’m once again using LM Studio running locally on my laptop rather than a public or cloud-hosted AI model. For more details on why I like LM Studio, check out my last blog, Creating a NetAI Playground for Agentic AI Experimentation.
Enough of the setup, let’s get into it!
The impact of working memory size, a.k.a. “context”
Let me set a scene for you.
You’re in the middle of troubleshooting a network issue. Someone reported, or noticed, instability at a point in your network, and you’ve been assigned the joyful task of getting to the bottom of it. You’ve captured some logs and relevant debug information, and the time has come to go through it all to figure out what it means. But you’ve also been using AI tools to be more productive, 10x your work, impress your boss, you know, all the things that are happening right now.
So, you decide to see if AI can help you work through the data faster and get to the root of the issue.
You fire up your local AI assistant. (Yes, local, because who knows what’s in those debug messages? Best to keep it all safe on your laptop.)
You tell it what you’re up to, and paste in the log messages.


After getting 120 or so lines of logs into the chat, you hit enter, kick up your feet, reach for your Arnold Palmer for a refreshing drink, and wait for the AI magic to happen. But before you can take a sip of that iced tea and lemonade goodness, you see this has suddenly popped up on the screen:


Oh my.
“The AI has nothing to say.”!?! How could that be?
Did you find a question so difficult that AI can’t handle it?
No, that’s not the problem. Check out the helpful error message that LM Studio has kicked back:
“Trying to keep the first 4994 tokens when context overflows. However, the model is loaded with context length of only 4096 tokens, which is not enough. Try to load the model with a larger context length, or provide a shorter input.”
And we’ve gotten to the root of this perfectly scripted storyline and demonstration. Every AI tool out there has a limit to how much “working memory” it has. The technical term for this working memory is “context length.” If you try to send more data to an AI tool than can fit into its context length, you’ll hit this error, or something like it.
The error message indicates that the model was “loaded with context length of only 4096 tokens.” What’s a “token,” you wonder? Answering that would be a topic for an entirely different blog post, but for now, just know that “tokens” are the unit of size for the context length. And the very first thing that happens when you send a prompt to an AI tool is that the prompt is converted into “tokens.”
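If you want a rough sense of how many tokens a pile of logs will consume before you paste them in, a back-of-the-napkin estimate goes a long way. Here’s a minimal Python sketch. The file name is a placeholder, and the four-characters-per-token ratio is just a common rule of thumb; the exact count depends on the tokenizer the model uses.

```python
# Rough token estimate for a log file before sending it to the model.
# Assumption: ~4 characters per token, a common rule of thumb for
# English text; the true count depends on the model's tokenizer.
from pathlib import Path

CONTEXT_LENGTH = 4096   # the context length the model was loaded with
CHARS_PER_TOKEN = 4     # heuristic only, not exact

log_text = Path("switch-debug.log").read_text()  # placeholder file name
estimated_tokens = len(log_text) // CHARS_PER_TOKEN

print(f"Estimated tokens: {estimated_tokens} (budget: {CONTEXT_LENGTH})")
if estimated_tokens > CONTEXT_LENGTH:
    print("These logs likely won't fit. Raise the context length or trim the input.")
```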
So what can we do? Well, the message gives us two possible options: we can increase the context length of the model, or we can provide shorter input. Sometimes it isn’t a big deal to provide shorter input. But other times, like when we’re dealing with large log files, that option isn’t practical: all of the data is important. (When trimming the input IS acceptable, one common trick is chunking, as in the sketch below.)
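Here’s a minimal sketch of that chunking idea: split the logs on line boundaries into pieces that each fit inside the context, then work through them one prompt at a time. The token budget is deliberately smaller than the context length to leave room for your prompt and the model’s reply; the helper name and the numbers are illustrative assumptions.

```python
# Minimal chunking sketch: split a large log into pieces that should
# each fit within the model's context window.
def chunk_text(text: str, max_tokens: int = 3500, chars_per_token: int = 4) -> list[str]:
    """Split text on line boundaries into context-sized chunks."""
    max_chars = max_tokens * chars_per_token  # same rough heuristic as above
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

# Each chunk can then be sent to the assistant as its own prompt.
```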
Time to turn the knob!
It’s that first option, loading the model with a larger context length, that’s our nerd knob. Let’s turn it.
From within LM Studio, head over to “My Models” and click to open up the configuration settings interface for the model.


You’ll get a chance to view all the knobs that AI models have. And as I mentioned, there are a lot of them.


But the one we care about right now is the Context Length. We can see that the default length for this model is 4096 tokens, but it supports up to 8192 tokens. Let’s max it out!


LM Studio provides a helpful warning and a likely reason why the model doesn’t default to the max. The context length takes memory and resources, and raising it to “a high value” can impact performance and usage. So if this model had a max length of 40,960 tokens (the Qwen3 model I often use has a max that high), you might not want to just max it out immediately. Instead, increase it a bit at a time to find the sweet spot: a context length big enough for the job, but not oversized.
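Why does a bigger context eat memory? A large part of it is the KV cache: the model stores key and value tensors for every token in the context, at every layer. Here’s a back-of-the-napkin sketch of that math. The model dimensions below are made-up numbers purely for illustration, not the specs of any particular model.

```python
# Rough estimate of KV-cache memory as context length grows.
# All model dimensions are illustrative assumptions.
N_LAYERS = 32        # transformer layers (assumed)
N_KV_HEADS = 8       # key/value attention heads (assumed)
HEAD_DIM = 128       # dimension per head (assumed)
BYTES_PER_VALUE = 2  # 16-bit cache entries

def kv_cache_bytes(context_tokens: int) -> int:
    # 2x for keys and values, stored per layer for every token in context
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE * context_tokens

for ctx in (4096, 8192, 40960):
    print(f"{ctx:>6} tokens -> ~{kv_cache_bytes(ctx) / 2**20:,.0f} MiB of KV cache")
```

With these made-up dimensions, doubling the context from 4096 to 8192 tokens doubles the cache from roughly 512 MiB to a full GiB, which is exactly why “increase it a bit at a time” is good advice.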
As network engineers, we’re used to fine-tuning knobs for timers, frame sizes, and so many other things. This is right up our alley!
Once you’ve updated your context length, you’ll need to “Eject” and “Reload” the model for the setting to take effect. But once that’s done, it’s time to enjoy the change we’ve made!
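The eject-and-reload step makes sense when you remember that context length is a load-time parameter in llama.cpp-style runtimes like the ones LM Studio builds on: memory for the context is allocated when the model loads. If you ever drive a model directly from code, it’s the same knob. Here’s a minimal sketch using the llama-cpp-python library, with a placeholder model path.

```python
# Context length is set when the model is loaded, which is why LM Studio
# needs a reload for the new value to take effect.
from llama_cpp import Llama

llm = Llama(
    model_path="models/my-model.gguf",  # placeholder path to a local GGUF model
    n_ctx=8192,                         # the "context length" nerd knob
)
```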


And look at that: with the larger context window, the AI assistant was able to go through the logs and give us a nice write-up about what they show.
I particularly like the shade it threw my way: “…consider seeking assistance from … a qualified network engineer.” Well played, AI. Well played.
But bruised ego aside, we can continue the AI-assisted troubleshooting with something like this.


And we’re off to the races. We’ve been able to leverage our AI assistant to:
- Process a significant amount of log and debug data to identify potential issues
- Develop a timeline of the problem (which will be super useful in the help desk ticket and root cause analysis documents)
- Identify some next steps we can take in our troubleshooting efforts.
All stories must end…
And there you have it: our first AI Nerd Knob, Context Length. Let’s review what we learned:
- AI models have a “working memory” that’s called “context length.”
- Context length is measured in “tokens.”
- Oftentimes an AI model will support a higher context length than the default setting.
- Increasing the context length will require more resources, so make changes slowly; don’t just max it out completely.
Now, depending on which AI tool you’re using, you may NOT be able to adjust the context length. If you’re using a public AI like ChatGPT, Gemini, or Claude, the context length will depend on the subscription and models you have access to. However, there most definitely IS a context length that factors into how much “working memory” the AI tool has. And being aware of that fact, and its impact on how you can use AI, is important. Even when the knob in question is behind lock and key. 🙂
If you enjoyed this look under the hood of AI and want to learn about more options, please let me know in the comments: Do you have a favorite “knob” you like to turn? Share it with all of us. Until next time!
PS… If you’d like to learn more about using LM Studio, my friend Jason Belk put together a free tutorial called Run Your Own LLM Locally For Free and with Ease that can get you started very quickly. Check it out!
Sign up for Cisco U. | Join the Cisco Learning Network today for free.
Learn with Cisco
X | Threads | Facebook | LinkedIn | Instagram | YouTube
Use #CiscoU and #CiscoCert to join the conversation.