Experiments With Local AI Agents
I did some experimenting with local AI agents over the weekend.
I used DeepSeek‑R1:1.5b, a small reasoning‑focused AI model that runs locally and seems popular for lightweight agent implementations.
I had Ollama running as a local server on my machine, exposed via its HTTP API.
The agent sends prompts to Ollama such as:
POST /api/generate
{
  "model": "deepseek-r1:1.5b",
  "prompt": "Your task is..."
}
Ollama streams the generated tokens back to the Python agent, which wraps the LLM with tools.
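In Python, that request-and-stream exchange can be sketched roughly like this (the endpoint and default port are Ollama's standard ones; `build_payload` and `generate` are illustrative names, not the post's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(prompt: str, model: str = "deepseek-r1:1.5b") -> bytes:
    """Build the JSON request body Ollama's /api/generate expects."""
    return json.dumps({"model": model, "prompt": prompt}).encode()

def generate(prompt: str) -> str:
    """Send a prompt to the local Ollama server and collect the streamed reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    chunks = []
    with urllib.request.urlopen(req) as resp:
        # Ollama streams one JSON object per line; "done": true marks the end
        for line in resp:
            part = json.loads(line)
            chunks.append(part.get("response", ""))
            if part.get("done"):
                break
    return "".join(chunks)
```

Nothing beyond the standard library is needed for this, which keeps the agent easy to run anywhere Python is installed.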
I only defined two tools for now: “search the web” and “read/write files”.
The LLM is instructed to output JSON describing which tool it wants to call.
The agent:
➡️ Parses the model’s JSON output
➡️ Executes the requested tool
➡️ Captures the result
➡️ Sends the result back to the model as new context
This creates a loop and the agent keeps iterating until the model returns a final answer.
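The loop above can be sketched as follows (the tool names, stub implementations, and the plain-text-means-final-answer convention are illustrative assumptions, not the post's actual code):

```python
import json
from pathlib import Path

def run_tool(name: str, args: dict) -> str:
    """Dispatch a tool call; here two trivial stand-ins for the real tools."""
    tools = {
        "web_search": lambda a: f"results for {a['query']}",
        "write_file": lambda a: str(Path(a["path"]).write_text(a["content"])),
    }
    return tools[name](args)

def agent_loop(task: str, llm, max_steps: int = 10) -> str:
    """Call the model, execute any requested tool, feed the result back, repeat."""
    context = task
    for _ in range(max_steps):
        reply = llm(context)
        try:
            call = json.loads(reply)   # the model asked for a tool
        except json.JSONDecodeError:
            return reply               # plain text: this is the final answer
        result = run_tool(call["tool"], call["args"])
        # Append the observation so the next call sees what the tool returned
        context += f"\n\nTool {call['tool']} returned: {result}"
    return "step limit reached"
```

A `max_steps` cap is worth having so a confused model can't spin the loop forever.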
It’s remarkably simple but effective: even though the model is only 1.5B parameters, it performs well when paired with Python execution, external tools and iterative reasoning loops.
An example would be asking the agent to write a blog post on AI agents. The LLM reads the instruction, decides it needs background information and, because the system prompt tells it that it has tools available, calls one. The model triggers the ‘web_search’ tool, which calls the configured search API (in this case Tavily). The LLM receives a structured summary of search results, which gives the model real-world grounding.
Now that it has the research, it calls the ‘write to file’ tool, writes an article to disk and finally outputs a confirmation to the chat window.
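The post doesn't include the actual system prompt, but a rough sketch of the kind of instruction that produces this behaviour (the tool names and JSON shapes here are assumptions) might look like:

```python
# Hypothetical system prompt telling the model how to request tools
SYSTEM_PROMPT = """You are an agent with access to these tools:
- web_search: {"tool": "web_search", "args": {"query": "..."}}
- write_file: {"tool": "write_file", "args": {"path": "...", "content": "..."}}

To use a tool, reply with ONLY the JSON object for that tool call.
When you have everything you need, reply with your final answer as plain text."""
```

The "reply with ONLY the JSON object" instruction is what makes the agent's parsing step reliable enough to dispatch on.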
Unlike OpenClaw, which is system-centric, this small-model approach is LLM-centric: there is no planner or supervisor, as the model itself decides on the next step. Still, this type of minimal agent loop punches above its weight and is uber useful.
I’m thinking of leaving it running and, rather than posting in a chat to have it execute tasks, giving it its own email address or Telegram account and using that to delegate things to it (this is running in my home office, sandboxed in an Ubuntu UTM virtual machine on a spare Mac).
At some point, when I’ve cleaned up the code, I’ll post the repo in the comments.

