Does Gabriel still do tech or is it all just weight loss now?
I’ll admit my weight loss journey and IRL activities are taking up a great deal of my time these days, but I’m still very interested in expanding my technical knowledge! I hope my techie friends will appreciate this post where I try out AI agents so they don’t have to. Of course I will be posting another weight loss update soon, things are progressing well!
My first foray into automation software was an Android application that let you build all kinds of fancy flows. I genuinely believe accessible automation tools are useful enough that any OS should have them built in. Accessibility is a real way to give users power over their own computing. I’ve noticed that most of the refined automation solutions are commercial products, which in turn means becoming even more dependent on Big Tech infrastructure. For those of us comfortable with shell scripting, a node-based automation tool can seem like overkill, but there are many people who could benefit from one. The general idea of flows and automation graphs is genuinely interesting, and I’m hoping we see some exciting developments there in the software freedom space.
But this post is about my latest dabbling in the potential of self-hosted “AI agents”. After speaking with James Corbett about many concerns related to artificial intelligence tools, I decided it would be worth spending the time to get a better grasp of what’s possible. Since this was only a brief look at these tools, I am in no way claiming to have advanced knowledge of their use. I am sure many of the problems I encountered have well-known solutions, but I mostly wanted to see how far I could get by experimenting. This is my personal experience with learning how to use self-hosted tools for automation and AI agents.
WTF is an ‘AI agent’?

AI Agent node in n8n
My uncharitable explanation would be that an AI agent is merely the introduction of LLM chatbots into an automated process. For example, instead of parsing an email for keywords to react to, you can feed the content of the message to an LLM and let it decide what to do. Depending on the model being used, a variety of tools can be given to the LLM to perform various actions. Since an LLM can generate a list of steps, the idea is to provide it the tools to actually run those steps.
Having spent some time tinkering with it, I can understand the appeal. By burning an insane amount of resources you can hopefully have the AI agent smooth over trivial problems that arise in your flow. Instead of worrying about minor details, the idea is to get the LLM to simply make the best of the inputs. Provided the problem is broken up into simple enough choices, I can imagine this being quite powerful in specific contexts.
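The core idea can be sketched in a few lines of plain JavaScript. This is purely illustrative — the tool names, message fields, and dispatch structure are my own inventions, not n8n’s API — but it shows the shape of “the model picks which tool to run”:

```javascript
// Illustrative sketch of the "LLM picks a tool" idea.
// In a real agent, decideAction would be a call to an LLM; here it is
// just a function we pass in, so the dispatch logic can stand alone.
const tools = {
  archive: (msg) => `archived: ${msg.subject}`,
  reply: (msg) => `replied to: ${msg.from}`,
  flag: (msg) => `flagged: ${msg.subject}`,
};

function runAgent(decideAction, message) {
  const action = decideAction(message); // stand-in for the LLM's decision
  const tool = tools[action];
  if (!tool) throw new Error(`unknown tool: ${action}`);
  return tool(message); // actually execute the chosen step
}
```

The whole trick of agent frameworks is constraining the model’s output to a valid tool name; everything after that is ordinary dispatch.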
What I find bizarre is how the so-called “reasoning models” begin their output with self-generated instructions inside <think> tags. This means that for every prompt, there is a beginning section where the instructions are reiterated and elaborated on. I can see why this would help produce more useful outputs, but it comes at the cost of additional tokens and attention. It also means that when using these models you’ll have to filter out the <think> content before responding. I have a feeling the publicly accessible AI chatbots have exactly this kind of sanitization step before outputs are presented to the user.
‘Reasoning’ example
<think> Okay, so the user wants me to create a concise outline for an article about Mexico’s telecom reform and human rights risks. They’ve shared what appears to be part of the HTML source code for the press release.
{Multiple paragraphs about the article’s content}
For the heading, something catchy but also formal would work well to capture attention while maintaining professionalism. Then I’ll craft a few paragraphs summarizing the main points of the article and explaining its significance in terms that show both immediate impact on Mexico citizens and potential broader implications. </think>
To me, it’s fascinating that this “reasoning” by the model is longer than a decent summary would be. It’s definitely possible that this tactic creates significantly better outputs, but it comes at the cost of much more compute every step of the way. Part of how AI agents are sold is the idea that with dozens of them you can replace real human teams. It seems clear to me that to the degree you can replace human workers with LLM-driven automation, it may end up costing multiples of their income in raw electricity, never mind the potential legal and copyright issues.
How did I set it up?
While I wanted to dip my toes into understanding AI agents, my principal goal was to understand what is possible while self-hosting. I’m sure there are many different tools for building and using AI agents in the cloud; I preferred self-hosted, and ideally free software, options. With a limited look around, I didn’t find Free as in Freedom tools for the automation side, never mind the licensing around particular LLM models. So I essentially had to settle for open-source and self-hostable. If you’re aware of fully-free AI and automation suites, I would love to know.
Based on an admittedly quick search, the simplest AI agent tool I came across was n8n, which is open source but certainly not focused on Free Software. What I did appreciate was its simplicity and support for RSS feeds. I had a simple idea for what I wanted to try out, and RSS support made it a lot easier. The project’s website has a collection of templates you can import for your own uses. That’s pretty sweet, but I’ll elaborate more on that later. You can install n8n directly or run it as a Docker image. It is a web-based GUI (a WUI?) for building automated workflows and running AI agents.
For your AI agent to work, it needs to connect to an LLM API, such as the Big Tech ones, but I wanted to try self-hosted. Originally I wanted to see if I could get GPT4All working, because the docs claim it supports a server mode, but I couldn’t get it going. Instead I opted for Ollama, which lets you serve many LLMs.
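For reference, Ollama exposes a plain HTTP API on port 11434 by default, and a chat request looks roughly like the sketch below. The model name and prompt are just examples; I’m assuming the default port and a non-streaming request:

```javascript
// Sketch: talking to a local Ollama server's /api/chat endpoint.
// Assumes Ollama is running on its default port with the model pulled.
function buildChatRequest(model, prompt) {
  return {
    model, // e.g. "qwen3"
    messages: [{ role: "user", content: prompt }],
    stream: false, // ask for one JSON reply instead of a token stream
  };
}

async function askOllama(model, prompt) {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(model, prompt)),
  });
  const data = await res.json();
  return data.message.content; // the assistant's reply text
}
```

n8n’s Ollama credential boils down to the same thing: a base URL pointing at that local server.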

Configuring self-hosted ollama on n8n
With Ollama serving Qwen3 over the API, I was ready to start my AI agent journey. I wanted to come up with something that could actually be useful for me, so I would be motivated to get it working well. I decided to see if I could build a tool that would read some of my RSS feeds and write me a simple report on pressing issues. The hope was that with a self-hosted model, selected RSS feeds, and a constrained focus, many potential censorship issues could be eliminated. If it worked, it could absolutely save me the toil of keeping up to date on many things I struggle to follow. I wasn’t aiming for perfection, nor to automate what I care about, just to see if such a tool could save me time.
How did it go?
After a great deal of tinkering, I got it to a point where it provides a reasonably useful output. There are many things I would fix if I really wanted to devote the time to perfecting it, but I’m not convinced it would be worth the effort. I really liked playing with n8n, but I’m convinced AI agents are wasteful trash. Part of this is that even though I’m providing a set of information to work with, it will still hallucinate additional articles. There are many quality control issues around repeated articles and inconsistent formatting. I’m aware you can fix some of this with better prompts, but I expect the path of least resistance for many will be to simply add a “checker” agent and recalculate the output. This makes zero economic sense when one is paying for their own electricity. I believe this will be the mechanism that pushes people onto artificially subsidized Big Tech infrastructure rather than decentralized or self-sovereign AI.
In case you’re curious, my flow begins with a list of RSS feeds that can be updated however I want. It wouldn’t be hard to have it pull from a server or an .opml file. It then loops over the list to grab each feed’s contents. Simple sort and limit nodes are used to keep only the most recent items. Then each article is given to the “Summarizer” AI agent, which does a not-terrible job of converting RSS content into a few paragraphs. I then have to filter out the <think> content before passing it along. I also merge the summaries with the original list so that I can preserve attribution. Another code node combines and formats the articles and their summaries so they can be sent to the “report writer” AI agent. The report writer takes all the articles and summaries and is supposed to create a pretty HTML email report. Yet again I have to use a code node to remove the nasty <think> tags from the output, along with other irregularities. Then the final report is sent to me as-is, without any interaction on my part.
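The think-tag cleanup is only a few lines in a Code node. Something like this regex-based filter is the general shape — the `summary` field name in the comment is illustrative, not my exact workflow:

```javascript
// Cleanup step: remove <think>…</think> blocks from a reasoning model's output.
function stripThink(text) {
  // [\s\S] matches across newlines; the ? keeps each match non-greedy
  // so multiple think blocks are removed independently
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}

// In an n8n Code node, you would map this over the incoming items,
// roughly like (field name "summary" is an assumption):
// return items.map((item) => ({
//   json: { ...item.json, summary: stripThink(item.json.summary) },
// }));
```

It’s a blunt instrument, but since the tags are machine-generated delimiters rather than user content, a regex is good enough here.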

When the report writer isn’t just hallucinating articles, the proper link is there almost all of the time. But often enough it will simply link to example.com if it has any trouble. Not only that, but you will often see the same article/summary/link combo multiple times in the same email. I am convinced it would be much better to simply use a code node to build the HTML email from the data than to rely on the “AI Agent” feature for that specific task. If I were to go back and fix things, I would have the “report writer” agent simply look over the summaries and write a report, which would then be used to enhance the email.
Ironically, the fact that a bunch of code nodes were needed at all is a bit disappointing to me. It seems that, as someone with technical ability, I’d be better off scripting on my own in whatever language I like to talk to Ollama than writing JavaScript inside the n8n suite. But I will say that n8n has a lot of useful built-in nodes that make it very easy to get started quickly. Having done all this, I think AI agents are still quite far from being something one can expect in a freedom-respecting environment. The path of least resistance will always be to simply cobble together Big Tech systems rather than to actually support decentralization and self-sovereign computing.
Closing thoughts
I can definitely say the short time I spent looking at this was valuable. I’ve gained a more refined understanding of how these systems are used, and of what can be done beyond merely copy-pasting your homework into ChatGPT. I am genuinely impressed with what is possible from a self-hosted setup, and can imagine many valid use-cases for the software. This has shown me that we may need to rethink what making computing free (as in freedom) means for people with less technical ability. I’m definitely interested in learning what kinds of free software exist for automation, and in observing projects like unit. As always, I’m convinced there is a remarkable amount of opportunity if independent technical minds are supported to do what they do best.
Doing this has congealed my feelings about corporate open source quite a bit. I used to be relatively ambivalent towards it because, on some level, the money has to come from somewhere. But I’m beginning to recognize first-hand what many others have warned about for quite some time: Free computing and non-free computing are at a bifurcation point. The middle of the road is gone; it is clear we will have to choose what kind of technological future we want to participate in.
When it comes to AI agents, I can definitely see the dark side. Just search “generate post” on the workflows page and you’ll see how many flows exist for generating information pollution. The fact that the online hustler of the 2020s can generate slop without any technical ability is yet another nail in the dead Internet coffin. I think it’s important for me to share this exploration so that people can imagine what tools are being used by governments, corporations, and other entities to carry out their duties, and what kinds of impact this can have on people.
I am now convinced more than ever that the destruction of the open web (for non-technical people) is the point of pushing “AI” into everything. I fully believe that the free and open web will always exist in some form, but it is clear to me that it can’t be sustained via any commercial means. In some ways this might be the best thing ever: the future of the indieweb will be bright if it’s maintained by people out of passion and care rather than clicks and comments. It just means that the minds of the public will need to be prepared for a much more hostile information environment than what we’ve experienced to date.
In short, AI critics actually don’t hate AI enough. This may sound extreme. Even if I grant that these tools work much better on Big Tech infrastructure, that won’t fix everything. The fundamental practice of inserting LLMs into automated flows seems far more wasteful than paying people to do the work. It is clear to me that the AI hype isn’t actually about sound economics but about economic warfare. It won’t matter that doing things with AI agents instead of paid staff will likely cost 10x what it would cost to pay a human a real wage; what matters is that consolidating the technological landscape will yield more returns than money can buy. The point is that using LLMs to fix problems created by LLMs is the robotic equivalent of bullshit jobs: spending money on robotic make-work to manufacture demand for AI “solutions”.
As I stated in my chat with Corbett, I don’t believe that technology = implementation. I don’t think machine learning is evil, nor that we can’t build useful tools with it. I am, however, very concerned about how political and economic forces are shaping this industry to build “aligned” AI against the public. This has been conspicuously absent from high-profile conversations about “AI safety”, and I expect it to be a driving force behind many near-term trends.