I’ve been following the recent explosion of OpenClaw (formerly Clawdbot/Moltbot) on GitHub. For those who haven’t seen it, it’s a local, agentic AI that runs on your own hardware and interacts with you via your preferred messaging apps (Telegram, Signal, iMessage, etc.).
It struck me that this feels exactly like the “Personal AI” vision Sir Tim Berners-Lee frequently describes—an agent that works for me, not a corporation, and stays on my infrastructure.
Sir Tim often talks about “Charlie,” an AI that knows your schedule, your health goals, and your family’s needs because it has access to your data. Currently, OpenClaw “lives” on a local machine and reads local files. Imagine if we bridged OpenClaw with Solid Pods.
Instead of the AI scraping our data into a corporate silo, it would use our WebID to authenticate to our Pods, fetch the data it needs to perform a task, and then “relock” the vault.
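For what it’s worth, the scoping half of that flow already exists in Solid’s access-control layer. Here’s a minimal sketch of a Web Access Control (WAC) rule granting an agent’s WebID read-only access to a single container; all the URLs are made up for illustration:

```turtle
@prefix acl: <http://www.w3.org/ns/auth/acl#>.

# Hypothetical: give the agent's WebID read-only access to the
# /groceries/ container (and its contents), and nothing else.
<#agentReadOnly>
    a acl:Authorization ;
    acl:agent <https://example.org/openclaw-agent/profile#me> ;
    acl:accessTo <https://alice.example.pod/groceries/> ;
    acl:default <https://alice.example.pod/groceries/> ;
    acl:mode acl:Read .
```

The Pod server enforces this regardless of what the agent asks for, which is exactly the “fetch what it needs, nothing more” behaviour you describe.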
Has anyone in the community experimented with this yet? It feels like OpenClaw provides the “Brain and Hands” that the Solid “Storage” layer has been missing to truly go mainstream.
I’d love to hear if anyone is working on a Solid “Skill” for OpenClaw!
I have been thinking about this for a while, even though I haven’t done anything about it yet… But one thing I’m not super comfortable with here is what Simon Willison calls the “lethal trifecta.” I wouldn’t be very comfortable exposing all my private data to the current setup.
It’s really cool to have an AI running on a personal device, but the way OpenClaw seems to work, it still ends up calling Claude Code and all of these other AI models. And sure, you can also run local LLMs, but I don’t know how many people are going to do that. The whole point of Charlie is that the AI works for you, but if you’re still using an external model (or even a local model that you didn’t train), I’m not sure that’s going to happen.
One thing I can say for sure is that I don’t think the AIs are going to do Solid properly. I don’t think it’d be as simple as adding a skill. Even though this is spiritually similar to Charlie, I’m not sure how feasible it is to have it working with Solid.
I worked on a small research paper a while ago which went in a different direction. Google has produced on-device LLMs which you can load from a model file and run in Kotlin code. I looked at using that with OIDC authentication from an app to your Solid Pod, then using the on-device LLM to answer questions about my data. The engineering was all there, but it definitely needed some work to be more flexible.

I do think you should give an AI agent credentials for your Pod so it can perform operations, but autonomous agents in their current form can’t be trusted to handle security risks properly. So gate them with scoped tokens or permissions over your personal data: even if the agent tries to change something remotely, it can’t, because of the access controls inherent to your Pod. One obvious issue with this approach is that all of it lives in the same software environment. User-auth keys may need to live in a separate sandbox that the AI has no access to, or something similar.
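The token-gating idea can be sketched independently of any particular stack. Below is a hypothetical TypeScript wrapper (all names invented) that holds a fixed set of granted modes and path prefixes and refuses anything outside them. The point is that the denial happens in plain code the model never sees, not in a prompt it could be injected around:

```typescript
// Hypothetical sketch: a capability object the agent receives instead of
// raw Pod credentials. Modes and path prefixes are fixed at construction.
type Mode = "read" | "write";

class PodCapability {
  constructor(
    private readonly modes: ReadonlySet<Mode>,
    private readonly prefixes: readonly string[],
  ) {}

  // True only if the mode was granted AND the resource sits under
  // one of the allowed path prefixes.
  allows(mode: Mode, resourcePath: string): boolean {
    return (
      this.modes.has(mode) &&
      this.prefixes.some((p) => resourcePath.startsWith(p))
    );
  }

  // The agent calls this; the check is enforced here, in code,
  // not by the model.
  request(mode: Mode, resourcePath: string): string {
    if (!this.allows(mode, resourcePath)) {
      throw new Error(`denied: ${mode} ${resourcePath}`);
    }
    return `${mode} ${resourcePath}`; // placeholder for the real fetch
  }
}
```

So a grocery assistant would get `new PodCapability(new Set<Mode>(["read"]), ["/groceries/"])`, and even a fully prompt-injected agent holding that object has no way to write anywhere.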
How this relates to Clawdbot is that it has high-level access to messaging apps, email, and so on. So if it gets prompt-injected, most of these messaging apps have no guardrails for that scenario, and anything Clawdbot sends can be executed. If more companies treated AI agents as silly interns rather than senior managers, they might realize that at this point it’s less about teaching the AI guardrails and more about making the surrounding systems robust, as if a clueless intern were working on them.
I agree with @NoelDeMartin about doing Solid properly. My intuitive approach to AI in Solid would be less “let an AI have access to all available resources/applications and make decisions” and more “my Solid AI can access my grocery list, weather app, and messages,” setting permissions from there with the folders synchronized to your Pod space. I also think tool usage is being undervalued here. AI should have a set of strong tools to make decisions with; as the old saying goes, if all you have is a hammer, everything looks like a nail. So let’s give our AI agents good tools in the Solid ecosystem from the get-go.
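That “my AI can access my grocery list, weather, and messages, and nothing else” framing could be sketched as a tool registry where every tool is registered together with the Pod folders it may touch, and the dispatcher enforces that mapping before anything runs. This is a hypothetical sketch, all names invented:

```typescript
// Hypothetical: each tool declares the Pod folders it may touch;
// the dispatcher enforces that declaration before running the tool.
type Tool = {
  name: string;
  folders: readonly string[]; // Pod folders this tool may operate on
  run: (resourcePath: string) => string;
};

class ToolRegistry {
  private tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  // Run a tool only against a resource inside its declared folders.
  dispatch(name: string, resourcePath: string): string {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    if (!tool.folders.some((f) => resourcePath.startsWith(f))) {
      throw new Error(`tool ${name} may not touch ${resourcePath}`);
    }
    return tool.run(resourcePath);
  }
}
```

A groceries tool registered with `folders: ["/groceries/"]` can then be dispatched against the grocery list, but asking it to read `/messages/` fails in the registry itself, before the model’s output ever reaches the Pod.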