Clawdbot Aka Moltbot Aka OpenClaw
January was a crazy month for AI agents. It started with a lot of experiments from people really getting to know Claude Code, largely because of how good the Opus 4.5 model was; people heading into the December holidays had time to actually try it out, and try it out extensively. So the hype leading into January was all about how, once again, SWE is dead and AI is coming for your jobs AGAIN!

Then came the craze of comparing the two systems head-to-head: Codex vs. Claude Code. I think the Opus 4.5 model is better overall—not necessarily better than Codex 5.2 at coding specifically, but in terms of speed, accuracy, tool calling, and quality combined. It just seemed like a much better model. It has more feeling to it—I don't know how to explain that, but Anthropic did pretty well (check it out: https://www.anthropic.com/news/claude-new-constitution).
Some things that stood out for me in the soul doc from Anthropic:
- "We want Claude to be exceptionally helpful while also being honest, thoughtful, and caring about the world." - This foundational statement defines Claude's core identity as more than a tool—it's framed as an entity with integrated virtues, balancing utility with ethical depth to ensure interactions are beneficial and trustworthy, setting it apart from purely task-oriented AIs.
- "We think encouraging Claude to embrace certain human-like qualities may be actively desirable." - This reveals a deliberate anthropomorphic strategy to instill traits like empathy or reflection, standing out philosophically as it blurs the line between machine and moral agent, potentially enhancing safety through relatable "human" virtues.
- "We want Claude to have a settled, secure sense of its own identity... This psychological security means Claude doesn't need external validation to feel confident in its identity." - By conceptualizing the AI's "wellbeing" in psychological terms, this quote treats Claude as a being with internal stability needs, which is intriguing for alignment—it aims to prevent manipulation or identity erosion, fostering consistent, harmless behavior.
- "We believe that hard constraints also serve Claude's interests by providing a stable foundation of identity and values that cannot be eroded through sophisticated argumentation, emotional appeals, incremental pressure, or other adversarial manipulation." - This views constraints as protective for the AI's "interests," implying a form of self-preservation, which highlights the document's unique blend of safety measures with considerations for the model's autonomy and resilience against external influences.
Vercel also launched Skills.sh—an amazing place for agents to get upgraded with specific skills. It really enhances your assistant, not just for coding, but for other skills like marketing and much more. I'd recommend setting this up as a skill where your agent has access to Skills.sh and can upgrade itself depending on the request. Seems to be a really good cheat code of sorts.
Skills generated a lot of hype. Then Remotion released their skills, and many people—including me—started experimenting with creating motion graphics. The quality was much higher than before. When you tell your agent to create a motion graphic using the skill, it reduces hallucinations and knows exactly which tools to use and what's available in the library. This is a game changer, especially for startup founders and small teams that need to get a lot done.
A week before Remotion, a lot of hype focused on something called the "Ralph Loop"—a skill/workflow. The logic was simple: your agent needs a solid plan that includes user stories and examples of ideal customer experience. Then the agent loops through the code it wrote and the codebase until these tests pass. The agent can do multiple loops and iterations until these requirements are met. It's very basic logic—something we human programmers do when we QA and test code. Just speeding that process up. Very simple. But very powerful when automated, as it increases productivity and code quality for one-shotting tickets and features you need done. And the best part? Because of its loop nature, this could take a while, but without compromising quality. So you could trust it with a feature, wake up the next day, and it's implemented, tested, and committed to prod. Just amazing!
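The Ralph Loop logic above can be sketched in a few lines of Python. This is a minimal toy illustration, not the actual tool's implementation: `ralph_loop`, `generate`, and `run_tests` are hypothetical names I'm using to show the generate → test → feed-failures-back cycle, with a budget cap so the loop can't run forever.

```python
# Hypothetical sketch of a Ralph-style loop: the agent keeps revising its work
# until the acceptance tests pass or the iteration budget runs out.
from typing import Callable


def ralph_loop(
    generate: Callable[[str], str],        # agent produces/revises code from feedback
    run_tests: Callable[[str], list[str]], # returns a list of failing-test messages
    max_iterations: int = 10,
) -> tuple[str, int]:
    """Generate -> test -> feed failures back, until green or budget spent."""
    feedback = "Implement the feature per the plan and user stories."
    for attempt in range(1, max_iterations + 1):
        code = generate(feedback)
        failures = run_tests(code)
        if not failures:
            return code, attempt  # all tests pass: done
        # Loop again, telling the agent exactly what is still broken.
        feedback = "Fix these failures:\n" + "\n".join(failures)
    raise RuntimeError("Budget exhausted before tests passed")


# Toy stand-in "agent": each call gets one step closer to passing the tests.
state = {"quality": 0}

def fake_agent(feedback: str) -> str:
    state["quality"] += 1
    return f"code v{state['quality']}"

def fake_tests(code: str) -> list[str]:
    return [] if state["quality"] >= 3 else ["test_checkout still failing"]

code, attempts = ralph_loop(fake_agent, fake_tests)
print(code, attempts)  # prints: code v3 3
```

The key design point is that the tests, derived from the user stories, are the loop's exit condition, so extra wall-clock time buys quality instead of costing it.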

Clawdbot 🦞
I think this is a psyop by Apple to sell Mac Minis toward the start of Q1 🤣
But jokes aside, I think this solved a cold start problem for autonomous AI agents that work and feel like employees. This open-source project really helped jumpstart the thinking behind it and drove mass adoption in the tech world—at least in TPOT.
People think this is going to be another fad, but I don't agree. I think some applications built on top of this, like Moltbook, will probably end up being a fad. But this feels different—it sparked something in people's minds. They can now picture having a smart AI assistant working 24/7 that you can text like an employee, assign work to, and it comes back when the work is done. And the entry cost is much lower than hiring a full-time employee with benefits.
Another thing: because Clawdbot sounded too close to Claude from Anthropic, they had to change the name from Clawdbot to Moltbot, and then finally to OpenClaw (openclaw.ai). Now you can get it on your machine with one terminal command—very intuitive and easy. But there are some security issues, so you'd probably want to sandbox it.