TECH015: OPENCLAW AND SELF-SOVEREIGN AI W/ ALEX GLADSTEIN AND JUSTIN MOON
17 February 2026
Alex and Justin break down the fundamentals of large language models and explore the rise of OpenClaw as a self-sovereign AI assistant.
Justin explains context engineering, local inference, and vibe coding, while Alex dives into the AI for Individual Rights program and its mission to empower activists. We also debate open vs. closed models and what the future of user-controlled AI could look like.
IN THIS EPISODE, YOU’LL LEARN
- What Large Language Models (LLMs) are and how they differ from traditional programs
- Why AI feels like magic—and what’s really happening under the hood
- The key differences between open and closed AI models
- Why capital structures influence AI model openness
- How persistent memory enhances AI agent performance
- What inference means and why context is a scarce resource
- How AI agents combine traditional software with LLM reasoning
- The evolution from MCP-style systems to skills-based context engineering
- What “vibe coding” is and how it lowers the barrier to building apps
- How the AI for Individual Rights program supports activist-driven innovation
Disclosure: This episode and the resources on this page are for informational and educational purposes only and do not constitute financial, investment, tax, or legal advice. For full disclosures, see link.
TRANSCRIPT
Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present due to platform differences.
[00:00:00] Intro: You are listening to TIP.
[00:00:06] Intro: You are listening to Infinite Tech by The Investor’s Podcast Network, hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today.
[00:00:28] Intro: This show is not investment advice. It’s intended for informational and entertainment purposes only. All opinions expressed by hosts and guests are solely their own, and they may have investments in the securities discussed. And now here’s your host, Preston Pysh.
[00:00:52] Preston Pysh: Hey everyone. Welcome to the show. I am here with Alex Gladstein and Justin Moon.
[00:00:57] Preston Pysh: Guys, it feels like the world is moving at 10x the speed and pace that it was just a couple months ago. I don’t know if you guys are feeling the same way, but things are accelerating. Oh my God.
[00:01:11] Justin Moon: I listened to the show with Pablo and he said, it’s compressing time, and I’m like, that’s just how it feels.
[00:01:15] Preston Pysh: Yeah. By the way, if a person’s listening to this podcast and you haven’t listened to the show from two episodes earlier where we were talking about Clawdbot, or OpenClaw as it’s now called after the rebranding, I would highly encourage you to go back and listen to that conversation as well, because it’s going to be pertinent to some of the stuff we’re talking about here.
[00:01:44] Preston Pysh: Amazing. So, Justin, where do we even start this conversation? ’Cause the conversation I had with Trey and Pablo was already going a hundred miles an hour, and for the listener, I think their takeaway might have been, oh my God, what is happening? I don’t even know what they’re talking about right now.
[00:02:06] Preston Pysh: So like, maybe we throttle things back and like slowly bring everything up to speed. So take it away.
[00:02:11] Justin Moon: I agree. It was a great episode and I really enjoyed it. I could almost keep up. I could keep up with it only ’cause I know them and I really know Pablo well. But I feel like, for the drive-by listener, it was like trying to get on a fully moving train, like one of those Japanese trains. It’s like it’s asking a lot.
[00:02:25] Justin Moon: So I think I want to help kind of explain at least how I understand, like what’s going on, what the hell’s happening and if you understand Clawdbot or OpenClaw, the thing in the news right now, if you understand that, you kind of understand what’s going on.
[00:02:37] Justin Moon: And I was thinking about how to understand it, like break it down into basics. And I realized you have to introduce a lot of foundational ideas first that most people don’t quite get, and it impairs their ability to understand what’s going on. So I’m going to try to introduce about 10 ideas. I have a bunch of notes here on things I think you have to understand in order to really understand what’s going on.
[00:02:54] Justin Moon: But I’m not going to use any jargon. I’m going to try to simplify it and make it understandable for people who don’t know anything about this. That’s my goal. It’s a bit of a highwire act, so it might not go well, but we’ll see.
[00:03:02] Preston Pysh: But real fast, before you kick that off, would you say, from a really zoomed-out-from-space kind of view, that what all the excitement is about right now is this: everybody’s accustomed to using cloud-based large language model AI. They type into a chat and they get an answer back. But now you’re at this pivotal point where the tech is so advanced that people can run it locally in a way that’s actually going to be quite useful.
[00:03:30] Preston Pysh: And we haven’t had the hardware and we haven’t had the software models to do that yet. And that’s really kind of like this clear break of what we’re experiencing is now people can run it locally without even tapping into a cloud-based provider.
[00:03:46] Justin Moon: The significance of OpenClaw to me is that it’s a big step towards self-sovereign, user-controlled AI. It’s not a full step all the way there, but it’s a big step in that direction.
[00:03:55] Preston Pysh: Yeah.
[00:03:56] Justin Moon: And it’s a step in that direction from a couple different angles, and so I want to try to tease that out for people. I need to introduce some basic ideas just to make it make sense. There are a few things that help you understand the importance of, like, vibe coding, which we’ve talked a lot about with HRF.
[00:04:08] Justin Moon: That’s going to be one of the takeaways here is like vibe coding enabled this. And it’s going to enable that a heck of a lot more over time. So like just zooming out, like what is an LLM, right? Like that we have got to start from the very base, but like, what is an LLM? To me it’s like a new way of using computers, right?
[00:04:22] Justin Moon: So like traditionally a computer, like computer programs, right? That’s desktop apps and stuff like that. A computer program is something where it’s like a recipe. A recipe for a computer. So it’s something that’s typed out with exact instructions by a human, and it tells the computer exact steps to follow to do something.
[00:04:38] Justin Moon: Anything that can be broken down into steps can be represented in a traditional computer program, like arithmetic. Traditional computers are very good at arithmetic. They’re very bad at telling jokes, because you can’t encode the steps of a good joke. In a sense, what makes it funny is that it’s unexpected, right?
[00:04:54] Justin Moon: Zoom out, and one way to think about an LLM is that it’s a new type of computer program that’s bad at what traditional programs were good at, like arithmetic, but good at all the things they were bad at. Like creating art, right? Or telling a story, right? Or coding. So that’s kind of the high-level thing. I want to frame this as: in a sense, OpenClaw is a new type of computer to me.
[00:05:12] Justin Moon: That’s what it is. It’s a new way of using computers. It’s a new type of computer program. I’m assuming you’ve all used an LLM but have no idea how they work. So basically there’s kind of three steps in an LLM. The first is like, it’s called pre-training, but what it does is it downloads all the text on the internet and compresses it into a single file.
[00:05:28] Justin Moon: That’s the fundamental thing of what an LLM is. You take all the information on the internet and you try to lose the least important parts of it and only keep the most important kind of ideas and principles and facts. So what you get at the end is a file that, given like half of an internet document, can complete it.

[00:05:43] Justin Moon: It can do a best-effort job of taking half of a Wikipedia article and writing the second half. That’s all it can do, which means it has a lot of intelligence, but it’s not actually useful, ’cause when does a normal person need to complete an internet document? And so that file, it’s a file.
[00:05:57] Justin Moon: That’s what a model is. If you’ve heard the word model, that’s what it is: a file, right? If you’ve heard of weights, weights are what’s in the file. That’s what weights are in AI. And an open model versus a closed model: an open model is one where you can download that file, like DeepSeek or Kimi. Generally many of them are Chinese, and then the American ones are generally closed, meaning you can’t download the file.
[00:06:17] Justin Moon: So it’s generally the closed ones are a little smarter and the open ones are a little more self-sovereign. The closed ones are generally American. The open ones are oftentimes Chinese.
[00:06:26] Preston Pysh: Let’s pull on that thread. ’cause I think somebody who’s hearing that, it makes no sense to them.
[00:06:32] Preston Pysh: I have an opinion on this. I’m very curious to hear your opinion though. Why are they the ones releasing these open models, but in the US where you would think that would be taking place? You’re not seeing anything of the sort. Why is that the case?
[00:06:46] Justin Moon: To me, I think the biggest part is the capital structure of the companies doing it. So OpenAI and Anthropic have these huge capital structures and they need to make a lot of money fast, and they’re on the frontier and they need barriers to prevent competitors. And so not releasing the model weights is the biggest thing just from a business point of view. No extra thinking needed. I think that makes sense.
[00:07:06] Justin Moon: I mean, another thing is, I bet the CCP likes that there are these open models out there that get embedded into companies like Airbnb. Airbnb has come out and said, hey, we use Qwen for all kinds of stuff, it’s great, right? It’s a way for the CCP basically to embed Chinese values in American tech software. And also, you know, America is the leading one, and then it’s kind of easier.
[00:07:24] Justin Moon: The Chinese economy over the last 10, 20 years has done a lot of imitating. So that’s kind of another thing: reverse engineering is something that they’re already very good at. Those are three things. Alex, do you have anything to add there?
[00:07:35] Alex Gladstein: Yeah, I just would say that at the moment they judged that they could not compete on the proprietary side, and could introduce maybe some chaos, and opportunities for themselves, by going this route. However, going that route, kind of like a Sputnik thing, as we know, has opened a whole new door.
[00:07:51] Alex Gladstein: And you know, it’s actually, I think, been good for the world at large that you have other geopolitical powers pushing open-source options. It’s going to eventually force the American companies to do the same. So you’re going to have pressure, just like you had pressure to add encryption to devices and to apps.
[00:08:07] Alex Gladstein: The open files, like, over time there’s going to be pressure on American companies. You know, despite profits, they’re going to feel pressured to have open arrangements and open products. And we’ll get to this at the end of the recording, I think, but hopefully also privacy-protecting ones too. But yeah, that would be my take.
[00:08:23] Justin Moon: One small note I want to recap from a talk that was given at our yearly AI summit in San Francisco by this guy Rames. He mentioned how, like a year ago, we thought there would be a runaway leader in AI, and that didn’t happen. They’re all getting closer and closer. It’s getting more and more competitive.
[00:08:37] Justin Moon: And the closed models and the open models are starting to get competitive. There used to be a bigger gap, and now it’s getting very competitive. So this is great for user sovereignty. You know, it’s trending in a way where you don’t have a single overlord. And it’s a very competitive dynamic, which I think is great for freedom.
[00:08:51] Preston Pysh: One of the things that I think also makes it more competitive is when you start running these models that are not on the forefront of being the best from an intelligence standpoint. But you combine these lesser models with persistent memory run locally, and the performance that you get for what it is that you need is actually a lot better than a premier model, because it’s continuing to learn and it’s not forgetting all those past interactions like you get with a frontier model that has a new context window every single time that you open it up, with very limited memory.
[00:09:26] Preston Pysh: So that persistent memory is one of the things that I think is massive for self-sovereignty and from getting away from these large language models that are just sucking all the data and using that potentially against you. You’re going to get better local performance.
[00:09:42] Preston Pysh: The thing that I was, you know, on that original question, it seems like, and I’ve asked the AI, this particular question, why we’re seeing the open-source models coming out of places that we would least expect it, and it gave me a really surprising answer in that they’re looking at the game theory and where this is all going.
[00:09:59] Preston Pysh: And what they’re trying to do is ever so slightly steering the results of what you get out of the model. Let’s just take an example, Tiananmen Square. If you are training the model, you can either have that as part of the initial, you know, data input before it compresses everything into the model, and it adjusts the weights ever so slightly.

[00:10:24] Preston Pysh: And if those are the models that everybody starts to build on and run locally, you get slightly different results than if you have somebody who’s feeding it the base, everything that’s ever been written on the internet, minus these things that we really don’t want in there; when we compress the model, they’re removed.
[00:10:42] Preston Pysh: And so I found that to be really interesting. If true, there’s a lot of foresight in there to make sure that you get your model out there. Now, at the end of the day, I can run that model locally. I can ask it a question that maybe isn’t in its weights. I can say, you know, go out there and research.

[00:11:01] Preston Pysh: That’s just wrong. That’s not the truth. Go out there and research more facts on the internet, using Tiananmen Square as an example. And then my local model now knows it, and it’s not like it’s part of its weights anymore, because I’ve steered it in a different direction. So in the end it doesn’t matter.
[00:11:15] Justin Moon: I want to make one point here before moving on. So we did a hackathon recently where we put together activists from HRF with Freedom Tech developers from my Bitcoin meetup in Austin, basically. And one of the interesting projects was from an actual Tiananmen Square student organizer, Dunley, last name Dr. Young G. Lee.
[00:11:30] Justin Moon: They did a project where they basically made a benchmark for all the different LLMs, comparing their answers on human rights questions like Tiananmen Square, which is very interesting. And we look forward to that getting published.
[00:11:39] Justin Moon: Let me move on, because I have a lot here. So I’m trying to cover where an LLM comes from and how it’s used. I talked about pre-training: you take the internet and you get it into a file. Then there’s a thing called post-training, which turns it into like a useful assistant. It gives it a bunch of examples, like, here’s how to be useful to a person.
[00:11:52] Justin Moon: Here’s how to do a coding agent, right? And so now you have something that goes from being able to complete a document to being able to answer questions, be your therapist, write some code, right? And so that’s how the model happens. That’s it.
[00:12:02] Justin Moon: So then the question is how do you use it, right? And the word for that is inference. You’ve probably heard that word. It took me a while to remember that. That’s what it means. Inference means just when the model is run, right? And so this is something that you can hire someone to do in the cloud for you, like ChatGPT or you can do it on your own computer if you have a computer. So you can use something called Ollama, right?
[00:12:19] Justin Moon: And so what inference is, you run that model basically and you can put text in and you get text out, right? So it’s just like the ChatGPT interface. That’s what’s happening behind the scenes. Text in, text out. And the one problem with open models is you need about a $20,000 computer in order to run them, right?
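The text-in, text-out loop Justin describes can be sketched against a local model. This is a minimal sketch assuming Ollama is installed and serving on its default port (11434) with a model already pulled; the model name here is just an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model, prompt):
    """Text in: the JSON payload Ollama's generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}


def run_inference(model, prompt):
    """Run the model locally and return its reply: text out."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Needs a running Ollama and a pulled model, e.g. `ollama pull llama3.2`
    print(run_inference("llama3.2", "Why is the sky blue?"))
```

Swapping the URL for a cloud provider’s endpoint is essentially the “hire someone to do it in the cloud” option: same text in, text out, but someone else’s machine sees the text.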
[00:12:33] Justin Moon: So that’s one of the tough things right now. It’s a big technical barrier to real individual user sovereignty in AI, and it’s something we’re all trying to figure out. So that’s what inference is.
[00:12:40] Justin Moon: Okay, so now I want to talk about another word that’s very, very important. This is maybe the most important one called context.
[00:12:45] Preston Pysh: Justin, I’m sorry to slow you down. I just want, so people heard on the episode with Trey and Pablo that Trey was running his off of a Raspberry Pi. And so they’re like, well, hold on, you just told me it costs $20,000 to run it locally. And I just want to explain to the listener, so the way Trey’s OpenClaw works on his Raspberry Pi, which is, you know, three, 400 bucks, is he’s making API calls to Claude or to OpenAI to do the inference on their cloud, and then it’s giving a result back.
[00:13:15] Justin Moon: He has an agent, which we’ll get to. He has an agent running on a Raspberry Pi. But the inference, the thing that’s actually doing the smart AI stuff is on a cloud somewhere.
[00:13:24] Preston Pysh: Yep.
[00:13:24] Justin Moon: So he is a step towards user sovereignty, because what ChatGPT was trying to get us to do a year ago is run the agent in the cloud too. So this is like halfway there. It’s a huge step forward, right? Running the agent locally. It can save memories locally, you know, and you have the option for certain things to use a local model too. So it’s a great kind of half step forward. I mean, it’s 10 steps forward, but it’s not all the way to the goal.
[00:13:43] Alex Gladstein: I think it’s a huge win for open-source. Yeah. Let’s go.
[00:13:47] Justin Moon: Okay. So we defined the word inference. That’s one of the words you need. And context is maybe the most important one. So context, it took me a while, I mean, I’m very technical, and it took me a while to actually understand what the heck people were saying.
[00:13:57] Justin Moon: It probably took like six months to actually understand it. And the key thing to understand is that LLMs are something we call stateless. We talked about memory earlier, but on a deep technical level, there is no memory at all. Every time you interact with it, you start from scratch.

[00:14:14] Justin Moon: All it remembers is the pre-training and the post-training. That’s it. Okay? So if me and Preston use ChatGPT 5.2, we are getting exactly the same model, right? If there are some memories that are specific to Preston, they come from elsewhere. They don’t actually come from the model. We get the exact same thing. That’s an important thing to understand.
[00:14:32] Preston Pysh: So Justin, would it be safe to say that this context, so you and I are using the same model, but the header that’s put into the start of that chat is what’s different? So if you have like past memory, like Preston likes short answers, he doesn’t like a long answer.
[00:14:47] Preston Pysh: That little snippet or that header is inserted and you don’t see it getting inserted into the context window, but it’s inserted in there. And so that’s how we might get a different answer from its past memory of us and how we use it is that header that it’s seeded with before you enter the context window?
[00:15:05] Justin Moon: Exactly. So like if me and Preston have the same model and we’re getting different answers. You can see this with yourself, right? Like let’s say you use ChatGPT, if you’re in a long conversation, it will remember things previous in the conversation, but it usually won’t remember things from different conversations.
[00:15:19] Justin Moon: But every once in a while it will, right? So that’s a big question: if LLMs are stateless, how are these two things that we’ve all observed true, right? And so the answer is that every round of conversation, let’s say you open a ChatGPT account and you go through 10 back-and-forths. On the 11th one, it doesn’t just send the thing you said the 11th time. It sends that, plus the 10th, the response, the 9th; it sends the entire history every single time. And there’s also one extra piece that you don’t see, which is called the system prompt. This is the header that Preston was talking about.
[00:15:51] Justin Moon: This is like, think of it as the 10 Commandments. This is something that, you know, like God, the developer, basically ChatGPT, or sometimes the user themselves, gets to put in there, and it’s instructions for how the model should behave, which the model doesn’t always follow. It tries to. And it’s also important that it be the 10 Commandments and not, like, the 10,000 commandments, right?
[00:16:09] Justin Moon: So like what we were doing with AI a year ago is we were doing the 10,000 commandments. We’d write like a whole essay at the beginning, and we’d basically overload the model and it couldn’t do things. And so a lot of the development over the last year that has enabled OpenClaw and things like it is that we figured out a way to only give it 10 commandments and basically derive the extra things, doing just-in-time learning to figure out the other things without overloading it from the start.
[00:16:31] Justin Moon: So what context means is the conversation, the entire conversation. Everything that’s gone on in that session is what context means. It’s everything that has been said previously, including the magic system prompt at the top.
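The statelessness Justin describes can be sketched in a few lines. `call_llm` here is a made-up stub standing in for real inference; the point is that the client, not the model, carries the memory, by resending the hidden system prompt plus the whole history on every turn:

```python
SYSTEM_PROMPT = "You are a helpful assistant. Keep answers short."  # the hidden "header"


def call_llm(messages):
    """Stand-in for real inference: a stateless function of the full transcript."""
    return f"(reply to {len(messages)} messages)"


class Chat:
    def __init__(self, system_prompt=SYSTEM_PROMPT):
        # The system prompt is message zero; the user never sees it.
        self.messages = [{"role": "system", "content": system_prompt}]

    def send(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        # The ENTIRE history goes to the model every single time --
        # that is the only "memory" a stateless LLM has.
        reply = call_llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply


chat = Chat()
chat.send("Hello")
chat.send("What did I just say?")  # answerable only because history is resent
```

Two users with the same model but different `system_prompt` strings get different behavior, which is exactly the header effect Preston described.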
[00:16:42] Preston Pysh: I want to pause here and really foot stomp why this is such a big deal.
[00:16:47] Preston Pysh: So you’re about to see commercials coming out at the Super Bowl from Claude, basically banging OpenAI over the head because they recently said that they’re going to start doing advertisements in their service. Let’s just like really pull on this thread and go deeper.
[00:17:02] Preston Pysh: If you’re OpenAI and you have an advertiser that’s doing really well with you because they’ve got a high margin product and you’re able to convert on that, OpenAI could potentially, and I’m not saying they’re going to do this, but there’s an incentive for them to do this where they start blindly inserting in the header things that could potentially steer the user to wanting said product that’s being advertised. And you would have no idea that that’s in the header.
[00:17:31] Justin Moon: Yeah.
[00:17:31] Preston Pysh: And this just goes to the whole point of like why we’re having this conversation, which is local AI is going to be very important for you to see the world clearly, because you won’t know that you’re being very indirectly subliminally steered in a certain direction because you have no idea what’s going into that header.
[00:17:49] Justin Moon: Yeah. Like the AI experience will get steered by something. Do you want it to be an advertiser? Do you want it to be a big tech company? Do you want it to be another government or do you want it to be you? Right. Like we want it to be you.
[00:18:00] Preston Pysh: Alex, do you have anything to add on that particular point? ’cause I mean, this is really why you’re so passionate about running local AI, right?
[00:18:08] Alex Gladstein: Well, let’s let Justin finish the context. Sorry, sorry. Jump unintended.
[00:18:12] Preston Pysh: Yeah.
[00:18:12] Alex Gladstein: And then I have my piece and I think it’ll help to pull things together.
[00:18:15] Preston Pysh: Keep going, Justin.
[00:18:16] Justin Moon: Yeah. So we think about it from like a Bitcoin point of view. As Bitcoiners, we understand scarcity. That’s one mental model that Bitcoiners really get. And so if you apply that to AI, it’s like, what’s scarce, right? In the training, you need data, you need energy, you need computers, right? In inference, when you actually run it, it’s context. Context is the scarce thing. That conversation, the longer it gets, the more confused the AI will get.
[00:18:38] Justin Moon: And at a certain point you run out of context and you just have to start over. And that’s called compaction and it makes everything worse, right? So that’s the big engineering battle. And it’s traditional engineering. It has nothing to do with AI. Traditional software engineering. The last year we’ve all been trying to figure out how to get better at managing this, and that is what has led to good AI agents now that we didn’t have a year ago.
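A toy way to picture context as a scarce resource. Real systems count tokens with the model’s tokenizer and usually compact by having the LLM itself write a summary; everything here (the word budget, the summary line) is illustrative:

```python
CONTEXT_WINDOW = 50  # toy budget, measured in words instead of tokens


def count_words(messages):
    return sum(len(m["content"].split()) for m in messages)


def compact(messages):
    """Crude compaction: keep the system prompt, collapse the rest into one line.
    Real agents summarize with the LLM; either way, detail is lost."""
    system, rest = messages[0], messages[1:]
    summary = {"role": "system",
               "content": f"(summary of {len(rest)} earlier messages)"}
    return [system, summary]


def add_message(messages, msg):
    messages.append(msg)
    if count_words(messages) > CONTEXT_WINDOW:
        messages = compact(messages)  # everything gets a little worse from here
    return messages


history = [{"role": "system", "content": "Be helpful."}]
for i in range(20):
    history = add_message(history, {"role": "user", "content": f"question number {i}"})
```

Running the loop, the history keeps growing until it blows the budget, gets squashed into a lossy summary, and starts over, which is the “start over and it makes everything worse” dynamic in miniature.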
[00:18:56] Justin Moon: It’s a big part of it, right? The models got smarter, but the context engineering also got way smarter. So I want to discuss next what an agent is, right? So now we’re getting close to OpenClaw. OpenClaw is an agent, right? An agent to me is like a marriage between these new and old computer programs, right?
[00:19:10] Justin Moon: The old stuff is like, you know, how you control your desktop computer or how you run a browser, stuff like that. And the new one is an LLM, which can generate text that’s like really smart and in some sense has the entire, all the intelligence of the internet baked in, right? So an agent, how is it a marriage between these two?
[00:19:26] Justin Moon: An agent is the thing that makes requests to an LLM. So like the ChatGPT website in this definition would be an agent. Claude Code, which is like a desktop or terminal program you can run that will write code for you or Replit. Those are agents, right? So it’s something you make a bunch of requests to some AI and also has the ability to use what we call tools.
[00:19:46] Justin Moon: A tool is how you let it do something. All an LLM can do is spit out text. It can’t do anything in the world. So the question was, how do you make something that can only spit out text control a browser or do a web search, right? So what we did is we invented this idea called a tool.

[00:20:01] Justin Moon: What a tool is: in the system prompt, you tell it there’s a special marker that means, I want you to search this on the web, right? So think of it like a sentence that says SEARCH in capitals, then there’s a question, and then it ends with SEARCH in capitals as well. So if the LLM responds to your question with that, SEARCH, the question, SEARCH, the agent will say, ooh, I know that’s a special marker.

[00:20:28] Justin Moon: I have got to do something special with that. I’m not going to show that to the user. I’m going to go fire up Google and do a web search. Then I’m going to send it back to the LLM. So this is what an agent does. In the system prompt you teach it tools that the agent software itself will intercept and do special things with, like search the web, control a browser, send a message on Telegram, and all the other things that OpenClaw does. That’s called a tool.
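The special-marker trick can be sketched like this. The `SEARCH(...)` syntax and both stubs (`fake_llm`, `web_search`) are invented for illustration; production agents use structured tool-call formats rather than raw text markers, but the interception logic has the same shape:

```python
import re

# The system prompt would teach the model: "to search the web, reply SEARCH(query)"
TOOL_PATTERN = re.compile(r"SEARCH\((.*?)\)")


def web_search(query):
    """Stub for the traditional-software side: a real agent would hit a search API."""
    return f"top results for {query!r}"


def fake_llm(prompt):
    """Stub model: asks for a search the first time, answers once it has results."""
    if "top results" in prompt:
        return "Here is your answer, based on the search results."
    return "SEARCH(current bitcoin block height)"


def agent_turn(user_text):
    reply = fake_llm(user_text)
    match = TOOL_PATTERN.search(reply)
    if match:
        # Intercept: the user never sees the marker. Run the tool,
        # feed the result back to the model, and return its next reply.
        results = web_search(match.group(1))
        reply = fake_llm(f"{user_text}\n[tool result] {results}")
    return reply


print(agent_turn("What's the current bitcoin block height?"))
```

The agent is ordinary software in a loop; the only “AI” part is the text coming back from the model.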
[00:20:48] Justin Moon: And so once we had that, this is the way of augmenting an LLM to be able to do stuff in the real world. So you maybe heard of MCP. MCP was something that blew up like a year ago, ’cause it was a way to publish a bunch of these tools and share them.
[00:20:59] Justin Moon: In the beginning, ChatGPT tried to dictate what tools you could use, right? They said, now we have our tool and you can only use this one, right? And everyone’s like, screw that, we want to use any one we want. And so MCP was invented as a way to share tools so the user can choose which ones they want. And the problem with it: if you’ve ever heard of just-in-case learning versus just-in-time learning, just-in-case learning is like getting a college degree in case you need to solve a problem someday.
[00:21:22] Justin Moon: Just-in-time learning is like, you have a problem, and then you go to YouTube and learn how to solve that problem, and you solve it. And so a year ago we were doing just-in-case prompting with MCP. We’d say, here’s how to do 10,000 commandments just in case you need them. And then by the first round of conversation, the AI’s already kind of confused ’cause you’ve told it way too much, right?
[00:21:40] Justin Moon: And so now there’s a thing called skills, which I’ll talk about next, which is more like just-in-time prompting. You say, here’s a bunch of manuals you can use if you need them. They’re over on that shelf over there. Don’t read them yet, but you can see the titles, and when you should use them, on the spines, right?
[00:21:53] Justin Moon: So that’s kind of the difference: MCP was like just-in-case prompting, and a skill is like just-in-time prompting. And so this was kind of a revolution in context engineering, because you could expose many more things to an LLM without overloading its context window.
[00:22:09] Preston Pysh: That was extremely helpful for me personally, ’cause I’ve seen both MCPs and I’ve seen skills, and now I know the difference.
[00:22:15] Justin Moon: There’s so many, like if you feel overwhelmed by all the jargon, like there’s just so much.
[00:22:19] Alex Gladstein: It’s kinda like in The Matrix when they plug the different things into Neo’s head, right? Exactly. Yeah. It’s like, what skill do you want? And you’re going to have a little fricking library.
[00:22:25] Justin Moon: Very similar. So, yeah. Let me tell you more about what a skill is. So now skills are like—this is a foundational thing that OpenClaw is built on. So MCP was like, “Here’s 50 different things you can do.”
[00:22:36] Justin Moon: You’ve got to figure out how to use them though. You’ve got to figure out when to use them. It was asking a lot of the LLM to figure out the user’s intent and map it to an action. Skills are based on that insight: a skill is a mapping from a user intent to an action. When the user wants X, you do this, right?
[00:22:54] Justin Moon: So the model only sees that mapping at the beginning, in the system prompt. And when the user declares the intent, you go and look up the manual and figure out how to do that, right? And so what is the manual? The manual is the skill. A skill is kind of like an analog to an app. Right now, it’s the closest thing to that old word; it’s like an app.
[00:23:10] Justin Moon: The skill is a folder. So it’s a very traditional thing—a folder. You’ve seen many folders on your computer—with two types of content. One is text files containing prompts, meaning just a plain English description of like, “Hey, when the user wants to book a flight, you know, first you open the browser, then you log in and the user has to enter their password and you wait for that and then go to kayak.com.”
[00:23:29] Justin Moon: And so it’s a prompt, but it’s not only a prompt. ’Cause sometimes if you give it an open-ended task like that, it won’t be able to do that. But parts of this are better done by like a traditional programming technique, like a computer program. That’s the second thing that goes in a skill folder. You could have programs, right?
[00:23:44] Justin Moon: So you could have a program that can specifically open kayak.com and can specifically find where to put the credit card information and can specifically, you know, do a bunch of the thing. The actual steps that are involved in booking a flight can control Google Chrome and the browser, for example, and do all these things.
[00:24:00] Justin Moon: And the prompt would say, “Hey, they prefer aisle seats to window seats,” right? They’ll have a bunch of preferences like that. It’s like a compact manual that maps a user intent to an action and leverages prompting, which is the new type of computer, and like a simple computer program, which is kind of like the old type.
[00:24:15] Justin Moon: So to me it’s kind of like a marriage. It’s a good marriage between these two. And that’s why it’s so powerful, is because it allows these LLMs to more effectively use a computer to accomplish what the user wants.
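The two kinds of skill content Justin describes (a plain-English manual plus a traditional program) and the just-in-time exposure of them can be sketched roughly like this. The folder layout, file names, and functions here are assumptions for illustration only, not OpenClaw's actual spec:

```python
from pathlib import Path

# Hypothetical skill folder layout (illustrative, not OpenClaw's real format):
#   skills/
#     book-flight/
#       SKILL.md   <- plain-English manual: steps, preferences ("prefers aisle seats")
#       book.py    <- traditional program that actually drives the browser

def skill_summaries(skills_dir: Path) -> list[str]:
    """Just-in-time prompting: only each manual's first line goes in the system prompt."""
    summaries = []
    for skill in sorted(skills_dir.iterdir()):
        manual = skill / "SKILL.md"
        if manual.exists():
            summaries.append(manual.read_text().splitlines()[0])
    return summaries

def load_full_manual(skills_dir: Path, name: str) -> str:
    """Only once the user declares the intent does the agent read the whole manual."""
    return (skills_dir / name / "SKILL.md").read_text()
```

The one-line summaries keep the context window small; the full manual (and the program it points to) is pulled in only when a matching intent shows up.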
[00:24:26] Preston Pysh: It’s more efficient, it’s faster, it’s not bloated. Your context window probably won’t fill up nearly as fast.
[00:24:32] Justin Moon: It only fills up once the user needs it to, but not before. So it’s much more efficient.
[00:24:37] Preston Pysh: Yeah.
[00:24:38] Justin Moon: Yeah. And so that’s kind of like one thing here, is that we figured out a hierarchy for these types of things, right? So like in Clawdbot, it saves a bunch of memories. But it doesn’t look at the memories until they might be relevant.
[00:24:48] Justin Moon: So it goes through like file system hierarchies to only expose what the user needs, but to allow it to be discoverable for other things they need in the future. So that’s been a big thing in context engineering. We’ve been adding hierarchy for all these things we used to just dump in there just-in-case.
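The hierarchy Justin describes for memories (save everything, but read only a cheap index until a topic becomes relevant) could be sketched like this. The file layout and function name are hypothetical, purely for illustration; this is not Clawdbot's actual on-disk format:

```python
from pathlib import Path

# Hypothetical memory layout (illustrative, not Clawdbot's real format):
#   memories/
#     INDEX.md    <- one cheap line per memory file: "travel.md: the user's flight preferences"
#     travel.md   <- the full memory, read only when the topic comes up

def relevant_memories(memory_dir: Path, user_message: str) -> list[str]:
    """Read only the small index; load a memory file only if its topic
    appears in the user's message. Everything else stays out of context."""
    loaded = []
    for line in (memory_dir / "INDEX.md").read_text().splitlines():
        if not line.strip():
            continue  # skip blank index lines
        filename, _, _description = line.partition(":")
        topic = filename.strip().removesuffix(".md")
        if topic and topic in user_message.lower():
            loaded.append((memory_dir / filename.strip()).read_text())
    return loaded
```

The index costs a few tokens per memory; the full memory files stay on disk, discoverable but unread, until a message actually touches their topic.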
[00:25:03] Justin Moon: Okay. There’s one more, and then it’ll be OpenClaw. So, vibe coding. What is vibe coding? This has been a really big thing. We’ve just had the one-year anniversary of it.
[00:25:11] Justin Moon: So normally when you write computer programs, you have to have the blinders on. You’re typing text into a file, doing really logical operations, and if you get one semicolon wrong, it breaks. It’s very, very focused, exacting work, you know? And so vibe coding is the complete opposite, where you put your feet on the desk and you’re like, “Hey, computer, build me a movie player app that can download from my Dropbox.”
[00:25:36] Justin Moon: And you just watch it do it. And so this became sort of possible a year ago. And it’s become very effective in the last three months. Like very effective. It’s—yeah. And so let’s just talk about like what is actually happening there. What happens is you say, “Hey, I want you to write a program,” to something like Claude Code or Replit, right?
[00:25:50] Justin Moon: And then it might come back, like a normal ChatGPT conversation, and ask you some clarifying questions to try to clarify your intent a little bit. And then it’ll go into a loop, right? A loop is just a programming thing: you do something over and over again, right? And so it’ll do a bunch of these tool calls.
[00:26:02] Justin Moon: It will do a tool call to do web search, to search something you might’ve said. Then it will read some files in the existing thing, then it will write a file. Then it will add a file. And at the end, once it thinks it’s working, it will do a tool call to run the program. And then you can interact with it.
[00:26:15] Justin Moon: And it might try to do some tool call to test it manually itself. So it’s just doing a loop, using these tools over and over again, and skills and stuff like that, until it judges, “Hey, I think I accomplished the thing.” And then loops have a termination condition: you do it until there’s some condition.
[00:26:29] Justin Moon: And in vibe coding and coding agents, that condition is a response from the LLM that doesn’t have a tool call in it. So everything is just a bunch of these little things with a special marker to do something special. And at the end it’s just a text message. And that’s just displayed to the user and the loop exits.
[00:26:43] Justin Moon: And if you’re lucky, you have a working app that does exactly what you wanted. A year ago, you often didn’t. But now you often do.
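The agent loop Justin walks through (call the LLM, execute whatever tool call it asks for, and exit when a reply arrives with no tool call in it) can be sketched in a few lines. The message shapes and tool names here are illustrative assumptions, not any particular vendor's API:

```python
import json

def run_agent(llm, tools, user_request):
    """Minimal coding-agent loop, as a sketch. `llm` is a stand-in for a model
    call that returns either {"tool": name, "args": {...}} or {"text": "..."}
    (shapes assumed for illustration). Real agents add context management,
    sandboxing, and safety checks on top of this skeleton."""
    messages = [{"role": "user", "content": user_request}]
    while True:
        reply = llm(messages)
        if "tool" not in reply:                 # termination condition: a reply
            return reply["text"]                # with no tool call ends the loop
        # e.g. web_search, read_file, write_file, run_program
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
```

Everything the agent does, from web search to running the finished program, is one more pass through this loop; the final plain-text message is what gets displayed to the user.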
[00:26:51] Alex Gladstein: And the agents are like— some of the agents update you along the way. They’re like showing you, “Oh, we did this,” cross that off, this off. And they can be quite transparent.
[00:26:58] Alex Gladstein: So it’s exactly what he’s saying there. You can see how it’s working.
[00:27:02] Justin Moon: And you can steer it along the way if it’s going in the right direction. You say, “I want blue, not purple,” right? So you can control it a lot. And you know, this is something now if you go on Replit, for example, you can have a pretty good time with zero technical understanding.
[00:27:14] Justin Moon: And I encourage everyone to do it, ’cause it will give like a different lens. It gives you a lens into that’s what the future’s going to look like.
[00:27:20] Preston Pysh: Is Replit like CoWork, like Claude’s CoWork?
[00:27:23] Justin Moon: Kind of. So Replit, it’s a website that you can go to and you can ask it to build an app. Okay. And it’s very good at building an app.
[00:27:30] Justin Moon: Yeah. And it’s also very good at hosting it on the web or, like, getting it onto your phone if it’s a mobile app. So it’s a 10-year-old company that was dedicated to making it easy to learn to program.

[00:27:45] Justin Moon: Yeah. I actually used to do interviews on this platform like 10 years ago. And they were early to seeing this vibe coding trend, because, hey, it fulfills the mission of the company.
[00:27:46] Alex Gladstein: So you’re about to explain how OpenClaw works, right Justin?
[00:27:49] Justin Moon: Yeah.
[00:27:50] Alex Gladstein: I think this is a good time for me to like interject some of the social impact of what Justin has just described. Yeah. And then I’ll sort of end with something I just saw OpenClaw do. And then you can explain how that works, because I think we’ve covered a lot of ground and I think we’re ready for this.
[00:28:04] Alex Gladstein: I love that. Okay. So a lot of people, including me and Pablo five years ago, if you had asked us about AI (zooming way out from how it works to its impact on the world), we would’ve thought that it would be inherently repressive with regard to civil liberties and personal freedom. You know, I’ll paraphrase Peter Thiel from about seven or eight years ago.
[00:28:23] Alex Gladstein: He said something like, “Bitcoin is decentralizing, AI is centralizing.” If you want to frame it ideologically, Bitcoin is libertarian and AI is communist.
[00:28:31] Alex Gladstein: You know, a lot of people, including me, really believed that. We thought it would be very pernicious towards human rights in the hands of states as they vacuum up everybody’s information and build a more efficient surveillance and control machine.
[00:28:42] Alex Gladstein: And a lot of that is true. Part of the program we’ve launched at the Human Rights Foundation, where we brought Justin on to help us, is going to be exposing how dictators are using and abusing AI.
[00:28:54] Alex Gladstein: What we didn’t see coming until, you know, in the last 18 months— or 24 months— was how AI can supercharge individuals asymmetrically, in the same way that encryption or Bitcoin could certainly help dictators, but it helps individuals way more.
[00:29:04] Alex Gladstein: I mean, dictators already control vast communication networks, banking systems, massive data centers. They already have ways to move money and hire people, to control armies and big companies. And they have huge numbers of talented people to do their bidding, but individuals in resistance groups and innovators don’t.
[00:29:18] Alex Gladstein: So vibe coding changes this, right? So now individuals have access to enormous cutting-edge computing power and unbelievably intelligent personal assistants that are already saving them huge amounts of time and resources. I mean, just very simply, the fact that you can talk to a computer and make it do things for you is revolutionary.
[00:29:36] Alex Gladstein: And this is increasing exponentially. So again, one year ago, vibe coding was invented. Nine months ago, a non-technical person could vibe code a website decently. I don’t know if they could deploy it, maybe through Replit; it was a little shaky, but they could do it. Today, a non-technical person can spin up an agent that can autonomously conduct work and perform tasks in the background without human oversight.
[00:29:54] Alex Gladstein: And tomorrow, like we don’t know, right? So six months ago, a lot of elite developers, including a lot of the ones that Justin and I know, looked down upon vibe coding and they thought it was very ineffective and a bad work ethic, et cetera, et cetera.
[00:30:13] Alex Gladstein: I did a retreat with some of these people— amazing elite developers— in the beginning of December, and a bunch of them were like, “Nope, don’t want that.” All of them have changed their minds as of today, right? It’s really crazy.
[00:30:34] Alex Gladstein: You know, these agents are capable of massively automating a lot of human work and it makes it possible to really super-scale individuals and smaller organizations.
[00:30:51] Alex Gladstein: We can basically give people superpowers. And you know, the way I like to look at like what’s available for the activists today— and this lines up pretty much with what Justin has said so far and I’m getting close to finishing here— is, uh, you have your chatbot, just in terms of terminology. Okay.
[00:31:06] Alex Gladstein: Everybody knows they have their chatbot— go to ChatGPT or Claude or whatever. Then you have what I would call creator mode, which is like Claude Code. It can do a lot more than just spit text out, as Justin was describing. It can use tools, skills. Then you have a personal agent. These are three kind of options that are out there.
[00:31:22] Alex Gladstein: Now we’re about to explain how OpenClaw actually works, but the social impact of it is really important. Essentially what I’ve seen with OpenClaw— so like yesterday what we did is, to a group of 20 people from different industries, Pablo and I did a 40-minute session where we did some background. We did some pretty amazing things with Claude Code, and then we used his own OpenClaw that he set up.
[00:31:46] Alex Gladstein: And basically, like from my phone, I can go into Telegram and I can message his— and I left it. I just left it a two-minute voice note with an incredibly complex task to do, and like three minutes later it responded— like it gave me this thing— and it was just like the most— this data-rich website thing that was actually quite useful.
[00:32:16] Alex Gladstein: I mean, to be very clear, we asked it to create a doable, scalable, manipulatable, circular, global spherical map that shows exactly how much civil liberties and free speech and democracy funding every single country in the world gets, broken down by who gives it and then like sorted. So you could like rank them—
[00:32:18] Preston Pysh: Hold on— you sent this request over like—
[00:32:20] Alex Gladstein: a phone. Over Telegram from the phone. I was just like, “Yo,” and I had it on speaker and other people were listening in the room and I just said, “I want you to do all these things.” And then a couple minutes later it gives us this like freaking incredible visual project.
[00:32:32] Alex Gladstein: And what it’s showing me is the following — and this is kind of where I’ll conclude — is that workflow for creators is going to change. So basically the way it works up to this point is like: if you’re an executive or you’re a creative person, you have a meeting and you have a cool idea, you really want to do something — well, what do you do?
[00:32:48] Alex Gladstein: Well, you normally talk to your executive assistant or your product team or your program team, depending on what kind of organization you work with. You have a meeting and you describe what you want, and then they go talk to the creative team ’cause they’re not designers or engineers. Or they go talk to engineers, and then those people talk to web people, and then maybe they come back to you a few weeks later with some proposals: “Hey, do you like this one better or this one?”
[00:33:09] Alex Gladstein: And there’s just so much human time and effort there. Now what you’re going to be able to do this year is like the creative — like the founder-type person — can literally describe exactly what they want. They can say, “I want it to look like Liquid Glass on iPhone,” or “I want it to kind of look like this movie vibe.”
[00:33:23] Alex Gladstein: Or they can literally — like the dream can come out of the head so specifically — and then they can take the voice and they can speak it into existence, and they take that and give it to the creative team. And then there’s no more like, “Well, do you like this color or that?” No, no, no. They have a really specific idea of the vision.
[00:33:39] Alex Gladstein: So this is going to become, in my opinion, like a skill — like surfing or like calligraphy. And it’s like, are you going to be decent at it or are you going to be like Michelangelo? And we’ll see. But I think it’s going to be so amazing for creators — people who have big dreams and visions — because they can really quickly get to a really good blueprint of what they want, and then their colleagues or allies or teams can finish the rest.
[00:34:01] Alex Gladstein: And that’s, I think, one of the biggest social impacts of what Justin is describing. So maybe Justin, now we turn to you and figure out how I can talk to Telegram and have it do stuff. Something like that.
[00:34:12] Justin Moon: Yeah, so transition from vibe coding to OpenClaw — or like chat. It started with the ChatGPT interface and it became kind of vibe coding agents, right?
[00:34:20] Justin Moon: And now it’s like the personal assistant — we’re just starting to enter that, where, you know, we’ve had a good coding agent for about a year. We’re just starting to get good personal assistants, and that’s what OpenClaw is. It’s kinda the first actually useful personal assistant.
[00:34:33] Justin Moon: And so to transition though, I want to make a note that like, I actually met Peter Steinberger — I think his name is — the guy who created it, from a blog post about how he vibe coded.
[00:34:40] Justin Moon: And when I read it, it was called Shipping at Inference Scale. And it like blew my mind. I’m like, “Oh my God, I’m a complete amateur. What this guy’s doing is unreal.”
[00:34:50] Justin Moon: And I think OpenClaw is largely a story of — he was like the world’s best vibe coder. This guy figured out how to vibe code, and that’s actually what created OpenClaw.
[00:34:56] Justin Moon: Like the real thing that unlocked it was that he was able to use these open vibe coding tools so effectively. So I’ll get to that. So what is the user experience, right? It’s a personal assistant that you can chat with on any messenger you like — Signal, Telegram—
[00:35:11] Alex Gladstein: Nostr.
[00:35:11] Justin Moon: Nostr. Like, last night I did a livestream. We used — there’s an existing Nostr thing that wasn’t very good, and I built one using LLMs, right?
[00:35:20] Justin Moon: So, but you can add and do whatever the heck — email—
[00:35:20] Alex Gladstein: any emails—
[00:35:20] Justin Moon: additional email, anything you want. And if it doesn’t exist, you can make it. So the ingestion can be talking to — this can be from anywhere. The agent has its own computer.
[00:35:28] Justin Moon: It gets a computer and it totally controls it. It can be a desktop, like a little Mac mini. It can be a virtual machine. It can be something in the cloud. It can be on your laptop, although you probably shouldn’t do that. In general, be very careful with this. Don’t try it unless you have information security skills. Like, I’m still scared of it, and I’m almost an expert. And then it can totally control that computer, right?
[00:35:47] Justin Moon: So you can talk to it anywhere you want. It has its own computer and it totally controls that computer. And basically the premise is: what if you gave the agent its own computer and gave it skills and tools to control literally anything about that computer that the user wants to.
[00:36:00] Justin Moon: And it got to a certain point now where the developers don’t even have to invent the skill anymore.
[00:36:04] Justin Moon: Now, if it’s missing something — if there’s something you want to be able to control that it can’t do — you just say, “Hey, now make—” it has recursive self-improvement. Now it can be like, “Okay, make a skill that allows me to pilot this weird app that nobody else uses,” right? And so it basically vibe codes internally to make a personal skill.
[00:36:18] Alex Gladstein: Or, if you could color this in, you can also buy, you know, free-market skills.
[00:36:23] Alex Gladstein: So Pablo was showing me that what he’s building — he’s building a, that may not be a competitor to OpenClaw, but like something like an alternative that’s more for a different use case. But the idea is that when he wants stuff done, his agent can go hire — via Nostr and Bitcoin — like an expert in Cashu, for example.
[00:36:40] Alex Gladstein: That Calle has worked with, so that it knows kung fu, right? Yeah. So like you can hire that one, or hire one that’s really good at designing Liquid Glass apps for iOS, for example.
[00:36:52] Alex Gladstein: So we can go out and hire these and then like do it. So again, like the skills thing is not just something that you’d have locally. You could hire them or you could acquire them or whatever you want. But the point is it’s fascinating to see this start to work—
[00:37:03] Preston Pysh: real fast because we have a huge Bitcoin audience here.
[00:37:06] Alex Gladstein: Yeah.
[00:37:06] Preston Pysh: When you look at how these AIs are going to want to transact with each other, for me, it’s become super obvious that they’re going to want Bitcoin because it’s the only form of payment that they can’t be rugged on.
[00:37:18] Preston Pysh: So if they’re managing their own wallet, and you look at all the different ways that they could be paid—
[00:37:23] Preston Pysh: Anything that touches human rails or has the capacity for a human to be like, “Uh, I think I’m going to liquidate this account that it’s using,” I think the AIs are going to deeply understand that risk and never want to denominate their exchange in such a system.
[00:37:37] Alex Gladstein: I think for sure that’s where we go. But it’s just worth noting now though. For example, I saw the founder of Umbrel today. He was just posting that like he had his OpenClaw on the Umbrel just book his script for him, and yeah — he like gave it his credit card. He gave it his credit card and his billing address.
[00:37:51] Alex Gladstein: So it does work with fiat. But like, I think you’re right that over the coming years it’ll be way easier for these things to work with a digitally native currency. Yes, of course. Yes. Yeah.
[00:38:01] Justin Moon: I almost think it’s going to happen the opposite way, where it’s like they’ll just use dollars, ’cause that’s what’s in the training data and that’s what everyone accepts by default.
[00:38:07] Justin Moon: Right. They’ll use fiat.
[00:38:08] Justin Moon: And then they’ll try to do something where they can’t. They’ll be like, “I can’t — is there another option?” Oh, I can just use the Bitcoin.
[00:38:17] Justin Moon: It’s like, they keep asking me for all this stuff, and I’ve got to check emails, but my owner has the email and I can’t get in there.
[00:38:23] Justin Moon: So it’s like, lemme just create a Bitcoin wallet, right? I think it’ll kind of happen that way from the ground up just based on failure with the fiat option, right?
[00:38:28] Alex Gladstein: It’s like I’m trying to hire a person in Nigeria and the credit card’s not working. Well, why don’t I try something else? Let me see. Oh, there’s this Bitcoin skill.
[00:38:35] Alex Gladstein: Oh, let me learn that really quickly. Oh, okay. It works now. Like it’s going to do that.
[00:38:39] Justin Moon: Okay. Let me continue the OpenClaw.
[00:38:42] Justin Moon: So I talked about the user experience, right? It’s a personal system that can message however you want, that has its own computer, and that computer can be whatever you as the user want.
[00:38:50] Justin Moon: You have the freedom to choose. And so it completely blew up in popularity. So to give a sense: GitHub is like the collaboration platform for open-source software. There’s something called a like — or like a favorite on GitHub. You can favorite a post or a project. You say, “I like this one,” right? A star.
[00:39:06] Justin Moon: It’s called a GitHub star. Bitcoin has 80,000 GitHub stars. That’s a really popular project, and it’s 15 years old. OpenClaw is like six or seven weeks old and it has 160,000 stars. So it’s twice as popular as Bitcoin after six or seven weeks. Linux is like 200,000, so it’s almost caught up to Linux, which is the most famous open-source project that exists. There are graphs you can find that show all these other super fast-moving projects that look like a hockey stick, and compared to those, OpenClaw is like a vertical line. Yeah, it’s just insane.
[00:39:41] Justin Moon: Like, wow. There’s no X dimension of the adoption. Wow. It’s really cool. So that’s to give the listeners a sense of how popular it got. And so it’s because the user experience was really good. Like this is what everyone’s wanted. It’s like a relatively self-sovereign personal assistant.
[00:39:59] Justin Moon: I just want to kind of ask some questions about why it happened now, and give my takes on it. What enabled this? And this is in the sense of: where are we now? The first thing you think is, “Oh, finally the AI got smart enough.” I kind of disagree.
[00:40:15] Justin Moon: Like I kind of think that if we had Clawdbot when Claude 4 came out, which was May 22 of last year, it could have gone viral at the same time. It wouldn’t have been able to do everything, but I think that some of the previous models from six or nine months ago maybe could’ve done this. I’m not sure. I want to do some testing on it.
[00:40:20] Justin Moon: Yeah. But I don’t actually think that when it comes down to running the assistant, we needed the models that we have today. So one big one was context engineering.
[00:40:27] Justin Moon: Yeah. We got a lot better at this “just-in-time” prompting instead of “just-in-case” prompting. That was traditional, human software engineering. But I think to me the biggest one was that this one guy basically vibe coded a massive bridge. Like, Peter Steinberger’s GitHub is insane.
[00:40:42] Justin Moon: The average developer does like maybe 10 GitHub contributions — that’s like an action on GitHub a day. This guy does like a thousand a day. He’s just absolutely ripping it. He is operating at a much higher level than the rest of us, and many of us are trying to catch up. He has like 50 projects on his GitHub that compose this bridge between a traditional computer and an agent.
[00:41:03] Justin Moon: So stuff like managing Google Calendar, managing Gmail, making tweets, communicating over Telegram, communicating over Apple Messages. He made all these little command-line tools — little basic tools — that were optimized for an agentic user, not a human user. Like, no human would want to use a CLI tool to manage their calendar, but since LLMs are all text-based, right?
[00:41:21] Justin Moon: It’s all based on text. Yeah. They are really good at making these little CLI tools. And so eventually it got to this kind of recursive improvement where the tool builds itself. I mean also, it’s like the labs couldn’t do it ’cause it was reckless. Like you needed a cowboy. You needed an open-source cowboy.
[00:41:35] Justin Moon: He didn’t care. Like, I don’t know if the guy’s a Bitcoiner, but he would fit right in. Yeah, he would.
[00:41:41] Alex Gladstein: But it’s like Satoshi: open source, Nostr, this whole thing.
[00:41:44] Justin Moon: No big company would ever do this. And also he’s kind of a hero, ’cause he didn’t, you know. He could have raised VC money and all these things, but he was like, “I’m already successful. I’m just going to do this for the people,” right?
[00:41:49] Justin Moon: So there’s a lot of these technical things: making skills, skills for information extraction, context engineering.
[00:41:56] Alex Gladstein: Amazing. And it brought so much pressure on the large corporations. Mm-hmm. Because the users are now going to want the choice of using whatever input they want.
[00:42:04] Alex Gladstein: Whereas before, they wanted to corral you in their thing. Like they wouldn’t have wanted you to use Signal to talk to, you know, their new product. They’d want you to use their own.
[00:42:13] Justin Moon: Yeah.
[00:42:13] Alex Gladstein: Right. And now it’s like, well, what are we going to do? They’re probably going to have to offer ways for people to use any input they want.
[00:42:20] Alex Gladstein: So this is pretty seismic. And I would also just note, from a human rights perspective (maybe we can conclude this part with this, Justin): yes, of course these things are risky and have hazards, but the cool part is you can hook up Signal and Maple and run OpenClaw like that.
[00:42:37] Alex Gladstein: Like, you can use privacy-protecting AI agents and you can use privacy-protecting messengers. And there are some serious innovations happening on that now by some of our friends and people in our community, who are making what are essentially going to be full-stack personal agents. Maybe three to six months from now (some are already in very early alpha) you’ll be able to experiment with them.
[00:43:03] Alex Gladstein: You’ll be able to go in your Signal and have it do stuff, and have like the whole supply chain be encrypted. And I’m so bullish on that. So that’s what HRF is really going to be focusing on this year.
[00:43:07] Justin Moon: Yeah.
[00:43:07] Alex Gladstein: From an investment point of view, supporting the infrastructure means supporting the people building those tools, and then the rest of what we’re doing is going to be the super-scaling education.
[00:43:16] Justin Moon: Yeah. Let’s go into those in a little more detail. Yeah. I just want to kind of summarize first.
[00:43:20] Alex Gladstein: Go ahead.
[00:43:20] Justin Moon: So if you think of OpenClaw as a story, and it is a story (that’s why it went so viral; the story matters just as much as the tool), I think in a sense it’s the story of what one individual can do with the help of vibe coding and AI development.
[00:43:32] Justin Moon: Right — one guy. And then eventually he got far enough that a big open-source, voluntary open-source community arose around it. And this is exactly what we Bitcoiners participate in and love. This is what Nostr is. And so it’s very inspiring to see what one person can do.
[00:43:51] Justin Moon: And to me, OpenClaw is more of an idea than an actual product. Like it shows us the idea of: what if an agent has its own computer and you can talk to it however you want? I’m going to build my own OpenClaw. I’m not going to use OpenClaw. I’m just going to vibe code my own, and I’m going to use some of the pieces they have, and all my friends are going to do the same thing.
[00:44:09] Justin Moon: And you’re going to see this big renaissance of stuff that can’t be controlled, that is customized to what the user wants. And so for my takeaway, it’s like: I want to teach more people about AI, and also that this is why I’m proud to work on the HRF AI for Individual Rights program.
[00:44:26] Justin Moon: We’re fighting to make sure that more of this type of stuff can happen, that AI remains user-controlled, and that people can thrive in an AI world. So yeah — I’ll transition out to Alex to share a little more about how the program started and what we’ve done and what we want to do.
[00:44:34] Alex Gladstein: Well, yeah. We were fortunate: about 13 months ago we were presented with the opportunity to do this by a generous supporter. And anyone listening: you can just do things. You can support people like us and have us do really cool things. So thank you to everybody who supported us, including you, Preston, for helping us today.
[00:44:43] Alex Gladstein: Just even having this conversation is going to spark a lot of thoughts, I think. But yeah — we created the world’s first AI for individual rights program.
[00:44:57] Alex Gladstein: Every other human rights group either hates AI, or they’re going to try to focus on research and they’re not going to do anything. And you know what? We wanted to do it differently. And most of our effort is going to be focused on how to make this tool a mechanism for personal liberation. Period.
[00:45:14] Alex Gladstein: We are going to do, again, some research and investigations into how dictators are abusing it. That’s very important. And we do feel like that will start to get crowded with other people.
[00:45:34] Alex Gladstein: What I don’t see anyone else doing for sure is: in the same way that we’ve been pioneers in educating dissidents and activists and resistance groups on Bitcoin, we’re going to do the same thing with these open-source, privacy-protecting AI tools.
[00:45:49] Alex Gladstein: Because in the same way that Bitcoin helps them become unstoppable, AI is going to help them 10x or 100x what they can do — and we need that right now. Right now is the moment for us to push freedom forward. So that’s what the program is designed around.
[00:46:02] Alex Gladstein: We’re going to do events that bring people together, as Justin was describing — bringing together talented developers with activists. I mean, both of them were thrilled. The event went so well, the first one.
[00:46:10] Alex Gladstein: We’re going to do two more this year at least. We’re doing one in Nashville at Bitcoin Park in May. We’re going to do one at PubKey in DC in September. So we’re going to cook with these.
[00:46:19] Alex Gladstein: And the developers were thrilled ’cause it’s something inspiring to work on, as opposed to just the standard hackathon. And the activists are like, “Awesome. I get like five of the smartest people in the world to help me do what I want to do.” So everybody’s like, you know—
[00:46:21] Justin Moon: Lemme chime in here a little bit.
[00:46:22] Alex Gladstein: Yeah.
[00:46:22] Justin Moon: Like, you had this idea — you know, this HRF thing — I mean, my friends still every once in a while give me crap, like, “How do you work for an NGO?” And I’m like, “I don’t know, man.”
[00:46:32] Alex Gladstein: We are non-governmental,
[00:46:33] Justin Moon: I’m not. And I’m like, well, Alex brought me and my friends — these Freedom Tech developers, the ideological freedom tech developers — and we met these physical freedom fighters who actually fight for freedom under authoritarian regimes.
[00:46:48] Justin Moon: Over the years, I would meet these people and they were some of the most courageous, inspiring people I’ve ever met. I was like, “Man, I wish I could help them.” But it was always a little distant. ’Cause it’s like, I’d be like, “Okay, use my wallet, you know — I can teach you how to use Bitcoin,” right? But it remained like a friendship — a social thing.
[00:47:02] Justin Moon: But then when vibe coding happened — what vibe coding means is the cost of software production going kind of to zero. That’s what it means. A year ago, you needed to be, like, ChatGPT to build an agent. Then Peter Steinberger could build one himself — and Pablo — and now the tools themselves can recursively self-improve, right? The cost is going down, down, down, and down.
[00:47:19] Justin Moon: So the opportunity is like: okay, what if we could put activists and developers together, have them actually try to solve problems, right? Usually the ideas are bad and there’s no distribution of the product at the end. But the activist collaboration fixes both of these.
[00:47:34] Justin Moon: The activists bring a real problem like, “Hey, how do we make a leaderboard of which LLMs respect human rights, and how do we distribute it?”
[00:47:41] Justin Moon: Okay, the person’s got a massive academic following, is very respected, and works at Harvard. You know — this is what all the projects were like, right? It was very empowering from the activist point of view ’cause they got to do something useful, and they also got to see how software is created, right?
[00:47:47] Justin Moon: So a lot of these people have been around HRF and talked to these developers, but I don’t think they actually understood where software comes from. And they got to see it for a day — where it comes from.
[00:48:04] Justin Moon: And from the developer point of view it’s empowering because they’re like, “Man, we’ve been working on these abstract problems all the time, and now I get to make a tool that can help find corruption in a big data dump of documents from Iran,” right? It’s very nice to work on a concrete problem.
[00:48:14] Justin Moon: And then apply the skills you knew previously from your work with Freedom Tech stuff. It was a big success. It was a very surprising success for me, and I’m really looking forward to doing more of these.
[00:48:14] Alex Gladstein: And just, you know, the TL;DR: what are we doing? I mean, two main things. Again, we’re going to be bringing people together at all kinds of interesting events. We’ll have a big Freedom Tech Day at the Oslo Freedom Forum where we’re going to have quite a bit of vibe coding for activism.
[00:48:35] Alex Gladstein: Then the second thing will be grants. We want both the activists to apply to our AI fund to seek help to build the things they need. Then we also want really talented developers working on things like OpenClaw or Maple — open-source sovereignty and/or privacy-improving infrastructure.
[00:48:54] Alex Gladstein: We want to aggressively support that. So people should get in touch with us, and we really, really want to beef that up.
[00:49:01] Alex Gladstein: And even small investments can go a really long way. Right now the virality is here. Like again, the guy from OpenClaw — when he released it, it was Clawdbot. It’s not like he raised $30 million of venture capital. He did it out of his house. And it’s like: we could do that.
[00:49:13] Alex Gladstein: I don’t know if you want to mention briefly what Calle came out with today or yesterday? The Claw?
[00:49:16] Preston Pysh: I think so.
[00:49:17] Alex Gladstein: Like our friends are coming up with amazing stuff.
[00:49:19] Preston Pysh: Calle — another pretty famous Bitcoiner who has done incredible things historically as far as writing code — he made a turnkey Clawdbot that he just released, a website, right?
[00:49:30] Preston Pysh: That makes all of it super easy. A person can just go to the website that he just stood up. And I can only imagine how quickly a guy as talented as he is was able to engineer something like this and put it out there, right?
[00:49:45] Alex Gladstein: Yeah. No — and it still has a ways to go on the security side, but he knows that. He’s a privacy maximalist, and you can work on that.
[00:49:51] Preston Pysh: Yeah.
[00:49:51] Alex Gladstein: Again, where we are today — like for the activists at least — is we want people to use something like Maple for their basics, for what their 101s are. You should just not be using other chats. Like it’ll get 95 cents on the dollar, at least, of the big corporate model.
[00:50:08] Alex Gladstein: Then you can be in encryption — let’s move there. Let’s move from text message to Signal. In the next three to six months, we’re going to be able to move your creator mode — basically your Claude Code-type things — and I think we’re going to be able to move your agent as well into a similar environment.
[00:50:23] Alex Gladstein: So that’s like the hope and the dream right now: that in the next three to six months, people who really value privacy and sovereignty will have access to extremely powerful tools that reflect their values, but they can also 10x to 100x their work. And that’s very exciting.
[00:50:39] Preston Pysh: Guys, we have to keep this conversation going. Honestly, you guys are on the tip of the spear — it’s a military term — you’re on the tip of the spear of everything that’s happening in AI.
[00:50:49] Justin Moon: Coming from you, thank you, Preston.
[00:50:50] Preston Pysh: No, I really mean it. And the conversation I had with Pablo and Trey, I was like, “Guys, you have got to come back and keep us updated with where this is.” ’Cause I honestly think that this Clawdbot thing — and it’s interesting ’cause Sam Altman literally said the same thing.
[00:51:04] Preston Pysh: And you know, coming from a guy that’s one of the biggest in the AI space.
[00:51:09] Alex Gladstein: No, he said it’s here to stay.
[00:51:10] Preston Pysh: It’s here to stay. That caught my attention. And I think this is going to be massive for individuals. It’s the wild, wild west right now. And from a privacy and security standpoint — people losing bank accounts and email addresses and things like that — it’s the wild, wild west right now.
[00:51:24] Preston Pysh: But a year from now, I can only imagine what this project will look like.
[00:51:34] Alex Gladstein: It’s a new era of personal computing, you know? Yeah. Just commentary: the creator of OpenClaw really just opened a new hole in what’s possible, and now we’re into that world.
[00:51:45] Justin Moon: Lemme give one analogy: personal agents at this stage really remind me of eCash — right? Which I worked on, and Calle worked on through Cashu — because there’s an obvious tradeoff, big security tradeoff right up front. It’s like, hey, you trust a random guy with your bank, right?
[00:52:01] Justin Moon: So it’s kind of crazy. You give an AI agent its own computer and let it do whatever the heck it wants. So it’s a big upfront tradeoff that’s a little reckless. But then you get this flowering of all kinds of hobbyists and people who understand the risk, understand the tradeoffs.
[00:52:16] Justin Moon: That’s what we’re trying to communicate: don’t just recklessly do this if you don’t understand what’s going on. That’s why I tried to explain so much of these ideas to you — ’cause you need to equip yourself with some basic things in order to make these decisions.
[00:52:35] Justin Moon: But when you have this flowering of a big group of very motivated people in the open-source ecosystem, that’s when you can have really magical things happen. And that’s what happened with eCash and Cashu. And that’s what’s happening with these personal self-sovereign AI agents.
[00:52:41] Preston Pysh: You know, you have all these people talking like AI’s coming — it’s going to take all of our jobs. The other side of the coin that I really want to impress on a person listening to this: the tools we’re talking about also give a person the ability to 100x or 1000x their capacity and their ability to do things.
[00:53:03] Preston Pysh: And so these two forces — it’s amazing — really come down to: what is your perspective? Is your perspective, “This is too hard and complicated”? Well, AI’s probably going to eat your lunch. Or are you sitting there saying, “Hey, this is my moment”? Like, what can you do with this?
[00:53:19] Alex Gladstein: I can think of what could be a great example. I’m here with a really well-known Cuban activist. I’m thinking to myself: right now there’s no Bitcoin wallet that’s perfect for her needs, and no one’s really going to build that.
[00:53:36] Alex Gladstein: She’s going to build it. Within the next year, she’ll be able to speak to a computer and it’ll build it from open source. It’ll take some stuff from BitChat — which is very important given that Cuba doesn’t have great internet — it’ll take some stuff from popular open-source Lightning libraries. It’ll build what she needs, it’ll look awesome, it’ll be exactly what she needs, and she can just do it in a few weeks or a few days or a few hours, depending on how much she wants to put into it.
[00:53:49] Alex Gladstein: You’re going to see the blossoming of so many interesting little personalized tools that can radically expand people’s potential. And it’s such an exciting moment, to your original point, Preston.
[00:54:06] Alex Gladstein: And yeah, we’ll come back. We’re making a mini documentary right now — about the current six months that we’re living through — that we’re going to play on the main stage of the Oslo Freedom Forum. It’s going to start January 1. It’s going to end June 1. We’re going to play it on June 2.
[00:54:27] Alex Gladstein: And in the bottom third, you’re going to see the days go by, and you’re going to see the headlines, and you’re going to see interviews and work, and it’s going to be so crazy what happens on June 2 when we show this thing. The speed of what is going on here is just face-melting. Yeah. So honor and a pleasure, as always.
[00:54:34] Preston Pysh: Hey, that event — and also the one in Nashville in May — I’m very interested in going to that one.
[00:54:40] Alex Gladstein: Let’s go.
[00:54:40] Preston Pysh: Yeah. So we’ll put links to that in the show notes.
[00:54:43] Alex Gladstein: Yeah, May 8 to May 10 for the Bitcoin Park hackathon, part two — AI Hack for Freedom. And then it’s June 1 to 3 for the Oslo Freedom Forum in Norway. Amazing — freedomforum.com. Check it out.
[00:54:55] Preston Pysh: Amazing.
[00:54:56] Justin Moon: I have one thing to plug here at the end. Yeah — so I started doing some livestreaming on Nostr to try to share what I’ve learned over the last year. And for next week I’m going to try to vibe code a Bitcoin full node. That’s what I’m going to try to do.
[00:55:04] Justin Moon: So I’m going to be livestreaming on Nostr all week and probably going to injure myself severely in this process. Wish me luck.
[00:55:12] Preston Pysh: Amazing. Okay, so we end the shows now with a song, and we need you guys to select — either one of you — what your favorite artist is, or song, like if there’s a specific song you like. I want it to be like that.
[00:55:27] Preston Pysh: And then the song is going to recap everything we just talked about in a fun song-like way. So do either of you have a very strong preference for a specific song, artist, genre?
[00:55:40] Alex Gladstein: Go ahead and speak up, Justin.
[00:55:41] Justin Moon: I don’t have— I can’t think of a specific song, but I would go with the sea shanty. Sea shanty song style would be fun.
[00:55:48] Preston Pysh: Sea shanty song. I don’t even know what that is, but I’m about to find out.
[00:55:52] Justin Moon: Great. It’s like the sailors, about how they’re getting out the door and they’re going to get in trouble, you know. I could send you one afterwards if I find it.
[00:56:02] Preston Pysh: Wow. I love how diverse these song selections are. The last one, I think, was a Beatles song or something like that. So alright, guys — thank you so much for making time. We’re going to have links to all of that in the show notes. Enjoy your shanty song on the closeout here. Thank you.
[01:00:13] Outro: Thanks for listening to TIP. Follow Infinite Tech on your favorite podcast app, and visit theinvestorspodcast.com for show notes and educational resources. This podcast is for informational and entertainment purposes only, and does not provide financial, investment, tax, or legal advice.
[01:00:29] Outro: The content is impersonal and does not consider your objectives, financial situation, or needs. Investing involves risk, including possible loss of principal, and past performance is not a guarantee of future results. Listeners should do their own research and consult a qualified professional before making any financial decisions.
[01:00:44] Outro: Nothing on this show is a recommendation or solicitation to buy or sell any security or other financial product. Hosts, guests, and The Investor’s Podcast Network may hold positions in securities discussed and may change those positions at any time without notice. References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor’s Podcast Network is not responsible for any claims made by them.
[01:01:05] Outro: Copyright by The Investor’s Podcast Network. All rights reserved.
HELP US OUT!
Help us reach new listeners by leaving us a rating and review on Spotify! It takes less than 30 seconds, and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!
BOOKS AND RESOURCES
- Pablo on Nostr.
- Trey’s newsletter and podcast: Fire BTC.
- Related books mentioned in the podcast.
- Ad-free episodes on our Premium Feed.
Some of the links on this page are affiliate links or relate to partners who support our show. If you choose to sign up or make a purchase through them, we may receive compensation at no additional cost to you.
NEW TO THE SHOW?
- Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
- Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
- Check out our Bitcoin Fundamentals Starter Packs.
- Browse through all our episodes (complete with transcripts) here.
- Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
- Enjoy exclusive perks from our favorite Apps and Services.
- Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value.
- Learn how to better start, manage, and grow your business with the best business podcasts.
SPONSORS
References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor’s Podcast Network is not responsible for any claims made by them.