Block by Block: A Show on Web3 Growth Marketing

Ram Kumar -- OpenLedger: Next-gen Blockchain Network for AI

Peter Abilla

Summary

In this conversation, Ram, a core contributor at OpenLedger, shares his journey into the crypto space and the evolution of OpenLedger. He discusses the potential of specialized AI agents and the importance of data contribution in building effective models. Ram emphasizes the need for community engagement and partnerships to drive innovation in the Web3 AI landscape. He also addresses the competitive landscape and the unique approach of OpenLedger in creating a platform for data contributors and AI builders.

Takeaways

  • Ram's journey into crypto began in 2017 with the rise of Ethereum.
  • OpenLedger aims to build specialized AI agents for various sectors.
  • The platform allows data contributors to monetize their datasets.
  • Community engagement is crucial for the success of OpenLedger.
  • Partnerships, such as with EigenLayer, enhance OpenLedger's capabilities.
  • The market for Web3 AI is still in its infancy.
  • OpenLedger focuses on simplifying complex AI concepts for users.
  • Data verification and attribution are key to maintaining quality.
  • The branding of OpenLedger includes a relatable mascot to engage users.
  • Building awareness and education about OpenLedger is a priority. 


Chapters

00:00 Introduction to OpenLedger and Ram's Journey
06:23 The Unique Approach of OpenLedger
12:10 Building Specialized AI Agents
20:48 Data Contribution and Monetization
25:23 Target Audience and Community Engagement
29:15 Partnerships and Ecosystem Development
33:48 Navigating Competition in the AI Space
35:25 The Symbolism of the Octopus
38:40 Community Initiatives and Final Thoughts


Follow me @shmula on X for upcoming episodes and to get in touch with me.

Ram, core contributor at OpenLedger, welcome to the Block by Block show.
Hey Pete, thanks for having me here. I'm glad to be here today.
Amazing. We'd love to talk about all the cool things you guys are doing at OpenLedger. Before we do so, do you mind sharing with us your origin story, how you got into crypto?
Yes, I've been telling this a lot. I got into crypto quite a while back; that's why you see this grey hair. I got into it while we were building a technology company and someone paid us in Bitcoin. I started loving the technology behind that and found out there was huge scope while we were exploring the space around 2017 and reading about the tech. That's when Ethereum was really coming out, right? You could mint an ERC-20 on the Ethereum blockchain. People were experimenting with it and launching ICOs and all that. That's the time we got into the space. It was a wild west. People were launching random tokens here and there, and some of them actually came to be real projects. Binance was actually an ICO project when we were getting into the industry, and there were a couple of other projects that came out as well. There were a lot of scams happening. It was quite scary, but one thing I found out is that there was a lot of potential here. I think it's very human nature that when a new technology is out there, some people try to exploit it and certain people actually want to build value. That's how any industry grows. We've seen that happen across all these years, and that's what we were seeing happen in crypto as well, especially with the blockchains that were coming out and all that.
So we saw that there was potential, that the space would grow big. We kind of guessed what would happen in 2021, right? When 2017 was happening, we could see that over the next few years enterprises would come in as well. So we saw that as a potential and started building out a product division around it, sort of like an R&D division, which was a small ten-member team. It grew to about 200-plus employees in a very short span of time. And we loved what we were doing because we were working with people across the globe: people in Africa, people in North America, people in Korea. We enjoyed the time building in the Web3 space. We got to work with a lot of enterprises as well. We worked with Pepsi, with Walmart's India subsidiary Flipkart, and with a few other enterprises like Viacom, and helped them migrate to Web3. We provided the end-to-end tech for that. We also had a chance to work with a lot of protocols like Hedera Hashgraph, Aptos and others during this time. We loved what we did; it was a great opportunity to learn about the industry. But we also felt a calling towards building a project that is front-facing, right, one that enables builders to launch something good and enables the community to come forward and contribute to something. That's what gave us the idea for OpenLedger. We've always wanted to do something around data. Data has never been explored much in the Web3 space, and we saw that the Web3 and AI intersection was happening as the market was growing. So we decided: let's build a company which enables data to become intelligence, right?
How can we convert raw data that is contributed by the community into usable intelligence and power the next generation of AI agents or apps and all that? All of these agents that you see today run on a ChatGPT wrapper, right? It's a simple OpenAI model, which is generic. But they can be much more than that. I see these models as sort of the brain that supports the body, which is the agent. And a brain should not just have generalized knowledge; it should have specialized knowledge in a particular sector so that it can support the body to do something much more purposeful. So if agents need to excel in a particular sector, then the model that powers the agent has to excel in that sector, has to be specialized in that particular sector. That's what OpenLedger enables. It enables contributors to come forward and build specialized models on top of us, and they do that by contributing datasets to us and building a specialized model which will power an agent in a particular sector. Say someone wants to build an agent in the DeFi space and they want a model that enables trading to happen better for that agent, an agent that understands how to trade. Then you would need datasets that have a lot of trading insights, that have knowledge around trading, on-chain data, off-chain sentiment and all that, and feed them to a model to make it specialized for trading, right? We basically provide this entire end-to-end platform where you can propose, say, "I want to build a trading model to support a DeFi agent." Then you let data contributors come forward and contribute the datasets you wish to collect, let's say on-chain data, off-chain sentiment, and trading insights. And as you collect these datasets, you can use our platform to fine-tune and build a specialized model, host it on top of this, and have it used by any agent so that the agent can actually go ahead and trade for you. So that's sort of the end-to-end system we're looking to build here at OpenLedger. And yeah, that's the history of myself, how OpenLedger came about, and what we do as well.
Is your approach a departure from the other approaches we're seeing? We're seeing generalized agents, and it seems like you're going after specialized agents, which makes sense to me. But tell us about your approach versus what we're seeing right now in the market.
Yeah. We've seen this happen in the traditional space if we take a look at it. About two years back, when ChatGPT came about, it was a huge rage. Everyone started figuring out how ChatGPT works. Consumers loved it because they could chat with it, ask random questions and get answers. Businesses and startups saw that as an opportunity, figured out that there is a model behind ChatGPT which powers this conversational agent, and decided to go ahead and build wrappers around that model. Basically, there were simple apps which were a wrapper on top of ChatGPT: you could ask questions about, let's say, marketing, and this was basically a marketing agent that they built, and you could get answers from it, right? So we had a plethora of these ChatGPT wrappers come about in the last two years, but all of them died down. They were not smart enough. All of them died down when OpenAI went ahead and announced new features on its platform and made them useless. Then the industry kind of evolved.
People understood that they would not be able to build a purpose-driven agent, or a particular app that excels, if it was a simple wrapper on ChatGPT. If it is just using the same model, they wouldn't be able to excel at the work they want to do. So they started working on specialized models. A good example of that is Perplexity. Perplexity uses OpenAI to understand the context of the question that you asked. It uses OpenAI just to get the context, but then it uses other models as well, various other models, say a math model like Phi, to understand your math question and give you an answer. So the traditional space evolved to where people started using specialized models. Basically, they optimized the brain of the agent, they optimized the knowledge base it gets, to make these agents or apps smarter and launch better AI apps and agents. That has to happen going forward as well, whether it's in the Web3 space or the traditional space. All of us need specialized models that are built for a particular purpose so that we can build better agents out of them. That's what OpenLedger is looking to solve and that's what we want to enable. For example, if you want an agent that can go ahead and stake for you, figure out the best staking platforms and then do the staking whenever it's required, stake and exit and then probably stake on another platform whenever the interest rates change, then we wouldn't want to be doing it manually, right? I would not want to sit over a laptop, figure that out and actually do it. And, you know, to take a much more memeish example: everyone saw that the Trump token came about, but most of them missed it because the Asian community was actually asleep. They all missed out on the meta. So what if an agent had been available for them which could have gone ahead and traded for them, which would probably have bought it at a much lower market cap when the opportunity was there? Everyone would have made much more money, there would have been much more chain activity, and the growth would have been much higher compared to what actually happened. All of this is possible only if we can make agents as capable as a human would be today. In order to do that, you need to power these agents with better models which are built for a particular purpose. And that's what OpenLedger enables. We provide that platform. Let's say Pete wants to build a model for Web3 marketing. He thinks that, you know, that's very much needed. Because if you go ahead and ask questions to an OpenAI model, if you go ahead and ask a question about marketing, it's going to give you a very generic answer. It's just going to give you what's available, the data it has collected from the internet. It's going to repurpose that and give it to you as an answer. But it's not going to be an expert in Web3 marketing, right? So let's say you decide to build a Web3 marketing model which is fine-tuned and built for Web3 marketing. Then you would basically propose that on top of OpenLedger as a platform, put up a bounty, and let contributors, like marketing contributors across the globe, contribute datasets for it. Share their thoughts, share the information they have.
It could be documentation, it could be answers, it could be maybe a thesis that can be shared with you as a model builder. You could use that dataset, populate it on our dashboard, fine-tune a model with it, and then use that model to power an agent and launch a Web3 marketing agent using the model that you just built on top of us. That is what we want to enable. I think those agents and apps which are more purpose-driven are what's needed in the market. We've seen that happen in the traditional space, and we would also see it happen in the Web3 space and in much larger AI implementations as AI grows. For that, you need infrastructure like us which can superpower this, which can enable more of these models to be built so that we have more knowledge for better apps. That's why we believe that specialized agents are very much needed, that the infrastructure for them is needed, and we're building towards that.
And that makes sense to me. I'm familiar with a big e-commerce company that's actually creating a very specialized agent for their chat agents, for their sales team. Most of their sales are actually not done online because the products they sell are highly specialized. They're in the outdoor space, and so the customers are very interested in the type of material that's used, the number of millimeters. If they're going on a big hike in Nepal or in the Himalayas, will this material work in that kind of environment? A lot of those questions can't be answered on their website. So what they've seen is that customers call in, talk to a human, and the human who has experience in Nepal or in the Himalayas is able to answer that. Now they're putting all those chat logs together as a dataset to feed into a model. So what you're sharing about OpenLedger and your approach makes a lot of sense. Tell us about the specific features. Maybe you can answer it this way: speaking to a developer, what are some specific features that make OpenLedger attractive for your target audience? And I imagine your target audience includes both data providers as well as developers who may want to build applications using specific, very specialized data. And then maybe talk to someone who may not be technical. Why would this be interesting to them?
Yeah, absolutely. I think there are a lot of benefits for the various types of stakeholders involved in the process. So let's assume someone wants to build an AI agent. I'll take the example that you told us about: this particular e-commerce company which is looking to build AI agents that can handle customer service and support. The first approach they would take is not to build a model, right? They would go ahead and use an AI agent and give it better prompts, prompts about how it has to behave. That's called prompt engineering, and it makes the agent able to answer certain questions. But that might not really work, because it doesn't have the knowledge of the particular products they sell, and you can't just feed that in, because the knowledge base would be quite huge. So the second approach is to provide a RAG, which is sort of like a temporary database connected to the agent, which the agent can use. The third way is usually where the model comes in: let's say there are large amounts of data and you just can't simply provide the context using a RAG.
You need to actually fine-tune the model to give much more precise answers; then you would go ahead and fine-tune a model to power the agent. Those are the three approaches you could take. OpenLedger provides the solution for the second and the third approach: you can use it to provide a data feed to an agent, or you can use it to actually fine-tune a model that powers the agent. So as a builder, you could look at OpenLedger as infrastructure. There are three parts to us: one is the data layer, the second is the model layer, and the third is the integration with agents. If you take the RAG approach, as a builder you can come and see the various datasets that are available on the platform, choose whichever one you need, and provide it as RAG to your agent. If you feel that that's not enough, that you were not able to make your agent behave the way you want, it was not answering questions the way you want, it did not have the precision you looked for, then you could go ahead and actually fine-tune a model. Go to the model layer, choose the base model that you want to go with, then fine-tune a model with the particular dataset you have chosen. That's the second approach, where a builder trains the model for its own needs.
Let me go back to the analogy I gave earlier about how a human behaves: it is always related to what his mind, his brain, has been exposed to. I could become an expert in, let's say, programming by going through a course on programming over a year and become a software programmer; it takes years and years of training to learn software engineering and become a programmer. Or I could do a crash course, build a product in a one-nighter, and launch something on top of that. The efficiency obviously varies a lot; the approach that has to be taken depends on the product I want to build and the end result I want. The same goes for an AI agent. The model is basically the brain of the AI agent, and you decide how your agent needs to behave. If you need a short-term solution where you want the agent to have short-term knowledge, or very quick, easy-access knowledge, you could use RAG on top of OpenLedger: use any of the datasets that are available and just feed that to an agent you already use. Or if you want the agent to be much more precise, have much larger knowledge, and excel at that particular solution much better than just using a RAG, then you go ahead and fine-tune the model: train it with more data and make it an expert. So that's simply the approach you can take based on your needs on top of OpenLedger, and that's how an AI builder, someone building in the AI space, someone who wants to use an AI agent, can use OpenLedger. On the other side, there are a lot of people who have data, right? It could be an individual who has a lot of data, or a firm that has a lot of data. They could see this as an opportunity to contribute data to OpenLedger so that these model builders, or people who need a RAG, can use this data, right?
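Before the conversation turns to the contributor side, here is a rough Python sketch of the builder flow just described: browse the data layer, then either attach a dataset as RAG context or fine-tune a base model in the model layer. This is only an illustration of the decision Ram outlines, not OpenLedger's actual SDK; the function names, dataset IDs, and model names below are invented.
```python
# Hypothetical sketch only: these names are invented and are not OpenLedger's real SDK.
from dataclasses import dataclass

@dataclass
class Dataset:
    id: str
    domain: str       # e.g. "defi-trading", "web3-marketing"
    size_rows: int

def list_datasets(domain: str) -> list[Dataset]:
    """Data layer: browse datasets that contributors have published (stubbed)."""
    return [Dataset(id="ds-onchain-trades", domain=domain, size_rows=50_000)]

def attach_rag(agent_id: str, dataset: Dataset) -> None:
    """Quick path: expose a dataset to an existing agent as retrieval (RAG) context."""
    print(f"agent {agent_id} now retrieves from {dataset.id}")

def fine_tune(base_model: str, datasets: list[Dataset]) -> str:
    """Deeper path: specialize a base model on the chosen datasets (stubbed)."""
    total_rows = sum(d.size_rows for d in datasets)
    model_id = f"{base_model}-ft-{datasets[0].domain}"
    print(f"fine-tuned {model_id} on {total_rows} rows")
    return model_id

if __name__ == "__main__":
    candidates = list_datasets("defi-trading")
    needs_precision = sum(d.size_rows for d in candidates) > 10_000
    if needs_precision:
        specialized = fine_tune("open-base-7b", candidates)   # third approach: fine-tune
        print("power the agent with", specialized)
    else:
        attach_rag("trading-agent-1", candidates[0])           # second approach: RAG
```
The branch is just the choice Ram describes: a small, quick-access knowledge need fits the RAG path, while the need for domain precision pushes the builder toward fine-tuning.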
So every time data is contributed on top of us, think of us like a library: you can go ahead and contribute it. And if your data was pulled out and used in a model or an agent, we figure out what the impact of that dataset was, right? We use something called data attribution: we figure out what impact your dataset created in the particular output that came out of a model, and then we attribute it back to you. You get a piece of the revenue that the model makes. So that's how data contributors can use OpenLedger: they can contribute datasets to us and get paid for that. To give you an analogy, this happened in the internet space when YouTube came about. In order to bring in a lot of content to enrich their content library, YouTube went ahead and announced their partner program: if you contribute content to YouTube, you get paid for it. And then we had high-quality content being contributed. YouTubers became a thing; it became actual work. Businesses saw it as an opportunity and contributed their proprietary content, because they knew they could now monetize that content not only on television but on YouTube as well. It's a very similar approach over here. That was done to index content into the web, and that's why the web is so enriched. If you want AI to be enriched, if you want AI to be used in multiple sectors, then you need to index data into AI. We need to enable platforms that can help this happen, and OpenLedger is one of those platforms which enables data contributors to come forward and contribute data to us. It could be an individual or a business which owns a lot of proprietary data; they could index it, contribute it to us, and get paid as their data is used. So that's the economy we're trying to drive: a circular economy where you provide data, a model benefits from it, and if the model is used, you get paid as a reward. So yeah, that's sort of the other piece of OpenLedger. Apart from providing this infrastructure for people to build models, we also want to benefit the contributors who made that happen, so that there is a circular economy being built.
Is there a whitelist approach, or I guess what are the quality checks for data providers? Because I can see a situation where there's garbage in, garbage out, and so making sure that the data is well formed, it's structured, and it also fits the criteria that you guys are looking for matters. There could also be data that maybe they don't own, and so there could be some intellectual property issues. How do you deal with that?
Yeah. If you take a look at Ethereum as a platform, it's completely permissionless. Anyone could go ahead and use the platform and launch a meme token, which is probably infringing an IP, or launch a dApp which is really useful and can enable, you know, on-chain banking to happen. So there can be two different sides to what it enables, and OpenLedger aims to be the same. We want to be permissionless. We want users to decide what they want to build on top of OpenLedger. And if they're building a model, what we actually provide are the necessary verification systems, the tools to control what kind of data is being contributed. So they can write rules in a rule engine as they propose a model, as they propose a dataset to be contributed, specifying what kind of data has to be contributed to that model. And the rule engine basically verifies this data.
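The rule engine itself isn't specified in the conversation, so the following is only a minimal Python sketch of the idea: the model proposer declares rules, and each contributed record either passes all of them or is rejected before it ever reaches the model. The specific rules, field names, and sample records are invented for illustration.
```python
# Minimal, hypothetical rule-engine sketch; real rules and record formats are up to the model proposer.
from typing import Callable

Rule = Callable[[dict], bool]

RULES: list[Rule] = [
    lambda row: bool(row.get("text", "").strip()),        # reject empty records
    lambda row: row.get("language") == "en",              # only the declared language
    lambda row: len(row.get("text", "")) < 10_000,        # cap record size
]

def validate(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a contribution into accepted and rejected rows."""
    accepted, rejected = [], []
    for row in rows:
        (accepted if all(rule(row) for rule in RULES) else rejected).append(row)
    return accepted, rejected

contribution = [
    {"text": "ETH perp funding flipped negative this week", "language": "en"},
    {"text": "", "language": "en"},   # fails the empty-record rule
]
ok, bad = validate(contribution)
print(f"{len(ok)} accepted, {len(bad)} rejected")
```
In practice the check would also need to be provable, which is where the EigenLayer verification discussed later comes in; the filtering logic above is just the part the model proposer controls.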
And as you said, it's always garbage in, garbage out. If we take a look at Hugging Face, 99% of the Hugging Face datasets or models that are there are useless. The reason is that it's an open source platform where there's no monetary benefit for the people who contributed; it is just an open source contribution. Open source contribution can be scaled and made useful only if you figure out a way to monetize it. Only if there is a way to incentivize and monetize people will you see open source being scaled. That is what we want to do as well. And there is negative incentivization: if you have mechanisms in place where providing garbage data that was not useful simply means you don't benefit from it, that's a disincentive for someone to do it. So that's the principle we want to follow. We want to make it as permissionless as possible, but have rules and mechanisms in place that make sure you have checks. For example, the verification system is one thing, where you can verify whether the data that is contributed was valid or not. And then there are the incentive mechanisms, where we can prove whether your data was used in a model, whether your data came in when the model was inferred, whether your data was useful or not. In most cases, if someone contributed a large amount of useless data, they don't get paid at all; it just consumes a large amount of space, and all of that data can be removed from the model, it can be rejected. So that's where the attribution comes in. Not only does it help you reward, it also helps you figure out which of the data was useful and which was not, so that you can go ahead and remove the useless data. That's the only way you could do this, right? Because even if you take a look at any model that is currently in the market, most of the data in there is useless, because filtering all of it is quite unimaginable; it's not humanly possible. But still, we can provide the tools that enable it. The best way is to keep optimizing as much as possible. And I think that's absolutely fine; that's how most products evolve. In any community-driven product you look at in the market today, there is content which is not so useful, but then there is content which is really useful, which makes the change. The more checks, verification systems, monetization and incentive mechanisms you have, the less garbage and useless data comes in. So it's an evolving process, I would say.
So it sounds like, speaking of target audiences, you've got data providers, and you've got app developers who want to build applications using the data that's available in OpenLedger. I imagine the specific structured and specialized data is available in some kind of data store. Tell us, what other audiences does OpenLedger aim to serve, and how are you reaching them?
Yeah, so as I told you before, the major stakeholders are AI builders, builders in the space who want to build better AI solutions. It could be an agent or an AI service that needs a better brain or a better model to support it. Those are the builders we would look at. Today's AI agents are solving a purpose. Most of them are a bit memeish, but they kind of evolved from there, right? So they saw this as an opportunity. Builders always see this as an opportunity to try to solve a problem.
They want to go ahead and build in that space. That's how the agent space grew. And, you know, speculation always helps; people know they can make more money. Very similarly with OpenLedger: when people realize what we can do, what the platform is, when we go mainnet, they'll figure out that you can build specialized models, you can build the data feeds that power these agents, right? Those are the crowd we want to attract: people who want to build unique models, knowing that these unique models will be used by agents. We've seen this happen in the traditional space over the last couple of years. As AI grew, we saw the community coming forward and building new versions of models. We've seen models built for math, models built for healthcare, models built for languages which are very unique, which are not understood by the general models out there, because the community knows there is a need for this, right? GPT doesn't understand Arabic, so that's why you see so many open source Arabic language models that have been built by the community. Very similarly, based on the need that is there, either in Web3 or in the traditional space, we believe that model builders will build these specialized models on top of OpenLedger. That's how the usual principles work. And on the other side, we also see that data contributors, knowing that there are models like these being built, knowing that they could get rewarded for making these models good, will contribute datasets as well. We've again seen this happen in the traditional space on various platforms like Kaggle, Hugging Face and all that, where data contributors come forward and contribute data because they just want to enable this ecosystem to grow. So these are the two major audiences we want to attract: builders who want to build AI models and stuff like that, and data contributors who want to contribute to this.
And how are you reaching these audiences?
That's a very good question. There are multiple ways to do that, offline and online. We've been writing blogs about what we do. We've been actively talking about this in our social channels, right from our Discord. We have a very active Discord and Twitter community which follows us and participates in the various programs we're running. We have a testnet program running right now. Testnet Epoch One, which we launched, was mainly for data contributors contributing their data, and we onboarded close to a million users with it. Most of them we saw interacting with the product, understanding how it works, then communicating with us and contributing datasets accordingly. I think that's great, because everyone is interested in AI; everyone knows they could be part of this revolution that is happening. So we see people contributing datasets to us as a way for them to understand our platform. That's one way we're letting people know what we're doing. And then we obviously do a lot of events; we talk about this a lot and create awareness. So those are probably some of the ways we've extended OpenLedger's reach, along with conversations like this where we go ahead and talk about the product as well.
Let's switch to, I think, a really unique feature that OpenLedger has: the partnership with EigenLayer.
Tell us about that and how it has helped differentiate OpenLedger.
Yeah. One thing we believe is that we don't have to build everything ourselves, right? We have to build the ecosystem and the infra that enables people to use other solutions to scale as well. When you asked how we validate data contributions, as I told you, we have a rule engine where anyone building a model can set rules on what kind of data has to be contributed. But for that data to get verified, whether it went through this rule engine, whether it is actually verifiable, whether this data was something that was validated, we needed a verification system, right? That's where we work with EigenLayer. We use an EigenLayer AVS to verify the data contributions made on OpenLedger. And then, on the other side, when we do attribution, we create a proof called Proof of Attribution, which is an optimistic proof. It basically shows, every time data was used in a model's inference, which data contributed to that inference. It shows: okay, this model, which was trained on these hundreds of datasets, when asked a particular question, for that particular question and the answer that came out, datasets five, six and ten gave this output, and then we showcase the weightage for that. That's the Proof of Attribution we create. It's an optimistic proof, so we assume by default that every proof created is true. And if someone goes ahead and challenges it, we use the EigenLayer AVS to verify it, to run that mechanism, figure out whether the same proof is generated again, and attest it. So that's where we use EigenLayer as well. Those are the two places where we work with EigenLayer. We could have built the system ourselves, but it would mean we need to build the validators, build that security, which is a whole other piece of work on its own. It's easier to just do it on top of another platform, treat it like a cloud. I mean, who would build a server in-house in today's day? We all go with the cloud, right? I think that's what EigenLayer is also looking to solve: to take away that burden from crypto builders and protocols, so they don't have to worry about validation, about providing validators and providing that security. We could just choose EigenLayer for that. So that's why we chose EigenLayer, and we've been pretty happy with it.
It seems like a really differentiated offering. Are you doing any co-marketing with EigenLayer on this?
Yes, we've been talking about this for quite a while. I think I've been part of their AMAs and closed sessions as well. We'll do more as we progress through testnet and as the product goes live; there'll be more that we do within the EigenLayer ecosystem as well.
Yeah. The EigenLayer ecosystem is quite large, and with OpenLedger being part of it now, it really opens up lots of opportunities, both for data providers and for model builders and AI agent developers.
Yeah, I agree. And I think the EigenLayer ecosystem has been really driving decentralized AI, especially with AI agents. I think they've been doing an excellent job of building awareness and also building the toolkits that are necessary. So yeah, it's quite exciting to work with them. And for us, it's not that we wanted to associate with EigenLayer for the hype that is there. There was a need that we wanted to solve, right?
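Ram doesn't spell out how the attribution weights are computed, so the snippet below is only a toy Python illustration of the payout side of Proof of Attribution as he describes it: given per-dataset influence weights for one inference, the inference revenue is split proportionally, and the proof is assumed valid unless challenged. The dataset IDs, fee, and weights are made up.
```python
# Toy illustration of the reward split behind Proof of Attribution; all numbers are invented.
def attribute_reward(weights: dict[str, float], inference_fee: float) -> dict[str, float]:
    """Distribute one inference's fee across datasets in proportion to their attribution weight."""
    total = sum(weights.values())
    return {dataset: inference_fee * w / total for dataset, w in weights.items()}

# e.g. the attribution step found that datasets 5, 6 and 10 drove this particular answer
weights = {"dataset-5": 0.5, "dataset-6": 0.3, "dataset-10": 0.2}
payouts = attribute_reward(weights, inference_fee=1.00)
print(payouts)   # {'dataset-5': 0.5, 'dataset-6': 0.3, 'dataset-10': 0.2}

# Optimistic flow, conceptually: the published proof is taken as true by default;
# a challenge would trigger recomputation and attestation (via the EigenLayer AVS, per Ram).
```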
We went for the best out there, and they've been one of the biggest restaking platforms, providing this to us as a service. So it just made sense for us to work with them.
I think that's wise. I mean, it came from a need, and EigenLayer just made sense. Now, OpenLedger is a permissionless, full-stack approach to creating very structured and specialized data, on which models can be built, and from those, AI agents that can answer very specific questions that users have. The AI crypto space is getting very, very busy. Tell us about some projects you consider competitors, and I guess your approach to how you respond in a very competitive market.
Yeah, so I think the market is in its very initial stages, at least in the Web3 AI space. In fact, it's been hard for me to actually find a direct competitor in the space doing what we do, so there's a lot of mindshare that we can capture. A lot of people do various things. For example, Vana does more on the data collection side, but they're more towards the foundation layer; building a foundation model is what Vana talks about, and more on the data side. So we could probably work with them. I wouldn't see them as a competitor; we could work with them in terms of getting access to their datasets. I've not really seen someone who has thought about and built this space. There are definitely a lot of players in the traditional space, like Together AI, which provides model infrastructure similar to what we do, and Hugging Face, Kaggle and all that, which provide very similar infrastructure to ours as well. So there are a lot of people doing this in the traditional AI space, not much in the decentralized AI space. Hopefully we will see more. We've seen this happen with decentralized compute: I think io.net and Aethir were among the few projects that started working in that space, and then tons of competition came after that. The only competitors at that time for Aethir and io.net were probably the traditional AWS and Nvidia and all that, right? Very similarly for us, we see a lot of traditional AI companies doing what we do. Probably we'll see more Web3 AI companies as this market evolves.
Yeah. Tell us about the octopus. What was the thinking behind making the octopus part of not just your logo, but your visual identity?
Yeah, it's a very interesting thought. I think we even posted about it recently. It came out of a bunch of conversations we had internally over some coffees. We were thinking about how to pass on this concept that OpenLedger is all about knowledge, brain, being smart. We wanted a mascot. We didn't want it to be serious; we wanted a mascot that the end consumer would also relate to, because there's a lot of retail element in crypto, because everything is token driven. So retail has to understand what these guys do. We wanted a mascot that can represent that and bring in some kind of fun element, and an octopus is usually seen as very smart. It is one of the smartest creatures out there. Myself and my co-founders have been in this space over the last seven, eight years, from when we were building our initial startup to after we became the core contributors at OpenLedger. What we always look back on are the stories we've grown up with.
So if you take a look at it, there was this particular thing in cricket: during the World Cup they had an octopus which would show who will win a match, right? It predicts that; it actually chooses the team, and most of the time it was actually right, so it was fun to watch. That always rang a bell, and we wanted to bring that octopus in as a mascot for OpenLedger. So that's why the octopus is there. It's just been fun. People liked it when it first came up as a thought, then we posted about it, and now it's a very integral part of our ecosystem.
I think it's really smart. What projects that are highly technical, with a highly technical audience, do is create something simple and fun, because it makes it relatable, right? And so I think that's really smart.
There are a lot of things where, when I speak to a lot of people, they say, this is too complex for me, so I have to simplify it and let them know. Even for us, and I've been in the blockchain and ML space for some time over the last seven, eight years, even for us a lot of elements in the AI space are too complex because it's evolving. A lot of stuff has been thesis that is only now coming to practical use cases. So we want to make sure that we simplify as much as possible. Only if you simplify does it grow. I think that is one of the biggest struggles for crypto also: the tech is really, really complex and it's very geeky, right? That's one of the reasons we don't see consumer adoption in crypto, because it's too geeky. I wouldn't see my grandmother using any DeFi product. So the tech has to be brought into its simplest form. That is one thing we want to do as well. Hopefully we can simplify it as much as possible.
Let's jump into the community side, and maybe we can make this our last question. We'd love to hear your thoughts on Kaito and how that initiative is going. And I guess, what were your initial goals in going into and working with Kaito?
I think Kaito is an excellent platform to let people know about a project, to get that mindshare. It's a good way to incentivize people to talk about your project. There are so many projects that get launched in the crypto space every other day, and there's so much out there, and all our attention is on crypto Twitter, right? So if we can figure out incentive mechanisms to let people talk about your project, I think that's a great way. One thing we obviously don't want is for that to be the only driver, right? People need to get organically or naturally excited about what we do; the incentivization is an additional piece, a push or a reward that they get. That's what we would want. So that's one of the reasons we got into Kaito: we can have exposure, we can tell people what we're doing, we can educate people about what we're doing. We got so much exposure once we were on Kaito. Otherwise, we've always approached things more from the technical side and we've always been known on the builder side, which really liked us. But retail started to get to know more about OpenLedger. So that is what we wanted, to build that awareness. We wanted to be on a platform like Kaito. I think it's really good for building awareness for any tech product that's being built in crypto. So we've been enjoying being there, fighting with the other projects out there for rank. So yeah, it's turned out to be a really good reach for us.
Good, good.
Well, Ram, thank you so much for taking the time to meet with us and for sharing with the audience the cool things that you guys are doing at OpenLedger. Any final words you'd like to share?
Thanks for this time. I would be happy to share more information about what we do. Anyone who's interested in knowing more about OpenLedger can reach out to our Twitter pages; you can also DM me on Twitter. We would love to have people experiment with AI, right? I think agents have to evolve from the speculation that is there today; they have to go beyond just being a meme and move towards a purpose. Everything that a human does on-chain can be done by an agent, but you need the right infrastructure and the right mindset to build that. We can provide the infrastructure, but the community has to have the mindset to build really cool agents so that we can do a lot of interesting stuff in the crypto space. For that, awareness has to be there; for that, people have to learn. So we'd love to have the community know more about what we do, engage with us, and request information that we can provide as well. And then build, just build better products in the crypto AI space. That's one thing that I would love to see.
Excellent. Thank you so much, Ram. Ram from OpenLedger, thank you.
Thanks a lot. It was amazing having this conversation with you.
You too. Thank you.
