Block by Block: A Show on Web3 Growth Marketing

RuneScape Bot Creator to AI and Crypto Entrepreneur

Peter Abilla

Summary

In this conversation, Erick Ho, co-founder of Function Network, shares his journey from creating a RuneScape bot to developing a managed AI cloud platform. He discusses the challenges of sustainability in open source models, the role of blockchain in coordinating AI transactions, and the importance of personalization in AI agents. The conversation also touches on the competitive landscape of AI and crypto, as well as the future plans for Function Network.


Takeaways

Erick's journey into technology began with creating a RuneScape bot.
Bots are an early iteration of AI agents.
Open source model sustainability is a pressing issue.
Function Network aims to coordinate developers, model creators, and infrastructure providers.
Blockchain facilitates seamless payments in the AI ecosystem.
AI agents can be hyper-specialized for specific industries.
The Function Network is currently attracting developers to its platform.
Transaction fees in traditional systems are a barrier to scalability.
Personalization enhances the effectiveness of AI models.
The AI and crypto landscape is still evolving with many opportunities.

Chapters

00:00 The Genesis of a Tech Enthusiast
02:55 From Gaming to AI: The Evolution of Function Network
05:59 Understanding Managed AI Cloud Services
09:05 The Sustainability Challenge of Open Source Models
12:08 Function Network: Bridging Gaps in AI Development
15:02 Targeting Developers: The Function Network Approach
18:01 Hyper-Specialized AI Agents in Various Industries
20:57 Defining AI Models and Agents
24:01 The Intersection of AI and Crypto
28:11 Understanding Function Network and Crypto Integration
30:47 Traction and Development Progress of Function Network
32:34 Target Users and Incentives in Testnet
34:12 Challenges in Attracting Model Developers
36:30 Navigating Competition in the AI and Crypto Landscape
40:30 Differentiating Models in a Crowded Market
48:12 Final Thoughts on Overrated Ideas and Influential Builders

Follow me @papiofficial on X for upcoming episodes and to get in touch with me.

Watch these interviews and subscribe on YouTube: Block by Block Show.

See other Episodes Here. And thank you to all our crypto and blockchain guests.

Erick Ho, co-founder of Function Network, welcome to the show. Hey Peter, thanks for having me.

So I want to begin with a spicy take that you shared on Twitter, back on July 21, 2024. You shared about your background and how you built your first mobile app. You reverse engineered RuneScape. You created a RuneScape bot to kill Vorkath, the end game boss. Is that kind of where your love for tinkering with technology started?

Yeah, absolutely. It's interesting that you dig into my X like that. And for viewers who are not familiar with RuneScape, it's a multiplayer video game, and just like with any other video game, you have to put in a lot of work to play it. You can spend countless hours grinding. And so growing up, my younger and older brothers would make me play the game for them to help train their characters. One thing naturally led to another: can I automate this? And so I naturally went into the world of how can I make the computer play the game for me. Many of you may know these as bots; bots just automate a task. It's funny because these days, in the modern post-AI world, people are calling bots AI agents and AI agents bots. But yeah, it was definitely an introduction to how I even got into computer science, because to truly build out a bot, you have to understand the game mechanics, you have to understand how the game is written. And so that naturally led to a pathway of my career in reverse engineering, which is the process of taking the video game apart, breaking down the compiled code, which is kind of like the end code that's written, and then bringing it all the way back up to what the actual source code looks like, so that we can inject into it and actually automate these tasks. And so I got to the point where I created this bot that played the video game better than human players. And the nice thing about it is that generally, whenever we're playing video games, you get tired, you get fatigued, you start to mess up. A bot doesn't get tired, right? It runs at maximum efficiency. It was a really great time to, you know, spend my youth really understanding everything end to end and then applying that to RuneScape. So yeah, that was a really fascinating experience for me. And I see the process of reverse engineering as one of the fundamental bases of how I see everything in the world today, which is diving deeper, layer by layer, into how things work. The ability to dive deep and truly understand the process end to end is super important to any business problem, any technology problem, or really anything that you're trying to problem solve.

No, that sounds really interesting. You were making quite a bit of money as a kid in high school too. Is that right?

Definitely. So, you know, there are actually many people in crypto that also have a background in playing RuneScape or World of Warcraft, right? And Vitalik credits a lot of it to that. I mean, he was very active in RuneScape. Definitely. I think he also played a lot of World of Warcraft too. I think his avatar at one point was a moonkin or something. And so, yeah, inside all these video games there are items, right? And these items can be sold for real money. It's called real world trading.
And RuneScape has this currency, RuneScape gold, which actually goes on the market: 1 million RuneScape gold might go for like 20 cents in real dollars, right? And so you play the video game, earn gold, earn items and so on, for real money. And so there's this natural incentive for people, whether it's real players or, you know, bots that can play and maximize the amount of gold that you earn, to make real money off it, right? And so that kind of put on my entrepreneur hat. Originally I was building a bot so that I could just train my character, but, you know, can I scale this? And so what ended up happening was I scaled it to pretty much the highest gold you could earn per hour in RuneScape. I believe at peak it was like 40 different accounts running and killing Vorkath. And, you know, we know we were probably the best player at killing this monster, because there are high scores that show how many kills you had, and we were on the top of the leaderboard. And so, you know, I was making anywhere from like $40 an hour to $60 an hour. But keep in mind, I was sleeping the entire time, right? Because you have the bots practically doing this all for you. And this is still relevant to what I'm doing today as well, right? Over at Function and in the post-AI world, people often talk about, how do you automate your tasks? How can you have AI do everything for you? And I'd say that bots were literally an early iteration of what we see today with AI agents.

Well, let's transition into that. So the headline on the function.network website says, "Managed AI Cloud: experience bleeding edge generative AI models with limitless scalability, all powered by our distributed infrastructure." So tell us, what does that mean, and who are you trying to reach with that headline and subheadline messaging?

What I'm trying to reach with that is a bunch of SEO, so you just pop up on a lot of search engines. And as well, nowadays you also have GEO, which is Generative Engine Optimization. A lot of fancy words, but we can definitely simplify it, which is: currently, if you don't go on function.network today and you're a developer, or you're looking to get access to open source models, generally you have two options, right? You can go out and buy some GPUs and then take the open source model and host it on top of your own infrastructure. That's the very degen or very DevOps way of doing things, which is you just try to own the entire stack yourself, right? The only problem with that is that as your application continues to scale, now you're left with managing the entire infrastructure yourself, right? And so that's not really scalable. And as well, with infrastructure as a whole, you know, I used to work at Amazon Web Services as a solutions architect. We focused a lot on scalable infrastructure, and I spoke to a lot of customers in the pre-seed to Series A range as well. And it's very important that if you have compute, you should utilize 100% of it, because any compute that is not used is effectively worthless, or you're losing money on it, right? And so a lot of developers who are creating these AI agents and AI agent launchpads generally don't host their own infrastructure, because they're not going to maximize it, right? And there is definitely a skill set to managing all these GPU servers and scaling them as well, right?
And so you kind of have the second option now, which is using a managed service for it, right? Yeah, we're one of the managed providers of open source models today. We currently operate that on a centralized fleet. And so we have a developer platform where you go get your API keys, create a workspace, and then get access to around 15 different open source models, from Llama models to DeepSeek and Qwen, and you just use them to create your application.

Let's back up a little bit. Could we articulate: what is the problem that Erick Ho is trying to solve with Function Network? How would you answer that?

We're relying on incentives, right? And so the developer platform is only one piece of it all, right? You know, whenever you build a network, you've got to keep in mind a network is an entire ecosystem. And so there are a lot of moving parts to it, from the supply side all the way up to the demand side. And fundamentally, the problem statement that Function has right now is that we believe that open source model development is actually not sustainable today. And we actually fear that in the future, the existing open source models that we see today could go privatized, or we won't actually have very good open source models to use. That's inherently dangerous, right?

Why do you think it will be privatized?

Because effectively, just like with any company, you look at who's currently making the open source models that people are using. So OpenRouter is an aggregator of LLMs, and a lot of traffic flows through there right now. A lot of developers are using OpenRouter, and they actually publish metrics on the models developers are using, going into specifics of how often each model is being used. And 70% of the traffic is currently being routed to Meta, DeepSeek, Qwen, and these, it's just a handful, literally only like five open source models, right? And you look at the funding and where that funding is coming from. Meta makes money from ads. They currently don't make any money from their AI product. They're currently using it for mindshare, and as well just capturing as many developers as possible to use their models, right? Mm-hmm. And it's the same with DeepSeek as well; they're a trading company, fundamentally, right? And so they actually don't make that much money from their open source models. And so I think from a business perspective, this is always a question that every single business has to ask themselves, which is: how do we convert this from, you know, a free product over to something that actually generates revenue, right? And so while it's free today, we see it time and time again, it's the classic playbook of, you know, subsidize it now, make money later. Right. And so that naturally leads to, eventually, and we see this with some companies today, where they're trying to own the entire stack. OpenAI, for example, you know, they originally started off as just a simple chat application, right? And then they released the API for developers. And now, look at the modern day, they have an application for you to code using their models, they have an application for you to generate videos using their models. And so naturally, a lot of companies are going to try to move towards owning the entire stack, with less emphasis on the models that they're creating for the open world and the open source community today.
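For developers who want to picture the platform flow Erick describes above (grab an API key, create a workspace, pick one of the hosted open source models), here is a minimal sketch. Function's actual endpoints and model identifiers aren't quoted in the episode, so the base URL, model name, and environment variable below are placeholders that assume an OpenAI-compatible chat completions API, a shape many managed inference providers use.

```python
# A minimal sketch of the developer flow described above: grab an API key,
# pick one of the hosted open source models, and send a chat completion.
# The base URL, model id, and env var name are placeholders, not
# Function Network's documented API.
import os
import requests

BASE_URL = "https://api.example-managed-ai.cloud/v1"   # hypothetical endpoint
API_KEY = os.environ["EXAMPLE_AI_API_KEY"]              # hypothetical env var

def chat(prompt: str, model: str = "llama-3.1-70b-instruct") -> str:
    """Call an OpenAI-compatible chat completions endpoint and return the reply."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,  # swap to a DeepSeek or Qwen id to compare models
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize what a managed AI cloud is in one sentence."))
```

Swapping the `model` argument is the same trial-and-error loop Erick mentions later, where builders test DeepSeek, Llama, or Qwen against their own use case.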
So I guess help the audience understand, like, what is the problem? Maybe articulate it in one or two clear sentences, and then let's get into how Function Network solves that problem.

Yeah, definitely. Open source models, in one or two sentences: open source models aren't sustainable today, and we're aligning incentives to make them sustainable. So Function Network is an AI protocol that aligns developers, model creators, and as well compute providers to all work together to create this cohesive ecosystem for sustaining open source models, right? And how we do that is actually a pretty interesting approach, which grabs analogies from what we see in the crypto world today, where blockchain excels, and, you know, where the market is currently lacking.

Okay, take us through that. How is crypto helping those, what were the three areas of coordination? There's the model creators and then there's, yeah.

If you look at the existing landscape right now, there are like 1.8 million different models published on Hugging Face, right? Which is just a registry for models to be put on, right? Those model creators aren't making any money today. They just publish it for recognition, to show that they have the state of the art model. But whenever you publish a model on Hugging Face, you don't make any money from that, unless you are also the infrastructure provider or the compute provider, right, and you sell that access to developers. And so that's a huge gap.

So the entities, of the three that we need to coordinate according to your thesis, there's the infrastructure provider, there's the model creators, and what was the third one that you said? The actual developers. The compute providers and then the model creators. Got it. And those three need to be coordinated, and you've created Function Network to help coordinate those three entities, is that correct? Okay.

And blockchain is great for coordination as well, right? Blockchain is great for coordination and for instantaneous payments. And so, you know, let's just say that theoretically, in a web two world, one day a model creator creates this really great state of the art model. Now they have to decide, how do I make money from this, right? You can either go try to create your own infrastructure platform and developer platform, host your own model, sell that to developers, and just create a traditional SaaS business. Or what you can do as well is do one-to-one enterprise licensing out to infrastructure providers, right? And so you would go through a lengthy process, a legal review, solidifying the terms, and then going out to different infrastructure providers to host your model and get a rev share through there. We actually think both options are not scalable, because the people who are creating models, on one end, they're really great at creating really good models. Fundamentally, they're AI researchers; they're not infrastructure experts. And we've seen this time and time again, where a lot of companies kill their budget or their runway because they overspend on infrastructure, because they don't know how to efficiently provision and right-size their infrastructure, right? And so they end up committing a lot on top of AWS and GCP and just buying a lot of GPUs, right?
On the second end, you know, if they do enterprise licensing, well, that's a lot of legal work, and as well a lot of negotiation; the sales process for that is extremely not scalable. And so that's why we're proposing Function for model creators, where you can effectively just digitize your open source model, basically digitizing the IP for your open source model and publishing your model weights, where you can actually choose how much you want to take from what infrastructure providers are selling access for, right? And so it's effectively a royalty fee, very similar to NFTs, right? Whenever NFTs were created, there's generally a royalty fee that's attached to every single NFT, and every single time an NFT is bought or sold, a portion of that flows down to the creator of the NFT. In similar fashion here, over in the AI space, we're applying a very similar concept, where every time traffic comes to the model, a portion of that actually flows down to the model creator. And what this does is that it creates this natural incentive and natural funding, that doesn't require deep VC funding, for these model creators to continue innovating in a sustainable way and to bring more models to the Function network, right? So that way they can easily sell more models. And so that's the model creator piece. Got it.

Tell me about the target customer for Function Network. You mentioned those three entities and how you're using crypto to help coordinate all three. What does your go-to-market look like, and who are you targeting at the very first? It's kind of your beachhead to start getting traction for Function Network.

For sure. On one end, it's the developers, right? And that's what we're currently focused on right now. We're currently bringing...

What type of developer? Maybe really articulate the type of developer that you're looking for.

Yeah, for sure. So, you know, anyone who's building, so it's any vertical, right? AI can be applied to practically every single vertical. And what we're currently seeing within our developer platform, and what's currently being developed, is a lot of people who are building out AI agents, right? And so as the developers are building out AI agents, they always need access to some sort of model, right? And they're experimenting with different models that best fit their use cases, you know, trial and error, right? You might start off using the DeepSeek model, figure out that it doesn't actually perform up to your standards, and so they'll swap over to the Llama model, or they'll use a mixture of models, right? And so that's one example of developers that are currently on our developer platform. And it's actually really important for us to continue growing that; we see a lot of opportunity for us to continue to bootstrap that demand, which will ultimately flow down to our actual network, right? And so we're currently growing the pie on that end. And it's going to start off with agents, then you have launchpads, and then you have anyone who's building in any vertical. They might be building out a sales agent, might be building out a healthcare agent, a customer care agent. And so those are various different sets of builders currently on our developer platform.

So let me ask a specific question on the agent side. I was talking with someone the other day, and they were building a very specific agent for a law firm. And so it was hyper-verticalized, just for this specific law firm. And so they were labeling a bunch of data, they were digitizing a bunch of paper data and then labeling it, and then feeding this model
so um they were labeling a bunch of data, they were digitizing a bunch of paper data and then labeling it and then feeding this model. um kind of legal, you know, legal information. um Is that the type of uh agents you're talking about, like hyper verticalized, like by category or by industry or sector? We cater to all agents, right? But that's just a, but we cater to all agents. That's one example of all agents that we cater to. And I don't see why we wouldn't want to cater to a super specialized vertical agent, like a law firm agent. I think that's a fantastic use case, right? um But kind of like digging deeper into it. um So is it like the law firm that's actually creating the AI agent, or is it like a consultant to the law firm? It was a consultant to the law firm and um the only users of this model, the user interface will be, the customers of this interface are gonna be the attorneys of the law firm. And so it wasn't, it's not gonna be productized and then commercialized. It's strictly for use for the specific law firm. And I actually see that kind of use case more and more. um And I can see that for all types of industries. um where it's kind of like this private non-commercial product that's used in-house, but it's like based on very hyper-personalized um models like from the company itself. Yeah, makes sense. um I'm also seeing very similar trends, right? Where people are doing these AI agents basically for their business case. There's nothing inherently wrong with that because fundamentally what behind every single AI agent there sits effectively a base model, right? And this base model dictates how intelligent your AI agent is, right? And so, you know, really think about it. Um, as the base models get more intelligent, that actually helps increase the intelligence of every single AI agent. Right. Um, while AI agents are still ingested in personalized data for uh any business, it still feeds down to kind of like the mother brain. Right. But you know, what if one day that mother brain is no longer available? That's a huge problem. Right. Um, And so like what if one day Meta is like, well, you know what, we have no longer interest of offering these Lamba models to you. You could either go through our API or, you know, you could use our own law for AI agent instead. um That inherently is kind of like the risks that we're effectively looking to mitigate through our protocol. Because then now we actually provide a sustainable path for model creators to truly thrive, you know, create revenue. And then as well bring that back to the ecosystem. um I think, you know, in general though, right. The AI agent space was what I would say is still pretty early on. A lot of people are still experimenting around trying out different AI agents, trying out different models that is powering the AI agents. And so there's a lot of opportunity for people to continue to experiment with all sorts of other models that can actually help amplify uh their AI agent as well. And so we look at hugging. no, maybe we can kind of we kind of jump right into things, but maybe we can define some things first. maybe tell the audience that when you say an AI model, maybe describe what that means. And then when you talk about an AI agent, you know, maybe describe and define kind of what that means. And then, and let's get into the crypto side of things. You know, we talked about cryptos to helping coordinate the the infrastructure model creators and also the developers, maybe tell us more about the crypto side too. 
Anyway, go ahead.

Yeah. So for the audience here, an AI model you can just think of as basically the brain of everything, right? AI models practically power every single AI agent that you will hear about today. So what's the difference between an AI model and an AI agent? AI agents are a little bit more complex. They still use the AI model, but they have additional steps that introduce pre-processing, post-processing, and different interactions, like personalized data, right? And so, you know, to build out an AI agent, you still need an AI model. They're not different; they go hand in hand with each other. And then, you know, going into the crypto side, sorry, I just want to make sure I got the question. It was, basically, where does crypto come into all of this with the network?

Yeah, before we get into that, I don't feel like we've fully defined what an AI model is. We're kind of throwing phrases around like Hugging Face and Llama, too. But maybe let's really define, like, this is a model. When Erick talks about a model, or anyone talks about an AI model, this is what he means. And then an AI agent has very specific kinds of definitions; maybe we can talk about that. And then let's definitely get into the crypto stuff.

Yeah, I think I touched on it a little bit already, but effectively, once again, the AI model is like the brain, right? It's pretty generic. It doesn't really have context of what you're really trying to do. And I'll give you probably a deeper example. If you ever asked ChatGPT something that you wanted to know about, but it didn't really give you the exact, precise answer, it's probably just because it didn't have enough context about the question that you were asking, right? And so generally you need to feed it more information, right? And that's effectively the AI model. It's like the base information resource, right? But, you know, even if you have this large book of information, you still need something to help guide and basically index all that information, right? And so the AI model sits as the base layer. It's very generic, it's very broad, it's extremely generalized. And then, because of that, you introduce AI agents, which help you go a little bit more verticalized. And so that's why people mention there are sales AI agents, customer AI agents. These AI agents generally have a more specialized use case. And the way that they actually create these specialized use cases is that, generally speaking, the developers of these AI agents feed information to the AI so that it has better context, right? And so, in a very similar fashion, whenever you're talking to ChatGPT, you try to provide as much information as you possibly can so that the base model can respond better. AI agents effectively automate that process, so that you just get the right response that you're looking for, or the right intelligence that you're looking for. And so, you know, they both go hand in hand. I think it's super important to also note that AI agents are autonomous, right? They can kind of work on their own. They act very similarly to a human; those are the goals of being a proper AI agent as well. And so anything that you do as a human, AI agents are meant to replicate that and also be able to perform those duties in the future. And I think it's so amazing.
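To make the model-versus-agent distinction above concrete, here is a toy sketch: the base model stays generic, while the "agent" wraps it with private context plus pre- and post-processing. It reuses the hypothetical `chat()` helper from the earlier sketch, and the client data is invented purely for illustration.

```python
# Sketch of the model-vs-agent distinction: the "model" is the generic brain,
# the "agent" wraps it with personalized context and pre/post-processing.
# Reuses the hypothetical chat() helper from the earlier sketch.

CUSTOMER_NOTES = {  # invented, stand-in for an agent's private data
    "acme-law": "Client prefers plain-English summaries; jurisdiction is Delaware.",
}

def preprocess(question: str, client_id: str) -> str:
    """Inject the agent's private, client-specific context ahead of the question."""
    context = CUSTOMER_NOTES.get(client_id, "")
    return f"Context about this client: {context}\n\nQuestion: {question}"

def postprocess(answer: str) -> str:
    """Trivial post-processing step: strip whitespace and cap the length."""
    return answer.strip()[:2000]

def legal_agent(question: str, client_id: str) -> str:
    prompt = preprocess(question, client_id)      # pre-processing
    raw = chat(prompt)                            # the base model does the reasoning
    return postprocess(raw)                       # post-processing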
Yeah. Now, one of the issues, so if I could just, I guess, summarize. So a model is essentially a machine, it's this mother brain that you've talked about, but it understands the world like we do, and it's able to categorize through taxonomies and language and make sense of what's in the world, with very specific context. Now, when you get into hyper-specific context, then you're talking about AI agents. And so some of the AI agents that we're seeing in the commercial space, like for customer service and operations and that type of thing, these agents are autonomous and they can do things on behalf of the human, if we allow it, with very specific instructions and/or boundaries about what they can and can't do. Now tell us about the crypto stuff, because we're seeing a lot of intersection between AI and crypto, which I think is really quite exciting. And I think these AI agents, I mean, crypto is perfectly suited for these AI agents so that they can start to transact in the world and also transact with each other using crypto, right? Maybe come back to Function Network and how crypto works inside of the Function Network product.

Yeah, so you're absolutely right. You know, AI agents transacting using crypto is a pretty interesting use case, but particularly for Function, you know, with the three different actors that I mentioned, the model creators, developers, and compute providers, they all need to be paid in some form or fashion, right? And that's where we see our fun token being able to take a huge role, because blockchain allows us to seamlessly coordinate that and pay people out in a way that doesn't require micro-transaction fees, and it's also instantaneous, right? And so, you know, when a developer uses our network, they have to pay for inferences. So they'll use our fun token to pay for the inference, and then some of that revenue gets flowed down to the infrastructure provider in the fun token, and they deserve to be paid because they're spending money on GPUs to host these models. And then likewise, a portion of that is also flowed down to the model creators, right? In the fun token. But why is this actually needed, right? If you try to do this with centralized providers today, if you actually go read Stripe's terms of service, they actually prohibit certain crypto businesses from transacting, right? And as well, it's up to them to decide what is a high-risk business. And it's very similar with a lot of other payment processors, right? And so that's one problem: it's very possible that if you try to build out this business on centralized rails, the payment processor will just shut you down, and now you don't have a way to effectively coordinate and pay. And then the second part to it is that there's a significant amount of transaction fees, you know. You hear it time and time again, which is 2.9% plus 30 cents per transaction. That's not scalable in a future where we see this massive ecosystem of millions of developers, hundreds of thousands of compute providers, and millions of model creators all working together, right? That's a significant amount of scale that doesn't work within the existing realm of centralized providers today. And that's why blockchain is really great at coordinating all this, through the payments and, yeah, through our fun token. Got it.
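The episode doesn't give Function's actual fee split or inference pricing, so the numbers below are made up, but they illustrate the shape of the flow Erick describes (developer pays per inference; the compute provider and model creator each take a cut) and why card-style fees of 2.9% plus 30 cents break down at this granularity.

```python
# Illustrative arithmetic only: the per-inference price and split percentages
# are assumptions, not Function Network's published economics. The point is the
# shape of the flow and why card-rail fees don't work for micro-payments.

INFERENCE_PRICE = 0.002          # dollars per inference call (hypothetical)
COMPUTE_SHARE = 0.70             # hypothetical share to the GPU/compute provider
CREATOR_ROYALTY = 0.20           # hypothetical royalty to the model creator

def split(price: float) -> dict:
    """Split one inference payment across the three actors."""
    to_compute = price * COMPUTE_SHARE
    to_creator = price * CREATOR_ROYALTY
    to_protocol = price - to_compute - to_creator
    return {"compute": to_compute, "creator": to_creator, "protocol": to_protocol}

def card_fee(price: float) -> float:
    """Typical card-processor fee: 2.9% plus 30 cents per transaction."""
    return price * 0.029 + 0.30

print(split(INFERENCE_PRICE))
print(card_fee(INFERENCE_PRICE))
# Settling each $0.002 call over card rails would cost roughly $0.30 in fees,
# about 150x the payment itself, which is the micro-transaction problem
# on-chain settlement is meant to avoid.
```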
Tell us about traction. How is Function Network doing in the last... how long have you guys been live?

So on the network side, actually, let me just take a step back. We decided that we wanted to build out a platform that people can use that feels like a centralized platform, right? And so we spent a lot of time building out this developer platform. We currently have around 500 different developers building on it today. And in the last month, we just introduced billing. And so inference using our endpoints was effectively free until very recently, right? We went live with that approximately two to three months ago, right, on the developer platform side, and then we only introduced billing very recently. On the flip side, that's kind of like the demand side, right? Where we're just bootstrapping the demand so that people come and develop with us and we don't give up the market share to other centralized providers. But ultimately, our number one goal is to push that down to our network. And so on the network side, we've been building out a significant amount of smart contracts that are going to be deployed on Base, right? And we have finally achieved DevNet, and we're going to testnet towards the end of June or the start of July. And so we're moving pretty great. I do believe this is the first DeAI project deployed on an L2 like Base. And so I'm really excited to actually bring this network live, and it's all fully coordinated through the set of smart contracts that we wrote.

Explain DeAI. That's a...

Yeah, it's kind of like crypto x AI; another word for it is decentralized AI.

Got it. Because that can be confused with DEI, you know, which has kind of a negative connotation. No, that's really interesting. What are your plans for testnet, and who's the target user for testnet?

Our target users for testnet are anyone from those three different actors, right? And it's all incentivized as well, right? So if you're a developer, you can actually come onto the network, use the open source models that we're currently hosting, and actually earn points for it. And so unlike, you know, using OpenAI and these other centralized providers, effectively you're being paid to actually use AI this time, right? Which is pretty interesting. And then you also have the infrastructure providers that have the H100s, 3090s, 4090s; they can all come onto our network as well. And then ultimately the model creators as well. And so it's an entirely incentivized testnet to really stress test our smart contracts and ensure that the technology that we built out really scales. And so for anyone who, you know, experiments around with AI, or has a lot of infrastructure and needs to find ways to use it, or is creating a really badass model, it'd be fantastic to have you on Function's testnet.

Now, in incentivizing those three entities, or the three stakeholders, especially on the infrastructure side, I imagine, I mean, there's so much demand for the infrastructure that they provide. How is that going? I imagine that's probably a BD challenge.

Maybe it's because of my lens of being deep into infrastructure, but I would actually argue that infrastructure is actually very easy to incentivize. Infrastructure in itself is not a hard thing to attract.
To be honest with you, and this is kind of a hotter take, I find that infrastructure providers within crypto are more like mercenaries. They just generally try to farm as much of the token as possible, or farm as much of the points as possible. And if it makes sense from a business point of view, then they will gladly participate. And so, as long as it makes sense, because you've got to keep in mind, with a single GPU there's a very good chance they're not using a hundred percent of the GPU. And so what if they slice off like 30% of that and allocate it to Function's testnet? Well, that's 30% that they weren't even going to earn from in the first place, right? And so, naturally speaking, we actually see an oversaturation of GPUs available, and we're quite confident that we can get those GPUs onto our network without a problem. It's oversupplied, if anything.

That's a good problem to have. So for the other stakeholders, is getting the model developers the challenging part, or the developers who use those models?

The model developers and the developers themselves, right? Creating the AI agents, launchpads, et cetera. We're focusing on both of those ends, significantly, on the BD side. And so we have a pipeline, a sales pipeline, built out on the model creator side and the actual developer side, to help us win and make Function significantly successful. But yeah, that's definitely where our focus is, for sure. And there's nothing wrong with, you know, you kind of have to sometimes say it how it is, which is: you want to incentivize the right actors into your network. It's extremely important, right? And so, for example, if you overpay the infrastructure providers and leave no room for the model creators, well, then you can't sustain the flywheel, right? And oftentimes we see a lot of emphasis on only the infrastructure side. But you also have to consider, well, who's actually supplying to the infrastructure providers, right? What are the infrastructure providers hosting? What is the model creators' work, right? And so you need all actors to be properly incentivized to truly make the flywheel work.

Now, if you were to look at the AI cloud landscape, who are some competitors that you look up to and respect and consider formidable competitors?

Yeah, and this is a question that we get a lot, because it honestly is, let's just say, a noisy space, right? A crowded space, and it's kind of hard to tell what people are doing in crypto and AI. And so you see a lot of GPU marketplaces, right? To give you some examples: on the centralized side, you see Vast.ai, where you can just simply rent a GPU. And then on the decentralized side, you see things like Akash, Aethir, et cetera, right? They provide GPUs for you to access. We actually don't see GPU marketplaces as a competitor. We see them as basically a way to continue to scale Function, right? Because we focus on hosting models, particularly. Whenever you rent a GPU, you can use it for anything: you can use it for video editing, hosting podcasts, playing video games, et cetera. It's actually a different business. But sometimes when people just hear AI and GPUs, they think they're the same thing, right? And so, just to be fully clear, we're not a GPU marketplace.

So you guys are in co-opetition with GPU marketplaces. You could be a customer of theirs.

Exactly. And it's super important, right? It's actually a good thing because, like I said, there's literally an oversaturation of GPUs.
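A quick back-of-the-envelope version of the idle-GPU point Erick makes above: any capacity a provider isn't using is pure loss, so pointing even a slice of it at a network like Function's testnet is found money. The hourly cost and utilization figures below are assumptions for illustration, not numbers from the episode.

```python
# Back-of-the-envelope only: hourly cost and utilization are assumed figures.
# The point is that idle GPU hours are a sunk cost, so renting out even a
# slice of them recovers money the owner was otherwise burning.

HOURLY_COST = 2.50        # hypothetical all-in cost of running one GPU for an hour
UTILIZATION = 0.70        # hypothetical share of the GPU the owner's own workload uses

idle_fraction = 1 - UTILIZATION
wasted_per_day = HOURLY_COST * 24 * idle_fraction
print(f"Idle capacity burned per GPU per day: ${wasted_per_day:.2f}")
# Renting that idle 30% out at even break-even rates already recovers ~$18/day per GPU.
```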
And so there are a lot of GPUs on these marketplaces that are waiting to be activated, waiting to be used. And so Function can actually help increase the utilization. We can actually see, in real time right now, that there are a lot of GPUs on Akash and Aethir that are waiting to be rented out. And so we'll find a way for them to actually do that in a sustainable way. In terms of actual competitors, I think it's extremely early, like on the decentralized side, right? On the decentralized side, I feel like it's so early, there aren't many products truly live. And so it's really hard to say right now who's truly a competitor or not. On the centralized side, which, to be fair, is practically where everyone should be focused anyways, right? There's no point in PvPing across two decentralized AI products when it's so early stage. On the centralized side, we see practically anyone who's building closed source providers, sorry, closed source models, through to these infrastructure providers that are hosting these open source models but taking all the revenue for themselves. And so that goes into OpenAI, Together.AI, Lambda Labs, these very large companies that are pretty much bringing in all the revenue and not passing it down to the model creators themselves.

Yeah. Now we see other projects kind of attacking the same, they have the same thesis as you do, that, you know, there's only a couple of big players that are kind of owning the AI space, and it's going to lead to a world that none of us are going to want. And so you have projects like Sentient from Sandeep and Polygon, and also others attacking this problem in their own way. What are your thoughts on that?

Yeah, so for Sentient, right, I'm not too fully informed. I know they have some pretty interesting technology with model fingerprinting, if I recall correctly, which kind of helps people understand who owns the model. But even Sentient is trying to figure out where they position themselves in the market, to be honest with you, right? And so, yeah, it's extremely early. And whenever it's so early, isn't it our best use of time to just focus on ourselves and push towards, you know, actually getting this out? Because, you know, one thing that I really don't like in the crypto space is that there are too many theoreticals and too many early-stage projects. And so how can you actually change that, right? I do think that we're moving to a place where people ask, where are the practical applications? Where are the practical networks? Where are the actual products within crypto as well? And so that's why we put heavy focus on actually building out these smart contracts, building out this DevNet, and being able to bring it over to actually test it on a chain that you likely already use, right? And so we're not building out our own L1. We're not trying to create another L1. We believe that the existing technology on the blockchain side already exists for us to really use. And so, you know, that's why everything is designed in terms of a set of smart contracts that we can deploy, right? And so our fundamental bet, and how we differentiate ourselves, even in the DeAI space, is that we ship. We actually send it over to testnet, we have people actually use it, and we start bootstrapping that right now. We tend to lean towards more pragmatic designs.
And so we really wanted to push for people to actually be able to touch and feel Function.

Yeah, no, that's... so from a developer's perspective, I could be building an AI agent on, let's say, Solana. But then I need the models hosted; I can have the models hosted on Function Network and then transact the financial aspects of it on Base. Would something like that work?

Yeah. I do think there's this phenomenon that people generally run into towards the end with these AI agents, right? They try to use pre-processing and post-processing mechanisms to make their AI agent more intelligent, but it all fundamentally leans on that base model, right? And if the base model is not meeting their standards, that may lead to them trying to create their own model, right? Like a customized model for what they're trying to do. And then once they create that model, whether it's a fine-tuned model or a model made from scratch, they may want to monetize it, right? And so now they can take that model that they created or fine-tuned and bring it over to Function Network and just monetize it directly on-chain, permissionlessly. So they don't need to ask for permission, like, hey, I want to list this model on your platform, right? You just go completely on-chain, sign a couple of transactions, and finally they have this model that they're earning from, and infrastructure providers that are actually hosting their model, so that developers can actually use it as well.

Let me ask this question around model differentiation. So Function Network, as an infrastructure provider, provides base models to anyone that wants to use its network. And if everyone has access to these base models, what differentiates the outcome or output of one model versus another? And how can these be unique? Because if I have an app, you know, if I'm building a business and I'm using one of the base models, but someone else is using the same base model, how can I compete against them if we're using the same base model? How do you differentiate?

So from kind of like a developer side, right? If you all have access to the same model, how do you differentiate? I think that's kind of what led to AI agents as a whole as well, right? We're kind of going to go in a circle here, because once again, this is extremely trial and error, right? And so I'm going to tell you what I see in the industry today, which is: people will try different models on the platform, because people are experimenting with different models right now. And so you do have benchmarks that tell you how intelligent a model truly is, but generally that's not specific enough to really tell. And so you have these developers, these businesses, that are just trying various different models. And so they might swap from Llama 3.3 to 3.1 or, you know, from one model size over to another, and just experiment around, right, with their use case. And so, yeah.

So I take a base model like Llama 3 and then maybe enhance it with, let's say, very specific healthcare data? But no one else has access to that except for me.

Yep. And that's what AI agents effectively are, right? They take that personalized data that they feel like they have an edge on and then try to feed that through the existing base model. Generally, that's what people call a RAG pipeline, right?
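Since RAG comes up here and gets defined just below, here is a toy version of the pipeline: index some private documents, retrieve the ones most relevant to a question, and hand them to the base model as extra context. A real pipeline would typically use embeddings and a vector store; the naive keyword-overlap retriever keeps this sketch dependency-free, and `chat()` is again the hypothetical helper from the first sketch.

```python
# Toy retrieval-augmented generation: retrieve the most relevant private
# documents and prepend them to the prompt so the generic base model answers
# with business-specific context. The documents below are invented examples.

DOCS = [
    "Refund policy: customers may return items within 30 days with a receipt.",
    "Support hours: the call center operates 8am-6pm Eastern, Monday to Friday.",
    "Escalation: billing disputes over $500 go to a senior agent.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def rag_answer(question: str) -> str:
    """Feed the retrieved context plus the question to the base model."""
    context = "\n".join(retrieve(question, DOCS))
    prompt = f"Use only this context:\n{context}\n\nQuestion: {question}"
    return chat(prompt)   # hypothetical helper from the earlier sketch
```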
They build up this RAG pipeline, which basically allows you to take a lot of information and feed it over to the base model so that it understands it, right? And so...

And can you define RAG? Just so the audience is aware.

I don't know the exact expansion, I think it's retrieval, augmented, generation, but honestly, it's a super fancy word for basically a database that helps you index information, right? I've always found the terminology behind RAG, there's a super fancy scientific name for it, but basically it's just a simple database that helps you index information, right?

Yeah, so retrieval augmented generation. It's a way to enhance a model or augment a model. And so in that example, there's a base model like Llama 3.1, but then we augment it with, say, customer service data from my call center.

Yeah, and it's actually pretty effective, right? And that can help you build out an edge. And ultimately, that's kind of what defines what an AI agent is. And so a base model can only get you so far, and then you just naturally lean into different technology, whether it's using RAG or creating your own fine-tuned model. These are all potential edges or, you know, potential ways to differentiate yourselves in the market. I don't think anyone has necessarily figured it out, and that's probably why you see all these businesses also having different selections of models. You have to realize that whenever you use ChatGPT, they don't automatically detect which model you should use, right? They still provide a choice for you to select different models instead of trying to automatically detect it. It's because it's still extremely early, right? People are still trying to figure out what is the best way to get the information to the actual users and developers. And so there's a lot of trial and error right now.

So Erick, this has been a really, really good conversation. I've learned a lot. Maybe a couple of last rapid fire questions before we end. What's one crypto idea that you think is overrated?

Oh, that's a pretty interesting question. Overrated. Maybe go into that a little bit more, if you don't mind. Like overrated as in oversaturated, or just that it doesn't work?

Yeah, well, here's maybe a real specific one. So you said in a tweet, "the obsession with modular is starting to resemble web two microservices: over-abstracted, under-coordinated." And so that's what kind of led me to think maybe you think, you know, the modular thesis is maybe overrated.

I don't even remember tweeting that. Might have been my AI agent that tweeted that. That's hilarious. But, you know, I think one that's maybe a little bit overrated is probably building out your own L1 right now. I do believe that there's an extreme lack of applications within the blockchain space. And so, you know, there are so many more L1s, but as you introduce more L1s, you actually introduce more fragmentation, right? Which is a really big problem, especially on the Ethereum side, right? Yeah, a lot of these L1s and L2s don't really have a differentiator, to be honest with you, right? They might have one additional feature add.
But then if you really have to carve out your own L1 or L2 for that, then you get into a space where you've basically added a feature, and the trade-off is that you don't have that many users and you're bootstrapping the users again over and over, and that's a very time-consuming process, right?

Yeah. It's fatiguing is what it is. The consumer, the user, is fatigued. Okay, one last question, and you can answer either in the AI space or the crypto space, but who's one builder or person that you admire? Even if you disagree with them, tell us why you admire that person.

On the crypto builder side?

Oh yeah, the AI side or the crypto builder side.

I've been digging... Peter, so there's this partner from 1KX, I believe he goes by Peter Pan, he's a partner, right? I actually just really like his takes. I wouldn't argue that he's a builder per se, but he definitely has the right takes when it comes to the existing landscape of crypto, right? I believe he leads consumer application investments, right? And so, you know, probably going back to the first question as well, I've been really digging his takes. You really have to respect it, because he understands, you know, we just need so much more demand in blockchain before we get into all of these scalability issues and potential scalability issues. So yeah.

Yeah. Now I've got a funny story about Peter Pan. He and I first met back in 2017, I believe at ETH San Francisco. And he's done really well for himself since then, but he's great. I love his takes on Twitter for sure. Well, Erick, thank you so much for taking the time to speak with us. I'll be sure to share in the show notes about Function Network and your Twitter handle, so that developers and builders can get involved.

Thanks for having me, Peter. Pleasure to be part of Block by Block. Thank you. Cheers. Thank you.