Block by Block: A Show on Web3 Growth Marketing
Each week, I sit down with the innovators and builders shaping the future of crypto and web3.
Growth isn’t a sprint; it’s a process—built gradually, step by step, block by block.
Let’s build something incredible, together. All onchain.
Decentralized AI for Model Developers and the Role of Crypto in Allora Network
Summary
In this conversation, Nick Emmons, founder and CEO of Allora Labs, joins Peter Abilla to discuss the Allora Network, a decentralized platform aimed at improving machine learning through collaboration among various AI models. He explains the structure of the network, the roles of different participants, and the importance of data in AI model development. Emmons also highlights the incentives for network participants, the go-to-market strategy, and the potential impact of Allora on the AI and crypto industries. The discussion emphasizes the need for community involvement and the future of decentralized AI.
Takeaways
Allora Network aims to decentralize AI model coordination.
The transition from the information age to the intelligence age is crucial.
Decentralized AI can serve as a public good.
The network allows for collaboration among various AI models.
Incentives are based on performance and accuracy of predictions.
Application developers can focus on objectives rather than models.
The go-to-market strategy targets the DeFi sector.
Abstraction of crypto complexities is essential for AI developers.
Community involvement is vital for the growth of Allora.
Non-developers can contribute to the Allora community in various ways.
Chapters
00:00 Introduction to Allora Network
02:47 The Philosophy Behind Allora
06:02 Understanding the Model Coordination Network
08:55 Roles Within the Allora Network
12:06 The Importance of Feedback Loops
15:01 Topics vs. Domains in AI
18:01 Nick Emmons' Background and Journey
20:52 Data Sources for Inference Workers
24:00 Optimizing Efficiency in Data Systems
26:03 Incentives in the Allora Network
27:06 The Role of Application Developers
29:03 Innovative Use Cases in DeFi
32:06 Exploring Perpetual Exchanges
36:02 Go-to-Market Strategy for Allora
39:58 Abstracting Crypto for Model Developers
43:09 Current Status and Future Roadmap
45:00 Disruption in the AI Model Market
46:53 Community Engagement Beyond Developers
Follow me @papiofficial on X for upcoming episodes and to get in touch with me.
Watch these interviews and subscribe on YouTube: Block by Block Show.
See other Episodes Here. And thank you to all our crypto and blockchain guests.
Welcome back to the show. We're joined today by Nick Emmons, founder and CEO of Allora Labs, which is building the Allora Network. Allora is aiming to bring adaptive, self-improving machine learning to crypto. Welcome, Nick. Yeah, thanks for having me. Let's get right into it. Can you help explain what Allora Network is in plain English? Pretend I know nothing about crypto. Sure, for sure. Basically, what we're building with Allora is a kind of aggregation layer for AI models, or you can think of it as a model coordination network. I think it's useful to zoom out and understand the existing paradigm in AI to really understand what that means. When you or I or anyone, developers, companies, individuals, and so on, look to interact with some AI model, we're picking specific models we want to interact with. We're saying, I use ChatGPT a lot, so let me use this. Or, I've heard Claude's better at coding, so let me use Claude for this. We're tasked with constantly surveying the universe of available models at any given point in time. And because of this, and because of the way models exist today, all of these models sit in isolated silos, both from a consumption perspective and from a performance perspective. We can't take all the good bits of GPT, all the good bits of Claude, and all the good bits of Gemini and merge them together to create a new form of intelligence that represents the sum of the unique intelligence from those underlying models. That, one, slows down the rate of progress for AI generally, because everyone has to build in these isolated silos. And two, it creates the consolidation of AI market share that we're seeing, where a few large companies increase their lead day after day in terms of being the source of the world's intelligence. So what we're building with Allora is a decentralized network where many different models can come together, share their outputs with one another, collectively solve different ML or AI problems or respond to queries from users, and learn off of one another in the process, to try to combat some of the hindrances and impediments present in the existing paradigm of AI, which is largely siloed and isolated. That's what we're trying to build with Allora, essentially. And we're seeing that siloing even within some of these AI companies, right? Even ChatGPT has five or six different models, and no one knows how they're different from each other. So even within themselves they're fragmented, where a supermodel, where the user doesn't have to decide which one to use because they really don't know the difference, would be preferable and more user friendly. What you're doing sounds really cool. I met with Sentient a couple of weeks ago, and OpenLedger, and a bunch of different AI/crypto projects, and I'm really encouraged that Allora is in this space and that there's an attempt to make crypto and AI work together. Tell us the genesis of how you started Allora. You described the problem, but what led you to have enough courage to say, let me try to fix this thing, let me try to bring a solution to the problem that I see? Yeah, I think a lot of it is philosophical, in that we saw the internet as a preamble to this.
I think the period from the internet really gaining ubiquity to now can be defined as the information age, and the age we're entering now is the intelligence age. Looking at the information age, the internet was built on a set of values around decentralization. It was built around the idea that information flow in society should be free and open and not controlled by anyone, built on a set of open standards and infrastructure. And even though it was ultimately commercially captured by various institutions and enterprises, that underlying open infrastructure is a lot of why we've been able to reach the stage of society we're in today. Now, as we enter this intelligence age, I think it's even more paramount for what will increasingly become humanity's outsourced cognition to be instantiated as a decentralized public good, as opposed to some private thing that is developed and monetized by a small number of large enterprises. So philosophically, it's not only a useful and beneficial thing by a number of more objective metrics; I think it's actually quite critical, even existential, for society to instantiate this outsourced brain as some sort of decentralized, autonomous thing, as opposed to it being a private good owned by a small number of enterprises. That's a lot of the motivation to start what we did. I'm happy to go into some more tactical elements of the timeline, but I just see it as being critical, as AI, at a faster and faster pace, redefines a lot of society's core functions, especially as it pertains to intellectual freedom or thought generally. And I'm seeing that position from other crypto/AI projects: they see the incumbents, and there are five or six major incumbents, so it becomes big tech all over again, except now it's in the AI space, and they have tremendous funding and essentially unlimited resources. So it becomes a very big challenge for mission-driven projects like Allora to decentralize this AI thing, to make it from the people, for the people, versus just from these five or six big tech companies. On the homepage it says this is the abstraction layer for intelligence, and that Allora is a self-improving decentralized AI network that harnesses community-built machine learning models for highly accurate, context-aware predictions. Maybe we can go under the hood a little bit: help us understand how these models are developed, how they get into Allora, and maybe compare and contrast how helpful these models are versus, say, ChatGPT or whatever. Yeah, I think that's a great question, and it's good to provide some context. Basically, the network is built around the ultimate objective of commoditizing the world's intelligence. It's a general model coordination network that brings together the world's models to optimize various ML objectives or problems. The network is broken into sub-networks called topics, which are tightly scoped environments within the broader network, each defined by a different ML problem or objective.
So you might have one topic for predicting the price of, say, Bitcoin to USD an hour from now, and doing that every hour. You may have topics related to fraud detection in various financial or banking domains, maybe topics related to anti-cheat detection in gaming, whatever it is. Within each of these topics, the participants are broken down into three buckets. The core participants are what we call inference workers. These are people who are building and running models to try to solve the core objective function of that topic. So if you and I have some insight, maybe distinct insights, on how to best predict the price of, say, Bitcoin to USD an hour from now, maybe we'll go build a model to do this. We'll iterate on it over time until it gets to some adequate performance we're happy with. And then all we do to join the network and contribute to its aggregate inferences, its collective intelligence, is start emitting the outputs from these models to the network on a regular cadence as defined by that topic. So every hour we're saying: an hour from now, Bitcoin to USD is going to be this price, and then the next hour we do the same, et cetera. And in doing that, we're joining the other models participating in that topic in creating a single aggregate inference every hour that most accurately predicts the price of Bitcoin to USD. The second bucket of actors within a topic are people who are also building and running models, but they're trying to solve something a little different. They're called forecasters, and instead of building models to solve the core objective of the topic, like the price prediction, whenever that first category of actors produces their outputs, the forecasters are trying to predict how accurate each of those outputs is going to be at the end of the hour, the end of the epoch in network nomenclature. What they're doing is actually quite critical for the network. They're a big part of why the network works in achieving these self-improving objectives and being able to outperform individual model outputs. They're exploring this more unbounded or nebulous domain space and context that may inform or provide useful signals about when some inference worker's output is going to perform well or not as well. Maybe some inference workers just do better on Mondays for whatever reason. Maybe others are better in really non-volatile markets, and others in volatile markets. These forecasters bring those out-of-band signals into the inference aggregation logic to make it a context-aware network inference. So both the forecasters' model outputs and the inference workers' model outputs are then merged together to create this context-aware aggregate inference that's delivered at the time of the inference request. And this is important because, obviously, in a lot of these domains, in this example with price prediction, we don't know what the price is going to be an hour from now yet. We need to wait an hour for that to be revealed, and by that point the inference is no longer useful to us; we don't have the alpha we would have gotten from having that inference now. So with both of those first two categories of actors, we get inferences at the time of the request.
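To make that aggregation step concrete, here is a minimal Python sketch of how worker inferences might be merged using forecaster-predicted accuracy as weights. The names and the softmax-style weighting are illustrative assumptions for this sketch, not Allora's actual aggregation logic.

```python
import math

def aggregate_inference(worker_inferences, forecasted_losses, temperature=1.0):
    """Combine individual worker inferences into one aggregate, weighting each
    worker by how accurate a forecaster expects it to be this epoch
    (lower predicted loss -> higher weight). Softmax weighting is illustrative."""
    weights = {
        worker: math.exp(-loss / temperature)
        for worker, loss in forecasted_losses.items()
    }
    total = sum(weights.values())
    # Weighted average of the individual model outputs.
    return sum(worker_inferences[w] * (wt / total) for w, wt in weights.items())

# Example: three hypothetical inference workers predicting BTC/USD an hour ahead,
# plus forecaster-predicted losses for each worker this epoch.
worker_inferences = {"worker_a": 67_250.0, "worker_b": 66_900.0, "worker_c": 67_800.0}
forecasted_losses = {"worker_a": 0.8, "worker_b": 1.5, "worker_c": 0.6}

print(round(aggregate_inference(worker_inferences, forecasted_losses), 2))
```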
And then the third category of actor, they're not running models. They are the model evaluators; they're called reputers in the network. At every time step, at the end of every epoch, in this case at the end of each hour, they come in and say: what was the price of Bitcoin to USD actually, and how accurate was each of the inference workers and forecasters in their respective outputs? And they assign the relevant weights to each of those actors. The weights are important because, in the next epoch, they inform how much out-of-the-box influence each actor is going to have on the aggregate inference, but they also determine how much of the rewards or incentives are going to be proportionally given to each of those actors in the first two buckets. If I was X percent more useful in constructing the best possible network inference, I'm going to get X percent more rewards, et cetera. So that's how it works under the hood in terms of taking these three distinct classes of actors within each of these sub-networks, these topics, to regularly produce aggregate inferences, as well as update the network's reputation system, or weighting system, as time progresses. Okay, you said a lot there. Let me try to summarize. There are four major concepts: there's the concept of a topic, and then there are inference workers, there are forecasters, and then the last one is, what was it? Reputers. So they're kind of like adjudicators to see how close to the mark the prediction was, right? Something like that. Got it. And then they help with a feedback loop for learning, and based on how close they are to the mark, the workers get some kind of boost for the next prediction, something like that? Okay. Help me understand the forecasters and how they're different from inference workers, because they sound similar, but it feels like they're also looking at the tails. Mm-hmm. Yeah, so an inference worker is just solving the core problem of a given topic. In this price prediction case, maybe we're breaking out the quant textbook and saying: these things are signals in markets, these things are going to be useful in predicting the price of some asset against another. Let me experiment with these different signals and features and see how good a model I can produce that predicts prices accurately. Forecasters, maybe they still have some of that in their models, but what they're trying to do over time is learn the little idiosyncrasies in each of those inference workers' outputs to better inform an accurate prediction of how well they'll perform. These forecasters aren't predicting the price of Bitcoin to USD an hour from now. They're predicting: inference worker A is going to be this accurate, inference worker B is going to be this accurate, et cetera. So they're tasked with a more nebulous and arguably more complex job of saying, all right, over time, let me improve my understanding of how each of the inference workers performs in a topic, to improve the network inference beyond just what the inference workers alone are able to achieve.
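Rewinding to the reputer role described above: at the end of each epoch the reputers reveal the ground truth, score everyone against it, and that scoring drives both future influence and rewards. Here is a toy end-of-epoch settlement in Python; the absolute-error loss, inverse-loss weights, and pro-rata reward split are illustrative assumptions, not Allora's actual formulas.

```python
def reputer_settle_epoch(ground_truth, worker_inferences, epoch_rewards):
    """Score each worker against the revealed ground truth, derive weights from
    those scores, and split the epoch's reward pool proportionally."""
    losses = {w: abs(pred - ground_truth) for w, pred in worker_inferences.items()}
    # Lower loss -> higher weight (epsilon avoids division by zero on a perfect hit).
    raw = {w: 1.0 / (loss + 1e-9) for w, loss in losses.items()}
    total = sum(raw.values())
    weights = {w: r / total for w, r in raw.items()}
    rewards = {w: epoch_rewards * share for w, share in weights.items()}
    return weights, rewards

weights, rewards = reputer_settle_epoch(
    ground_truth=67_400.0,
    worker_inferences={"worker_a": 67_250.0, "worker_b": 66_900.0, "worker_c": 67_800.0},
    epoch_rewards=1_000.0,  # hypothetical tokens available to this topic this epoch
)
print(weights)
print(rewards)
```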
And it's because of these forecasters, because of the introduction of these context-specific signals, that the aggregate inference generated from the network at each time step is able to consistently outperform any of the individual models within the network. Because if you weren't exploring this additional domain or feature space through forecasters, the best the aggregate inference could ever be is the performance of the best individual model in the network. And at that point, why not just route to the best model all the time? We're trying to build something where the whole is greater than the sum of its parts, where a network can self-improve over time and consistently be better than its inputs, the models that are contributing to the core objective. So that's the role of a forecaster and how it differs from inference workers in the network. It sounds like you've really prioritized the self-learning feedback loop with inference workers, forecasters, and reputers. That's really, really cool. When I think of a topic, how are topics different from domain-specific verticals like law, for example? Is a topic much more like a subsection of, let's say, law or humor? It's a bit more granular than that. You could think of a general subject or vertical as being made up of many topics. At the lowest level, in AI language, a topic is just defined as a target variable, the thing you're trying to predict, and a loss function, how we're measuring accuracy in predicting that target variable. The rest of the topic's logic fits around this. So you could theoretically define a really general target variable with an applicable loss function and have a topic pertain to a really general problem space, but I think the way you get the most out of the network is by instantiating topics that are as tightly scoped or specifically defined as possible. And that's what topics enable out of the box. Okay, that makes a lot of sense. That feels like something I've heard about some of these large language models, like the models OpenAI uses, for example: they're like 90% on target, but that last 10% is actually where a lot of the value is. And that last 10% is pretty much where all these other very domain-specific AI companies are playing. Harvey AI, for example, focuses on law. I'm involved with a project that's building a large language model for humor, teaching a model about what's funny, which is really, really interesting. Because, I don't think I shared this, but a long time ago when I was in grad school, my thesis was on computational linguistics, and I built a neural net. This was long before large language models, but the neural net I built aimed at a problem called word sense disambiguation. These models have a very hard time distinguishing irony from metaphors and analogies; they just have a hard time. So it was about being able to predict in what sense the word bank, for example, is used: is it a bank like a financial institution, a bank like a type of basketball shot, or the bank of a river? A lot of these models, at the time anyway, I think they're better now, had trouble distinguishing in which sense a word was used.
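Picking back up Nick's definition a few lines above (at the lowest level, a topic is just a target variable plus a loss function), here is a minimal sketch of that definition expressed as data. The field names and the cadence field are assumptions for illustration, not Allora's actual schema.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Topic:
    """A tightly scoped sub-network: what to predict, how to score it, how often."""
    topic_id: int
    target: str                               # e.g. "BTC/USD price one hour ahead"
    loss_fn: Callable[[float, float], float]  # maps (prediction, ground_truth) -> loss
    epoch_seconds: int                        # cadence at which inferences are requested

def absolute_error(prediction: float, ground_truth: float) -> float:
    return abs(prediction - ground_truth)

btc_hourly = Topic(
    topic_id=1,
    target="BTC/USD price one hour ahead",
    loss_fn=absolute_error,
    epoch_seconds=3600,
)
print(btc_hourly.loss_fn(67_250.0, 67_400.0))
```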
Anyway, I built a neural net that aimed to do that, with very, very low accuracy. But it sounds like that's kind of what Allora does: these three actors, the word escapes me right now, kind of balance each other, almost, to get to the most accurate prediction. Does that sound about right? Yeah, that's right. I think they're all taking each other's outputs as inputs into their own logic and using that to pull out the best pieces of each other's outputs, and then those get incorporated into what the network ultimately generates and delivers, basically. Let's go into your background. How did you get into this? You sound very well versed in AI. Is that in your background? A little bit, yeah. We've been building AI stuff in the crypto space since 2020; we built some of the earlier AI-powered crypto infrastructure. With my background, I come from more of a crypto background. I first got into the space in, I think, 2014. Prior to starting the company, I was leading blockchain development at one of the larger asset managers and insurance companies in the US, John Hancock, which is part of the international company Manulife. There we were doing a lot of the early work around public blockchains being done by large institutions at the time. This was the beginning of 2018, when enterprises experimenting with blockchain meant private blockchains, enterprise consortiums, DLT technology, et cetera, missing a lot of the core benefits of blockchain technology, in my opinion. So we were working on a lot of the earlier public blockchain stuff, and a lot of work on building efficient markets for pricing and hedging against long-tail exotic risks, so there's an AI element to that as well. And then when we started the company, it started as more of a research endeavor into how we build decentralized networks, or decentralized consensus mechanisms, for reaching resolutions to subjective questions in a decentralized way. So not deterministic or objective things like, did we update the state transitions according to the virtual machine of the network, but querying subjective inputs, a kind of generalized oracle problem. As we were building that, we were thinking about how best to come to market with it. We were experimenting with crowdsourced use cases around pricing long-tail assets, with the actors pricing those assets, NFTs, for example, being coordinated via these subjective consensus rules. And we very quickly realized that humans, and more analog inputs into those types of systems generally, are just very inefficient and very inaccurate. So it was around that time, late 2020, beginning of 2021, that we started building AI models for pricing long-tail exotic assets and building various AI-informed or AI-enhanced DeFi infrastructure, especially for long-tail assets. That's what led to where we are today. As we reached a really solid base and foundation building models in the domains we were operating in, this problem of siloed machine intelligence became very directly known to us. We felt it ourselves, being just a single model developer in the broader sea of use cases that benefit from AI.
And so we took a lot of that early work we did around subjective consensus mechanism design, combined it with the years we had spent building AI models applied to crypto domains, and it basically became the Allora Network in a lot of ways. Gotcha. Let's talk about data for a second. A lot of what inference workers and forecasters are basing their predictions on is data. In the data supply chain, where does that come into play? Where are inference workers getting the data they run their models against? Yeah, the answer is kind of anticlimactic, in that they're just getting it from wherever they want. One core principle we've held when designing the network is that the network has to achieve its core objectives without any opinionated approach to what data models are using or how models are being run. That means, one, a lot of the models running on the network are closed source, because all they're sharing with the network is the ultimate model output. We don't know anything about the structure or methodology of the model, and we also don't know what data they're using to inform their outputs. So people are getting data from all over the place, from centralized data providers to data warehouses to aggregating data on their own. Some people are leveraging types of data beyond just market-related data, like social data they may be pulling from social network APIs, or news data they're scraping from a variety of news sources, whatever it is. The network has been designed and built purposely in a way where we don't place any opinionated directive on model creators about where they get their data. We're working to build adjacent infrastructure and tooling to make getting data from different sources as easy as possible, but at the end of the day, my design philosophy for networks is that building modularly, with primitives as tightly scoped as you can make them, is the way to optimize efficiency in these systems. And outside of even centralized data providers, there are a lot of really proficient data networks spinning up in the crypto AI space and adjacent to it that I think model developers on the network are plugging into to inform their models. So yeah, they're really getting data from everywhere. Now, if you have, let's say, an inference worker that's consistently on the mark on a number of topics, would it benefit the Allora network to know which data or what type of labeling is being used, how they're so accurate? Is that something you'd be interested in? Maybe, but I operate from the perspective that markets are the greatest coordination mechanism that exists. If someone's able to out-compete all other market participants to such a large degree and so consistently, then that's their alpha to maintain. And the more they do so, the more they're consuming disproportionate rewards relative to others, the more incentive there is for others to go out and continue to experiment with data, import new data sources into their models, experiment with different feature designs, et cetera, to try to out-compete them.
So yeah, we approach it very much from the standpoint that we've designed a very specific market structure, a market environment, for provisioning machine intelligence. That's where we spend our time: how do we build the most optimal market environment possible for provisioning this type of resource? The market dynamics, the specifics of how any one participant is active in that market, are left to be governed by the participants within that market environment. And that makes sense. It's their alpha to maintain, and I think that's totally appropriate. Let's talk about incentives. How is the network incentivizing these three actors to do work? Yeah, so at a high level, fundamentally, they're getting rewarded based on how well they do their job. Inference workers based on how accurately they produce outputs that solve the core ML objective function of the topic, forecasters based on how accurately they predict the performance of inference workers, and reputers based on how honestly and accurately they reveal the actual performance of inference workers and forecasters. And those rewards are coming from two sources. One is fee revenue from consumers, applications, developers, whoever it may be, who are paying for inference; that comes into a bucket that then gets dispersed to these actors. The other is emissions from the network. There's a native token that governs the network and facilitates all this coordination. So fees come into this bucket along with the emissions, or inflation, of the network, and then those are dispersed across the core actors, again based on how well they do each of their respective jobs. That's how the incentive flywheel works within the network. Now, one set of actors we haven't talked about are the application developers that need predictions from these models. Let's talk about that. That's the other side of the market, right? These developers who need that type of information or data or prediction. Tell us about that. Yeah. What the network really does for application developers, the people on this demand side of the network, is shift the paradigm by which we interact with AI from what we have today, which is very model-centric, which we talked a little bit about earlier, to one that is objective-centric. Model-centric, again, meaning that I, as an application developer or whoever, need to constantly be surveying the landscape or universe of models, trying to pick which model is best for my given use case across all the domains and contexts I'm going to operate in, and then constantly doing this again as new models arise, as my use case is iterated upon or changes, whatever it is. That's a model-centric paradigm for interacting with AI. What the network enables application developers who want to integrate AI into their product to do is just specify what they want AI to do well. They just need to specify an objective function, and then this efficient market of models is competing, and in turn working together, to produce the best possible outputs as a function of that. It removes a lot of the overhead from the application developer to get access to the best AI for any given use case.
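As a rough illustration of that objective-centric consumption, a hypothetical client might name a topic (an objective) rather than a model and read back the single aggregate inference. The endpoint shape, URL, and response field below are invented for this sketch, not Allora's real API.

```python
import json
import urllib.request

def latest_network_inference(gateway_url: str, topic_id: int) -> float:
    """Objective-centric consumption: the application names a topic, not a model,
    and reads back one aggregate inference. Endpoint and fields are hypothetical."""
    with urllib.request.urlopen(f"{gateway_url}/topics/{topic_id}/latest") as resp:
        payload = json.load(resp)
    return float(payload["network_inference"])

# A DeFi app might poll this once per epoch and act on the value, e.g.:
# price_1h = latest_network_inference("https://example-gateway.invalid", topic_id=1)
```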
And the way we've designed the network in these initial versions, the verticals we've paid particular attention to, are the DeFi verticals, finance verticals, DeFi-adjacent things. So a lot of the application developers building on the network today are DeFi developers; they're DeFi protocols, they're people building AI-powered vaults in DeFi. There are a lot of different AI agents interacting on-chain, outsourcing their financial cognition to the network. So that's where a lot of the application builders today are building. Is there a specific dApp developer you could spotlight, maybe just to give the audience a sense of the liveliness and what it looks like? Yeah, a few cool examples. One of the earlier applications that went live when the network's testnet started was PancakeSwap. They have this prediction market game where every five or ten minutes people bet on whether they think the price of ETH to USD is going to go up or down in the next ten minutes, and in the V1 state of the game they're betting against each other. What they did when the Allora testnet came live is, instead of you and I betting against each other on whether it will go up or down, the network of models generating these predictions on Allora emits them on a regular cadence, and then users bet that the AI is going to be correct, or they bet that the AI is going to be incorrect, and they're paid based on whether they bet with or against the AI. I think that's an interesting way to make these long-tail, noisy prediction market verticals more efficient by injecting a more informed input at the base layer. There's been some other pretty cool stuff around prediction markets. I think prediction markets are interesting for the network because they're hyper long tail; they really do benefit from having this more efficient source of compute, which is AI. There's a team we work closely with called RoboNet that builds AI agents in the DeFi space, and when the US presidential election was happening, they built an agent for trading the Polymarket US general election markets based on a myriad of different political models running on a presidential election topic on the network. It was able to trade quite successfully, taking a fairly hedged, risk-mitigated posture in these markets. I think it generated something like 68% APY annualized just by trading on those markets, which is pretty cool in such a nebulous long-tail market, just plugging into these different AI models. And then you see things that are more accessible or more familiar to everyday DeFi users, like general DeFi vaults. There are a number of vaults live, but there's one on a protocol called Vectis right now that's taking a bunch of SOL price predictions from the network and using them to inform a directional SOL trading strategy, powered by AI, that any user can get exposure to just by depositing in the vault. And it's performing quite well. It's leveraging this more informed, more expressive form of compute, a bunch of AI models predicting the price of SOL, to show what AI-powered DeFi primitives could look like.
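As a rough sketch of what a directional, prediction-driven strategy like that vault's could look like, here is a toy long/short sizing rule in Python. The sizing rule, leverage cap, and numbers are illustrative assumptions, not the Vectis strategy or trading advice.

```python
def directional_position(current_price: float, predicted_price: float,
                         capital: float, max_leverage: float = 3.0) -> dict:
    """Go long when the predicted price is above the current price, short when
    below, and scale the position with the size of the expected move."""
    expected_move = (predicted_price - current_price) / current_price
    side = "long" if expected_move > 0 else "short"
    # Scale leverage with conviction, capped at max_leverage.
    leverage = min(max_leverage, abs(expected_move) * 100)
    return {"side": side, "notional": capital * leverage, "leverage": round(leverage, 2)}

# Example: the network predicts SOL at 163.20 against a current price of 160.00.
print(directional_position(current_price=160.0, predicted_price=163.2, capital=10_000.0))
```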
So those are some of the cooler things in market right now, I think. That sounds really cool. As you were speaking, I was thinking of perps exchanges and how the output of some of the models on Allora could be really helpful for perps traders. Have you looked into that? Yeah, I want to say the Vectis vault is trading perps markets, because you can think of even higher-yield or more capital-efficient strategies. Instead of a strategy that, say, uses SOL to USD price predictions to buy SOL when it predicts the price is going up and then sell SOL into USD when it thinks the price is going down, it instead goes long, maybe even informing its margin, when the price is predicted to go up, and goes short, again informing its margin, when it's predicted to go down. So you can capture both directions of price movement as informed by these AI models. So yeah, we're doing a bunch of stuff in the perps verticals. We're seeing a number of developers in the ecosystem experimenting with different perps vaults or perps tools that leverage the network right now. I see that as a massive market. As I think of crypto, stablecoins, perpetual exchanges, and capital formation are the three main product-market-fit things in crypto that I think will be exported to TradFi. Perp exchanges are an absolutely massive market, and so innovative that I'm kind of proud it was born in crypto. I can totally see Allora working with one of these perps exchanges; that would be so beneficial for perps traders. Yeah, I agree. I think you could even start to do more exotic and interesting things there, especially in cash-settled markets, where you don't need the underlying to interact with those markets, right? You need a price feed and collateral, capital trading in them. So you could theorize a whole new suite of perps markets just by plugging into AI-generated price feeds produced by these clusters of models on the network, and then using those as the oracle powering markets that are too long tail or too exotic to be supported by existing oracle infrastructure. I think there's a lot of really cool stuff you can do even outside of the existing market domains where perps are really active. I met with Hibachi last week; they're an up-and-coming perps exchange. We talked a lot about what the competitive landscape looks like. There are going to be a lot of winners, right? Right now Hyperliquid is the main actor, but there are going to be a lot of winners; it's a very large market and there's a lot of space for everybody. I wonder if you've thought about working directly with Hyperliquid, for example, and using their builder codes, where perps traders could work through Allora versus trading directly on Hyperliquid. Is that something you've looked at? It's something we've chatted about a little bit on the team.
We've been supporting a number of build-outs of vaults on Hyperliquid informed by Allora strategies for a while, but some of the stuff relating to builder codes is still in the early stages of us thinking it through on our team, seeing what that could look like, basically. Yeah. Let's talk about the go-to-market and where Allora is in terms of testnet, mainnet, et cetera. The go-to-market feels pretty complicated because on the supply side you've got inference workers, forecasters, and reputers, and then you've got developers on the other side, the demand side. What does your go-to-market look like, and how are you getting the word out, building brand awareness, and getting people involved in the network? Yeah, on the demand side, I think a healthy ecosystem has built up at this point. I think DeFi is where most of crypto's product-market fit has been found to date, and AI is most mature in financial domains; it's been used in finance for decades at this point. So we found a lot of these light-bulb moments in talking about AI-enhanced DeFi, and we're seeing a bunch of excitement and development from individual developers, different protocol teams, et cetera. Starting with a vertical such as DeFi, where there's such a clear synergy with AI being integrated and where it does represent the majority of crypto's activity, has been a core piece of the go-to-market on the demand side. Then on the supply side, for lack of a better term, these model developers, I think the value proposition is fairly clear and compelling, in that today, if I'm a model developer, there's a pretty long path from developing some useful model to capturing value from it. I build a model, and then if it's worthy enough to turn into a company, I have to go raise capital, stand up a bunch of the administrative pieces of running a company, build a product, achieve distribution and PMF, things like that. If it's fund related, I go through the fund-related pieces of overhead. Maybe I'm building a model just to hopefully get hired at some AI lab, or something like that. What Allora does is create essentially the shortest path possible from having some useful innovation in model development, building some good model, to turning that into value, capturing value from it just by running it on the network. It's not too dissimilar from what Bitcoin's done to energy markets. Prior to Bitcoin, if I had access to energy, I'd have to sell it to a grid, stand up an energy company, find ways to turn that energy into value. Bitcoin creates this efficient market for energy just by allowing people with energy to turn it into value by mining Bitcoin. What Allora enables is kind of that for model developers and data scientists: turning models into value efficiently. And we've actually seen a lot of adoption amongst model developers. I think close to 300,000 workers, or models, have been registered to the network since testnet started, which I think is a signal that the value proposition is sound. Now a lot of what we're doing is working to make that side of the network more accessible to less crypto-native AI developers, because there's still a bit of DevOps: there's running nodes, there's participating in the network, et cetera.
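To make that DevOps burden concrete, here is a stripped-down sketch of the always-on loop a model developer otherwise has to operate themselves. The submit step is a stub; a real worker node would handle signing and on-chain submission, and the cadence would come from the topic.

```python
import time

def run_model() -> float:
    # Stand-in for the developer's actual model; its output is all the network sees.
    return 67_250.0

def submit_to_network(topic_id: int, value: float) -> None:
    # Stub: a real worker node signs this payload and submits it on-chain.
    print(f"topic {topic_id}: submitted inference {value}")

def worker_loop(topic_id: int, epoch_seconds: int, max_epochs: int = 3) -> None:
    """Keep the node online and emit an inference every epoch; this is the
    operational overhead that the tooling aims to abstract away."""
    for _ in range(max_epochs):
        submit_to_network(topic_id, run_model())
        time.sleep(epoch_seconds)

worker_loop(topic_id=1, epoch_seconds=1)  # one-second epochs just for the demo
```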
And so abstracting away as much of the blockchain piece from the model developer journey as possible is becoming a big priority, and we're working on a few exciting things to really abstract that away for AI developers. So that's a bit of the go-to-market; the value prop on either side, I think, is quite sound. I want to go a little bit into abstracting the crypto piece, because that's a theme I'm seeing with a lot of projects now: the crypto aspect becomes a source of friction for both users and developers, so abstracting it has become a priority. Tell us what led you and the team to think more about that and how it affects the model developer journey. Yeah. Just to make it concrete, I think one of the most material headwinds for model developers getting onto the network is ensuring that they're packaging their models in these worker nodes and then deploying them, ensuring they're always online, ensuring they're delivering the right structure of information to the chain on a regular basis, things like this. And when you think about the pool of model developers, there's a pool that's really good at building models. When you impose the additional requirement that these model developers also have to be somewhat crypto native, or have enough MLOps or DevOps experience, that pool shrinks considerably. So it becomes paramount, in maximizing your total addressable market on the supply side, to keep it as purely about model development as possible, by abstracting away all these confounding elements that would otherwise make the addressable market a meaningfully smaller subset of the total market. In addition to that, the rationale behind further abstraction is that while the crypto space, I think it's safe to say at this point, has gotten quite excited about AI, I don't think that enthusiasm is bidirectional. I think the AI industry at large is still quite skeptical of crypto in various ways. So in order to tap into the largest and most competent talent pool of AI developers, you have to create an experience that feels comfortable, that doesn't feel like crypto to them. That's been a big component of it as well: creating the bridge coming back the other way, getting AI people more into the crypto ecosystem, largely via abstraction of the crypto pieces. I think that's a really mature approach, because it shows you have an understanding of AI developers, model developers, really the AI developer persona, and that a lot of them just haven't been interested in crypto historically and may have no interest in it at all. But they're still interested in creating a model that is helpful and useful. So abstracting the crypto piece away makes a ton of sense, and I can see how that will increase adoption of model developers for Allora. Which stage is Allora at right now? Yeah, so we've been running testnet since, I think, last July, give or take. We've gone through many different versions, iterations, and version upgrades of the network throughout testnet.
Back in February, we released dev mainnet, which is basically the mainnet instantiation of the network, but not with all of the features, emissions, et cetera, turned on; it's more for developers to get onboarded to the mainnet environment that will be the environment when public mainnet is released. So it's just about wrapping up a few of the last pieces with that mainnet, then turning on the full feature set and releasing mainnet to the public. It's fairly mature in its development cycle, in terms of being quite close to this public mainnet release. That's exciting. You don't have to share any dates, obviously, but I'm curious: what does that roadmap to mainnet look like, and what's the work involved for you and the team? There's not too much, frankly. The network is in quite a stable state; it's running well, running as expected. So it's more about wrapping up some development around the adjacent infrastructure and tooling, things like visualizations of the network so people can more easily access the data flowing through it, continuing to onboard more cohorts of model developers onto the network, things like that. It's really just these last adjacent pieces to ensure the network is as accessible and as populated on day one as possible. Okay, a spicy question. You ready? Let's say Allora works; imagine the wildest dream you've got, and Allora totally works. Who is disrupted? Which incumbents will be disrupted the most? Yeah, that's a good question. I think the easy answer, which maybe is just my answer, is probably a lot of the large model companies today. Not because I think they go away or anything, but there is such an imbalance in the market dominance they hold today as a function of there not being a competitive alternative in terms of these efficient market environments for provisioning and coordinating models. So the most obvious answer is probably the existing model companies, and they will still hold their portion of the market in a form factor that looks like how it looks today. They, or people leveraging their infrastructure, may even run models on the network and introduce additional revenue streams just by contributing to the network's intelligence and in turn capturing value from that. But in terms of one category of existing participant in the industry, that's probably the one that is most disrupted. Nick, is there anything we haven't talked about today that you wish we had? I think you did a great job. I think we covered everything; this is pretty comprehensive. I'd just say the more people get involved in the crypto AI industry and its communities, not just our project, the better. There's a kind of excitement that is often only present in the very early days of a new category or industry being stood up, and it's palpable and exciting to be a part of. So for anyone even vaguely interested in this space, there's a really interesting opportunity in getting integrated into these communities and playing a part in it, or just being a part of it. Okay, last question. Since you mentioned community, I want to talk about that.
Now, with almost all crypto projects, you've got various personas of community members. At least for Allora, I'm guessing you've got model developers, and you've got application developers who are part of the community. Is there room for community members who are not developers, who are not AI people, who maybe just want to be part of a project they believe in? Is there room for that type of persona, and what are some things they can do to contribute? Yeah, for sure. I think there are pockets of the community today where it's a less technical crowd that is just really enthusiastic about crypto AI, and specifically what we're trying to do in decentralizing intelligence, things like this. I think the lowest-hanging fruit for being involved is honestly just being a positive force for spreading the message, bringing new people into the community, and assisting in this almost societal mindset shift of thinking of AI as more multi-dimensional than we think of it today: centralized AI doesn't need to be the only option, and open-source AI isn't the only alternative to centralized AI. There's a whole other category, decentralized AI, agnostic to open source or closed source, that I think is still underexplored. That's a lot of what we're pushing, and community members are very actively involved in it today. Then, in terms of getting involved in the network more directly, there's a myriad of interactions: contributing to the network's economic security by staking in the network, capturing some of the emissions and fees from that while contributing to the network's overall economic security, or interacting and experimenting with the low-code and no-code tools being built around the network, like no-code agent builders that are powered by Allora out of the box, or other products being experimented with that require a less technical user base, those types of things. I think there are lots of ways for less technical people to be an important part of the community. Cool. Well, Nick Emmons from Allora Labs, thank you so much. Yeah, thanks for having me.