ABOUT THIS EPISODE
Toby Simpson is COO of Fetch.ai, a company building Autonomous Economic Agents
that can perform proactive economic activity using a combination of blockchain
and AI technologies.
Episode 96 · 2 years ago
Hashing It Out #96 - Fetch.ai COO Toby Simpson
Hey, what's up? So, Avalanche. Let's talk about it. What's an avalanche? Snow comes down real fast, fierce, gains momentum. But I'm not talking about the natural disaster. Or is it really a disaster if no one's around? But anyways, Avalanche. What is it? Whether you've heard about it or not, you're gonna hear some more. It's an open-source platform for launching decentralized finance applications. Right, DeFi. That's what you want. Developers who build on Avalanche can easily create powerful, reliable, secure applications and custom blockchain networks with complex rule sets, or build on an existing private or public subnet. Right. I think what you should do right now is stop what you're doing, even if it's listening to this podcast. Stop, pull over, go to the gas station. If you need to, go to a Subway; there's a Subway, like, everywhere. There's always a Subway. All right, all right, there's always a Kroger. Just stop in a parking lot somewhere. Go to avalabs.org to learn more. All right, stop, go to AVA Labs. That's avalabs.org.

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks.

Welcome back to Hashing It Out. I'm your host, Dr. Corey Petty. Today's cohost is going to be Dean Eigenmann. Say what's up, Dean. What's up? There's Mr. Dean; you know his voice. Today we're interviewing Fetch.ai's Toby Simpson, who is the COO of Fetch. We'll dive a little more into what Fetch.ai is, how it could potentially help the blockchain industry, and how those two technologies fit together.
Toby, why don't you do the normal thing and give us an introduction as to who you are, where you came from, and a quick overview of what Fetch.ai is and what it's trying to do.

Yeah, okay. Well, I'm Toby. I'm co-founder and COO of Fetch. In various previous lives I was involved heavily in computer games production and design, and this is how I got to learn all about the dangers and evils of software complexity and different approaches to dealing with it. That set in motion a long path through various different approaches to managing all of that: AI, artificial life, and some other very interesting things, before I ended up at blockchain, looking at how that can act as a mechanism for providing huge scale in these systems.

I'm curious about that. At what point, given your experience with the concepts of AI, with what the issues are and where the bottlenecks are, did you find that blockchain marries well with this? Why is it something useful to bring in, and why try to use these technologies together?

Well, that's actually a very interesting perspective, and it took me a while to fully appreciate what this is all about. One of the things I was doing back in the two thousands was building massively multiplayer online games using an agent-based approach, to give me huge scale. But one of the things that I couldn't fix was how I would provide integrity in a network that was effectively peer-to-peer as opposed to client-server. So even though I was able to build software that would scale in a linear fashion, there's only so much in the way of CPUs and memory that you could throw at it at the time to give you that linear scale. Eventually you just run out of computing power, and that would be the end of the line.
Now, if you make that a peer-to-peer network, then you've got big trust issues, because you can't create an incentive mechanism where it is more profitable and productive for people to be honest than dishonest, and you can't get the kind of scale where you bring machines on board and have them go in and out without that hugely affecting the system that you're building. And what occurred to us, my fellow co-founder and I, when we were looking at how we would solve the problem of bringing the entire economy to life: how would we create autonomous economic agents? Everything, IoT devices, data, people's services, you name it, we'd create one of these for. You're looking at populations in the hundreds of millions, if not billions, of these things, and scale will be a problem. And then we worked out...
...that in fact, when you combine AI, when you combine this agent-based approach, and when you throw blockchain into this as well, you can create a network, or effectively a digital world, that can be as large as you want, and you can keep making it bigger by adding more machines. And you've got a fundamental incentive mechanism that underlies all of this that encourages people on average to be honest rather than dishonest, and that creates the environment where you can keep making this world bigger to take into account the kinds of things that you want to throw at it.

You said "agent-based approach". Could you elaborate on that a little bit?

Yeah, this is part of the journey I took back in the early nineties, writing computer games on the Commodore Amiga and pretty much drowning under a hundred thousand lines of 68000 assembly language, beginning to wonder whether or not I was ever going to ship product. In the end, the very moment that it worked and didn't fall apart, we got it on a disc and on a bike and off to the duplicators very, very quickly indeed, on the basis that if you touched it anywhere, it would fall apart. And that was because the approach that I was taking was: I see a problem, I write software to solve that problem. That's a very top-down approach. It's a very human approach, I might add, because we do like to know the problem that we're solving. It's not a natural position for human beings to put in place the components that will allow the problem to solve itself. And I got to learn about this seeing a colleague of mine building these worlds where he made all of the individual components, all the individual characters wandering around, their own autonomous units; he called them autonomous agents, and they would go around doing their job. Now, in a medieval-type world, if you killed all the people that were dealing with burying everybody, then suddenly all the bodies would pile up.
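That medieval-world example lends itself to a toy agent-based simulation. The sketch below is purely illustrative, not Fetch.ai code: the `Villager` class, the jobs, and the tick counts are all invented. Each agent follows one local rule, and the pile-up of bodies emerges from removing the gravediggers rather than from any global script.

```python
class Villager:
    """An autonomous agent with one simple, local rule."""
    def __init__(self, job):
        self.job = job
        self.alive = True

    def act(self, world):
        # A living gravedigger buries one body per tick. No global script
        # coordinates this; the behaviour is purely local.
        if self.alive and self.job == "gravedigger" and world["bodies"] > 0:
            world["bodies"] -= 1

def tick(villagers, world, deaths=1):
    # Each tick some villagers die of plague, then every agent acts.
    world["bodies"] += deaths
    for v in villagers:
        v.act(world)

world = {"bodies": 0}
villagers = [Villager("gravedigger"), Villager("gravedigger"),
             Villager("farmer"), Villager("farmer")]

for _ in range(3):
    tick(villagers, world)
print(world["bodies"])   # 0: the gravediggers keep up with the deaths

for v in villagers:
    if v.job == "gravedigger":
        v.alive = False  # the unscripted event: the gravediggers die

for _ in range(3):
    tick(villagers, world)
print(world["bodies"])   # 3: nobody buries anyone, the bodies pile up
```

Nobody scripted "if the gravediggers die, bodies accumulate"; it falls out of the agents' local rules, which is the point of the bottom-up approach being described.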
So what you did had consequences, but the most interesting thing was you didn't have to script those consequences in advance of them happening. It's like Lego, and nobody at Lego needs to know what you're going to build with it in order for you to be able to build it. And one of the advantages of this approach is you end up effectively with a large population of very simple things rather than a smaller population of very big, complicated things, and you allow them to solve the problems themselves. So it's very, very scalable. It is very reliable, because you're no longer dealing in vast amounts of software; you're dealing in smaller amounts of software, so it's easier to make it hold itself together. And it becomes very flexible, the kind of thing that can adapt itself in real time. That agent-based approach serves large, complicated systems very, very well. And when we came to looking at Fetch, we were thinking about how we would get that scale. We changed them from just autonomous agents into autonomous economic agents and allowed these agents effectively to negotiate and trade with each other, and built this large-scale, decentralized world in which they can find each other and then talk to each other. And of course various aspects of AI are part of that search and discovery process, which is very, very exciting. But it means then that the problems that are potentially solved, you don't have to know about in advance of bumping into them, and that's a really, really exciting way of making things happen.

So, bringing this entire thing together, what do you think is the main use case you see Fetch being used for? Just for our listeners, and for me, to understand really what the vision of Fetch is.

Anything that involves spinning a large number of plates is potentially something that Fetch technology could do, and that is a very important part of all of our lives. So transport and mobility is a very, very good example.
It's a good example because there's an enormous number of moving parts and they're very, very difficult to coordinate. As human beings, we allow a lot of hassle to wash over us without complaining, but the reality is, conducting that orchestra of pieces is actually really difficult to do. When you think about it, if you're going through any large-scale journey across the world, or even down the street, the number of things that you have to worry about in order to be able to get from A to B without something going wrong is enormous, and the responsibility for worrying about all of those things lies very, very squarely on your shoulders. Now, of course, what you don't have, and what I don't have, is an army of personal assistants who are going out in front of me solving all the problems before I get to them, effectively rolling out the red carpet for me. But when you start looking at this autonomous economic agent approach, well, suddenly everybody has this, because these are digital entities that are acting on their own behalf as well as on...
...behalf of what they represent. They're able to talk to other digital entities in order to solve problems, so they can come to you with a solution before you even know that you've got a problem to solve.

You've used the term autonomous economic agents. I in the past have referred to smart contracts as that; I think it's a better name for what smart contracts are in blockchain. But what you're referring to is something different, based on, I would say, scale. Do you believe that's an appropriate description of what smart contracts are as they exist in blockchain, and how is what you're talking about different from that?

Well, interestingly, with smart contracts, I think lots of people are trying to come up with better names for them, because they're not necessarily particularly smart or very contract-looking, depending on what perspective you look at them from. And certainly a very large amount of code in there would be economically unviable to operate. And they're not autonomous, in that they can't take decisions without something going on in the outside world. Now, there are services and bits and pieces attached to that that allow some degree of autonomy to be achieved, but unless they're interacted with, they don't particularly do anything. So what you can't do is have them actively going around looking for potential people who might actually want the value that they have, exactly. And the thing about autonomous economic agents, from my perspective, is that these things act on their own behalf. Now, I mean, we live in an extremely wasteful environment, and data is one of the grandest examples of that.
Of course, the amount of data that's out there that might be useful is vast, but of course you can't incorporate it because you don't know it's there, and you wouldn't know where to find it anyway, even if you did know it was there. And what we were looking at was: well, actually, if you could attach an autonomous economic agent to all of these data items, then they could effectively go out and sell themselves. They could go looking for other agents that might want that data, and then get into a negotiation, and potentially a transaction as a result of that. I got into a very interesting conversation with somebody who runs a large-scale telecoms network in Southeast Asia, who was saying that because of data protection laws, they're collecting a huge amount of data every day, but of course thirty days later it has to be deleted in a lot of cases. That's potentially several petabytes a day of data that's coming and going and never getting used, because the cost to exploit it exceeds its value. If you can effectively, cheaply, if not at almost zero cost, attach agents to all of this data, and they can go out looking for possible places where that could deliver value, then that changes the economics of that entirely. Okay, because then, alongside your existing mechanisms, you've now got all of these autonomous economic agents rushing around finding people who might want that data. Now, that works across the economy, which is also wasteful in other ways as well: empty hotel rooms, last-minute cancellations causing things to go unfilled, shipping containers that aren't as occupied as they should be, not the most effective routing and usage of energy. And when you start boiling these down to a sort of hyper-local approach of having these agents negotiating with each other to get this stuff done, then you're potentially looking at something very, very interesting.
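An agent "selling its data" ultimately needs a negotiation protocol. Below is a minimal sketch under invented assumptions (alternating percentage concessions, a bounded number of rounds, prices in tokens); it is not Fetch.ai's actual protocol.

```python
def negotiate(ask, reserve, bid, budget, concession=0.10):
    """Alternate concessions until the buyer's bid meets the seller's ask.

    ask/reserve: seller's opening price and walk-away floor.
    bid/budget:  buyer's opening offer and walk-away ceiling.
    Returns the agreed price, or None if the zones never overlap.
    """
    for _ in range(50):                      # bounded rounds, no livelock
        if bid >= ask:                       # deal: split the difference
            return round((bid + ask) / 2, 2)
        ask = max(reserve, ask * (1 - concession))   # seller concedes down
        bid = min(budget, bid * (1 + concession))    # buyer concedes up
    return None

# A weather-data agent asks 10 tokens (floor 4); a buyer opens at 2 (cap 6).
print(negotiate(ask=10, reserve=4, bid=2, budget=6))   # → 4.36
# No overlap: the seller's floor of 8 exceeds the buyer's cap of 5.
print(negotiate(ask=10, reserve=8, bid=2, budget=5))   # → None
```

The useful property is the walk-away bounds: an agent can be left to haggle unattended because `reserve` and `budget` cap what it is allowed to agree to.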
You're trying to make a lot more out of what we already have, which is pretty good from an efficiency perspective as well. Let's talk about the architecture here. How do you make these agents, and how do they fit into a blockchain?

Okay, so there's a bunch of bits and pieces involved. We've got the agent framework that we've built, which allows people to create these agents, and that gets easier and easier to use all the time, and there are going to be visual tools for that as well. What we want to be able to do is make it simple: for those of us who've used Scratch, for example, as a drag-and-drop visual programming language, seeing or watching children build amazing things out of that very, very quickly is incredible to watch, and it just goes to show that what is usually a very complicated programming task can actually be broken down into components that are easy to use. We want to do the same thing with agent building, so that anybody who's got a data source, or something that they want to potentially get out there and monetize...
...can do so very easily with drag and drop. So that's the agent framework and the associated tools. On top of that, we've got this thing we call the Open Economic Framework, which acts as a decentralized search and discovery system. This is where agents connect in order to be able to find each other. There are a number of ways they can do that. One of the very interesting ways, other than geographical search (I am here, what's around me?), is the semantic search, which is a great application of AI, where you use dimensionality reduction to effectively position a description of yourself in and amongst everybody else's descriptions. And it turns out that if you are relatively near in this strange semantic space to somebody else, the chances are you're related. This is seen in a lot of AI applications: for example, recognizing characters that are drawn, where you build a model with a good dimensionality reduction, and then if a sample is near to the key one that you want, it probably is that one. So that gives you a semantic and a geographic search, and that's active, or can be active as well as passive. But of course, all of that involves computing time, and underneath all of that you also need to be able to transact. These agents need to be able to negotiate with each other and then they need to transact, and that's where the blockchain comes in, because it provides the method by which those transactions can take place, but also the underlying incentive mechanism for people to scale and build that network and provide that search and discovery system, because those high computational load tasks, particularly the ones relating to AI, are going to cost, and they cost in tokens.

Okay, let me rephrase that a little bit, or repeat it back to you. You have a framework for creating autonomous agents, we'll say, and you're working on...
...the drag and drop, so that people can make agents based on whatever criteria they have, something that's relatively easy to use, and they can deploy them somewhere. That somewhere is a market in which they can discover each other, either geographically or through some semantic location, which is like, you know, "I do this, this, this; what are the other things that do something similar?" And then those things are able to transact with each other on some blockchain, which provides transactions and verification and so on and so forth. What is that blockchain, how does it work, and who are the agents that are operating on it and doing some type of consensus?

Okay, so at the moment we've got an existing mainnet, and we've got a new one coming up early next year, and we're about to be running a whole bunch of incentivized testnets. Funnily enough, the first one is starting tomorrow, where we're rolling out all of the key technology pieces that lead to a position where all of those systems are actually working in their entirety for the first time. So the blockchain is very tightly integrated with the agent framework. That's been a key part of this right from the beginning, so that people are able to fire up these agents and allow them to work straight away. It is a proof-of-stake consensus mechanism, and we've got a sort of unique approach to this, related to what we call minimum agency consensus, to avoid those issues where too much responsibility clumps into too few people, because that's a very important thing.
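The semantic discovery described a moment ago, embed every agent's description and treat nearness as relatedness, can be sketched with hand-made vectors and cosine similarity. In a real system the vectors would come from dimensionality reduction or a trained encoder; the three-dimensional vectors and agent names below are entirely made up for illustration.

```python
import math

# Toy 3-d "semantic" vectors for agent descriptions; the axes might loosely
# mean (weather-ness, mobility-ness, energy-ness). Hand-made, not learned.
agents = {
    "rooftop weather sensor": (0.9, 0.1, 0.2),
    "rain forecast service":  (0.8, 0.2, 0.1),
    "taxi dispatch agent":    (0.1, 0.9, 0.1),
    "solar panel telemetry":  (0.2, 0.1, 0.9),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query_vec, agents, k=2):
    # Rank agents by closeness to the query in the embedding space.
    ranked = sorted(agents, key=lambda name: cosine(query_vec, agents[name]),
                    reverse=True)
    return ranked[:k]

# A "who has weather data?" query lands next to the weather-ish agents.
print(nearest((1.0, 0.0, 0.1), agents))
# → ['rooftop weather sensor', 'rain forecast service']
```

Geographic search would be a second, independent filter (what's physically near me); the semantic one answers what's conceptually near me.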
Obviously, with the agents, which are the key thing that we're enabling with this, it's important that these agents are able to transact at quite a high rate, and one of the reasons why it's not possible for these agents to live on a network like Ethereum at the moment is that if you've got many thousands, if not tens of thousands or hundreds of thousands, of agents all doing their work, then there are a lot of transactions going on, and they're potentially quite low-value transactions as well. So if you're buying up lots of weather information from surrounding sensors, then you're not going to be paying a huge amount for that. So it's quite important that the transaction costs scale with that too, so that all of these agents can get their work done. So it's a combination of technologies, effectively, in order to be able to deliver this. And you know, there are great, great shoulders to stand on out there with all of this stuff, and smart contracts are a very important part of an agent's work, for what we've been building, for example, with the decentralized delivery network where,...
...and we refer to that as delivering people pizzas and packages, instead of individual siloed businesses, why shouldn't you be able to coordinate delivery of everything, but without a centralized agency to do that? That uses smart contracts throughout its process to handle escrow for the transfer of information across, but also verification and dispute resolution. So all of these key pieces of technology, and being able to operate them in an entirely decentralized way, are important for building these large-scale agent-based applications, because the DDN wouldn't work without them, and that's one of the things that we've been particularly excited to be building and demonstrating to people relatively recently, and we've got some very exciting stuff coming up from that in November.

So I assume you're familiar with SingularityNET. If you aren't, that's fine, but if you are, then my main question for you: what makes this different from something like SingularityNET, which is quite an OG crypto project that, from what it sounds like, did something quite similar to what you guys are working on?

Which aspects of that are you thinking are similar? The entire thing, I guess. I'm not, I'll admit, the most in-depth person when it comes to SingularityNET.

That's fair enough. With the autonomous economic agents perspective that we're taking: there are a lot of people who are looking at this from a data or AI marketplace perspective. Now, I think that's right, and you know, that's an extremely important thing to be able to do. If lots of people have different aspects of AI and machine learning and you want to be able to connect them together effectively so they can make use of the services that they have, then that's a very important service to provide.
And of course, we can do all of that with our autonomous economic agents as well, because they can provide those AI services, and they can find each other on the network that we have, either by approximation through the semantic searches or geographically: things that can crunch numbers that are relevant to you in whatever context you want. But the actual approach of building these autonomous economic agents and having them actively deliver their value, go out there and find each other, trade and negotiate, and build bigger applications out of each other is, as far as we're aware at the moment, unique to the Fetch.ai project.

Okay, I'll look into it; I'm curious about it.

I'll add one thing on that, and this is one of the things that's really important about this space: none of these things operate in isolation, and, as we've discovered, blockchain isn't a thing all by itself. It gets very exciting when you start combining it with other technologies like AI and some of the other cryptographic technologies, verifiable credentials and other bits and pieces. It's a combination exercise, and there are a huge number of different applications that can be built out of these technologies when you arrange them in different ways, and it's not necessarily a case of saying it's this one or that one. It's a case of interoperability, and allowing the unique functionality and the unique abilities of all of these different projects to interact with each other in a meaningful way.
Agents, for example, can act as gateways: gateways between networks, gateways between the inside world and the outside world, to allow all of those features to be incorporated into agent-based applications, and that kind of thing is also very exciting. I think we're beginning to see a lot more of this going on right now, and I'm very excited by that, as more of us are collaborating to see how we can combine the things that we have in order to build interesting new technologies or capabilities.

I couldn't agree more with that statement, in terms of the combination of these, what I would consider exponential, technologies, and what new things we can build: either building things better than they were previously built, or building completely new things that couldn't have been built previously because we didn't have the technology. And I like that you're exploring the space. But I'm also very concerned when people start doing that if they're not learning from the key insights of what these technologies should be used for and what they're providing. So when I look at blockchain integrations into projects, sometimes it seems unnecessary. My opinion is that blockchain is basically useful for providing distributed trust, and that then allows for features like digital scarcity and ownership transfer of assets on that blockchain.
But how that trust is created, based on the actors who are participating in consensus, is incredibly subtle, and yeah, the scaling problems associated with what they're coming to consensus on, how those agents who are doing consensus need to keep track of that stuff, as well as maintaining data availability of the entire blockchain for those who aren't participating in consensus, are also quite subtle, and we're not seeing the solutions to those problems. It's only recently, as some of the larger blockchain projects start to fill up in capacity, that they've had to deal with this stuff. So my question is: who are the participants at the blockchain level, why are they participating, what types of things do they need to keep track of, and do you see those becoming a problem later on down the line?

Well, first, I agree on more than one level with what you've just been saying. One point there is that obviously blockchain has a particular application that we all collectively perceive and understand, and the way in which that is applied is indeed very subtle in some cases. It's one of those things where, with all of those individual parties involved, it's a very delicate incentive balance, and it's easier to upset it than it is to just make it work better. If it was possible, as we all know, just to increase the speed of something by a hundred times very quickly, people would have done it. And there is something uniquely workable about proof of work; with the exception of the energy costs, it's very hard to argue with. It does the job, does exactly what it says it's going to do, and it works reliably.
And when you start looking, and we've seen this with different proof-of-stake mechanisms, that's actually much harder to get right, and people are exploring a bunch of different mechanisms for doing it. I'm not sure all of the mechanisms or all the answers have been dug out at this point, and people are still discussing some of those issues. It's very interesting to watch that play out, and I'm pretty sure that collectively we're right at the beginning of that journey and certainly nowhere near the end of it. And understanding the economics in particular of what goes on, of who's incentivized to do what, why, and when, is also a complexity. To pick up on the point where you talked about transferable individual assets, and I guess that's a reference to non-fungible tokens as an example: that's one of the things that blockchain does that's actually very, very important for what we're doing. Certainly if you look at the decentralized delivery network, agents that represent hotels, for example, or trains or planes or automobiles, are faced with seats and rooms where you need to be able to establish that you've got ownership of the particular thing that represents that seat or room on that day at that time. That's an asset that you need to be able to cryptographically prove you're entitled to, and potentially that asset needs to be transferred to somebody else. Doing that in a centralized way is very dangerous; doing it in a blockchain way is open and provides trust for everybody involved. Also, it's cryptographically provable that those things actually happened, so the blockchain isn't just something that one bolts on for the sake of doing so.
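The seat-or-room point reduces to a small transferable-asset registry: each booking is a unique token, and only the current owner may transfer it. This is a toy in-memory sketch with bare names standing in for cryptographic signatures; it is not an NFT standard or Fetch.ai code, and the asset and agent names are invented.

```python
class BookingLedger:
    """Minimal ownership registry for unique, dated assets."""
    def __init__(self):
        self.owner = {}          # asset id -> current owner

    def mint(self, asset_id, owner):
        if asset_id in self.owner:
            raise ValueError("asset already exists")
        self.owner[asset_id] = owner

    def transfer(self, asset_id, sender, recipient):
        # Only the current owner may move the asset. A real chain enforces
        # this check with signatures rather than trusting a name string.
        if self.owner.get(asset_id) != sender:
            raise PermissionError("sender does not own this asset")
        self.owner[asset_id] = recipient

ledger = BookingLedger()
ledger.mint("hotel-42/room-7/2020-11-05", "alice-agent")
ledger.transfer("hotel-42/room-7/2020-11-05", "alice-agent", "bob-agent")
print(ledger.owner["hotel-42/room-7/2020-11-05"])  # → bob-agent
```

The point of putting this on a chain rather than in one company's database is that every participant can verify the mint and transfer history without trusting the operator.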
There are a bunch of different reasons why its unique capabilities, such as transfer of ownership of unique items and assets, digital ones or ones that represent real ones, are extremely important. And when you've got a network like that, obviously the incentives need to be in place to ensure that people actually operate all of those component parts, and that is another aspect of trying to create something like this and making it work. And I guess your question, which is what are the reasons why everybody would be involved in doing all of these things, is a very broad question, but you've got the transaction fees, and you've got the consensus rewards for providing the raw integrity of that network. You've got the fees from operation of all of the smart contracts. You've got the fees for operating the decentralized search and discovery mechanism, and it's one of those things where the more agents want to be connected to the network, the more demand there is for the search and discovery system, because any one given search and discovery node is only capable of servicing a certain number of agents. So, rather interestingly, certainly from the models that we've produced, it shows that there is an incentive for further decentralization. So you...
...might, for example, start out with a search and discovery node representing all of London, and then the demand around the main airport becomes high, and someone creates another one to service agents that are, for example, connected to London Heathrow airport, or someone might do the same thing for JFK. That creates a demand, because there is value in doing so when the rewards for setting up and operating those things exceed the cost of doing so. It's a complex interplay, because, as we all know, and I don't think anybody has all of the answers, particularly when you're doing something that's potentially new, we've got an additional customer involved here, which is the agents themselves, and also an additional layer to the network, which is the search and discovery system.

So walk me through it. Say I want to participate, right? I would like to participate at the base layer. I don't care about making agents; I would like to serve the agents by running some hardware somewhere. What are the resource constraints? What do I need to do, and what do I need to have, in order to participate at the base layer, in terms of maybe providing data for these agents or participating in consensus? Like you said, you're rolling out testnets. If I would like to participate, can you help me figure out what I need to do in order to do so?

We certainly can. So, if you're not interested in operating agents, then one aspect of that is operating search and discovery nodes, and this is something that we're going to be very excited to continue rolling out over the coming months, where more and more of these nodes are out there servicing agents in particular areas. The computational requirements to run one of those are actually very, very low indeed. It's the kind of thing that you can just leave ticking along on a computer somewhere, providing a service to local agents in the area or subject area that you declare.
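The London-then-Heathrow incentive described above can be stated as simple arithmetic: a discovery node earns per agent served, up to a capacity cap, so once local demand overflows the existing nodes, the overflow revenue of a new node exceeds its running cost. Every number below (fees, capacity, costs) is invented for illustration; nothing here reflects Fetch.ai's actual token economics.

```python
def node_profit(agents_served, fee_per_agent, capacity, cost):
    # A discovery node earns a fee per agent, up to its capacity cap.
    return min(agents_served, capacity) * fee_per_agent - cost

def worth_adding_node(local_demand, fee, capacity, cost, existing_nodes):
    # Agents the existing nodes cannot serve are overflow demand; a new
    # node is worth running only if serving that overflow is profitable.
    overflow = max(0, local_demand - existing_nodes * capacity)
    return node_profit(overflow, fee, capacity, cost) > 0

# One node serving "all of London": capacity 1,000 agents, running cost 50
# tokens per period, fee 0.1 tokens per agent served.
print(worth_adding_node(local_demand=800,  fee=0.1, capacity=1000,
                        cost=50, existing_nodes=1))   # → False: no overflow
# Demand around Heathrow pushes the area to 2,400 agents.
print(worth_adding_node(local_demand=2400, fee=0.1, capacity=1000,
                        cost=50, existing_nodes=1))   # → True: overflow pays
```

The capacity cap is what makes the incentive decentralizing: a single node cannot simply absorb all the new fees, so growth in agent demand pulls in new, independently operated nodes.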
Obviously, operating full network nodes and participating in consensus requires a little bit more computing power, but it is very, very important that that is straightforward for people to take part in as well. So you've got those that are operating the fundamental network, you've got those that are operating the search and discovery nodes, and you've got those who are operating the agents. Actually building and deploying these agents is relatively straightforward, and as we increase the language options, we expect people, and we're already building supply chain stuff on Raspberry Pis, for example, we expect people to be able to run large numbers of agents on relatively small devices, which sit there and deliver their value, having been actively discovered, or just sitting there and waiting for people to find them. So that's a relatively low-cost and low-effort thing to be able to do, because, as we all know, even from user interface experience, with every single step that you put in the way you lose people: you start with a hundred people going in and end up with ten people coming out the other side. It's about making this really easy, and it's about making adoption of this kind of technology something that operates non-destructively in parallel to existing business systems. Certainly, that's one of the things that we've discovered when talking to people about integrating autonomous economic agents into existing systems: it has to operate in parallel with what they have, not cause it to be completely ripped out and replaced. It's about finding additional value and then figuring out how they can optimize those systems using this new technology. So there are lots of ways to participate, and the incentivized testnets are a journey to the mainnet version two, where we're focusing on different areas of the technology as we go through, encouraging people to get involved and incentivizing that process.
And the first round of all of that is heavily focused on autonomous economic agents, a key part of the Fetch.ai system, but it will also feature some stuff relating to governance and how you make decentralized governance work without involving everybody in every decision. So there's lots of interesting aspects of that that we're seeing discussed at the moment in the community. Yeah, so I did a bit of reading, looking at the titles and abstracts of some of the publications you've had as a company on a lot of this stuff. It seems like you've done some real work here in terms of looking at some of the problems or constraints of how you're building these things together and what might be viable solutions.
Yeah, I think that's right. Well, to a certain extent it's ironic when I talk about solving problems involving lots of moving parts, when actually it's one collection of moving parts itself. And it's one of those things where you have to think about this stuff, because when you're adding things to all of this, such as the agent economy and the search and discovery economy, then it's something that multiplies, not adds. So you add all these new things, and the economic complexities and some of the other issues of making all this stuff hold together grow very, very rapidly indeed, and it's extremely important to do the legwork on figuring that out. And yeah, it's very exciting to get to a point where we've done enough of the legwork to actually deliver the working system. And certainly over the last half a year in particular, we've been building an increasingly large number of these agent applications and running them on the network and seeing them work and do their stuff, from the delivery network to a mechanism that we built to create an augmented reality for self-driving cars. Stuff that we've done in supply chains and energy is all stuff that's possible now that we couldn't have done a year ago. And part of this key journey now is introducing all of these technologies to a broader audience and allowing people to see what they can build and how they can take part in this new system. Trying to wrap my head around this, and doing it in an audio format, especially for those who are listening, is not very easy. So I'm trying to figure out how I can ask a question that helps them get a mental idea of how data flows throughout the system.
So for reference, I did my first IoT proof of concept on Ethereum a lot of years ago, where we would deploy basically a device that captured environmental data around it and then encapsulated that and sent it back to a smart contract, which logged it and allowed another system to basically perform checks and balances on that environmental data and alert based on various criteria given by the user. But that was a relatively simple setup. Right: you have this thing that captures data on some IoT device, it broadcasts it as a transaction, it gets accepted and logged, and then someone just tracks all that stuff and alerts based on certain criteria. There's no marketplace, there's none of that stuff. So, like, where does the data come from for these autonomous agents? How are they doing things? How are they munging that data? Where are they sending it and who's consuming it? And I imagine the marketplace is like, or the autonomous agent is like: this is what I'm doing, this is how I'm transforming things and this is the insight that I'm providing based on the information, in some standard language — your semantic space, I think, is what you call it — and you have a marketplace for things, like: I want that, I'm going to use it for this. And you have this composability of building larger insight based on a lot of small pieces doing that type of stuff, and that's all done transactionally through the blockchain, which is how they communicate with each other and transact value. Is that an overall picture of what you're doing, or what have I missed? That was a very good picture of it. But it also has a marketplace that works two ways. It's not just out there saying, I've got this, you might want it. In a lot of cases, stuff comes to you and says: I think, given what you're interested in, you'll like this. Okay, okay.
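As a rough illustration of that two-way marketplace — all of the names and structure here are invented for illustration, not Fetch.ai's actual API — a search-and-discovery node might both answer queries passively and actively push likely matches:

```python
# Toy sketch of agent registration and two-way search/discovery.
# Class and attribute names are hypothetical, not the Fetch.ai API.

class SearchNode:
    """A search-and-discovery node covering one area (e.g. 'london')."""

    def __init__(self, area):
        self.area = area
        self.listings = {}  # agent_id -> description attributes

    def register(self, agent_id, description):
        # An agent advertises what it has in a simple attribute space.
        self.listings[agent_id] = description

    def search(self, query):
        # Passive direction: a consumer asks "who has this?"
        return [aid for aid, desc in self.listings.items()
                if all(desc.get(k) == v for k, v in query.items())]

    def recommend(self, interests):
        # Active direction: the node pushes likely matches to a consumer,
        # "given what you're interested in, you'll like this".
        return [aid for aid, desc in self.listings.items()
                if desc.get("topic") in interests]

node = SearchNode("london")
node.register("agent-1", {"topic": "traffic", "zone": "heathrow"})
node.register("agent-2", {"topic": "weather", "zone": "central"})

print(node.search({"topic": "traffic"}))          # passive lookup
print(node.recommend({"weather", "pollution"}))   # active introduction
```

The point of the sketch is simply that discovery runs in both directions over the same listings: agents describe what they have once, and matches can be pulled or pushed.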
That's part of the AI process, as in: not only am I blindly throwing things out so someone can consume them, I'm actually actively looking and saying, you may want this too. Yes, and you're able to do that. So that means that agents that want something are able to sit there and wait, potentially, for other agents to come along and say, well, given your profile, I think you want this. And yes. And what's really interesting about that is the mechanisms that do that delivery. If, as a result of an introduction, a transaction takes place, then the underlying AI — it's effectively a reinforcement learning system — can learn that that was a successful...
...introduction, because you've got some problems with, as I referenced earlier, this dimensional reduction thing. You know, if you flatten the planet Earth down to a disk, Sydney is a lot closer to London than it should be. Now that's a mistake. Now, if you start making introductions based on that, they don't result in transactions, and therefore those connections can be eliminated quite quickly as a result of reinforcement learning acting on all of that. So there's lots of really interesting ways that this system can adapt itself in order to ensure that the right thing connects to the right thing. But it is that active stuff that's a key part of it. Now, the agents themselves — I think of them as little digital life forms, or little computers in their own right. They're the ones that collect the data and they're the ones that hold it, and they describe what it is that they have so that the search and discovery system can introduce them to other agents, or other agents can find them based on that. So the data lives there. And then when the agents are introduced to each other, there is an underlying agent-framework-based peer-to-peer network that allows them to securely talk to each other, negotiate and then transact. That's interesting. So the data lives with the agents. There's an active and a passive approach to finding other agents to work with. That is a key part of making all of this work, because you can do very, very low cost agents that represent data and just create them. All they need to do is advertise what they have, and then if there are any potential matches, they'll have that introduction made automatically, and that's fine. That's driven by economically viable transactions, as opposed to just any transactions. Yeah, that's right, and that's why we get people a lot smarter than me to work out the underlying economics of all of this as well, to build these models.
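A minimal sketch of that reinforcement idea — an illustrative model with made-up numbers, not Fetch.ai's actual algorithm: links whose introductions lead to transactions are strengthened, and links like the flattened-disk Sydney-to-London pairing decay away:

```python
# Illustrative reinforcement model for introductions (not Fetch.ai's
# actual algorithm): links whose introductions produce transactions are
# nudged toward weight 1.0, links that never transact decay toward 0.0
# and fall below a floor, after which they stop being used.

def update_link(weights, intro, transacted, lr=0.5):
    """Nudge a link weight toward 1.0 on a transaction, toward 0.0 otherwise."""
    w = weights.get(intro, 0.5)            # new links start neutral
    target = 1.0 if transacted else 0.0
    weights[intro] = w + lr * (target - w)

def active_links(weights, floor=0.1):
    """Links still strong enough to keep making introductions over."""
    return {link for link, w in weights.items() if w >= floor}

weights = {}
# A bad embedding placed Sydney "near" London; those introductions never
# result in transactions, so the link decays and is effectively pruned.
for _ in range(5):
    update_link(weights, ("sydney", "london"), transacted=False)
# A genuinely useful pairing keeps getting reinforced.
for _ in range(5):
    update_link(weights, ("heathrow", "traffic-feed"), transacted=True)

print(active_links(weights))  # only the Heathrow link survives
```

With a learning rate of 0.5, five failed introductions are enough to push a link below the floor, which matches the point in the conversation: mistakes from dimensional reduction get corrected quickly because they never produce transactions.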
You've seen, you said, some of the papers that talk about some of these things, from the economics perspective, the cryptographic perspective and other aspects of this. So we've had to assemble the team to make sure that all of that works. But of course, in the end, with any large scale decentralized system, actually building it and running it and creating a mechanism where these things can effectively error-correct themselves over a period of time is another important aspect of it. Absolutely. Dean, did you have something? Yeah, I just wanted to circle back and ask you, because you said that you guys have been working on these use cases on top of Fetch.ai and you've seen some other people who are running agents on Raspberry Pis. What use case are you currently most bullish about that you guys have either worked on, or what are you really hoping to see in the next year built on Fetch? Well, I'm hoping to be surprised as well, but the thing that I'm personally most excited about is a decentralized delivery network, because it is one of those networks where, as well as creating a mechanism for connecting somebody who wants something to somebody who has it — either delivering a package or some food, or delivering yourself, like, for example, a decentralized ride-hailing type thing — it means that other, independent people can create agents that provide information that is important to that network, and they can do so without the permission of anybody else and just take part. So it's one of those things where the more people take part in providing other information — traffic sensors, information about signage, other bits of information that they can create agents for and put in that network — the more everybody benefits, and likewise all the participants in the delivery network can deliver information relating to hyper-local traffic situations and other bits and pieces of interest to others.
And one of the things that we discovered whilst building and testing all of this stuff is that those agents doing their work around any given city provide a useful population of information sources, because there are a lot of these things driving around and moving around. I mean, we walk around with these mobile phones, which have an enormous number of sensors, and we're not monetizing that information at all, but we could, and it turns out that lots of little pieces of information that don't seem relevant can, when they're combined, become bigger pieces of information that are relevant. An example I used to give is that...
...if a whole load of people on a London street suddenly put their phones away at the same time, chances are it started raining, and there's an awfully large amount that you can potentially figure out from the actions of many. So you can get agents that buy up low value information from other people and combine it into higher value information, and that's where it starts getting interesting: you start having agents attached to vehicles, or just running on the mobile device that you're carrying, and people, or other agents, are buying up that low value data and applying it through machine learning models and other things in order to provide prediction services for where you should be at any given time in order to go places. Agents that represent different traffic zones and pollution zones and other bits and pieces like that can also contribute to optimizing route handling and other things along those lines. So from that perspective, the DDN is really interesting, because it solves a problem, which is getting things from one place to another effectively, it allows everybody to take part, it uses a broad spread of the technology, and it's a complex optimization problem that benefits from very, very local solutions. So yeah, from my perspective personally, that's the one I'm most excited about. That's a pretty good job of explaining kind of how these pieces fit together to provide a larger service. I'm interested in — and this requires some thought and, I hope, some transparency — what roadblocks do you see in front of you? What things are required to get you to the finish line, to make this a success, that you feel are very difficult to get over? Like, where do you find the difficulties in making this something that's relatively ubiquitous? How do you get people involved? Where is the bottleneck inevitably going to find itself?
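The phones-away example above can be sketched as a simple aggregator of low-value observations into one higher-value signal; the window and threshold here are invented for illustration:

```python
# Toy sketch of combining many low-value observations into a higher-value
# signal: if most phones on a street are put away within a short window,
# infer that it probably started raining. Thresholds are made up.

def infer_rain(events, window=60, threshold=0.6):
    """events: list of (timestamp_seconds, phone_put_away: bool) observations."""
    if not events:
        return False
    latest = max(t for t, _ in events)
    recent = [away for t, away in events if t >= latest - window]
    # Fraction of recent observations where the phone disappeared.
    return sum(recent) / len(recent) >= threshold

# 8 of 10 phones disappear within the same minute: likely rain.
rainy = [(i, True) for i in range(8)] + [(8, False), (9, False)]
# Only 1 of 10 does: probably not rain.
sunny = [(i, False) for i in range(9)] + [(9, True)]
print(infer_rain(rainy), infer_rain(sunny))
```

Each individual observation is nearly worthless on its own; the value appears only in aggregate, which is exactly the business case for agents that buy up low-value data and resell the combined signal.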
Like, how are you thinking about these things, and what have you come up with so far? Yeah, and you're right: that boils down to adoption. And I guess, you know, we've all seen these kinds of things existing as problems, and one comparison I would make is the approach that Facebook took to rolling out their social network, as opposed to the approach that Google took when they did it. Facebook's approach was to get complete domination in one Ivy League college after another and get to the point where the adoption in that area was so high that you couldn't afford not to be involved; otherwise you'd be missing out on absolutely everything. And of course it meant that advertisers thought, well, you know, this is a no-brainer: if we want to talk to people here, we have to do this. So it created its own business model as it went, whereas if you just sort of bleed it out globally in one go, people turn up and discover they don't know anybody who's there. You never reach critical mass. And one of the problems with this agent-based approach is that if we had a bunch of agents that were just in individual cities all around the world, it might be a high population overall, but there's not a big enough population in any given area for it to actually be useful. There's an example case of that with, I think it was, one of those projects — I'm not sure if they're doing it now — that was rolling out kind of a distributed GPS, and that's wonderful if you can get everyone across the globe to do it as a service, but if you can't, then you're going to have a poor service almost everywhere and maybe a few locations that don't.
So yeah, you're absolutely right, and this is one of the areas where, for example, the approach we're taking with designing the DDN lowers the risk of doing that, because if you introduce a network like this in a bunch of cities around the world — say you go for London, Berlin and a few other places — then it's much easier to achieve a critical mass in a small area, or a portion of it, enough for it to be useful to a broader number of people, than it is to do that gradually in a non-organized way. So certainly, when it comes to these applications — and we're also doing stuff relating to hospitality and supply chains, healthcare and a bunch of other bits and pieces — the approach that we're taking is to generate something that is genuinely useful to the people who are using it at the point it's deployed, but becomes more useful when more people start participating; for the incentives to exist for people to participate; and then, as a result of that and the interesting data that's there, for there to be a genuine application for acquiring that data and processing it...
...into something that's higher value. This has great use in cities, for example. A lot of cities now are very interested in trying to work out noise levels, pollution levels, et cetera, and actually they've got a large network of vehicles driving around where a relatively small sensor set would provide them with all of that information — for example, looking at where mobile networks have the best reception — and with a relatively small sensor pack attached to scooters, delivery vehicles and so on and so forth, you can actually end up with a surprisingly large amount of data that's really useful from a planning perspective. And one of the things that we did — an actual use case that we built — was optimizing self-driving, or rather electric, vehicles and battery charging. We produced a huge simulation of these electric cars driving across Europe and we attached agents to the vehicles and to the different charging stations, and by allowing those agents to hash it out between each other, we were able to optimize journey time down by an average of thirty percent, which is substantial. Sure. And that's because humans are not good at this stuff, because you know what we're like: you run until you're on fumes and then you pop in and fill up. What we don't do well is thinking: right, if I stop now, before I need to, and do a twenty-minute fast charge while I grab a coffee, which I really need, and have a rest break, then actually that's the most effective use of all of this in order to get to my destination correctly. And if everybody is working that way, then you end up with an incredible optimization.
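As a much-simplified stand-in for that journey problem — nothing like Fetch.ai's multi-agent simulation, just the classic greedy that always drives to the furthest reachable charger to minimize the number of stops:

```python
# Classic "gas station" greedy, as a toy stand-in for EV charging-stop
# planning: always drive to the furthest reachable charger. This
# minimizes the number of stops, one tiny slice of the real problem
# (which also weighs queue times, charge curves and rest breaks).

def plan_stops(distance, vehicle_range, stations):
    """distance: total trip length; stations: sorted charger positions (same units)."""
    stops, position = [], 0
    while position + vehicle_range < distance:
        reachable = [s for s in stations
                     if position < s <= position + vehicle_range]
        if not reachable:
            raise ValueError("journey impossible with this range")
        position = max(reachable)   # drive to the furthest reachable charger
        stops.append(position)
    return stops

# 1000 km trip, 300 km range, chargers along the route.
print(plan_stops(1000, 300, [250, 500, 700, 900]))
```

The interesting part of the real system is that agents negotiate over the parts this greedy ignores — charger queues, fast-charge windows, driver breaks — which is where the reported thirty percent journey-time gain comes from.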
And we did this as well with the solving of mazes, where, when the individual agents swap the small amount of local information that they have, you end up with a global picture very, very quickly, and these are the kinds of things where a small number of agents are able to do a useful thing. So you take those charging posts and electric vehicles: it's a relatively small population, but one of the interesting things is that putting agents in those is extremely low cost to do, if not almost zero, because it's a software thing on hardware that already exists and is capable of running it, and it would operate in parallel to everything else, so it's relatively low risk to deploy. And part of the hill that we're climbing on all of this is trying to ensure that we take the pain and the risk out of deploying these agents in existing infrastructure in order to get to that point. So yeah, I would say that adoption, as a problem, is something that we've spent a huge amount of time thinking about, and I think that we're taking an approach that is certainly working so far, but it's one of the things that we have to worry about, I guess, in the coming months. That's great to wrap up on, and it's a good picture of kind of where you could be and the obstacles to getting there. How can people reach out, get involved, participate, learn more? Join our Telegram group, go to our fetch.ai website. That will link you through to the docs. We've got all the stuff involving incentivized test nets; we're putting a whole pile of documentation up over the next twenty-four hours covering the different ways that you can take part and how that's going to work. And come and build. You know, when new technologies appear, people are often surprised by the kind of things that people build on them.
And we think we've got some pretty interesting technology to work with, and we're really interested to see what people will make of it. All right, exciting. Thanks.